
Can the EU Act Shape or Shake Business and Innovation?

Artificial Intelligence
On May 21, 2024, the European Union gave its final approval to the EU AI Act. A year later, HEC Professor Pablo Baquero reflects on its impact in a masterclass on AI legislation and implementation.

Pablo Baquero talking at a Masterclass at HEC

Key Findings
•    The AI Act defines obligations by four tiers of risk
•    High-risk AI systems must comply with strict transparency and safety rules
•    Certain AI practices, such as predictive policing, are prohibited
•    Firms must assess their AI risk exposure before launching in the EU
•    Penalties are steep for those who violate the AI Act’s requirements

 

What is the purpose of the European Union’s Artificial Intelligence Act?

The AI Act was designed in the mold of product safety legislation. It sets out requirements that certain risky AI systems, or products and services containing them, must fulfill before they can be placed on the market. Furthermore, it establishes monitoring requirements after their commercial launch.

More generally, the EU AI Act seeks to harmonize the rules governing artificial intelligence across the EU member states, which facilitates the circulation of products and services between them.

 

At the same time that the Act promotes commerce and innovation, it also protects the fundamental rights of EU citizens.

 

Misuse of AI technologies could endanger the fundamental rights of EU citizens, and the Act ultimately seeks to strike a balance between protecting those rights and encouraging innovation.

The rules of the AI Act are addressed primarily to the providers, or developers, of AI technologies. But they also cover those who deploy the technologies or place them on the market, such as importers.

Since the definition of AI is so broad, are we not overregulating?

A good question, but it's important to understand that the AI Act addresses this concern by establishing a risk-based approach: AI systems are regulated by the Act according to the level of risk they pose to society and individuals. Under this risk-based approach, the AI Act creates four tiers of risk:

-    First, there are practices which are prohibited outright by the Act. One example is the use of AI to predict the risk of someone reoffending, that is, committing crimes again in the future. Another is the use of real-time facial recognition to monitor publicly accessible spaces, which is forbidden under the AI Act except in very exceptional conditions. 
-    The second tier of risk involves high-risk AI systems, those that can be launched in the market, but that must comply with certain requirements beforehand. A significant part of the EU AI Act centers on explaining what these requirements are and how high-risk AI systems must comply to enter the market. 
-    The third tier covers AI systems which pose limited risk. These must comply with basic transparency requirements, that is, informing the users interacting with them that they are dealing with AI-powered systems. 
-    In the fourth tier, there are AI systems presenting minimal or no risk. These are not covered by the EU AI Act and thus are not subject to any restrictive EU AI rules. The industry can establish codes of best practice for such systems, but these would represent voluntary, non-mandatory rules. 

Can you discuss the touchstone of the AI Act, the rules on high-risk AI systems?

Sure! These systems must comply with data protection requirements, for instance, by ensuring that the data used to train the algorithms was collected in a lawful way and does not include sensitive personal information. The systems must not be biased and must not discriminate against individuals based on certain protected characteristics, such as sex, ethnicity, political views, religion, or sexual orientation, among others. They must also be transparent and explainable, understandable by those who are affected by them: one must be able to comprehend how an algorithm has produced a certain result or prediction. 

Furthermore, the systems must be designed in such a way that they can be effectively overseen by humans, who should be able to avert or minimize the potential risks they may pose. And the systems must be accurate, robust and cybersecure, avoiding mistakes that could lead to significant harm to individuals, as was witnessed in the scandal involving Fujitsu's Horizon software and the UK Post Office. 

The high-risk category includes AI used for education and vocational purposes. For instance, it covers algorithms determining whether someone can access a certain university, hiring algorithms that select candidates to be called for an interview, and algorithms determining whether someone will receive a public benefit. All these systems must comply with the relevant requirements.

What kind of penalties do companies incur if they violate the EU AI Act’s tiered obligations?

In the main, firms found to have violated the Act are fined. The amount of the fine depends on the type of obligation violated under the Act. For companies, it is crucial to understand the level of risk created by the AI technologies they are incorporating, and the potential obligations to which they may be subject under the EU AI Act. 

However, there is still much to be defined regarding the contours of the specific obligations. The general lines of the regulation seek to find a balance between protecting the fundamental rights of individuals potentially affected by AI systems while not overregulating the subject and discouraging innovation.
 

 

Pablo Baquero is an Assistant Professor in Law at HEC Paris and a Chairholder at the Hi! Paris Center.
