
Rethinking AI Ethics Beyond Compliance

Artificial Intelligence
As AI reshapes industry and society, ethical oversight lags behind. Seven of HEC’s top scholars provide some answers.


Hi! PARIS Center’s Meet Up roundtable on “AI, Ethics & Regulations”, with, from left to right, speakers David Restrepo Amariles (HEC Paris), Thomas Le Goff and Tiphaine Viard of Télécom Paris, and moderator Anne Laure Sellier (HEC Paris), October 17, 2024. Photo: Ciprian Olteanu - Waverline.

Ethical AI is not merely a technical challenge; it is a societal imperative. As artificial intelligence advances at breakneck speed, concerns about equity, responsibility, and governance are rising just as quickly. Yet laws remain ambiguous, and corporate strategies are often reactive rather than principled.

At the Hi! PARIS Center’s roundtables on AI, Ethics & Regulations, HEC Paris faculty and research fellows from across disciplines are helping define what meaningful ethics in AI actually looks like. Their insights span liability, labor, advertising, platform power, and the systemic inequalities embedded in AI development—pushing us to see ethics not as a compliance checkbox, but as a foundation for trust and inclusion. 

What are the risks posed by unregulated adoption?  

To build trustworthy AI policies, businesses must address three key challenges:
• Understanding AI’s rapid evolution
• Managing premature AI deployments
• Adapting to new AI labor dynamics

To do so, companies need to anticipate the swift advancement of AI technologies and align their strategies accordingly, which is crucial for competitiveness. They should also ensure AI solutions are thoroughly tested and refined before implementation, rather than releasing underdeveloped tools that can lead to operational inefficiencies.

Finally, the shift from in-house development to integrating external AI providers requires new skills and governance frameworks.

David Restrepo Amariles, HEC Associate Professor of Artificial Intelligence and Law,
Fellow at Hi! PARIS, and member of the Royal Academy of Sciences, Letters, and Fine Arts of Belgium.

 

Who is liable when AI harms?

The EU’s revised Product Liability Directive (PLD) partially addresses AI-related harms, such as personal injury, property damage, or data loss. It allows claims for certain types of software defects.

However, abuses like discrimination, violations of personality rights, or breaches of fundamental rights fall outside the PLD’s scope. EU law offers no remedy for individuals harmed by these AI abuses, leaving the matter to be regulated by individual Member States. This adds to the EU’s regulatory complexity.

Pablo Baquero, Assistant Professor in Law at HEC Paris and Fellow at the Hi! PARIS Center

 

Can advertising balance AI and privacy?

AI in advertising introduces multiple privacy risks, including:
• The collection of vast amounts of personal data
• Invasive profiling
• Risks of unintentional discrimination

Because these processes are automated, users may not fully understand how their data is tracked or used, raising concerns about consent and transparency.

EU regulations such as the GDPR and the AI Act introduce stricter data protection measures, increased transparency requirements, and accountability frameworks. They aim to ensure fair and lawful practices, granting individuals greater control over their data while fostering trust in AI-driven advertising.

Klaus Miller, Assistant Professor in Marketing at HEC Paris

 

Are startups acting more ethically than large firms?

All firms face ethical issues, but larger firms are better resourced to deal with them. Yet our research finds that many high-tech startups have ethical AI policies and take actions that support ethical behavior, despite having no regulatory or legal requirement to do so.

Startups are more likely to adopt ethical AI policies when they have a data-sharing relationship with technology firms like Amazon, Google, or Microsoft.

Challenges remain, however. Ethical AI development policies may still lack the oversight boards or audits needed to ensure that employees actually follow them.

Michael Impink, Assistant Professor in Strategy and Business Policy at HEC Paris


On April 22, 2024 at a Hi! PARIS Meet Up on AI & Ethics hosted by Capgemini, Michael Impink presented his work to business representatives and researchers. Photo: Ciprian Olteanu - Waverline.

 

Why do digital platforms act as regulators?

Digital platforms function as private regulators, setting rules that shape broader markets and societal outcomes. For example, they mediate resource mobilization for social movements, potentially restricting collective action. Furthermore, platforms like Airbnb enforce rules within their own ecosystems but often fail to ensure compliance beyond them. As private regulators, platforms must adopt transparent practices that align with societal values.

Digital platforms function as private regulators in society by setting rules that impact broader markets and societal outcomes.

Madhulika Kaul completed her PhD at HEC and is now an Assistant Professor in Strategy at Bayes Business School in London. Her research was funded by Hi! PARIS and received an HEC Foundation Award for the best thesis of 2024.


On March 10, 2025 at the HEC Alumni office in Paris, Madhulika Kaul received the PhD Thesis Award from Johan Hombert, Associate Dean of the PhD program, and Claire Calmejane, Jury President and member of the HEC Foundation Research Committee. She was accompanied by her supervisor, Professor Olivier Chatain. Photo by HEC Paris.

 

How can game models tackle bias and manipulation?

Using a game-theory model to analyze interactions between strategic agents and AI algorithms, researchers found that algorithms are often swayed by data inputs from self-interested individuals, producing unreliable outcomes.

This research shows that ethical AI requires not only technical solutions but also a deeper understanding of human behavior and incentives.
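The intuition can be illustrated with a minimal toy sketch in Python. This is not the researchers’ actual model; it is a hypothetical example assuming an algorithm that naively averages reported data, while self-interested agents shade their reports toward an outcome they prefer.

```python
# Toy illustration (not the paper's model): a platform algorithm estimates
# a quantity by averaging reports, while strategic agents blend the truth
# with a preferred outcome before reporting.

def algorithm_estimate(reports):
    """The algorithm naively averages whatever data it receives."""
    return sum(reports) / len(reports)

def strategic_report(true_value, preferred, weight=0.5):
    """An agent reports a mix of the truth and its preferred outcome."""
    return (1 - weight) * true_value + weight * preferred

true_values = [10.0, 10.0, 10.0, 10.0]   # ground truth each agent observes
preferences = [20.0, 20.0, 20.0, 20.0]   # outcome each agent would like

honest = algorithm_estimate(true_values)
gamed = algorithm_estimate(
    [strategic_report(t, p) for t, p in zip(true_values, preferences)]
)

print(honest)  # 10.0 -- reliable when reports are truthful
print(gamed)   # 15.0 -- biased once inputs are strategic
```

Even this crude setup shows why technical fixes alone are insufficient: the algorithm is statistically sound, yet its output drifts as soon as the incentives of the people supplying its data are ignored.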

Atulya Jain completed his PhD at HEC and is now a postdoctoral researcher at the Institute for Microeconomics at the University of Bonn, in Germany. His research was funded by Hi! PARIS.

 

How can we work for diversity and systemic change in AI governance?

Surveys show that women are 20% less likely than men to adopt tools like ChatGPT. Other surveys suggest these gaps stem from unmet training needs and workplace restrictions.

Surveys show that women are 20% less likely than men to adopt tools like ChatGPT.

Such discrepancies start young. Before AI is deployed in the public domain or the workplace, it must be designed by someone who can write code. Yet in 2023, twice as many men as women aged 16–24 across the European Union knew how to program.

The social biases that create barriers to entry for girls learning to code are thus built into the AI systems themselves. As these systems proliferate across sectors at high speed, we run the risk of deploying a powerful tool that exacerbates inequalities.

In our view, weaving sector-specific and cultural perspectives into policy and system design is just as crucial as passing new regulations to ensure AI solutions are truly fair for everyone.

Marcelle Laliberté, HEC Chief Diversity, Equity, and Inclusion Officer. Marcelle Laliberté contributed to the UN’s 2024 AI governance report as a nominated expert, with the collective position paper “Governing AI’s Future: Indigenous Insights, DEI Imperatives, and Holistic Strategies – a cross disciplinary view,” co-written with HEC PhD graduate Claudia Schulz and Olivia Sackett, Fellow at Swinburne University of Technology, and published in 2024.

 
