Sustainability & Organizations Institute

Are you sure your AI is working as it should? Insights from the front lines

Do we have the tools to ensure that AI is working as intended and, if not, to fix it? The third panel of the ‘AI, Robotics & Work’ Conference at HEC Paris on March 12, ‘Organizational Structure & Governance’, set out to address these organizational challenges.

By Paul-Marie Carfantan 



When companies move from POCs (proofs of concept) to industrializing AI, technical questions are gradually joined by organizational ones. How do you ensure the adopted solution does not carry problematic biases, both towards clients and employees (remember Amazon?)? How do you ensure the solution does not create user under-reliance or over-reliance and undermine the value proposition? How do you ensure the company's values are maintained over time with a black-box system?

The moderator, Jean-Remi Gratadour, Executive Director of the HEC Paris IDEA Center and Coordinator of the MBA Digital Specialization, invited Nicolas de Bellefonds, Partner and Managing Director at BCG and leader of BCG Gamma in France, and Dr. Guillaume Chaslot, researcher at Université Paris-Est and curator of the website AlgoTransparency.org, to shed light on these issues.

Nicolas de Bellefonds and Guillaume Chaslot each presented their ideas and solutions to the problems raised by the deployment of AI.

AI and Change Management

Nicolas first described how the long-term fears around AI (killer robots, mass unemployment, etc.) are far removed from the realities he observes on a daily basis. Far more striking, according to a BCG report, is the disconnect between the perceived adoption of AI across industries and its reality: only 20% of companies have adopted AI at scale. That number is no larger than it was two years ago, Nicolas noted. Worse, France appears to be truly lagging behind other developed countries (e.g. Canada, the US, the UK, Spain). Why? Nicolas explained that most French businesses get stuck in the experimentation phase (POCs) and fail to focus on the fundamental organizational change needed to address questions of algorithmic bias and the required oversight. He argued that, on the one hand, managers have a crucial role to play in making organizational change keep pace with AI advances but that, on the other hand, the training needed to bridge the gap between change management and AI is lacking. Nicolas illustrated his point with several use cases. One that struck a chord involved a US-based retail company that followed the recommendation of its customer demand prediction AI and automatically increased prices in an area with no competitors. Customers took their anger to the media and the company suffered severe backlash. Nicolas' advice here is very relevant: a manager could indeed have anticipated the contradiction and provided oversight in the deployment of the solution to prevent the failure.
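The kind of human oversight Nicolas describes can be made concrete in code. The sketch below is my own minimal illustration (all names and thresholds are assumptions, not the retailer's actual system): a guardrail that refuses to auto-apply an AI pricing recommendation when it would raise prices in an area with no competitors, and caps and escalates any unusually large increase.

```python
# Hypothetical sketch of a human-in-the-loop pricing guardrail.
# Names, fields and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PriceRecommendation:
    product_id: str
    current_price: float
    suggested_price: float
    competitors_in_area: int

def apply_with_oversight(rec: PriceRecommendation,
                         max_increase_pct: float = 10.0) -> tuple[float, bool]:
    """Return (price_to_apply, needs_human_review).

    Any increase in an area with no competitors, or any increase beyond
    the cap, is blocked or capped and escalated instead of auto-applied.
    """
    increase_pct = ((rec.suggested_price - rec.current_price)
                    / rec.current_price * 100)
    if rec.competitors_in_area == 0 and increase_pct > 0:
        # Monopoly situation: never auto-raise prices; escalate to a manager.
        return rec.current_price, True
    if increase_pct > max_increase_pct:
        # Cap the change and flag it for review.
        capped = rec.current_price * (1 + max_increase_pct / 100)
        return round(capped, 2), True
    return rec.suggested_price, False

rec = PriceRecommendation("sku-42", current_price=10.0,
                          suggested_price=15.0, competitors_in_area=0)
price, review = apply_with_oversight(rec)
# The AI's 50% hike in a no-competition area is blocked: the price stays
# at 10.0 and the case is routed to a human reviewer.
```

The point is not the specific thresholds but the pattern: automated recommendations pass through an explicit policy layer that encodes what the organization considers acceptable, with a human in the loop for the edge cases.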

AI and Knowledge

In his presentation, Guillaume also highlighted that with AI, knowledge is power. A former Microsoft researcher and YouTube engineer, he focused on the long-term impact of process optimization. Guillaume took the case of YouTube's recommender system, an AI that suggests videos to users based on their (inferred) preferences. This system appears to create filter bubbles by trying to maximize watch time: depending on users' characteristics, the algorithm suggests more and more extreme videos to keep them on the platform. Guillaume presented the cases of flat-earth videos, political conspiracy videos and videos degrading the image of women, all reinforcing users' stereotypes. He argues that these feedback loops directly reinforce filter bubbles and fake-news mechanisms. Interestingly, YouTube has recognized the issue since February but has not presented a solution. These insights can be generalized not only to any recommender system being deployed but to any company trying to optimize internal or external processes: a lack of oversight of short-term optimization is likely to degrade the quality of the optimized process in the long term.

Taken together, Nicolas's and Guillaume's presentations are complementary. The resources to keep AI from going wrong are right before our eyes, but putting them to use requires leadership and training opportunities from companies.

What this brings to the fore is the need for companies to become more aware of the trade-offs between business objectives and responsibility that they make on a regular basis. Some trade-offs are big: should I stop deploying a recommender system that contributes to fake news but is also key to my business? Others are smaller: what does it cost to train my employees to identify and mitigate algorithmic bias? Should I hire an ethicist on my team?

These trade-offs can be addressed not only with critical thinking but also with tools. Many tools already exist to help decision-makers protect their business while adopting AI. The think tank Impact AI, for instance, has released a Responsible AI open library (a "boîte à outils", or toolbox) that provides tools for identifying and addressing data biases, training employees, and keeping the discussion going.
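To give a flavor of what such bias-identification tools do, here is a generic sketch of one of the most common checks, the "disparate impact" ratio, which compares favorable-outcome rates between groups. This is my own minimal illustration, not code from the Impact AI library; the data is hypothetical.

```python
# Generic disparate-impact check on model decisions, grouped by a
# protected attribute. Illustrative only; data and names are made up.

def disparate_impact(outcomes: list[tuple[str, bool]],
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    A common rule of thumb flags ratios below 0.8 as potential adverse impact.
    """
    def rate(group: str) -> float:
        decisions = [ok for g, ok in outcomes if g == group]
        return sum(decisions) / len(decisions)
    return rate(protected) / rate(reference)

# Hypothetical hiring-model decisions: (group, was_shortlisted)
decisions = ([("A", True)] * 50 + [("A", False)] * 50
             + [("B", True)] * 30 + [("B", False)] * 70)

ratio = disparate_impact(decisions, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.50 = 0.60 -> flag
```

A check like this takes minutes to run, which is precisely the point of the "smaller trade-offs" above: basic auditing tooling is cheap relative to the reputational cost of deploying a biased system.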

In the booming industry that is AI, it seems key to be equipped not only with technical skills but also with critical thinking. Successfully navigating AI's waters will certainly require both.

Paul-Marie Carfantan is a master's student in the Sustainability and Social Innovation MSc at HEC Paris. Passionate about the impacts of AI and data-intensive systems, he already consults for companies around the world on AI ethics. Feel free to connect with the author on LinkedIn to discuss this article further.