
Illustration by Kim Roselier

AI Technology: On a Razor's Edge?


As artificial intelligence reshapes industries, society grapples with its profound impact on creativity, governance, healthcare, misinformation, and the future of work. From the geopolitical race for digital sovereignty to creative industries' struggle with generative AI, these perspectives provide a nuanced understanding of how AI is both an opportunity and a challenge, requiring careful regulation, ethical considerations, and strategic adaptation across sectors. Part of the academic work featured in this dossier is funded by the Hi! PARIS Center, which is co-founded by HEC Paris and IP Paris, and supported by the French government as one of the nine projects chosen for the "Clusters IA".

Structure

Part 1
AI Accuracy Isn’t Enough: Fairness Now Also Matters
HEC Paris professor Christophe Pérignon reveals why AI models must be tested for fairness, stability, and interpretability, not just predictive performance.
Part 2
AI Sovereignty and the Geopolitics of Submarine Cables
HEC Paris researchers reveal how AI geopolitics is turning undersea cables into critical fault lines in a fragmented world economy.
Part 3
Could AI Trigger the Next Financial Crisis?
HEC Paris research reveals how AI transforms financial forecasting and trading, but may also magnify systemic risk and market fragility.
Part 4
The Rise of Alternative Data and Startups in Finance
A leading figure in financial technology, Claire Calmejane (M.06) highlights Professor Foucault’s research on AI and alternative data in finance and showcases fintech startups tackling major industry challenges.
Part 5
Rethinking AI Ethics Beyond Compliance
As AI reshapes industry and society, ethical oversight lags behind. Seven of HEC’s top scholars provide some answers.
Part 6
In AI We Trust?
New research by HEC Paris faculty reveals a paradox: people trust AI advice more than human guidance, even when they know it’s flawed. 
Part 7
How VINCI Aligns AI with Decentralized Innovation
HEC Paris case study shows how VINCI's innovation hub bridges generative AI tools with its decentralized structure to scale employee learning. 
Part 8
AI Is Redefining Human Creativity at Work
HEC Paris research by Daria Morozova (H.23) reveals how psychological responses to AI shape creative effort, identity, and collaboration in the future workplace. Interview.
Part 9
AI Is Reshaping the Creative Economy
From copyright to competition, HEC Paris research reveals how AI tools are transforming - and polarizing - the creative sector.
Part 10
How Machine Learning Breaks the Habit Myth
Using machine learning, professor Anastasia Buyalskaya reveals how long it takes to form a new habit. Her research could reshape how businesses build lasting behavior change. 
Part 11
Saving Lives in Intensive Care Thanks to AI
A widely used AI mathematical model could reduce ICU mortality by 20% by alerting doctors before patients' conditions worsen.
Part 12
The Promise of AI in HR Undermined by Flawed Data and Fragile Trust
Poor data quality, bias, weak oversight, and shallow applications undermine AI’s impact in recruitment, training, and engagement, as revealed by research from HEC Paris.
Part 13
New Kid on the Block Speeds Up HR Recruitment
Established in 2024, the startup Bluco offers businesses a solution that simplifies and accelerates the recruitment process. Co-founder Nicolò Magnante (H.24) explains.
Part 14
The AI Divide Is Human, Not Technological
HEC Paris research reveals that the technology’s biggest limitation isn’t technical, but human: who benefits, and who gets left behind.
Part 15
How AI Imaginaries Align and Divide Innovators
Shared AI visions spark innovation, but misaligned hopes risk division. In his thesis, Pedro Gomes Lopes of IP Paris explains how managing these imaginaries is key to ecosystem success.
Part 16
AI Startups Succeed Faster with Research-Led Mentoring
A structured mentoring model helps AI innovators go from research breakthroughs to real-world impact, and rethink what responsible AI leadership means. 
Part 17
Inside CDL: How Mentorship Fuels Startup Success
What drives top business leaders to help startups in the Creative Destruction Lab? We talk to two in the program’s AI stream.
Part 18
AI Advances Supply Chain Sustainability Goals
Predictive algorithms reroute ships to avoid whales. AI tools scan for compliance risks in global supplier networks. In this interview, HEC professor Sam Aflaki shows this is real. 
Part 19
AI Must Be Governed Democratically to Preserve Our Future
HEC Paris professor Yann Algan provides some answers on how societies can confront the political risks and civic promises of artificial intelligence.
Part 20
Fake News Spreads Fast - Platforms Must Act Faster
HEC Paris professor David Restrepo Amariles proposes answers to help platforms and lawmakers rethink misinformation regulation now.
Part 21
Making AI Understandable for Managers
How Vincent Fraitot’s latest book demystifies data science for decision-makers.
Part 22
Students Massively Adopt AI Tools for Schoolwork
HEC Paris students are rapidly adopting generative AI in their academic work, prompting urgent reflections on ethics, pedagogy, and institutional strategy.
Part 23
Gaming the Machine: How Strategic Interactions Shape AI Outcomes
Can algorithms be fooled by the very people they’re meant to assess? HEC Paris researcher Atulya Jain provides some answers. 
Part 1

AI Accuracy Isn’t Enough: Fairness Now Also Matters

Artificial Intelligence

HEC Paris professor Christophe Pérignon reveals why AI models must be tested for fairness, stability, and interpretability, not just predictive performance.


Photo Credits: peshkova on 123rf

Key findings & research contributions:

  • Development of statistical tools to assess fairness, interpretability, frugality, and stability in AI models.
  • Methodologies to identify biases and their causes, and improve fairness without compromising predictive power.
  • Alignment with legal frameworks like the Equal Credit Opportunity Act (ECOA) in the US and the AI Act in Europe.
     

Ever since the arrival of the AI boom, this technology has been revolutionizing industries by enabling both sophisticated decision-making systems at scale and remarkably accurate forecasting. But does it safeguard companies from possible discriminatory practices and legal exposure? Does AI accuracy integrate the challenges of transparency and fairness? Does it build trust? These are some of the questions we have been exploring for the past four years by monitoring how banks and other companies use AI in their daily business transactions. Our conclusions feed into both HEC’s teaching and partner companies’ practices.

Seeing AI’s Impact... and its Limits 

No one doubts the remarkable progress AI is offering in all fields of economy and business. For instance, in credit markets, its algorithms process vast amounts of data to distinguish between creditworthy and high-risk borrowers, reducing defaults and enhancing lender profitability. 

In auditing, AI red-flags suspicious transactions and fraudulent activities that require human scrutiny. 

In the insurance industry, automated claim management systems are used to speed up reimbursements without resorting to costly additional human expertise. 

On e-commerce platforms, AI is used to change prices dynamically and make suggestions to customers, maximizing sellers’ profits. 

In hiring, AI is employed to screen a large number of CVs, detecting promising applicants and matching their skills to a given job. 

In marketing, churn models are employed to quickly detect customers who are likely to stop doing business with the company. This gives the company enough time to act preemptively to retain them.

All these examples show how multiple metrics can be used to check whether the AI delivers on its promises. Those metrics focus on the accuracy of the predictions and can be measured either in statistical terms (average error) or in dollar terms (profit-and-loss). 
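To make that distinction concrete, here is a minimal sketch in Python that scores the same default predictions both ways; the loan data, margin and loss amounts are invented for illustration and do not come from the article:

# Minimal sketch with invented numbers: score the same predictions two ways.
import numpy as np
y_true = np.array([0, 0, 1, 0, 1, 0])               # 1 = borrower defaulted
y_prob = np.array([0.1, 0.2, 0.7, 0.4, 0.3, 0.15])  # model's predicted default risk
average_error = np.mean(np.abs(y_true - y_prob))     # statistical accuracy
accepted = y_prob < 0.5                              # lend when predicted risk is low
profit_and_loss = (accepted & (y_true == 0)).sum() * 300 - (accepted & (y_true == 1)).sum() * 1000
print(f"average error: {average_error:.2f}, P&L: {profit_and_loss} dollars")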

Going Beyond Accuracy: a Must

However, our research suggests that in many applications we need to go beyond accuracy. For instance, when automatically assessing the creditworthiness of loan applicants, credit scoring models can place groups of individuals sharing a protected attribute, such as gender, age, or racial origin, at a systematic disadvantage in terms of access to credit. 

One of the best-known illustrations occurred when software developer David Heinemeier Hansson revealed how Apple’s credit card offered him much better terms than his wife - despite identical wealth and income: “The @AppleCard is such a fucking sexist program,” he wrote on Twitter. “Apple’s black box algorithm thinks I deserve 20x the credit limit she does. No appeals worked”. 

Clearly, testing the fairness of an AI is both a societal and a business imperative. Beyond the ethical issues, it spares companies the reputational damage that would follow if their use of AI proved discriminatory – along with all the legal implications this could have.

In other applications, the extra dimension that companies need to heed is the interpretability of the model. Complex machine learning models often act as “black boxes,” making it difficult to explain their decisions. Business leaders need models that are interpretable—able to provide clear and actionable insights on the how-and-why decisions are being made. This transparency also builds trust amongst stakeholders and helps comply with regulations.

A third dimension is the frugality of AI: efficient AI models that achieve high performance with minimal computational resources are particularly appealing to businesses. Frugal models reduce costs, enhance scalability, and minimize environmental impact, making them a practical choice for companies aiming to balance innovation with resource constraints. The example of DeepSeek’s breakthrough in January 2025 is a dramatic illustration of this.

Finally, recent research at HEC, "Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences" (by Vassilis Digalakis and coauthors from Harvard and MIT) underlines how stability plays an important role. AI models should produce consistent outputs even as new data become available or hyperparameters are adjusted. Instability can lead to unpredictable performance, undermining trust and acceptability of the systems. 

In medical applications, stability is particularly critical when physicians rely on AI for disease detection or treatment recommendations. The internal mechanisms of the model must be both understandable and stable to ensure reliability. Otherwise, medical professionals will be reluctant to integrate AI-generated insights into their clinical decision-making.
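As a rough illustration of that idea, the sketch below uses synthetic data and a generic scikit-learn model (it is not the method of the cited paper) to measure one simple notion of stability: the share of decisions that flip on a fixed evaluation set after the model is retrained on newly arrived data.

# Hedged sketch: synthetic data and a generic model, not the cited paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
X_new, y_new = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
X_eval = rng.normal(size=(200, 5))                  # fixed evaluation set
model_v1 = LogisticRegression().fit(X_old, y_old)
model_v2 = LogisticRegression().fit(np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))
flip_rate = np.mean(model_v1.predict(X_eval) != model_v2.predict(X_eval))
print(f"share of decisions that change after retraining: {flip_rate:.1%}")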

Shooting for a Fair AI Model

My team and I have made it an ongoing research priority to provide companies the tools to monitor these extra dimensions. We do so by developing statistical tests to assess the score of a particular model on one or several of these dimensions. Alternatively, we design AI tools that natively display the properties required by the company to implement platforms answering questions such as fairness and transparency. 

In a recent paper, "The Fairness of Credit Scoring Models," published in 2024 in Management Science, we answer the following three questions. How can we know whether a credit scoring model is unfair to groups of individuals that society would like to protect? If the model is shown to be unfair, how can we find the causes? Finally, how can we boost the fairness of this model while maintaining a high level of predictive performance? A further, non-negligible part of our work is a methodology that fits harmoniously with the current legal frameworks ensuring fairness in lending on both sides of the Atlantic. This includes the Equal Credit Opportunity Act (ECOA) in the US and the AI Act in Europe.
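For illustration only, here is a minimal sketch of the kind of fairness check this research agenda addresses; it uses made-up approval decisions and a generic two-proportion z-test, not the statistical tests developed by the authors:

# Hedged sketch: made-up data and a generic two-proportion z-test, not the authors' tests.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0])   # credit model's decisions
group = np.array(list("AAAAAABBBBBB"))                       # protected attribute
counts = [approved[group == g].sum() for g in ("A", "B")]    # approvals per group
nobs = [(group == g).sum() for g in ("A", "B")]              # applicants per group
z_stat, p_value = proportions_ztest(counts, nobs)
print("approval rates:", counts[0] / nobs[0], counts[1] / nobs[1])
print("p-value for equal treatment:", round(p_value, 3))     # a small value flags a disparity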

 

We developed statistical tests to assess the score of a particular AI model on fairness, interpretability, frugality or stability.

 

In another article, "Measuring the Driving Forces of Predictive Performance: Application to Credit Scoring," just out in January 2025, we show how to identify the driving forces behind the performance of any black-box machine learning model. This is of primary importance in credit scoring for banking supervisors, as they need to understand why a given model works or not and, when it does, for which borrowers. We have successfully applied our methodology to a novel dataset of auto loans provided by an international bank, demonstrating the usefulness of this method.

From Research Lab to Classroom… to Big Business!

Research is about pushing the boundaries of knowledge. This production of knowledge allows HEC to fuel its course programs with original, state-of-the-art content. For over four years, I have been teaching a course entitled Fairness and Interpretability in the Master in Data Science and AI for Business. 

This joint program between HEC Paris and Ecole Polytechnique integrates many of the tools and conclusions from our research. The technically challenging course introduces students to the latest techniques in fairness and interpretability. It combines advanced research methods with a business focus that is not common in courses typically offered by business schools. And it corresponds well to HEC’s “Teach, Think, Act” DNA.

A number of the statistical tools we develop to test the fairness of AI models are currently being used by several large French banks. They help these institutions comply with the EU’s AI Act, which entered into force on August 1, 2024. 

The new European regulation classifies algorithmic lending as a high-risk AI application. This academic-industry partnership goes beyond simple proofs of concept: some of our methods are now used in production at scale by the banks. And it is far from a one-way street: experts from the banks challenge the researchers and provide feedback, ideas and access to real data. 

Such exchanges are then relayed to our HEC students, creating a virtuous triangle. We are thus enjoying a win-win collaboration that is set to grow.

The Fairness of Credit Scoring Models, by Christophe Pérignon of HEC Paris and Christophe Hurlin and Sébastien Saurin of the University of Orleans, was published in Management Science in November 2024; Measuring the Driving Forces of Predictive Performance: Application to Credit Scoring was published on arXiv in January 2025. These papers are part of a research agenda on identifying and addressing fairness, interpretability, frugality, and stability in AI models.
Part 2

AI Sovereignty and the Geopolitics of Submarine Cables

Geopolitics

HEC Paris researchers reveal how AI geopolitics is turning undersea cables into critical fault lines in a fragmented world economy.


Photo Credits: moxumbic/123rf

As the AI arms race ramps up tensions across the globe, the “geopolitical honeymoon” of the early days of the Internet is over, according to Jeremy Ghez and Olivier Chatain. In their ongoing research, funded by the HEC Foundation, the two HEC professors warn that this AI-driven “industrial revolution” could end in a “very messy separation” between the US and China, centered on submarine cables. As the “great divorce” gathers speed, how can businesses adapt and leverage this new infrastructure?

From Shared Networks to Sovereign Ambitions

They’re no thicker than a garden hose – albeit a hose encased in galvanized armor and clad in a polyethylene jacket. And yet these bundles of optical fibers buried beneath the ocean floor carry up to 95% of all the data that zips around the globe. In all, there are 700,000 nautical miles of undersea cables. 

Even hidden from sight, there's no escaping Google and Meta (and, to a much lesser extent, Microsoft and Amazon), the heaviest investors in submarine cables. As for Elon Musk, his Starlink satellites simply can’t match the massive amounts of data that cable can handle...

But how do these fiber optic cables stand up to the geopolitics of our planet? For ours is a tumultuous age. The trade showdown between the US and China, the ongoing wars in Sudan and Ukraine, the Israeli invasion of Gaza and the revolution in Syria are all stoking international divisions. And, looming over all this is the shadow of climate change. 

 

In the past, technology united us, and interdependence was a source of stability.

 

These geopolitical frictions are shaping the AI revolution. “In the past, technology united us, and interdependence was a source of stability”, explains Jeremy Ghez, the academic director of the HEC Center for Geopolitics, “but now these geopolitical rivalries have become technological rivalries”. Governments worldwide are being driven by what Olivier Chatain, professor of competitive strategy at HEC, calls “political calculus”. They are all bidding to be masters of their digital destiny. AI sovereignty is the name of this new game. 

Some entities – the US and the EU, for example – have already rolled out their own AI regulations and legislation to ringfence their national interests. If the trend is for control, the potential upshot is “global fragmentation, a patchwork of markets with little consistency,” concedes Ghez.

Submarine Kingpins of the Global Economy 

But let’s come back to our underwater cables. Alongside profit-driven companies and the global telecom carriers, state actors rely on undersea networks for their communications. Although governments rarely own the infrastructure, they sometimes control routes indirectly via state-owned telecom operators. “In fact, the competition between rival states is fiercer than the competition between rival firms,” claims Chatain.

Wrapped up in these undersea cables is a startling paradox: this “deep tech” is highly vulnerable. Sharks have been known to take a bite, ship anchors are a menace, and earthquakes pose a problem. Redundancy and swift repairs normally mitigate the impact of a single cut, but a series of incidents can combine to create prolonged disruptions, as happened in South Africa in March 2024. 

More significantly, there is the threat of covert attacks stemming from the geopolitical tensions among Washington, Beijing, Russia and the EU. A series of cable cuts in the Baltic and Red Seas in late 2024 and around Taiwan in early 2025 raised eyebrows and have been investigated as instances of hostile actions below the threshold of war.

Governments monitor where new cables are installed. In 2020, the White House vetoed a planned subsea data cable between the US and Asia owned by Google and Meta and subsequently imposed a re-routing away from China out of concern over communication interception. Google is now developing networks in the Pacific and Indian Oceans in an apparent bid to avoid the South China Sea. 

In late 2024, France took the decision to nationalize a strategic submarine network manufacturer and installer (Alcatel Submarine Networks) to prevent this “national security asset” from falling into the wrong hands.

Double-edged Sword Beneath the Waves

This technological revolution is generating countless opportunities for a myriad of actors. “But not necessarily good actors,” points out Chatain. “What would happen if cables are sabotaged in a coordinated way?” he asks. There would be little – or no – connectivity: national security communications would be compromised, financial transactions would grind to a halt, and critical services would hang in the balance. 

Interestingly, actors like Google, by investing massively in cable networks in out-of-the-way locations, build redundancy and make the overall network more robust.

Both Chatain and Ghez agree it’s time to start thinking outside the (business) box. This new AI landscape is dominated not just by inter-firm rivalry, but also by rivalry – and conflict – between nations. As for civil society, it needs to ask itself if it wants to be a bystander, a pawn or an actor.

Article based on an interview with Olivier Chatain, professor of competitive strategy at HEC, and Jeremy Ghez, academic director of the HEC Center for Geopolitics. This ongoing research is funded by the HEC Foundation.
Part 3

Could AI Trigger the Next Financial Crisis?

Video

HEC Paris research reveals how AI transforms financial forecasting and trading, but may also magnify systemic risk and market fragility.


Could AI trigger the next financial crisis, as recently suggested by Gary Gensler, chairman of the U.S. Securities and Exchange Commission? In a live masterclass in November 2024, HEC Finance Professor Thierry Foucault presented the risks that big data, alternative data and algorithms pose for financial markets as they transform investment and trading decisions. Highlights.

What Links Are There Between AI and Financial Markets?

If we’re going to start thinking about whether AI could trigger the next financial crisis, we first need to think a little bit more about what artificial intelligence has to do with financial markets in the first place. 

Why is this technology so important for finance? Firstly, artificial intelligence must be considered a technology to extract information from a vast amount of data and transform this information into predictions and decisions. In the context of financial markets, this means predictions about stock returns or corporate earnings, for instance. 

People often compare big data to oil. Well, big data is the raw material, and you need a technology to transform it into something that can be useful for decision-making, among other things. That's exactly what artificial intelligence does with data. In other words, it’s a set of powerful algorithms (so-called machine learning algorithms) processing a large amount of data and extracting predictions about the future from this data.

 

AI must be considered a technology to extract information from a vast amount of data and transform this information into predictions and decisions. In the context of financial markets, this means predictions about stock returns or corporate earnings.

 

Why is this important for finance? Because a core function of the financial industry is to produce information. Take the work of a security analyst, for example. The role of a security analyst is to use financial statement and balance sheet information to forecast future corporate earnings, make predictions about where the stock price of a firm is going to go and so forth. That's a prediction problem.

Let me share other examples. Think about a credit rating agency: they have to give a rating to corporate borrowers that reflects the credit risk of a firm. This requires predicting the default risk of the firm. This is again a prediction exercise. Think about a bank that has to decide whether or not to make a loan to a borrower. Part of the decision-making process is to assess the default risk of the borrower, another prediction problem. 

In sum, many of the activities of the financial industry involve predictions and forecasting. Given that, it's not very surprising that artificial intelligence is having a major impact on the industry. It’s a technology that is changing the way people forecast and make predictions about the future.
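As a purely illustrative sketch of what such a prediction problem looks like in code, the example below fits a default-risk classifier on synthetic borrower data with a generic gradient-boosting model; the features and data are invented and do not come from the masterclass.

# Illustrative sketch: synthetic borrower data, generic model, not from the masterclass.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
rng = np.random.default_rng(42)
n = 2000
leverage = rng.normal(3.0, 1.0, n)                   # invented balance-sheet features
margin = rng.normal(0.10, 0.05, n)
X = np.column_stack([leverage, margin])
# Synthetic ground truth: higher leverage and lower margin raise default risk.
p_default = 1 / (1 + np.exp(-(1.5 * (leverage - 3.0) - 20 * (margin - 0.10))))
y = rng.binomial(1, p_default)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("predicted default probabilities:", model.predict_proba(X_test[:3])[:, 1].round(2))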

How Could AI Displace Traditional Roles in Sectors Like Asset Management?

AI is revolutionizing the asset management industry, reshaping roles traditionally held by human fund managers. As algorithms and machine learning models prove increasingly adept at analyzing vast datasets, the role of human intuition in investment decisions is diminishing. Quants - data scientists who design predictive models relying on vast amounts of data and machine learning algorithms - are becoming the key players, supplanting the more traditional approach relying on expertise and judgmental analysis. This shift is not just technological; it is cultural, fundamentally altering the skills that asset managers need to thrive.

One illustrative example is the use of alternative data - such as satellite images of parking lots to predict retailers’ performance, or messages posted on social media. Many firms are now selling such data to asset managers. As a result, active fund managers who rely on conventional approaches are finding themselves at a disadvantage. Alternative data combined with tools from AI can identify trends and anomalies faster and more accurately than humans ever could, creating a competitive edge for those who adopt these tools early.

Bridging the skill gap

However, this transition raises critical concerns about employment and the skill gap. Traditional asset managers may find their expertise devalued, while firms race to recruit those with advanced technical capabilities. 

The industry must grapple with the dual challenge of integrating these innovations while ensuring that its workforce is not left behind. Upskilling and adapting to the demands of this new era are no longer optional—they are imperative for survival.

What Risks Does AI Pose to Trading and Market Stability?

AI-driven trading is not without significant risks, particularly concerning market stability. As algorithms transition from rule-based systems to self-learning models, they become increasingly opaque - often referred to as 'black-box' models. 

This lack of transparency poses challenges in understanding how decisions are made, which can lead to unpredictable outcomes. A prime example is the potential for flash crashes, where algorithms amplify volatility and destabilize markets within moments.

Moreover, there is evidence that AI systems, when interacting with one another, may inadvertently collude or behave in ways that distort market competition. In simulated experiments, algorithms have been shown to arrive at strategies that resemble price-fixing without explicit human programming. This kind of emergent behavior raises ethical and regulatory questions, as it can undermine the fairness and integrity of financial markets.

Algorithmic interdependence and the risk of market fragility

Another critical concern is market fragility. The interconnectedness of algorithmic trading systems means that a failure in one can cascade, causing widespread disruption. Regulators and market participants must remain vigilant, balancing the efficiencies AI brings with the systemic risks it could introduce. 

Mitigating these risks will require robust oversight, transparency, and an evolving regulatory framework capable of keeping pace with these technological advancements.

Why Is AI an Asset for Short-Term Predictions, but Not Long-Term Ones?

AI excels in short-term predictions because of its unparalleled ability to process massive amounts of data in real-time and identify patterns that are often invisible to human analysts. Markets generate an incredible volume of information every second - from prices and volumes to alternative data sources like social media sentiment or satellite imagery. AI algorithms are designed to digest this complexity and extract actionable insights almost instantaneously. This is particularly valuable in high-frequency trading and other strategies where speed and precision are critical.

However, the same strengths that make AI powerful in the short term become limitations in the long term. Long-term predictions require not just data but also a nuanced understanding of structural changes in the economy, geopolitical shifts, and social trends. 

These factors are inherently harder to quantify and often evolve in unpredictable ways. AI models, by design, are optimized for pattern recognition within the existing dataset, making them less effective at extrapolating beyond known variables or adapting to novel scenarios.

The hidden dangers of short-termism in AI-driven markets

The reliance on AI for short-term gains can also create systemic risks. By prioritizing immediate returns, market participants may overlook long-term value creation and stability. This short-term focus could exacerbate market volatility, as algorithms chase trends and amplify fluctuations. 

To counterbalance this, firms and regulators need to ensure that the pursuit of short-term efficiencies does not undermine market integrity or market participants’ ability to forecast the payoffs of long-term investment projects and to value the benefits and costs of such projects properly.
 

Article based on a masterclass on Thierry Foucault's article, “Does Alternative Data Improve Financial Forecasting? The Horizon Effect” (The Journal of Finance, June 2024), co-written with Olivier Dessaint and Laurent Fresard.
Thierry Foucault
HEC Foundation Chaired Professor
Part 4

The Rise of Alternative Data and Startups in Finance


A leading figure in financial technology, Claire Calmejane (M.06) highlights Professor Foucault’s research on AI and alternative data in finance and showcases fintech startups tackling major industry challenges.


Claire Calmejane

Short-term gains, long-term losses: the tradeoff in alternative data use

The financial data market, valued at $42 billion, and the alternative data market, which is growing to nearly $10 billion, highlight the increasing role of information in financial markets. Thierry Foucault’s research, “Does Alternative Data Improve Financial Forecasting? The Horizon Effect”, published in The Journal of Finance in 2024, sheds light on the impact of a growing alternative data market.

It demonstrates that, while this market enhances short-term forecasting accuracy, it also diverts attention from long-term projections, thus undermining analysts’ ability to assess strategic value over time. Foucault argues that alternative data’s immediate availability and real-time nature improve short-term stock forecasts. 

However, its overwhelming volume can lead to information overload, diminishing analysts’ capacity for in-depth, long-term analysis. This imbalance has profound implications, including misaligned stock valuations and reduced incentives for firms to pursue long-term investments.

Technological innovations offer potential solutions

Fortunately, there are emerging technological innovations that offer potential solutions to the problems identified in Foucault’s research. Synthetic data and alternative data sources can improve data quality issues, while large language models (LLMs), generative adversarial networks (GAN) and reinforcement learning provide tools to analyze complex relationships in financial markets. By integrating synthetic data, reinforcement learning, and LLMs into financial systems, institutions can bridge the gap between short-term performance and long-term reliability.

Fortunately, there are emerging technological innovations that offer potential solutions to the problems identified in Foucault’s research. 

 

Challenges persist

Yet, challenges persist. Simplistic applications of models fail to address weak signals, and neural networks struggle with unstructured or dynamic data. To counter these limitations, specialized datasets tailored to capital markets, advanced training methodologies, and modular, scalable systems are necessary. These approaches can better capture hidden relationships and adapt to rapid market changes.

 

Start-ups could pave the way for the future of finance

HEC has the good fortune to benefit from the Creative Destruction Lab (CDL) programs. As a CDL mentor, I can identify promising startups offering solutions, such as Lemon AI, Synthera AI and Revaisor. 

  • Lemon AI is exploring advancements in synthetic data to curate high-integrity datasets and bridge short-term and long-term data analyses.
  • Synthera AI develops proprietary AI models to generate synthetic financial market data, enabling investors to simulate and analyze complex market dynamics.
  • And Revaisor is addressing compliance and transparency challenges as part of governance. 

As with any innovation, the initial outcomes may not be perfect. However, at HEC, the ability to foster collaborations between carefully selected startups and researchers is a key differentiator. This helps financial professionals embrace innovation while maintaining vigilance to safeguard long-term strategic decisions. The research conducted by Professor Foucault can also enhance these safeguards.

 

Between 2020 and 2022, Claire Calmejane was named one of the 100 most influential women in finance in Europe by Financial News. She began her career in 2006 in Capgemini Consulting's Technology Transformation department before joining Lloyds Banking Group in 2012, where she led digital delivery and risk transformation. As Innovation Director at Société Générale, she then drove Digital P&L, AI and fintech investments through SG Ventures. 

Interview with Claire Calmejane based on research by HEC Professor Thierry Foucault, “Does Alternative Data Improve Financial Forecasting? The Horizon Effect” (The Journal of Finance, June 2024), co-written with Olivier Dessaint and Laurent Fresard. 
Part 5

Rethinking AI Ethics Beyond Compliance

Artificial Intelligence

As AI reshapes industry and society, ethical oversight lags behind. Seven of HEC’s top scholars provide some answers.


Hi! PARIS Center’s Meet Up roundtable on “AI, Ethics & Regulations”, with, from left to right, speakers David Restrepo Amariles (HEC Paris), Thomas Le Goff and Tiphaine Viard of Télécom Paris, and moderator Anne Laure Sellier (HEC Paris), October 17, 2024. Photo: Ciprian Olteanu - Waverline.

Ethical AI is not merely a technical challenge; it is a societal imperative. As artificial intelligence advances at breakneck speed, concerns around equity, responsibility, and governance are rising just as quickly. Yet laws remain ambiguous, and corporate strategies are often reactive rather than principled.

At the Hi! PARIS Center’s roundtables on AI, Ethics & Regulations, HEC Paris faculty and research fellows from across disciplines are helping define what meaningful ethics in AI actually looks like. Their insights span liability, labor, advertising, platform power, and the systemic inequalities embedded in AI development—pushing us to see ethics not as a compliance checkbox, but as a foundation for trust and inclusion. 

What are the risks posed by unregulated adoption?  

To build trustworthy AI policies, businesses must address three key challenges:
•    Understanding AI’s rapid evolution
•    Managing premature AI deployments
•    Adapting to new AI labor dynamics

To do so, companies need to better anticipate the swift advancement of AI technologies and align their strategies accordingly, which is crucial for competitiveness. They should also ensure AI solutions are thoroughly tested and refined before implementation, instead of releasing underdeveloped AI tools that can lead to operational inefficiencies.

Finally, the shift from in-house development to integrating external AI providers requires new skills and governance frameworks.

David Restrepo Amariles, HEC Associate Professor of Artificial Intelligence and Law,
Fellow at Hi! PARIS, and member of the Royal Academy of Sciences, Letters, and Fine Arts of Belgium.

 

Who is liable when AI harms?

The EU’s revised Product Liability Directive (PLD) partially addresses AI-related harms, such as personal injury, property damage, or data loss. It allows claims for certain types of software defects.

However, abuses like discrimination, violations of personality rights, or breaches of fundamental rights fall outside the PLD’s scope. EU law offers no remedy for individuals harmed by these AI abuses, leaving the matter to be regulated by individual Member States. This adds to the EU’s regulatory complexity.

Pablo Baquero, Assistant Professor in Law at HEC Paris and Fellow at the Hi! PARIS Center

 

Can advertising strike a balance between AI and privacy?

AI in advertising introduces multiple privacy risks, including:
•    The collection of vast amounts of personal data
•    Invasive profiling
•    Risks of unintentional discrimination

Because these processes are automated, users may not fully understand how their data is tracked or used, raising concerns about consent and transparency.

EU regulations such as the GDPR and the upcoming AI Act propose stricter data protection measures, increased transparency requirements, and accountability frameworks. These regulations aim to ensure fair and lawful practices, granting individuals greater control over their data while fostering trust in AI-driven advertising.

Klaus Miller, Assistant Professor in Marketing at HEC Paris

 

Are startups acting more ethically than large firms?

All firms face ethical issues, but larger firms are better resourced to deal with those issues. Yet, our research finds that many high-tech startups have ethical AI policies and take actions that support ethical behaviors despite no regulatory or legal requirements to do so.
Startups are more likely to adopt ethical AI policies when they have a data-sharing relationship with technology firms like Amazon, Google, or Microsoft.

Challenges remain, however. Ethical AI development policies may still lack the oversight boards or audits needed to ensure that employees actually follow the policy.

Michael Impink, Assistant Professor in Strategy and Business Policy at HEC Paris


On April 22, 2024 at a Hi! PARIS Meet Up on AI & Ethics hosted by Capgemini, Michael Impink presented his work to business representatives and researchers. Photo: Ciprian Olteanu - Waverline.

 

Why are digital platforms regulators?

Digital platforms function as private regulators in society by setting rules that impact broader markets and societal outcomes. For example, they shape social movements by mediating resource mobilization, potentially restricting collective action. Furthermore, digital platforms like Airbnb often fail to ensure compliance on external platforms, even though they enforce rules within their own ecosystems. As private regulators, platforms must adopt transparent practices that align with societal values.

Digital platforms function as private regulators in society by setting rules that impact broader markets and societal outcomes.

Madhulika Kaul completed her PhD at HEC and is now an Assistant Professor in Strategy at Bayes Business School, in London, England. Her research was funded by Hi! PARIS and received an HEC Foundation Award for the best thesis of 2024. 


On March 10, 2025 at the HEC Alumni office in Paris, Madhulika Kaul received the PhD Thesis Award from Johan Hombert, Associate Dean of the PhD program, and Claire Calmejane, Jury President and member of the HEC Foundation Research Committee. She was accompanied by her supervisor, Professor Olivier Chatain. Photo by HEC Paris.

 

How can game models tackle bias and manipulation?

Using a game theory model to analyze interactions between strategic agents and AI algorithms, researchers found that algorithms are often influenced by data inputs from self-interested individuals, which creates unreliable outcomes.

This research shows that ethical AI requires not only technical solutions but also a deeper understanding of human behavior and incentives.

Atulya Jain completed his PhD at HEC and is now a postdoctoral researcher at the Institute for Microeconomics at the University of Bonn, in Germany. His research was funded by Hi! PARIS.

 

How can we work for diversity and systemic change in AI governance?

Surveys show that women are 20% less likely than men to adopt tools like ChatGPT. Other surveys suggest these gaps could be the result of unmet training needs for women in the workplace and of workplace restrictions.

Surveys show that women are 20% less likely than men to adopt tools like ChatGPT.

Such discrepancies start young. Before AI is deployed in the public domain or the workplace, it needs to be designed by someone who can write code. Yet, across the European Union in 2023, twice as many men as women aged 16–24 knew how to program.

The social biases that create social barriers to entry for girls learning to code are thus built into the AI systems themselves. As the use of these AI systems proliferates across sectors at high speed, we run the risk of deploying a powerful tool that exacerbates inequalities.

In our view, weaving sector-specific and cultural perspectives into policy and system design is just as crucial as passing new regulations to ensure AI solutions are truly fair for everyone.

Marcelle Laliberté, HEC Chief Diversity, Equity, and Inclusion Officer. Marcelle Laliberté contributed to the UN’s 2024 AI governance report as a nominated expert, with the collective position paper “Governing AI’s Future: Indigenous Insights, DEI Imperatives, and Holistic Strategies – a cross disciplinary view,” co-written with HEC PhD graduate Claudia Schulz and Olivia Sackett, Fellow at Swinburne University of Technology, published in 2024.

 

Part 6

In AI We Trust?

Information Systems

New research by HEC Paris faculty reveals a paradox: people trust AI advice more than human guidance, even when they know it’s flawed. 

Photo Credits: Have a nice day on Adobe Stock

Key findings:

  • Algorithmic advice is preferred over human advice — even when the suggestions are identical.
  • People show high trust in AI despite known flaws, a behavioral bias the researchers term “algorithm appreciation.”
  • Too much transparency reduces trust: when users were given overly complex performance data, reliance on AI dropped.
  • AI-based advice can aid high-stakes decisions, but overconfidence and blind trust pose ethical risks.
  • Moderation is key: providing relevant, digestible information improves AI uptake without triggering cognitive overload. 

Would you take advice from an algorithm — even if you knew it made mistakes?

Surprisingly, most people would. In a series of behavioral experiments, HEC Paris Professors Cathy Liu Yang and Xitong Li, along with Sangseok You of Sungkyunkwan University, discovered that people tend to follow advice from AI more readily than from humans — even when the advice is identical, and even when the AI’s imperfections are made explicit.

This effect, known as “algorithm appreciation,” has powerful implications across domains — from consumer behavior to courtroom decisions, cancer diagnosis, and beyond. But it also raises critical questions about over-reliance, information overload, and accountability when things go wrong. 

Algorithms Now Guide Everyday and High-Stakes Decisions

Machine recommendations account for 80% of Netflix viewing decisions, while more than a third of purchase decisions on Amazon are influenced by algorithms. In other words, algorithms increasingly drive the daily decisions that people make in their lives.

It isn’t just consumer decision making that algorithms influence. As algorithms appear increasingly in different situations, people are using them more frequently to make more fundamental decisions. For example, recent field studies have shown that decision makers follow algorithmic advice when making business decisions or even providing medical diagnoses and releasing criminals on parole.

Why People Trust Machines Over Humans

People regularly seek the advice of others to make decisions. We turn to experts when we are not sure. This provides us with greater confidence in our choices. It is clear that AI increasingly supports real-life decision making. Algorithms are ever more intertwined with our everyday lives. What we wanted to find out is the extent to which people follow the advice offered by AI.

To investigate the matter, we conducted a series of experiments to evaluate the extent to which people follow AI advice. Our study showed that people are more likely to follow algorithmic advice than identical advice offered by a human advisor due to a higher trust in algorithms than in other humans. We call this phenomenon “algorithm appreciation”.

When Transparency Backfires

We wanted to find out more, to see if people would follow AI advice even if the AI is not perfect. Our second series of experiments focused on exploring under which conditions people might be either more likely or less likely to take advice from AI. We engineered experiments that tested whether people would have greater trust in algorithms even when they were aware of prediction errors with the underlying AI.

Surprisingly, when we informed participants in our study of the algorithm prediction errors, they still showed higher trust in the AI predictions than in the human ones. In short, people are generally more comfortable trusting AI than other humans to make decisions for them, regardless of known and understood imperfections in the process.

 

People are generally more comfortable trusting AI than other humans to make decisions for them, regardless of known and understood imperfections in the process, except when there is too much information about the algorithm and its performance.

 

There was an exception to this rule. We found that when transparency about the prediction performance of the AI became very complex, algorithmic appreciation declined. We believe this is because providing too much information about the algorithm and its performance can overwhelm a person with information (cognitive load), which impedes advice taking: people may discount predictions when presented with more underpinning detail than they are able or willing to internalize. However, if we do not overwhelm people with information about AI, they are more likely to rely on it.

The Risk of Overconfidence in Automation 

If algorithms can generally make better decisions than people, and people trust them, why not rely on them systematically? Our research raises potential issues of over-confidence in machine decision-making. In some cases, the consequences of a bad decision recommended by an algorithm are minor: If a person chooses a boring film on Netflix they can simply stop watching and try something else instead. However, for high-stakes decisions that an algorithm might get wrong, questions about accountability come into play for human decision-makers. Remember the miscarriage of justice in the UK Post Office, when more than 700 Post Office workers were wrongfully convicted of theft, fraud and false accounting between 2000 and 2014, because of a fault in a computer system.

AI in the Courtroom and the Clinic

However, our research has important implications for medical diagnosis. Algorithmic advice can help where there is patient data for examination. AI can estimate the likelihood that a patient has cancer - say, 60% or 80% - and the healthcare professional can include this information in decision-making processes about treatment. This can help avoid a patient’s higher level of risk being overlooked by a human, and it can lead to more effective treatment, with the potential for a better prognosis.

In wider society, algorithms can help judges in the court system make decisions that will drive a safer society. Judges can be given algorithmic predictions of the chance that a convicted criminal will reoffend, helping them decide on the length of a sentence.

Methodology

To explore how and why transparency about performance influences algorithm appreciation, we conducted five controlled behavioral experiments, each time recruiting more than 400 participants via Amazon's Mechanical Turk. Across the five experiments, participants were asked to predict a target student’s standardized math score based on nine pieces of information about the student, first on their own and then after being shown the algorithm’s prediction of that score.
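One common way to quantify advice taking in this kind of before/after design is the "weight of advice" measure sketched below. The numbers are invented, and this is a standard measure from the judge-advisor literature, offered here as an assumption rather than the exact analysis reported in the paper.

# Hedged sketch with invented numbers; a standard advice-taking measure, not necessarily the paper's exact analysis.
import numpy as np
initial = np.array([60.0, 75.0, 40.0])   # participant's own score prediction
advice = np.array([70.0, 70.0, 55.0])    # the algorithm's prediction shown to them
final = np.array([68.0, 72.0, 50.0])     # revised prediction after seeing the advice
woa = np.clip((final - initial) / (advice - initial), 0.0, 1.0)   # 0 = advice ignored, 1 = fully adopted
print("mean weight of advice:", woa.mean().round(2))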

Applications

Where firms need to make investment decisions, employees will trust AI to help inform those choices. With good data and solid, well-thought-out underlying algorithms, this has the potential to save businesses a lot of money.

Based on an interview with HEC Paris professors of Information Systems Cathy Liu Yang and Xitong Li on their paper “Algorithmic versus Human Advice: Does Presenting Prediction Performance Matter for Algorithm Appreciation,” co-written with Sangseok You, Assistant Professor of Information Systems at Sungkyunkwan University, South Korea, and published online in the Journal of Management Information Systems, 2022. This research work is partly supported by funding from the Hi! PARIS Fellowship and the French National Research Agency (ANR)'s Investissements d'Avenir LabEx Ecodec (Grant ANR-11-Labx-0047).
Part 7

How VINCI Aligns AI with Decentralized Innovation

Artificial Intelligence

HEC Paris case study shows how VINCI's innovation hub bridges generative AI tools with its decentralized structure to scale employee learning. 


Photo Credits: normaals/123rf

In a case study published by Harvard Business School Publishing in 2024, HEC researcher Aluna Wang examines VINCI’s AI transformation as a possible blueprint for other multinationals hoping to seamlessly integrate new generative AI technology into the workplace.

Turning a Construction Giant into an AI Pioneer 

How can major companies develop AI skills among their employees while maintaining a decentralized structure? VINCI, a world leader in concessions, energy, and construction, offers compelling insights through its innovation hub, Leonard, and its groundbreaking AI projects. In our case study, “Building Innovation at VINCI,” co-written with Harvard Business School Professor Dennis Campbell and HBS Research Associate Ana Carlota Moniz, we analyze the ways this French multinational seeks to answer the challenges AI poses to maintaining innovation, decentralization, and the upskilling of its employees. 

AI Innovation Begins at the Core of Operations 

We explore Leonard's innovation initiatives, including one of VINCI's most ambitious AI projects used in the building of The Link, the iconic new skyscraper in Paris La Defense. The project features SprinkIA, an AI-driven generative design tool that enhances sprinkler system calibration in blueprints. But its development raises important challenges and questions about deploying innovative technologies across a highly decentralized organization. Does VINCI’s decentralization allow the company to quickly deploy innovative new AI technologies across the group? Is the company’s business model optimal for encouraging further innovation? Indeed, how can Leonard promote innovation in such a large firm?

Leonard’s Model: Empowering People to Build Better 

VINCI's decentralized structure, comprising thousands of business units across multiple divisions, aims to encourage entrepreneurial innovation. Initiated in July 2017, Leonard serves as the group's platform for innovation and foresight, facilitating creation and experimentation in day-to-day activities. Bruno Daunay, AI Lead at Leonard, and François Lemaistre, Brand Director of Axians and sponsor of the group’s AI effort, have been working to boost awareness and adoption of new technologies through a carefully structured approach.

Leonard’s six-month framework combines rigorous technical training with practical application. "People were really curious and motivated because they loved their work, but they were sick of doing it inefficiently and wanted to change it," explains Daunay. Leonard has built a robust innovation ecosystem, including partnerships with academic institutions like the Hi! PARIS Center and the establishment of Centers of Excellence. The evolution of DIANE, from a single innovation project to a Center of Excellence for generative design, exemplifies this approach.

Balancing Core Principles with Technological Innovation

This innovation strategy aligns with VINCI's commitment to decentralization. As Xavier Huillard, VINCI's CEO, notes, "We are probably the most decentralized company in France. Maybe even in Europe!" His philosophy of "inverting the pyramid" prioritizes operational rather than management levels. This approach has yielded significant results - SprinkIA, for instance, can optimize a sprinkler system blueprint in just 11 minutes, a process that previously took over five days.

Looking ahead, Lemaistre offers an optimistic perspective: "For five years now, we've been explaining what AI is and what it isn't - it's not the robots taking over mankind, it's about improving everyone's jobs. We don't see AI as disruptive in our business model. We think it'll improve it. We want to keep opening doors we've never opened before." 

A Blueprint for Responsible AI Deployment 

As organizations across the globe grapple with AI integration, VINCI's approach through Leonard demonstrates how large companies can foster innovation while maintaining their core organizational principles.

Aluna Wang is Assistant Professor of Accounting at HEC Paris and a chairholder at the Hi! PARIS Center. Her case study, “Building Innovation at VINCI,” is co-written with Harvard Business School Professor Dennis Campbell and HBS Research Associate Ana Carlota Moniz. Learn more in "The AI Odyssey: History, Regulations, and Impact – A European Perspective," the Hi! PARIS Center MOOC featuring Aluna Wang's interviews with VINCI's François Lemaistre and Bruno Daunay, who share their perspectives on the integration of AI in their operations. 
Aluna Wang
Assistant Professor
Accounting and Management Control
Part 8

AI Is Redefining Human Creativity at Work

Human Resources

HEC Paris research by Daria Morozova (H.23) reveals how psychological responses to AI shape creative effort, identity, and collaboration in the future workplace. Interview.


©phonlamaiphoto on Adobe Stock

While many may think creativity will be a key human capacity as work processes become increasingly automated, AI applications are getting more creative as well: AI-painted portraits sell at exorbitant prices, pop hits feature AI-generated music, and mainstream media ‘hire’ AI journalists to publish thousands of articles a week. How workplace AI applications will impact creative processes, which are crucial for innovation and organizational development, is an important question that needs rigorous scientific attention.

Rethinking AI as a Social Actor in Creative Work

You began this research in 2019 as a doctoral student at HEC. Why? 
AI’s role in creativity remains understudied. On one hand, AI can accelerate innovation and broaden creative possibilities. On the other, obstacles like algorithmic appreciation and aversion can hinder adoption, trust, and even human self-image.

What is new in my work is to consider AI not only as a technological tool but as a social actor influencing human beliefs, behavior, and identity, and, hence, the collaboration of human and AI. This perspective is vital for organizations aiming to foster innovation without compromising human dignity and creativity. By understanding and shaping perceptions of AI, managers can facilitate more productive and fulfilling human-AI partnerships.

Mapping Research: From Behavior to Belief

How did you approach this topic? 
First, I examined how different attitudes toward AI (curiosity, caution, and skepticism) shape creative performance. My fellow researchers and I assessed how exposure to AI affects creative effort, comparing situations where humans collaborate with AI or simply witness its outcomes.

The second phase focused on understanding how AI’s authority - whether it gives orders or only suggestions - affects its usage in creative and non-creative tasks. In this experimental project, I analyzed people’s adherence to AI-generated recommendations, considering how they view AI’s expertise in relation to tasks of varying complexity.

Finally, I took a conceptual approach to defining “situated humanness” in AI-augmented environments. By examining how certain human qualities become more salient in response to AI’s involvement, I contribute to the emerging dialogue on what it means to be human in an AI-driven workplace.

Attitudes Toward AI Shape Creative Outcomes

So, how do people perceive themselves in creative tasks when working with AI? 
People who were cautious about AI engaged more deeply in creative tasks, whereas skeptics often disengaged, leading to poorer creative outcomes. Surprisingly, those curious about AI showed minimal changes in their creative performance.

And how do orders or suggestions given by AI affect human behavior in creative versus non-creative tasks? 
People followed AI’s advice more readily in non-creative tasks but resisted it in creative contexts, especially when they felt their human uniqueness was threatened.

Designing Better Collaboration Between Humans and Machines

How do humans adapt their identities and behaviors when collaborating with AI? 
Human-AI collaboration in creative environments is deeply influenced by psychological perceptions and task framing. When creativity is portrayed as a uniquely human skill, individuals tend to invest more effort, although this doesn’t always yield better creative outcomes.

Also, the way AI is introduced - as an authoritative figure versus a supportive tool - significantly affects how people engage with it.

 

“Humanness and Advanced Technologies in Organizations: On Being Truly Human While Working with AI”, Daria Morozova, Stefan Haefliger, Zoe Jonassen, Anil R. Doshi, Zhu Feng, Shane Schweitzer, published in the Academy of Management, July 2024. This research was funded by the Hi! PARIS Center.
Part 9

AI Is Reshaping the Creative Economy

Artificial Intelligence

From copyright to competition, HEC Paris research reveals how AI tools are transforming - and polarizing - the creative sector.


Photo Credits: ra2studio on 123rf

How Generative AI Is Reshaping Cultural Production 

Is the rise of generative AI tools like ChatGPT and Midjourney breaching the last bastion of human exclusivity in economic activities: creation? The exponential use of these tools raises fundamental issues around the quality of art itself, a judicial framework for content producers, and the visibility of cultural creators. 

Yet, we believe flexible regulation could enhance rather than threaten human creativity. Many will recall the unprecedented five-month strike in 2023 by thousands of Hollywood writers demanding protection against generative AI tools. Their Guild won important concessions concerning credit, complementarity and overall usage.

But, parallel to such conflicts and concerns, generative AI tools continue to be adopted by creative industries for their efficiency and ability to streamline repetitive tasks. In the book industry, for instance, platforms like Genario offer writing assistance, providing narrative structures and trend analyses to guide authors. At the same time, tools like Babelio and Gleeph enable better matching of supply and demand through personalized recommendations based on reader preferences.

Intellectual Property in the Algorithmic Age 

Beyond redefining art and copyright issues, AI's deployment in creative industries raises the question of what legal framework should govern relationships between content producers and AI operators. Furthermore, what impact could it have on creator visibility and creation quality? AI operators, like digital platforms, derive value from exploiting large volumes of content rather than specific pieces. This creates conflicts around value sharing, as seen between music streaming platforms and rights holders, or search engines and press operators. While fragile agreements are emerging, they highlight the need for a new general framework for these novel relationships.

Too Much Content, Not Enough Visibility 

AI is also increasing content abundance by lowering barriers to entry for creation. Amazon, for example, had to limit authors to uploading only three books… per day! This amplification of available content makes it increasingly costly to achieve the visibility necessary for a new work or author to emerge. This could lead to even greater precariousness for small actors or new, genuine creators, who are often more likely to bring innovation.

 

AI is also increasing content abundance by lowering barriers to entry for creation.

 

The impact of AI on creators and creation is twofold. On the one hand, it provides new tools and intensifies a hyper-competitive context. On the other, it accentuates polarization between a few prominent creators and countless others struggling for visibility. In this environment, phenomena like BookTok on TikTok or personalized algorithmic recommendations could become crucial tools for navigating this abundance, reinforcing dependence on digital prescribers and exacerbating inequalities between creators.

The Role of Platforms in Creator Inequality 

While AI presents such challenges, it also offers opportunities. In the book industry, AI-generated image banks for book covers and voice synthesis for audiobooks simplify and accelerate production processes. However, again, this ease of production further intensifies competition and the struggle for visibility. 

Why Regulation Must Empower, Not Restrict 

The key to harnessing both AI's potential and protecting creators lies in adaptive regulation. By establishing a balanced framework, it's possible to foster innovation while respecting creators' rights and ensuring fair compensation for their work. 

As we grapple with this new landscape, the focus should be on creating an ecosystem where AI enhances rather than replaces human creativity, and where the benefits of technological advancements are equitably distributed across the creative industries.
 

Reference: “Quelle politique pour les industries culturelles à l'ère du numérique ?”, edited by Thomas Paris, Alain Busson, David Piovesan. HEC Paris Associate Professor Thomas Paris is also a CNRS researcher and the scientific director for the HEC Master in Media, Art & Creation.
Part 10

How Machine Learning Breaks the Habit Myth

Marketing

Using machine learning, professor Anastasia Buyalskaya reveals how long it takes to form a new habit. Her research could reshape how businesses build lasting behavior change. 


Three key facts:

  1. Machine learning: The study uses large datasets and machine learning to uncover the diverse contextual variables influencing habit formation.
  2. Debunking the 21-day myth: There is no fixed timeframe for establishing new habits.
  3. Context matters: Certain variables had very little effect on the formation of a habit, whereas other factors turned out to matter a lot.

 

How AI Can Predict Human Routines

If you’ve ever tried to get in shape, you know how difficult it can be to develop a regular exercise habit. At first, just changing into your workout clothes and getting yourself to the gym seems to take an inordinate amount of effort, and the actual exercising may feel uncomfortable and awkward. But gradually, if you stick with it, you not only see improvement in your physical condition, but even begin to look forward to your regular workouts.

 

A popular myth says that if you stick with a new behavior for 21 days, it becomes permanent, but this isn’t based on scientific research.

 

But how long does it take to make exercising a habit? There’s a popular myth that if you stick with a new behavior for 21 days, it becomes permanent, but that guesstimate isn’t based on scientific research. That’s why my colleagues at several U.S. universities and I decided to investigate the subject of habit formation using a powerful tool: machine learning, a branch of AI and computer science which uses data and algorithms to mimic the way that humans learn. Our paper marks the first time that machine learning has been used to study how humans develop habits in natural settings.

 

Our paper is the first to use machine learning to study how people form habits in real-world situations.

 

Insights for Nudging Healthier Employee Behavior 

What we learned about habit formation refuted popular wisdom. As it turns out, there isn’t a single magic number of days, weeks or months for establishing a new habit.

On the contrary, when we studied the development of two different behaviors, we found very different time spans were required for each one to become predictable. Exercising appears to take several months to become habitual. In contrast, handwashing - the other behavior we analyzed - is predictably executed over a much shorter time span, a few days to weeks.

From Gym to Hospital: Lessons in Habit Engineering

In the past, one of the limitations of habit research has been that researchers have depended upon participants filling out surveys to record what they do, a methodology that typically limits sample size and may introduce noise. In our research, by using large datasets that rely on automatically recorded behavior—for example, exercisers swiping their badges to enter a fitness center—and then using machine learning to make sense of the data, we were able to study a larger group of people over longer time periods in a natural environment. 

In addition, by using machine learning, we don’t necessarily have to start with a hypothesis based upon a specific variable. Instead, we’re able to observe hundreds of context variables that may be predictive of behavioral execution. Machine learning essentially does the work for us, finding the relevant predictors. 

To study exercisers’ habit formation, we partnered with 24 Hour Fitness, a major North American gym chain, to study anonymized data about gym use. Our dataset spanned a 14-year period from 2006 to 2019, and included about 12 million data points collected from more than 60,000 users who had consented to share their information with researchers when they signed up to be in a fitness program.

We were able to look at a long list of variables, ranging from the number of days that had elapsed between visits to the gym, to the number of consecutive days of attendance on the same day of the week. We whittled down the participants to about 30,000 who had been members for at least a year, and studied their behavior from the first day that they joined the gym. 

To study hospital workers’ formation of hand-washing as a habit, we obtained data from a company that employed radio frequency identification (RFID) technology to monitor workers’ compliance with sanitary rules. Each data point had a timestamp, as well as anonymized hospital and room locations. This enabled us to look at the behavior of more than 3,000 workers in 30 hospitals over the course of a year. 

Rethinking Interventions with Personalized Predictors 

We discovered that certain variables had very little effect on the formation of a habit, whereas other factors turned out to matter a lot. For example, for about three-quarters of the subjects, the amount of time that had passed since a previous gym visit was an important indicator of whether they would show up to the gym. The longer it had been since they’d worked out, the less likely they were to make a habit of it. Additionally, we found that the day of the week was highly predictive of gym attendance, with Monday and Tuesday being the strongest predictors.

 

We discovered that certain variables had very little effect on the formation of a habit, whereas other factors turned out to matter a lot.

 

We also studied the impact of the StepUp Challenge, a behavioral science intervention intended to increase gym attendance, whose designers included two of the researchers on our team. That analysis yielded an interesting insight. The motivational program had a greater effect on less predictable gym-goers than it did on ones who had already established a regular pattern, echoing a finding in the habit literature that habits may make people less sensitive to changes in rewards. 

With hospital workers and hand-washing, we discovered that habit formation came more quickly—usually within about two weeks, with most hospital staff forming habits after nine to 10 hospital shifts. The most important predictor of hand-washing was whether workers had complied with hand-washing rules on the previous shift. We also found that 66 percent of workers were influenced by whether others complied with hand-washing rules, and that workers were most likely to wash their hands upon exiting rooms rather than when they entered them.

That raises the question: Why did workers develop the hand-washing habit so much more quickly than gym goers developed the workout habit? One possible explanation is that compared to hand-washing, going to the gym is a less frequent and more complex sort of behavior. Hand-washing is more likely to involve chained sensorimotor action sequences, which are more automatic. Once you get in the habit of washing your hands, you may do it without even thinking. Going to the gym, in contrast, is something that still requires time, planning and intention, even after it’s become a familiar part of your lifestyle.

 

Applications

The study analyzed how people form habits in natural settings. It is relevant for businesses looking to create “habit-forming” products for consumers, and managers looking to instill good habits in their employees.

Methodology

To get a better understanding of how habits develop in natural settings, the researchers developed a machine learning methodology suitable for analyzing panel data with repeated observations of behavior. They used a Predicting Context Sensitivity (PCS) approach, which identifies the context variables that best predict behavior for each individual. PCS uses a least absolute shrinkage and selection operator (LASSO) regression, a hypothesis-free form of statistical analysis which does not pre-specify which variables are likely to be predictive of an individual’s behavior. LASSO then generates a person-specific measure of overall behavioral predictability, based on the variables that are predictive of that person’s behavior.
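To make the approach concrete, here is a minimal sketch, not the authors' code, of what a PCS-style analysis could look like for a single hypothetical gym member: simulated context variables, an L1-penalized (LASSO-style) model that drops uninformative predictors, and a cross-validated accuracy score standing in for the person-specific predictability measure. All variable names and numbers below are illustrative assumptions.

```python
# Minimal sketch of a PCS/LASSO-style predictability analysis for one member.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_days = 365

# Hypothetical context variables: days since last visit, day of week,
# and whether a promotional challenge was running that day.
days_since_last = rng.integers(0, 14, n_days)
day_of_week = rng.integers(0, 7, n_days)
challenge_active = rng.integers(0, 2, n_days)

X = np.column_stack([
    days_since_last,
    np.eye(7)[day_of_week],        # one-hot encoded weekday
    challenge_active,
])
# Simulated attendance: more likely after short gaps and on Mondays/Tuesdays.
logits = -0.3 * days_since_last + 1.0 * np.isin(day_of_week, [0, 1])
y = (rng.random(n_days) < 1 / (1 + np.exp(-logits))).astype(int)

# L1-penalized logistic regression plays the LASSO role for a binary outcome;
# coefficients shrunk to zero are treated as non-predictive context variables.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)

# Out-of-sample accuracy serves as a person-specific predictability score.
predictability = cross_val_score(model, X, y, cv=5).mean()
print("Retained coefficients:", np.round(model.coef_, 2))
print("Behavioral predictability:", round(predictability, 2))
```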
This article was based upon the paper “What can machine learning teach us about habit formation? Evidence from exercise and hygiene,” published in April 2023 in PNAS (Proceedings of the National Academy of Sciences), co-authored by Anastasia Buyalskaya with Hung Ho of the University of Chicago, Xiaomin Li and Colin Camerer of the California Institute of Technology, and Katherine L. Milkman and Angela L. Duckworth of the University of Pennsylvania. Additional material was drawn from various published news sources.
Part 11

Saving Lives in Intensive Care Thanks to AI

Artificial Intelligence

A widely used AI mathematical model could reduce ICU mortality by 20% by alerting doctors before patients' conditions worsen.

What if hospital doctors had a reliable way to identify the patients whose health was most likely to take a turn for the worse, and then proactively send those patients to the ICU? With almost 6 million patients admitted annually to ICUs in the United States, the question is anything but trivial.

Our research, “Robustness of Proactive Intensive Care Unit Transfer,” published in January 2023 in Operations Research, is based on nearly 300,000 hospitalizations in the Kaiser Permanente Northern California system. Kaiser is recognized as one of America’s top hospital systems for treating illnesses like leukemia and heart attacks.

Its data indicated that by proactively transferring patients to ICUs, hospitals reduce mortality risk and length of stay. But there is a risk of going too far. Other research indicates that if doctors transfer too many patients to these units, they may become congested and survival rates suffer. Should the ICUs be filled to capacity, some patients who need ICU care might not be able to obtain it.

Our research suggests that for a proactive ICU transfer policy to work, three conditions must be met: arrival rates must be recalibrated; the number of nurses in the ICU should be reviewed; and decisions about the transfer of patients must be gauged according to their recovery rate. If these metrics are not aligned, doctors might not make the right transfer decisions.

Creation of a Simulation Model for Hospitals

One of our key collaborators for this research, Gabriel Escobar, served as the regional director for hospital operations research at Kaiser Permanente Northern California. Kaiser provided us with unprecedented and anonymized hospitalization data on patients from 21 Kaiser Permanente facilities. Thanks to this information, we built a simulation model which mimics how an actual hospital works. 

This includes generating arrival and departure rates, the evolution of the patients’ condition, and every interaction they have with the system. With such micro-modeling, we can track the simulated patient as if (s)he were a real hospitalized patient. This enabled us to test different scenarios of arrivals and transfer policies.

To build our simulation model we used the mathematical framework called the Markov Decision Process (MDP), commonly used in AI. This is a model for sequential decisions over time, allowing users to inspect a sequence of decisions and analyze how one choice influences the next. At each step, the next state depends only on the current situation and the decision taken, not on the full history that led there. We then designed an optimization method, based upon a machine learning model, to estimate the impact of various transfer policies.
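As an illustration of the framework only (the states, transition probabilities and rewards below are invented placeholders, not values estimated from the Kaiser data), a toy MDP for proactive transfers can be solved with a few lines of value iteration, comparing "wait" and "transfer" in each simplified health state.

```python
# Toy MDP sketch: proactive ICU transfer as a sequential decision problem.
import numpy as np

states = ["stable", "deteriorating", "critical", "discharged", "deceased"]
actions = ["wait", "transfer"]

# P[action][s, s'] = probability of moving from state s to s' (illustrative only).
P = {
    "wait": np.array([
        [0.80, 0.15, 0.00, 0.05, 0.00],
        [0.20, 0.55, 0.20, 0.00, 0.05],
        [0.00, 0.10, 0.60, 0.00, 0.30],
        [0.00, 0.00, 0.00, 1.00, 0.00],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ]),
    "transfer": np.array([
        [0.85, 0.10, 0.00, 0.05, 0.00],
        [0.40, 0.45, 0.10, 0.00, 0.05],
        [0.00, 0.25, 0.60, 0.00, 0.15],
        [0.00, 0.00, 0.00, 1.00, 0.00],
        [0.00, 0.00, 0.00, 0.00, 1.00],
    ]),
}
# Rewards penalize death heavily and transfers slightly (ICU beds are scarce).
reward = {"wait": np.array([0, 0, 0, 10, -100]),
          "transfer": np.array([-1, -1, -1, 10, -100])}

gamma, V = 0.95, np.zeros(len(states))
for _ in range(500):                       # value iteration
    Q = {a: reward[a] + gamma * P[a] @ V for a in actions}
    V = np.maximum(Q["wait"], Q["transfer"])

# Recommended action in each health state under this toy model.
policy = [actions[int(Q["transfer"][s] > Q["wait"][s])] for s in range(len(states))]
print(dict(zip(states, policy)))
```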

 

Given a certain way of transferring patients, we saw the estimated mortality rate could fall by 20 %!

 

When we ran the model, we discovered that relatively small adjustments can have an impact on the mortality of the overall patient population. Given a certain way of transferring patients, we saw the estimated mortality rate could fall by 20 %!

AI Won’t Replace Human Decision-making in Hospitals

The question remains: should humans still be involved in ICU transfers, or should we rely solely on algorithms to do it? We believe these two methods could be complementary. Humans must have the final word. But their decisions could usefully be assisted by the recommendations from an algorithm. 

Our research seeks to encourage the implementation of simple transfer decision rules based on common health metrics that summarize patients’ conditions, combined with clear thresholds. This type of threshold policy is extremely simple to deploy and readily interpretable. Using micro-modeling to understand a complicated enterprise and developing algorithms to assist decision making can - and should - lead to better outcomes.
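A threshold policy of the kind described above can be expressed in just a few lines; the score name, threshold and occupancy cap below are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of an interpretable threshold rule for proactive ICU transfer.
def recommend_transfer(severity_score: float, icu_occupancy: float,
                       score_threshold: float = 7.0,
                       occupancy_cap: float = 0.9) -> bool:
    """Flag a patient for proactive ICU transfer.

    The alert fires only if the patient's summary health score exceeds the
    threshold and the ICU is not already close to congestion, reflecting the
    trade-off between early transfers and ICU overcrowding.
    """
    return severity_score >= score_threshold and icu_occupancy < occupancy_cap

print(recommend_transfer(severity_score=8.2, icu_occupancy=0.75))  # True
```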

The original research article, “Robustness of Proactive Intensive Care Unit Transfer,” was co-signed by Julien Grand-Clément, Assistant Professor in Information Systems and Operations Management at HEC Paris and chair holder at the Hi! PARIS Center, and colleagues Carri W. Chan and Vineet Goyal (Columbia University), and Gabriel Escobar (research scientist at the Kaiser Permanente Northern California Division of Research and director of the Division of Research Systems Research Initiative). It was published in January 2023 in Operations Research.
Part 12

The Promise of AI in HR Undermined by Flawed Data and Fragile Trust

Human Resources

Poor data quality, bias, weak oversight, and shallow applications undermine AI’s impact in recruitment, training, and engagement, as revealed by research from HEC Paris.


Photo Credit: Sittinan on Adobe Stock

Key Findings:

  • AI enhances HR productivity, particularly in administrative, legal, and low-stakes recruitment tasks.
  • Data limitations weaken impact: HR datasets are often too small or biased to deliver reliable outcomes.
  • Lack of governance creates risk: Overuse of gimmicky tools without oversight may erode employee trust in HR. 

 

Based on a survey carried out among HR managers and digitalization project managers working in major companies, I highlight three potential pitfalls regarding the data used, the risk of turning AI into a gimmick, and algorithmic governance. But first, let's clarify what we mean by AI.

What Do We Mean By AI?

The term artificial intelligence is polysemous, just as AI itself is polymorphic. Hidden behind AI’s vocabulary – from algorithms, conversational AI and decisional AI to machine learning, deep learning, natural language processing, chatbots, voicebots and semantic analysis – is a wide selection of techniques, and the number of practical examples is also rising fast. 

There’s also a distinction to be made between weak AI (non-sentient intelligence) and strong AI (a machine endowed with consciousness, sentience and intellect), also called "general artificial intelligence" (a machine that can apply intelligence to any problem rather than to a specific problem). "With AI as it is at the moment, I don’t think there’s much intelligence, and not much about it is artificial... We don’t yet have AI systems that are incredibly intelligent and that have left humans behind. It’s more about how they can help and deputize for human beings" (Project manager).

HR Teams Embrace AI to Boost Productivity 

For HR managers, AI paves the way to time and productivity savings alongside an "enhanced employee experience" (HR managers). For start-ups (and there are 600 of them innovating in HR and digital technology, including around 100 in HR and AI), the HR function is "a promising market".

Administrative and Legal Support: Helping Save Time 

AI relieves HR of its repetitive, time-consuming tasks, meaning that HR staff, as well as other teams and managers, can focus on more complex assignments.

Many administrative and legal help desks are turning to AI (via virtual assistants and chatbots) to respond automatically to questions asked by employees – "Where is my training application?" or "How many days off am I entitled to?" – in real time and regardless of where staff are based. AI refers questioners to the correct legal documentation or the right expert. EDF, for example, has elected to create a legal chatbot to improve its performance with users. 

The chatbot is responsible for the regulatory aspects of HR management: staff absences, leave, payroll and the wage policy: "We had the idea of devising a legal chatbot to stop having to answer the same legal and recurring questions, allowing lawyers to refocus on cases with greater added value. In the beginning, the chatbot included 200 items of legal knowledge, and then 800... Users are 75% satisfied with it" (Project manager).
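For readers curious about the mechanics, the routing step behind such a help-desk chatbot can be sketched very simply; the knowledge base, questions and matching rule below are invented for illustration and bear no relation to EDF's actual system, which relies on far richer language processing.

```python
# Minimal sketch of the retrieval step of an HR legal chatbot:
# match an employee's question to the closest known question by word overlap.
KNOWLEDGE_BASE = {
    "How many days off am I entitled to?": "See the leave policy, section 2.",
    "Where is my training application?": "Check the training portal under 'My requests'.",
    "How is overtime paid?": "See the collective agreement, article 12.",
}

def answer(question: str) -> str:
    q_words = set(question.lower().split())
    # Score each known question by the number of words it shares with the query.
    best = max(KNOWLEDGE_BASE, key=lambda k: len(q_words & set(k.lower().split())))
    return KNOWLEDGE_BASE[best]

print(answer("how many days of leave am I entitled to this year?"))
```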

 

AI can help find the correct legal documentation or the right expert; AI can check the accuracy of all declarations; AI can personalize social benefits based on employee profiles.

 

AI isn’t just employed for handling absences and leave or processing expense reports and training but also for the administrative and legal aspects of payroll and salary policy. For pay, AI systems can be used to issue and check the accuracy and consistency of all declarations. In another vein, AI offers packages of personalized social benefits based on employee profiles.

Recruitment Is an AI Testing Ground 

Recruitment: Helping to Choose Candidates

Recruitment is another field where AI can be helpful: it can be used to simplify the search for candidates, sift through and manage applications, and identify profiles that meet the selection criteria for a given position. 

Chatbots can then be used to talk to a candidate in the form of pre-recorded questions, collecting information about skills, training and previous contracts. "These bots are there to replace first-level interactions between the HR manager or employees and candidates. It frees up time so they can respond to more important issues more effectively" (Project manager). 

Algorithms analyze the content of job offers semantically, targeting the CVs of applicants who are the best match for recruiters' expectations in internal and external databases via professional social networks such as LinkedIn. CV profiles that were not previously pre-selected can then be identified. 
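A bare-bones sketch of this kind of semantic matching (assumed for illustration, not any vendor's actual pipeline) could rank CVs against a job offer using TF-IDF vectors and cosine similarity; the texts below are invented examples.

```python
# Minimal sketch: rank candidate CVs by textual similarity to a job offer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_offer = "Data analyst with Python, SQL and experience in retail reporting."
cvs = {
    "candidate_A": "Marketing assistant, social media campaigns, event planning.",
    "candidate_B": "Python developer, SQL databases, built reporting dashboards for retail.",
    "candidate_C": "Accountant with Excel expertise and payroll experience.",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_offer, *cvs.values()])

# First row is the job offer; remaining rows are the CVs.
scores = cosine_similarity(matrix[0], matrix[1:]).flatten()
for name, score in sorted(zip(cvs, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

Real recruitment tools go well beyond this, using trained language models and structured skill taxonomies, but the ranking principle is the same.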

Unilever has been using AI in tandem with a cognitive neuroscience approach to recruiting since 2016. Start-ups offer a service where they highlight candidate profiles without the need for CVs, diplomas or experience. Their positioning is based on affinity and predictive matching or building smart data. 

These tools are aimed primarily at companies with high volumes of applications to process, such as banks for customer service positions or large retailers for departmental supervisors. Candidates are notified about the process and give their permission for their smartphone’s or computer’s microphone and camera to be activated.

These techniques appear to be efficient for low-skilled jobs and for tight labor markets. At the same time, the more a position calls for high-level, complex skills, the greater the technology’s limitations.

While new practical examples are emerging over time, the views of corporate HR managers and project managers diverge regarding the added value of AI for recruitment. Some think that AI helps generate applications; identifies skills that would not have been taken into account in a traditional approach; and provides assistance when selecting candidates. Others are more circumspect: it’s possible as things stand, they argue, that people are "over promising" in terms of what AI can bring to the table.

Training and Skills: Personalized Career Paths

The AI approach to training involves a shift from acquiring business skills to customizing career paths. With the advent of learning analytics, training techniques are evolving. Data for tracking learning modes (the time needed to acquire knowledge and the level of understanding) can be used to represent the way people learn and individualize suggestions for skills development.

In addition, AI is used to offer employees opportunities for internal mobility based on their wishes, skills and the options available inside the company. AI start-ups put forward solutions for streamlining mobility management that combine assessment, training and suggestions about pathways, positions and programs for developing skills. In France, however, such solutions are constrained by the GDPR (General Data Protection Regulation), although they can be individualized in other countries.

 

We use AI to identify talent not detected by the HR and managerial teams; and to detect talent with a high risk of leaving the company. 

 

Saint-Gobain has decided to use the potential of machine learning to upgrade the way it manages its talent. A project team with diverse profiles (HR, data scientists, lawyers, business lines, etc.) has been set up with two objectives: to use AI to identify talent not detected by the HR and managerial teams; and to detect talent with a high risk of leaving the company. Confidentiality is guaranteed, and no decision is delegated to machines.

Towards a Better Understanding of Commitment

AI provides opportunities for pinpointing employees who are at risk of resigning or for improving our understanding of the social phenomena in companies.

"We’re going to ask employees questions every week or two weeks in 45 seconds on dozens of engagement levers... The responses are anonymized and aggregated, and – from another perspective – will give indicators to the various stakeholders, the head of HR, the manager, an administrator and so on. We’ll be able to tell what promotes or compromises engagement in real time while offering advice" (Project manager).

Thanks to this application, HR managers and other managers and employees can have use of real-time indicators that show the strengths and weaknesses of the teams without knowing by name who said what. “As well as helping to improve your own work and personal style, these variables mean that, if a corrective action has been put in place, you can tell what its real effect is straightaway” (HR manager). 

While artificial intelligence can, as these various examples show, support human resources management, some of these new uses are still being tested.

In addition to the way they are implemented, questions and criticisms remain, including the issue of HR data, return on investment and algorithmic governance.

Governing AI in HR Requires Human Judgment 

Data Quality and Restricted Quantity 

Data is the key ingredient for AI, and its quality is of paramount importance. If the data injected isn’t good, the results will be vague or distorted. The example of Amazon is emblematic in this respect: after launching its first recruitment software, the company decided to withdraw it quickly from the market because it tended to favor the CVs of men. The program Amazon used had been built on resumes the company had received over the previous 10 years, most of which came from male applicants.

In addition, datasets in the field of HR tend to be narrower than in other areas. Even in large companies, the number of employees is small compared to the number of purchases made by customers. Sales data offers vast numbers of observations per item, so big data applications can be performed easily. There’s nothing of the sort for HR!

The quantity and quality of data needs to be considered, as well as how to embed it in a context and time interval that encourage analysis and decision-making. This is the case, for instance, in the field of predictive maintenance, where expert systems can detect signs of wear in a machine before human beings: by collecting the appropriate data, just-in-time interventions can be made to overhaul the machinery. It’s a different matter for human systems, however, where the actions and reactions of individuals are not tracked (and should they be?), and may turn out to be totally unpredictable.

Return on Investment and the Dangers of Gimmickry

Investing in AI can be costly, with HR managers concerned about budgets and returns on investment. 

"Company websites have to be revamped and modernized on a regular basis... And let's be realistic about it, making a chatbot will cost you around EUR 100,000, while redoing the corporate website will be ten times more expensive... And a chatbot is the ‘in thing’ and gives you a modern image, there’s a lot of hype, and – what’s more – it’s visible for the company! It’s something that can be seen!" (Project manager).

In many organizations, the constraints imposed by the small amount of data, and the few opportunities to iterate a process, raise the question of cost effectiveness. The HR managers we met for our study asked themselves: Should they invest in AI? Should they risk undermining the trust that employees may have in HR just to be one of the early adopters? And, although AI and virtual technology are in the limelight, are they really a priority for HR? Especially since trends come and go in quick succession: version 3.0 has barely been installed before version 4.0 and soon 5.0 come on the scene.

A further danger lies in wait for AI: the descent into gimmickry, a downward spiral which, it must be emphasized, doesn’t just threaten AI but also management tools more broadly. While major HR software packages are now equipped with applications that can monitor a veritable profusion of HR indicators, isn’t there still a risk that we’ll drown in a sea of information of questionable relevance and effectiveness? Say yes to HR tools and no to gimmicks!

Many start-ups operating in the field of AI and HR often only have a partial view of the HR function. They suggest solutions in a specific area without always being able to integrate them into a company’s distinct ecosystem.

Algorithmic Governance

Faced with the growth in AI and the manifold solutions on offer, HR managers, aware of AI's strengths, are asking themselves: "Have I found HR professionals I can discuss things with? No. Consultants? Yes, but all too often I’ve only been given limited information, repeated from well-known business cases that don’t really mean I know where I‘m setting foot... As for artificial intelligence, I’ve got to say, I haven't come across it. 

Based on these observations, I admit I’m not very interested in the idea of becoming a beta tester, blinded by the complexity of the algorithms and without any real ability to compare the accuracy of the outcomes that these applications would give me. Should we risk our qualities and optimal functioning, not to mention the trust our employees put in us, just to be among the early adopters?" (HR manager).

Society as a whole is defined by its infinite diversity and lack of stability. Wouldn’t automating management and decision-making in HR fly in the face of this reality, which is both psychological and sociological? The unpredictability of human behavior cannot be captured in data. With AI, don’t we risk replacing management, analysis and informed choices with an automatism that destroys the vitality of innovation? 

What room would there then be for managerial inventiveness and creativity, which are so important when dealing with specific management issues? While the questions asked in HR are universal (how can we recruit, evaluate and motivate…?), the answers are always local. The art of management cannot, in fact, be divorced from its context. 

Ultimately, the challenge is not to sacrifice managerial methods but to capitalize on advances in technology to encourage a holistic, innovative vision of the HR function where AI will definitely play a larger role in the future, especially for analyzing predictive data.

 

See also: Chevalier F (2023), “Artificial Intelligence and Human Resources Management: Practices and Questions” in Monod E, Buono AF, Yuewei Jiang (ed) Digital Transformation: Organizational Challenges and Management Transformation Methods, Research in Management, Information Age Publishing.

"AI and Human Resources Management: Practices, Questions and Challenges", AOM Academy of Management Conference, August 5-9, 2022, Seattle, USA. 
 

This article presents a summary of “Intelligence Artificielle et Management des Ressources Humaines: pratiques d’entreprises”, Enjeux numériques, No. 15 (September 2021) by Françoise Chevalier and Cécile Dejoux. This research is supported by a grant of the French National Research Agency (ANR), “Investissements d’Avenir” (LabEx Ecodec/ANR-11-LABX-0047).
Part 13

New Kid on the Block Speeds Up HR Recruitment

Human Resources

Established in 2024, the startup Bluco offers businesses a solution that simplifies and accelerates the recruitment process. Co-founder Nicolò Magnante (H.24), explains.


Nicolò Magnante - Co-Founder & CEO of Bluco © MarieAugustin / Saif Images

What’s new in Bluco’s recruitment system?

Forget CVs, forget cover letters! Nowadays, candidates apply through WhatsApp, using text messages, images, or voice and video recordings. Our startup then creates a profile for each candidate and synchronizes it with the ATS (applicant tracking system) software of the recruiting company. This process only takes a few minutes for the candidate.

Could this system introduce bias in the selection of candidates?

Absolutely not. The information provided is not used for automated selection: the recruitment teams are the ones who select the candidate. Bluco collects a candidate’s answers more fully and more efficiently, so recruiters can actually know more about the candidate, and make even more objective and fairer hiring decisions. The idea is to facilitate the process, not to replace the recruiter – and keep it all confidential, in line with the European AI Act.

illustration of Bluco: from CV to voice messages

Candidate messages are purely declarative. How can we ensure that the candidates have the required skills?

Yes, it's declarative, like a CV, but with many more checks. For a sales position, for instance, a high proportion of candidates claimed to be bilingual in English, so we asked them for a 30-second voice message. For truck drivers, we asked for a certified photo of their driving license. None of this data is kept after the recruitment process.

What does this solution bring to the applicants and the employers?

It significantly increases the number of applications: one just needs to scan a QR Code (on an advertisement, an Instagram page, in universities, etc.) or click a button on the company’s career page to start a WhatsApp conversation with the company, identified by its name, logo and verification tick. But most importantly, recruiters save a huge amount of time in processing applications, as WhatsApp streamlines all the other steps of the process. Scheduling appointments for interviews is automated, for example: candidates provide their availability on WhatsApp, and the software connects to the recruiter’s calendar to automatically create the event.

Who uses Bluco today?

We offer our solution to both businesses that synchronize Bluco with their ATS and those that use it independently as a recruiting platform. Our clients include Porsche, Trenord railways, and Free Telecoms. We are also working on adapting our solution to iMessage for the American market, where WhatsApp is not widely used.

What impact have HEC academics had in developing Bluco?

Well, in the first year of my Master’s I took Laura Sibony's course on art, culture and AI, and that was a turning point. She was one of the first to explore AI’s impact in these fields and revealed to me the possibilities it offers. The following year, I enjoyed many AI-related conferences as the trend took an upward turn. Finally, Stéphane Madoeuf's “Digital Innovation and Acceleration” program featured discussions on ethics and data protection that deeply influenced my professional career.

Interview by Lionel Barcilon of HEC Stories
Part 14

The AI Divide Is Human, Not Technological

Artificial Intelligence

HEC Paris research reveals that the technology’s biggest limitation isn’t technical, but human: who benefits, and who gets left behind.

AI is advancing at a brisk clip. Since the launch of OpenAI’s ChatGPT in late 2022, the business world has witnessed a wave of groundbreaking Generative AI tools that can churn out human-like text, code and images in seconds. But this technology is more than a bundle of algorithms and data points. In fact, Peter Mathias Fischer, Associate Professor of Marketing at HEC Paris, believes it is a mirror reflecting humanity’s hopes, fears and aspirations.

Beyond the hype: who really benefits from AI? 

Fischer's research reveals contradictions for businesses aiming to ensure that AI’s immense potential is harnessed equitably. As the AI arms race surges ahead, Fischer says the “honeymoon period” of unchallenged optimism is ebbing away. He warns that, with AI reshaping industries and accelerating innovation, research and teaching, some businesses might be left behind. 

Thus, his research and teaching at HEC Paris focuses on the critical question of how humans perceive and interact with AI, particularly in the workplace. One critical question is whether AI empowers employees or merely widens existing inequalities. “There’s a heated debate between the democratization of AI, and the idea that only a select few will benefit,” Fischer notes. “My initial studies suggest the latter - those at the forefront will gain the most.” 

Critical thinking as the new digital skill

This unequal distribution of benefits, he suggests, stems not from AI itself, but from human foibles. “Many people don’t benefit because they lack curiosity and critical-thinking skills,” he says. “To fully democratize AI, we need to educate people in a way that fosters these competencies.”

Education, he argues, is therefore the linchpin for bridging the gap between AI’s potential and its uneven adoption. Critical thinking, adaptability and openness to new technologies are going to be vital, he says, if individuals are to harness the power of AI for meaningful work and personal growth. Without this shift, Fischer warns, the gap between AI’s potential and its practical benefits will only widen.

Technology as a relationship tool, not a replacement

From global giants like Google to emerging (and now dominant) players like OpenAI, companies are in a race to harness AI. And Fischer’s research suggests a deeper layer to this revolution: the subtle yet critical role of AI in industries built on personal connections. “In some cases, AI should facilitate rather than replace human engagement,” he says. 

For example, companies selling complex products like insurance often rely on human connections to build trust. “AI can act as an intermediary, getting a foot in the door for sensitive conversations in industries like banking or healthcare,” he explains. The challenge, he notes, is designing AI systems that enhance human interaction rather than erode it.

 

The challenge is designing AI systems that enhance human interaction rather than erode it.

 

Another critical area of Professor Fischer’s research tackles the role of AI in learning and education. While most current approaches center on training machines and improving models, the more crucial question is how we can use AI and machines to train ourselves and enhance our competences and skills. “We need to shift our focus from classical machine learning to learning with and from machines.” 

Part 15

How AI Imaginaries Align and Divide Innovators

Artificial Intelligence

Shared AI visions spark innovation, but misaligned hopes risk division. In his thesis, Pedro Gomes Lopes of IP Paris explains how managing these imaginaries is key to ecosystem success.


Image created with ChatGPT

Key findings & research contributions:

•    Sociotechnical imaginaries associated with AI played a key role in the emergence of a Franco-German AI innovation ecosystem
•    These shared imaginaries initially aligned partners but later became sources of tension
•    The ecosystem did not collapse, but morphed into a smaller, stronger configuration
•    The primary risks in AI innovation ecosystems are relational, not technical
•    Managing such ecosystems requires technical coordination and continuous negotiation of shared imaginaries.

To address grand challenges in ecosystemic innovation, innovators must develop a shared understanding to align their expectations. We studied an emerging innovation ecosystem driven by AI and found that the imaginaries members associate with AI motivate the emergence of innovation despite high uncertainty. Yet, we also found that AI imaginaries can generate tensions that hinder innovation.

Shared imaginaries make AI ecosystems emerge 

First, let’s define the key concept of our study, “sociotechnical imaginaries”. Sociotechnical imaginaries are collective visions of a desirable future, shaped by shared ideas about how science and technology can support a certain social order (Jasanoff & Kim, 2019).

New innovation ecosystems are emerging - often initiated by governments, involving corporate partners, startups, academia and public institutions. These ecosystems intend to transform entire sectors through the deployment of AI. But they also raise a fundamental question: how do such diverse actors manage to collaborate on something as uncertain, abstract, and ambitious as AI to solve complex problems?

This was the starting point of our research. We wanted to understand what makes these early AI ecosystems emerge - and what threatens their development. So we studied how players build a shared understanding of what AI is, and what they want to do with it. 

This process unfolds through a “test and learn” dynamic, where the meaning attributed to the technology evolves gradually, shaped by the emergence of POCs (proofs of concept), POVs (proofs of value), and early prototypes. These milestones open the door to potential industrialization, often as part of new project opportunities. 

In such a context, we found the key ingredient for AI ecosystems to emerge: not technical alignment, not contractual obligation, but shared imaginaries.

Imagining the future together: a case study

Our study focused on a Franco-German consortium launched in 2022 and funded by both states. Its objective: to design AI software that could support large-scale, sustainable and resilient renovation of existing social housing. The project brought together a major construction firm, a national social housing provider and several of its subsidiaries, an AI research lab, and a start-up - all with different priorities, capabilities, and time horizons.

At first, the ecosystem was formed remarkably fast under the influence of public funding. We show that this was due to three powerful and shared sociotechnical imaginaries associated with AI.

Three powerful and shared sociotechnical imaginaries associated with AI:

1. Innovation imaginary: the belief that AI is the next essential step for any serious organization, and that failing to invest in it could mean falling behind. This vision created a sense of urgency and gave legitimacy to the project. 

2. Cybernetic imaginary: AI was seen as a system for optimizing workflows, enabling augmented data-driven decisions, and reducing uncertainty in renovation projects. 

3. Techno-solutionist imaginary: AI was positioned as a key technology to address systemic challenges - climate change, energy poverty, or unsanitary housing.

These imaginaries allowed project members to share a common language and a common ambition. They aligned actors who otherwise had little in common and gave meaning to their collaboration. Yet as the project moved from ideas to execution, these imaginaries became sources of tension. 

From shared vision to strategic friction

Once the project was launched, expectations started to diverge. For some partners, especially operational teams, the narratives seemed increasingly disconnected from everyday practices. Field managers in housing subsidiaries had trouble identifying relevant use cases.

Some, despite being essential contributors to the project’s success, postponed their engagement, waiting for the AI to "prove itself" before investing further. 

This situation created a self-destructive dynamic, with a chicken-and-egg paradox: on the one hand, the progress of projects was significantly slowed down due to a lack of input from housing and renovation experts, while on the other, these partners attributed their limited involvement to the absence of tangible intermediate results in the projects.

At the same time, the software developed did not live up to the high expectations raised by the initial visions. For example, at one point, an AI prototype took enormous effort to deliver results that ultimately proved less useful than a dashboard already in use in one of the partner companies and based on simple statistics.

Another key challenge concerned the translation of local expertise into formalized, computable models. Renovation practices differed widely between regions, subsidiaries, teams and individual project managers. Efforts to build a standardized “digital twin” of these buildings and the way they should be renovated ran into cultural, technical, and epistemic obstacles. In some cases, actors withdrew from the modeling efforts altogether, preferring to rely on informal, experience-based decision-making.

As a result, the roadmap had to be revised. Some contributors scaled back their involvement, others redefined their roles, and the consortium moved towards more modest goals.

Ecosystem resilience requires more than early alignment

What these frictions revealed is that early innovation ecosystems are held together by shared high expectations - but also made vulnerable by them. When expectations diverge, so does commitment. And when the software developed fails to meet the visions that mobilized initial support, trust can erode quickly, and members leave.

Rather than a collapse, what we observed was a transformation. The ecosystem morphed - its initial ambition narrowed, but its internal cohesion improved as it navigated challenges together and built trust among partners. A smaller, more aligned group of contributors continued the work, while new partners joined the ecosystem to challenge and help revise use cases.

This process offers important lessons for those coordinating AI-driven innovation ecosystems. First, alignment of imaginaries is key but not enough. It must be complemented by careful expectation management, regular reality checks, and iterative recalibration.

 

The primary risks in AI ecosystems are relational, not technical.

 

Second, the primary risks in AI ecosystems are relational, not technical. Misunderstandings, role ambiguities, and mismatched incentives do more damage than failed code. 

And third, ecosystem emergence is chaotic, nonlinear, and adaptive. Its development depends on the ability to pivot, to reformulate shared goals, and to identify a minimum viable ecosystem, or proto-ecosystem (Marcocchia & Maniak, 2018): a small, stable group of aligned contributors who can keep the project moving. Their work helps attract new partners who bring fresh value - and eventually, engage final customers.

Managing AI through imagination and reflection

Our research suggests that sociotechnical imaginaries are not just side effects of innovation, they are central to it. They are the glue that holds actors together at the outset. But they are also potential fault lines if not revisited, negotiated, and translated into credible roadmaps capable of mobilizing key contributors.

As AI continues to spread into public and private sectors alike, the question is not only how to build better algorithms - but how to design better collaboration. This means designing not only technologies, but also a space for shared learning and reflection. The challenge is to combine vision and pragmatism: to inspire with imagination, to manage risk, and to stay rooted in real-world practice.

Ultimately, AI is not only a technical issue, but also a matter of governance and organizational design. Its ability to address grand challenges will depend on our capacity to manage both dimensions effectively.

 

References:

Jasanoff, S., & Kim, S. H. (Eds.). (2019). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. University of Chicago Press.

Marcocchia, G., & Maniak, R. (2018). Managing 'proto-ecosystems'-two smart mobility case studies. International Journal of Automotive Technology and Management, 18(3), 209-228.
 

Article by Pedro Gomes Lopes on his PhD thesis, “The socio-technical imaginaries of artificial intelligence for sustainability projects: a case study of an innovation ecosystem”. Pedro Gomes Lopes is a 2nd year PhD student at IP Paris (i3-CRG, École Polytechnique, CNRS). His research is supervised by Sihem BenMahmoud-Jouini of HEC Paris and David Massé of Telecom Paristech.
Part 16

AI Startups Succeed Faster with Research-Led Mentoring

Artificial Intelligence

A structured mentoring model helps AI innovators go from research breakthroughs to real-world impact, and rethink what responsible AI leadership means. 


HEC professors Peter Fischer and Sebastian Becker moderating a CDL program

"We’re different,” explains Professor Peter Mathias Fischer, Professor of Marketing and Academic Director of the Creative Destruction Lab (CDL)’s AI stream in Paris, one of the most ambitious initiatives to propel AI startups forward. “CDL provides a platform that connects venture capitalists, academics, companies and mentors. Startups go through four structured sessions, each with specific objectives.” 

Fischer’s work at CDL - alongside his teaching and research at HEC Paris - reveals how business education and academic rigor can be harnessed to build the next generation of responsible AI leaders. From pioneering ideas like “Machine Learning 2.0” to new insights into frugal AI and synthetic data, his work showcases how HEC is helping shape the strategic deployment of artificial intelligence.

HEC professors Hélène Chanut-Musikas (on the left) and Carlos Serrano (in the middle) moderating a CDL program

 

Empowering deep-tech founders to lead the AI revolution

This structured yet flexible model addresses a common challenge among deep-tech founders: turning groundbreaking ideas into viable businesses. “Many founders come from academic or technical backgrounds,” Fischer notes. 

“They’ve created brilliant algorithms, but need help with product-market fit, pricing, or in ways to convince investors to back their ideas with cold hard cash.” 

At HEC Paris, Fischer is dedicated to shaping future business leaders equipped to steer industries through the AI era. He teaches Master students as well as MBA students and Executives. “Our ambition is to prepare students to lead technology’s deployment, whether at major corporations or startups,” he says.

One standout example of this teaching impact is Inbolt, a robotics company founded by HEC Paris alumni who the professor met during one of HEC’s entrepreneurship programmes. “They recently secured €15 million in funding,” he shares, highlighting the success of startups emerging from the French business school.

In the classroom, Fischer also experiments with new ideas from his research, such as “Machine Learning 2.0” - a concept that explores how humans can learn from machines to enhance their own intelligence. “It’s about using machines to make us smarter,” he explains.

AI’s next frontier: synthetic data and frugal AI 

Looking ahead, Professor Fischer highlights two mega trends that could reshape the future of AI development: synthetic data and frugal AI. Synthetic data, artificially generated information used to train machine learning models, is becoming a game-changer by overcoming challenges like data scarcity and privacy concerns. 

On the other hand, frugal AI focuses on creating resource-efficient models that achieve powerful results without the massive energy consumption typically associated with advanced AI systems. “These are critical developments,” Fischer explains, “as they address the twin challenges of accessibility and sustainability.”

 

HEC professor Thomas Astebro at a CDL session

 

From Neurotech to Impact: AI for Good

He also points to startups like Inclusive Brains, a standout participant at CDL, as an example of AI’s future potential. This was founded by neuroscience professors Olivier Oullier of Aix-Marseille University and Anais Llorens of Berkeley University, alongside Paul Barbaste, an alumnus from the Master of Science Polytechnique-HEC entrepreneurs degree. 

The company is pioneering AI-powered neurotechnology designed to help disabled individuals regain movement. “They are a prime example of how business can harness AI for good,” Fischer says.

 

Part 17

Inside CDL: How Mentorship Fuels Startup Success

Artificial Intelligence

What drives top business leaders to help startups in the Creative Destruction Lab? We talk to two in the program’s AI stream.

David Greenberg is a family doctor in Toronto, and Grant Philpott is an Intellectual Property (IP) expert and engineer in Salisbury. Since the Creative Destruction Lab began in 2011, the program has called on hundreds of such professionals to volunteer their services, guiding budding entrepreneurs and scientists through their first steps in business.
 


Grant Philpott

Grant, can you introduce yourself?

I am an Intellectual Property (IP) expert. My knowledge of this area was gained over a 32-year career at the European Patent Office, culminating in the COO post. I’m also an engineer and the Founding Director of Trusted Technology Network, where I carry out IP consulting on technological innovation and its protection. My main activities involve advising startups and large corporations alike, developing IP strategies, supporting law firms, and advising investors.

What made you become a mentor for CDL’s AI stream?

My mentorship with CDL's AI stream stems from my work with startups and spinouts. I'm passionate about IP because it is a critical yet often overlooked aspect for entrepreneurs, especially in the startup stage. Early-stage companies often lack proper IP foundations, which can lead to serious issues as they grow and attract competition from firms with strong patent portfolios and other IPs. 

The risk of being exposed to infringement or other IP-based proceedings is ever-present for a successful company, but much of that risk can be mitigated, at reasonable cost, by early attention to IP. 

 

I'm passionate about IP because it is a critical yet often overlooked aspect for entrepreneurs, especially in the startup stage.

 

Within the HEC CDL program you are mentoring Lymbic, a neurotech startup. What are its assets?

Lymbic is a company that has developed a biometric authentication technique based on brain waves.

My main contribution is to help them develop their IP strategy. This technology is complex from an IP standpoint because software must meet specific criteria to be eligible for patenting. A book, for example, is never technical in itself: it cannot be patented and is protected by copyright. Software, conversely, can use copyright to protect its code, but it may also be eligible for patenting if it can be shown to produce a technical effect.

There are also the issues of trade secrecy and open source to consider. Much of the time a software company needs to adopt a hybrid approach, protecting specific elements of its inventions under different IP categories so that they are adequately protected and as difficult as possible for competitors to copy.

Supporting the CDL program is very satisfying at several levels. While we are helping nascent companies with our mentoring, we are learning from that exchange and from the very knowledgeable fellow mentors with whom we work.

It is also extremely motivating to have the opportunity to see close-up the fascinating tech of the companies that we mentor and the problems that they have overcome. I have really enjoyed the time I’ve spent with CDL!

 


David Greenberg

David Greenberg, can you introduce yourself?

I’m a family doctor in Toronto. I am by nature a generalist (my undergrad degree was in Sociology and Art History) who likes to be involved in lots of different activities. Among those, I spent five years as head of the Health Care Division of Goldfarb Consultants, at the time the largest market research company in Canada. I have continued consulting for pharmaceutical and other health-care related companies.

What made you become a mentor for CDL?

I found out about CDL through Professor Avi Goldfarb, whose father was my partner in the market research business. (Editor’s note: Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare. He co-authored the book Prediction Machines with Ajay Agrawal and Joshua Gans, and presented a keynote at HEC’s first AI and Entrepreneurship Workshop in 2023.)

I was honored to be asked to be a mentor in Toronto in 2018 and have continued in that role ever since, first with CDL’s health stream at Oxford, then with the AI stream in Berlin and now at HEC Paris.

AI is not my expertise, but as I explained to my wife when she asked "what the hell do you know about AI?", I don’t need to know how a car’s engine works to drive it. At the end of the day, all startups have similar issues and challenges.

In terms of mentoring, I love to help people who have brilliant ideas find a way to make them relevant, often by helping them pin down the problem that exists and then work toward a solution. I am often dumbstruck by the educational and professional achievements of the founders, and especially by their facility in English, which is often their second or third language.

However, this often leads them to use a significant amount of jargon, which can obscure what they are actually doing. So helping them convey their messages in a clear, consistent, concise way is gratifying. It is also important to guide them away from telling their audience what they want them to hear rather than what they need to hear - something they often find quite helpful.

“My involvement in CDL has and continues to be one of the highlights of my life.”

Whenever I am approached with a new project, I have three metrics I use to decide whether to get involved: can I have fun with it, can I make money with it, and is the world a little better place because of it? If I get at least two positive answers out of three, I say yes. In that regard, my involvement in CDL has been, and continues to be, one of the highlights of my life.

You mentored Cogitat, a neurotech startup. What assets did you find?

Cogitat has developed software that decodes information from electroencephalograms: it detects brain waves and converts them into digital commands without requiring any physical movement. The tool can be used in many settings, such as patient rehabilitation or playing video games without moving.

The tech is amazing, but they need to find the right use case and market to make a living out of it. For this, they need to identify a beachhead market, and from there the next steps will fall into place. I’m happy to help them execute this strategy and decide what to do next.

 

Part 3

AI Advances Supply Chain Sustainability Goals

Predictive algorithms reroute ships to avoid whales. AI tools scan for compliance risks in global supplier networks. In this interview, HEC professor Sam Aflaki shows this is real. 


Photo credits: Mutarusan on iStock

Professor Aflaki, how are you helping make supply chains more sustainable?  

I use a multidisciplinary research approach that combines data analytics, operations management, and behavioral science to identify the key leverage points within the supply chain where interventions can have the most significant impact. To do this, I study the incentives and barriers that businesses, policymakers, consumers, and suppliers face when investing in sustainable measures. I also look at how regulatory frameworks can balance sustainability with innovation and growth. Additionally, I examine how consumer behavior can be influenced toward more sustainable choices through information, transparency, and choice architecture redesign.

 

By leveraging data and analytics, companies can better manage their supply chains, identifying where to improve energy efficiency, invest in renewable energies, and reduce waste.

 

Our goal is to explore how technology can illuminate the footprint of supply chain activities. By leveraging data and analytics, companies can better monitor and manage their supply chains, identifying areas where improvements can be made in terms of energy efficiency, investment in renewable energies, and waste reduction.

Why is energy a central theme in your work? 

My research critically examines energy efficiency and the transition to renewables as fundamental components of a sustainable energy shift. Despite the clear economic and environmental benefits of energy-efficient solutions, their adoption rates lag behind their potential. I'm particularly focused on bridging the energy efficiency gap, exploring how data analytics and strategic contracting can encourage the adoption of energy-efficient technologies, moving us closer to net-zero targets. This research, entitled “Performance-Based Contracts for Energy Efficiency Projects,” is funded by donors of the HEC Foundation’s Research Committee, whom I would like to thank.

What challenges does the renewable energy sector face? 

In addition to energy efficiency, a sustainable energy transition requires investment in renewable sources of energy. In this research, we focus on the renewable energy sector, particularly offshore wind energy, and investigate the delicate balance between maximizing energy production and mitigating environmental impacts. While it is essential to move towards renewable energy sources, there is a risk of overlooking long-term environmental consequences, such as waste management and the lifecycle footprint of renewable technologies. Drawing on lessons from past technological rushes, like the e-waste crisis, our research advocates for a lifecycle approach to renewable technology development, ensuring we don't overlook long-term environmental costs in the rush toward renewables.

What supply chain challenges are you focused on? 

Supply chain management is undergoing a significant transformation due to the tightening of due diligence regulations worldwide. These rules demand greater accountability and transparency from companies across all supply chain tiers, not just with direct suppliers. We examine how the relative bargaining power between suppliers and buyers influences the design of this legislation.

Navigating this shift is complex, as it involves understanding the dense network of global supply chain relationships, which span diverse legal and compliance landscapes.

Yet, this complexity also opens doors for innovation in supply chain management. Digital technologies, particularly data analytics and blockchain, are pivotal in ushering in a new era of transparency and accountability. Blockchain, for example, enables the creation of secure, immutable records, offering unprecedented traceability and verification capabilities across the supply chain.

This is where data analytics and AI can help solve these supply chain challenges, right? 

Indeed! AI and machine learning are game changers, improving supply chain forecasting, risk evaluation, and compliance. These technologies offer insights that can significantly enhance supply chain sustainability, including improved forecasting of disruptions, better evaluation of supplier risks, and enhanced social and environmental compliance. For instance, AI tools can process large datasets to forecast disruptions and highlight ethical concerns with suppliers, which is crucial for the resilience and sustainability of the supply chain. This reduces companies' exposure to non-compliance penalties under due diligence legislation.

Can you share a real-world example of these technologies in action?

The use cases are extremely diverse and effective. A cool example of this application is the initiative by CMA CGM, for which I am honored to hold the HEC Chair on Sustainability and Supply Chain Analytics. I am currently writing a case about their use of predictive analytics to protect marine life. The company uses advanced data analysis to predict the migration paths of whales and adjusts its shipping routes accordingly. This initiative demonstrates the potential of predictive analytics in reducing environmental impact.

What ethical and environmental risks come with AI? 

As we harness the power of AI, we must be vigilant about the potential unintended consequences, including the environmental impact of powering AI systems and the ethical considerations around data privacy and algorithmic bias. 

My research on investment in renewables advocates for a comprehensive approach that considers their full lifecycle and implications rather than just the immediate benefits. This same approach can be applied to the development and use of AI. It is crucial to consider ethical, environmental, and social impacts from the outset to ensure that our pursuit of technological advancement does not compromise our commitment to sustainability and ethical integrity.

How does your work with the Hi! PARIS Center support this mission? 

The Hi! PARIS Center is a vibrant hub where academia, industry, and policy intersect, providing a unique platform for interdisciplinary research at the intersection of AI and sustainability. Our collaborative initiatives, such as the Hi!ckathon - a hackathon we held last December on the impacts and uses of AI in supply chains - and several roundtables, demonstrate our commitment to using AI for positive environmental and social outcomes. The center fosters the exchange of ideas and encourages innovations that are technologically advanced yet grounded in sustainability principles. Ultimately, this contributes to a more resilient and efficient global supply chain.

 

*The Long-Term Costs of Wind Turbines, by Sam Aflaki, Atalay Atasu, and Luk N. Van Wassenhove, Harvard Business Review, February 20, 2024.

 

References: Working papers by Sam Aflaki (HEC Paris) and Ali Shantia (Toulouse Business School): “Transparency and Power Dynamics: A Game Theoretic Analysis of the Supply Chain Due Diligence Regulations”, with Sara Rezaee Vessal (ESSEC and HEC alumni); “Performance-Based Contracts for Energy Efficiency Projects”, with Roman Kapuscinski (University of Michigan).
Part 19

AI Must Be Governed Democratically to Preserve Our Future

HEC Paris professor Yann Algan provides some answers on how societies can confront the political risks and civic promises of artificial intelligence.


Photo Credits: halfpoint on 123rf

Watch the DECODING video (in French):

 

 

Who Leads and Funds AI-Democracy Initiatives 

The majority of AI-democracy initiatives are led either by civil society and foundations or by private companies. Governments and universities play a much smaller role in shaping these discussions. The most active AI-democracy initiatives are concentrated in the United States, far more than in Europe or other regions.

Figure: map of AI-democracy initiatives around the world

 

 

A disproportionate share of investments in AI and democracy comes from major tech giants, raising concerns about sovereignty and the privatization of democracy.

This analysis highlights an urgent need for public action. There is nothing inevitable about technology—it can be an opportunity, but only if citizens and public authorities actively engage with it.

 

Technology can be an opportunity, but only if citizens and public authorities actively engage with it.

We found three broad categories of initiatives, representing three different worldviews on the future of democracy in the AI era:

1. Protecting democracy from AI

This category includes initiatives, primarily from civil society and some governments, focused on safeguarding democracy from the risks posed by AI. These risks include deepfakes, polarization of public debate, emotional manipulation, and mental health concerns. AI can reinforce echo chambers, escalating emotions rather than fostering rational discourse.

There is also the question of "hypnocracy"—a digital governance system that manipulates public consciousness through AI-driven engagement algorithms. Are we at risk of creating a new "digital idiot," driven by AI algorithms that prioritize emotional reactions over critical thinking?

To counter these risks, initiatives in this group advocate for stronger regulatory frameworks, independent oversight mechanisms, and safeguards to protect democratic integrity.

2. Leveraging AI to Reinvent Democracy

A more optimistic approach focuses on how AI can enhance democratic processes. AI has the potential to reduce human biases in decision-making, particularly in governance, justice, and public administration. It can also foster large-scale citizen collaboration and improve public service efficiency.

For instance, AI can be used to generate collaborative policy recommendations based on citizen input, as seen in Taiwan’s use of AI-powered open consultations. AI can also enhance judicial fairness by mitigating cognitive biases in legal rulings. However, concerns remain about the potential dehumanization of public decision-making.

Recent advances in open-source AI - such as the models released by the Chinese company DeepSeek - illustrate both the opportunities and the challenges in ensuring democratic control over these technologies.

3. The Rise of Libertarian AI Governance

The third vision, largely promoted by American tech giants, embraces AI as a tool for a radically decentralized, algorithm-driven democracy. This perspective, influenced by figures such as Peter Thiel and Elon Musk, prioritizes efficiency and individual freedom over democratic safeguards.

In this model, AI and decentralized autonomous organizations (DAOs) could replace traditional governance structures, reducing the role of state institutions. However, this raises critical concerns:

•    If AI decisions are driven by private interests, who ensures fairness and accountability?
•    Should algorithms operate without democratic oversight?
•    Does the push for “liquid democracy” ultimately weaken democratic sovereignty?

A Balanced Approach: AI for Democracy, Democracy for AI

In our report, we propose a middle path:

  1. Addressing the roots of democratic dissatisfaction. The crisis of representative democracy is not caused by AI—it stems from citizens’ frustration with political and economic systems that fail to protect them. Addressing these underlying issues is essential.
  2. Rebuilding social trust. Digital platforms have isolated individuals, eroding real-world social interactions. We need to restore face-to-face democratic engagement through initiatives like local debate forums and public AI discussions (e.g., “Café IA”).
  3. Ensuring democratic control over AI. AI algorithms must be open-source, auditable, and governed by citizen oversight bodies. We propose creating citizens' assemblies dedicated to AI governance.
  4. Education as the foundation of AI democracy. AI literacy must start from an early age, not just to understand how algorithms work but to teach collaborative problem-solving and social resilience.

Our research underscores the urgency of democratic engagement with AI. Whether AI strengthens or undermines democracy will depend on how societies choose to govern and integrate it. AI must serve democracy, just as democracy must guide AI’s development.

 

AI and Democracy: The Coming Civilization, by Yann Algan and Gilles Babinet.
Gilles Babinet is an entrepreneur, a member of the French National Generative AI Committee, and teaches at HEC Paris.
Part 20

Fake News Spreads Fast - Platforms Must Act Faster

HEC Paris professor David Restrepo Amariles proposes answers for platforms and lawmakers seeking to rethink misinformation regulation now.


Photo Credits: peopleimages12/123rf

Why Meta’s Policy Shift Sparks Concern 

In early January 2025, Meta announced a controversial shift in its approach to misinformation, replacing independent fact-checkers on Facebook and Instagram with a Community Notes-style system. As the company framed it, the move is designed to support “more speech and fewer mistakes” by leveraging user contributions to contextualize misleading posts. These claims echo those made by X, which implemented similar policies after Elon Musk’s takeover. But our research on that platform shows how speed undermines such policies: falsehoods spread considerably faster than corrections.

Increasingly, we have seen how quickly fake news can upend financial markets and corporate reputations. In 2023, for example, a fabricated image purporting to show an explosion near the Pentagon rattled the U.S. stock market, causing a brief but impactful downturn. Then there was the notorious case of the fake tweet promising free insulin from Eli Lilly in November 2022, which cost the pharmaceutical multinational $22 billion in market value. This isn’t a new phenomenon - as far back as 2013, a fake report of explosions at the White House caused the S&P 500 to lose $130 billion in market capitalization within minutes.

Research on X’s Community Notes and their Limits

These examples demonstrate that fake news is more than an annoyance - it presents a significant social, economic, political and reputational threat. This is one of several conclusions from our years of research built on a database of around 240,000 notes from X’s (formerly Twitter) Community Notes program, a system in which users collaboratively provide context to potentially misleading posts. We sought to analyze the causal effect of appending contextual information to potentially misleading posts on their dissemination. While the program offers valuable insights into combating misinformation, our findings reveal critical limitations.

In this study, we found that Community Notes double the probability of a tweet being deleted by its creator. However, the note often arrives too late: around 50% of retweets happen within the first six hours of a tweet’s life. While Community Notes reduce retweets by more than 60% on average, the median note takes over 18 hours to be published - too slow to combat the initial, viral spread of misinformation. This is consistent with a 2018 MIT study showing that falsehoods can travel “10 to 20 times faster than facts”.

It also highlights a critical challenge: while community-driven fact-checking is a valuable tool, its current design and speed are insufficient to mitigate the rapid dissemination of fake news. And the latter is only getting faster.

The Way Forward: Leadership in the Age of Misinformation

Meta’s decision to replace independent fact-checkers with a Community Notes-style system on Instagram and Facebook highlights the urgency of addressing misinformation at scale. Its announcement sparked a wave of criticism, including an open letter to Mark Zuckerberg from the International Fact-Checking Network (IFCN) which warned of the increased risks of misinformation and its consequences for businesses and society. The letter underscored that this approach undermines accountability and could exacerbate the rapid spread of fake news, leaving businesses particularly vulnerable.

As our research demonstrates, these systems need to evolve to match the speed at which misinformation spreads. We believe that integrating AI-driven tools could significantly enhance human efforts, enabling faster detection and flagging of potentially harmful content. For example, machine learning models trained to identify patterns of misinformation can serve as an early-warning system, while large language models (LLMs) can complement these efforts by analyzing the linguistic and thematic patterns of viral posts to provide real-time contextualization.

This dual approach allows platforms and companies to respond to misinformation more effectively and in near real-time. Moreover, fostering partnerships between social media platforms, governments, and private entities could lead to more unified standards for combating fake news.
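
To make the early-warning idea concrete, here is a minimal, purely illustrative sketch: a simple text classifier trained on posts already labeled by fact-checkers, which flags risky new posts for priority human review. The example data, model choice, and threshold are assumptions for illustration, not the system described by the researchers.

```python
# Illustrative early-warning flagger for potentially misleading posts (not the authors' system).
# Assumes a corpus of posts already labeled by fact-checkers: 1 = misleading, 0 = legitimate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "BREAKING: explosion reported near the Pentagon, markets crashing",
    "Quarterly earnings call scheduled for Thursday at 10am",
    "Insulin is now free for everyone, effective immediately",
    "The company opened a new logistics hub in Lyon this week",
]
labels = [1, 0, 1, 0]

# TF-IDF captures word and phrase patterns; logistic regression scores new posts.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

new_post = "URGENT: explosion photo spreading online, stock futures dropping"
risk = model.predict_proba([new_post])[0, 1]
if risk > 0.5:  # threshold chosen purely for illustration
    print(f"Flag for priority human review (risk score: {risk:.2f})")
```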

Is Regulation Effective in Promoting Human-Centric AI?

Parallel to our research on fact-checkers, we have been exploring regulatory initiatives - such as Europe’s AI Act and the proposed AI Liability Directive - which aim to promote a human-centric approach to AI. Our recent study explores a dual approach to tackling excessive regulation and ineffective policy in the field of AI. We suggest that for AI regulation to promote a human-centric approach, human rights should be the main regulatory benchmark for assessing AI systems and balancing the purposes they serve in the market.

A first practical step is to require an explicit proportionality test. This test acknowledges that AI systems may impact human rights, such as privacy and non-discrimination, and requires developers to explicitly disclose the trade-offs between the optimisation strategies designed to achieve the business objectives pursued by the AI systems and their potential negative impact on human rights. 

Moreover, the proportionality test would also help make explicit the trade-offs between human rights themselves, such as in cases where content moderation is performed by algorithms. These algorithms, by determining whether or not to moderate potentially offensive messages, ultimately balance the rights to freedom of expression and non-discrimination.

Secondly, we suggest a co-evolutionary and life cycle approach which can help ensure accountability beyond the design stage. We propose to achieve this through meaningful human control and human-AI interaction across the entire lifecycle of the system. This allows decision-makers to constantly update and adapt AI systems to answer the challenges they identify during each phase. 

Staying Ahead of the Curve

In today’s fast-moving digital landscape, trust has become as valuable as revenue. The rapid spread of misinformation, amplified by market-driven platforms, presents both a risk and an opportunity for businesses and governments alike. Through research and real-world examples, we see that those who proactively address these challenges can foster both resilience and long-term integrity.

The way forward requires a blend of technological innovation and strategic collaboration. Businesses must integrate AI-driven tools to detect and mitigate misinformation faster than it can spread. However, technology alone is not enough. Leadership is also crucial. By adopting regulatory frameworks and implementing proportionality tests, organizations can ensure that human rights remain central to their AI strategies. 

This regulatory approach helps make explicit the trade-offs between business objectives and their potential impact on rights such as privacy and non-discrimination. Furthermore, continuous human oversight across the entire AI lifecycle ensures that systems can evolve in response to emerging risks and ethical concerns.

Businesses that stay ahead of the curve by investing in these strategies not only protect their reputations but also contribute to a more informed and resilient society. In doing so, they turn today’s crises into opportunities for innovation and leadership, shaping a future where trust and accountability are the cornerstones of success.

David Restrepo Amariles is HEC Associate Professor of Artificial Intelligence and Law, Hi! PARIS Fellow, and Worldline Chair Professor. Thomas Renault is Assistant Professor of Economics at University Paris 1 Panthéon-Sorbonne. Aurore Troussel Clément is a lawyer and HEC PhD candidate in AI and law.
Part 21

Making AI Understandable for Managers

How Vincent Fraitot’s latest book demystifies data science for decision-makers.


Managers Don’t Need to Code to Understand AI

Vincent Fraitot's book, La data science et l’IA accessibles aux managers (“Data Science and AI for Managers”), offers a concrete, example-based roadmap to help non-specialist managers navigate data science and artificial intelligence.

Artificial intelligence may be the most powerful business tool of the 21st century, but for many managers, it still feels like a black box. Without a background in math or programming, how can they confidently lead AI initiatives or engage with data scientists?

That’s the problem Vincent Fraitot, Academic Director of the MSc in Data Science and AI for Business at HEC Paris and École Polytechnique, set out to solve in his 2023 publication.

The book offers more than a simplified explanation of AI concepts: it delivers a toolkit designed to help managers make better strategic decisions in an increasingly data-driven world.

Bridging the Gap Between Strategy and Data

Fraitot was driven by a common frustration: most books on AI either jump too quickly into advanced math or rely heavily on programming tutorials.

“Beyond the first chapter, they almost always lose the reader,” he explains. “Unless you’re already fluent in Python or linear algebra, you’re out.”

His book takes a different route. It’s structured in two parts: a business-first section that focuses on strategy, value creation, and use-case selection, followed by a technical section that explains how data science works under the hood - without requiring any coding or math skills.

A Toolkit, Not a Textbook

What sets the book apart is its use of detailed case-based analogies and decision frameworks. For example:

  • To explain supervised learning, Fraitot uses a real-world HR case where a manager wants to predict which new recruits are likely to thrive in a company. Rather than throwing algorithms at the problem, the book shows how the manager must first define success criteria, identify biases in past data, and align AI metrics with business goals (a minimal sketch of this kind of exercise appears after this list).
  • When addressing unsupervised learning, the book includes a marketing case where managers want to segment clients. Instead of presenting clustering formulas, Fraitot uses visual diagrams to walk through the logic of grouping based on customer behavior - from clickstream analysis to loyalty programs.
  • In a chapter on model interpretability, he presents a procurement case involving automated supplier scoring, showing how transparency in AI outputs is critical for both trust and regulatory compliance.
These examples are not simplified - they’re concrete. And they’re designed to match the kinds of challenges that managers face on a day-to-day basis in sectors like retail, finance, logistics, or HR.
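
To give a flavor of the supervised-learning case, here is a minimal, hypothetical sketch - not taken from the book - in which the "thrived" label is defined by the business beforehand and the model's drivers are audited before any deployment.

```python
# Hypothetical sketch of a recruit-success prediction exercise (illustrative, not from the book).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# The "thrived" label must be defined by the business first
# (e.g., still employed and promoted after two years).
data = pd.DataFrame({
    "years_experience": [1, 4, 7, 2, 10, 3, 6, 8],
    "structured_interview_score": [62, 75, 88, 55, 91, 70, 80, 85],
    "referral": [0, 1, 1, 0, 1, 0, 1, 1],
    "thrived": [0, 1, 1, 0, 1, 0, 1, 1],
})

X, y = data.drop(columns="thrived"), data["thrived"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Audit which features drive the prediction: if past hiring favored referrals,
# the model will reproduce that preference unless it is checked.
print(dict(zip(X.columns, model.feature_importances_.round(2))))
```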

Designing for the Non-Tech Executive

Fraitot’s background in pedagogy (he recently won the prestigious Vernimmen Teaching Award at HEC Paris) is evident throughout the book. Every concept is paired with intuitive visuals, and key ideas are revisited in different contexts to reinforce understanding.

He also integrates lessons from his work at Hi! PARIS where he oversees pedagogical coordination and regularly works with executive education cohorts.

“The aim isn’t to replace the data scientist,” he says, “but to empower managers to ask better questions, build stronger teams, and avoid costly misalignments in data projects.”

 

The aim isn’t to replace the data scientist, but to empower managers to ask better questions.

 

Preparing for the Generative AI Shift

Published just before the explosion of generative AI tools like ChatGPT, the book focuses primarily on classic machine learning and data analytics. But Fraitot is already working on a sequel to address the new wave of capabilities - and risks - brought by generative models.

“The foundations still apply,” he insists. “But managers need new frames to evaluate these systems, especially in terms of creativity, hallucination, and intellectual property.”

Until then, Data Science and AI for Managers offers a timely and practical foundation for business leaders ready to take the first confident step into AI.


The book is available on Eyrolles.com.

 

Part 22

Students Massively Adopt AI Tools for Schoolwork

HEC Paris students are rapidly adopting generative AI in their academic work, prompting urgent reflections on ethics, pedagogy, and institutional strategy.

In 2024, two HEC students researched the use of Generative AI (GenAI) by their fellow students. Belarusian graduate Alena Hrynkevich conducted experiments showing that almost everyone is using GenAI in their dissertations. Lebanese student James Kandalaft, meanwhile, examined how HEC could redefine its value proposition to align with the expectations of students and employers in the face of technological disruption. We talked to both, starting with Alena. We also asked HEC Dean of Pre-experience Programs Yann Algan to respond to the students’ research conclusions.

Alena Hrynkevich

How did you proceed with these two surveys?

Alena Hrynkevich: We focused our first study on students taking the Artificial Intelligence and Behavioral Science Master course at the University of St. Gallen, where I spent a semester as part of my HEC double degree. Twenty participants took part in a vignette study: for the same real-life midterm-examination answer, they rated feedback written by GenAI and then feedback written by the professor on four quality dimensions (“Understandability”, “Usefulness”, “Specificity”, and “Empathy”).

Our second survey, conducted at HEC within a Master’s degree in International Management, involved 73 students from various programs - Grande Ecole, Masters of Science (mostly in finance, economics and marketing), and double degrees. They were in their early twenties, with 53% men and 46% women. I interviewed them on their use of GenAI for their Master’s thesis. Thesis writing involves a range of tasks where GenAI can assist, including idea generation, literature review, text optimization, data analysis, and content creation, which makes it a suitable context to explore GenAI’s impact on academic research and writing.

And what tools did they say they used for these tasks?

96% said they were using ChatGPT, 46% Grammarly, 11% Copilot, and 10% Gemini. And one third of them pay for one or several AI tools.

What did you find on the impact of GenAI on academic work?

GenAI helps improve thesis structure and coherence, particularly in literature review, grammar refinement, and content generation. It also improves efficiency and saves time for both students and professors. 

Students saved an average of 38% of time when writing with an AI tool. 

Do you think AI could replace the professors?

GenAI is a complementary tool, not a replacement. I started my first study by asking students whether they could tell if the feedback came from AI or from teachers. The study does not prove that AI is better than professors, but it does show that AI-generated feedback is comparable in quality. While AI can offer precise and empathetic responses, it cannot replace the personal mentorship and subject-matter expertise of educators.

Indeed, in my second study, students predominantly used AI for assistance rather than full automation, with 43% manually reviewing and editing AI-generated content and 26% using AI only for initial ideas. However, AI adoption can free up professors’ time for more impactful teaching and guidance.

What about ethical concerns and academic integrity?

When asked how they ensure the integrity of their GenAI use, 84% of students said they were aware of its potential academic integrity issues. Yet most (89%) did not discuss AI use with their supervisors.

Students may be reluctant to disclose their use of GenAI not just because they fear academic misconduct proceedings, but because there are no clear guidelines on AI’s permissible use in coursework and because its use may be perceived as a lack of effort, potentially affecting their grades and their relationship with the professor.

Were there any results surprising to you? 

I grew up using the Internet and e-learning tools, so I was not so surprised by how they use GenAI. Yet it helped me reflect on integrity. Most students say they didn’t discuss the use of AI with their supervisors, so we are not yet at the point where we understand how we should use AI - or even whether we should cite it or not. Basically, we don’t know how much freedom we should have as students.

I think GenAI is useful for organizing work and other tasks, but we can improve the way stakeholders in education collaborate to integrate AI ethically, with accountability and transparency. It’s difficult to say who is responsible for this, but it’s a reality we all share and we must be accountable.

An MBA candidate reimagines HEC Paris in an AI-driven world


James Kandalaft, MBA candidate at HEC Paris

In 2024, MBA student James Kandalaft provided a compelling overview of how HEC could leverage AI to enhance and redefine its programs. His 55-page study goes beyond just the usage of GenAI among students, exploring how the school can strategically redefine its value proposition in the face of AI and technological disruption. This includes industry trend analysis, curriculum benchmarking, and stakeholder insights from students, faculty, and employers.

Kandalaft’s work is a call to action: “Students will begin their journey with a fundamental course on AI, providing a strong foundation in this transformative technology,” he writes. “As they progress, they will dive into specialized electives, exploring the intricacies of machine learning, blockchain, and AI ethics. Picture students working hands-on in an AI Lab, collaborating on real-world projects, and gaining practical experience through partnerships with leading tech companies.”

Kandalaft also describes the benefits teachers can enjoy and concludes with a series of hands-on measures to align the school with the demands of the business world: “By integrating AI and technology, enhancing experiential learning opportunities, fostering innovation, and prioritizing ethical considerations, HEC Paris will maintain its competitive edge... This comprehensive approach ensures that HEC Paris remains at the forefront of business education, preparing its graduates for impactful and successful careers in an AI-driven world.”

Yann Algan reacts to the students’ research


Yann Algan, Dean of Pre-experience Programs at HEC Paris

These two investigative approaches once again showcase the creativity and excellence of HEC students - young minds capable of asking the right questions about society’s biggest challenges, in this case, AI, while also delivering solutions that avoid both overregulation and naïve techno-solutionism. 

Alena Hrynkevich’s thesis reveals two striking insights. First, the massive adoption of AI in thesis writing is already a reality: 96% of students use it to assist with writing, though not necessarily for creativity. GenAI is a complementary tool, not a replacement.

But here’s an even more eye-opening fact: a staggering 89% of students did not discuss their AI usage with their supervisors, particularly regarding integrity. This makes it urgent to initiate a collective debate among students, professors, and all stakeholders in higher education to ensure the most ethical and effective use of AI. So far, discussions have largely centered on AI’s impact on assessment, with little focus on how it can enrich the learning process itself.

And here’s the good news: our students already have solutions! Discover James Kandalaft’s original proposals on how HEC’s MBA program can better align with the evolving expectations of both students and employers by embracing AI and technological disruption.

Long live HEC students!

Yann Algan, also Professor of Economics, recently published “Trusting others: How unhappiness and social distrust explain populism”, as part of the 2025 World Happiness Report.
 

Part 23

Gaming the Machine: How Strategic Interactions Shape AI Outcomes

Can algorithms be fooled by the very people they’re meant to assess? HEC Paris researcher Atulya Jain provides some answers. 


Photo Credits: pitinan on 123rf

How we can trick AI 

AI algorithms, as machines, assume that the data they process is unbiased and comes from external sources. However, in many cases, the data is provided by individuals, who often distort it to serve their own interests. For instance, financial analysts may bias their predictions to earn higher commissions, while unqualified applicants pad their resumes with targeted keywords to get past automated filters.

How should we optimally respond to AI algorithms? Do these algorithms perform well against strategic data sources? Understanding these dynamics is crucial because they influence decision-making across sectors such as finance, online markets, and healthcare. My research aims to explore the interactions between self-interested individuals and algorithms, emphasizing the need for algorithms that account for strategic behavior. This approach can lead to the development of AI algorithms that perform better in complex and strategic environments.

With this challenge in mind, Vianney Perchet, a professor at the Centre de recherche en économie et statistique (CREST) at ENSAE, and I built a model based on game theory to analyze the interaction between a strategic agent and an AI algorithm.

How AI Can Be Tricked with Strategic Forecasts

Consider a repeated interaction between a financial analyst and an investor. Each day, the analyst forecasts the chances that an asset will be profitable. Analysts tend to inflate these chances in pursuit of commission from selling the asset. Therefore, the investor only wants to follow their recommendations if they are credible.

Consider an AI platform that uses a statistical test to verify how reliable the analyst’s predictions are. It only forwards the forecasts if they pass the “calibration test." This test checks whether the predictions match what actually happens. For example, if an analyst forecasts a 60% chance of profit over five days, the test checks if there was a profit on around three of those days. Calibration is essential for forecasting and is used to evaluate the accuracy of prediction markets.

Figure: the calibration test

 

Since the analyst must pass the calibration test, what strategies can she use to send forecasts? A knowledgeable analyst can always pass the calibration test by honest reporting. For example, suppose that if an asset is profitable today, there’s an 80% chance it will be profitable tomorrow; if it’s not profitable today, there’s a 20% chance it will be profitable tomorrow. The analyst can truthfully report these probabilities to pass the calibration test.

Are there other ways for the analyst to pass the test? Yes, she could garble (or add noise to) the truthful forecasts. For instance, she could randomize between reporting 60% and 40% in a way that still allows her to pass the calibration test. While the forecasts must remain accurate, they can be less precise (or informative) than the truthful forecasts. This implies that strategic forecasting is possible, allowing the analyst to achieve better outcomes than by simply being truthful.
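
The logic can be illustrated with a short simulation. The transition probabilities follow the example above; the garbling probabilities (2/3 and 1/3) are chosen here purely for illustration, as one garbling that remains calibrated, and are not figures from the paper.

```python
# Simulation sketch of calibrated truthful vs. garbled forecasts (illustrative only).
import random
random.seed(0)

P_UP = {True: 0.8, False: 0.2}   # chance tomorrow is profitable, given today's outcome
DAYS = 100_000

profitable = True
records = []                      # (truthful forecast, garbled report, realized outcome)
for _ in range(DAYS):
    truth = P_UP[profitable]
    if truth == 0.8:              # garble the truthful forecast into 0.6 or 0.4
        report = 0.6 if random.random() < 2 / 3 else 0.4
    else:
        report = 0.6 if random.random() < 1 / 3 else 0.4
    outcome = random.random() < truth
    records.append((truth, report, outcome))
    profitable = outcome

def calibration(index):
    """Empirical profit frequency for each distinct forecast value."""
    freq = {}
    for record in records:
        forecast, outcome = record[index], record[2]
        hits, total = freq.get(forecast, (0, 0))
        freq[forecast] = (hits + outcome, total + 1)
    return {f: round(hits / total, 2) for f, (hits, total) in freq.items()}

print("truthful:", calibration(0))  # ~{0.8: 0.8, 0.2: 0.2} - calibrated and informative
print("garbled: ", calibration(1))  # ~{0.6: 0.6, 0.4: 0.4} - calibrated but less precise
```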

Why No-Regret Learning Isn’t Foolproof 

We also looked at what happens when the investor uses no-regret learning algorithms for decision-making. Regret measures the difference between what one could have achieved and what one actually obtains. No-regret algorithms ensure that, in hindsight, an investor could not have done better by consistently making the same choice. However, we show that using a no-regret algorithm can lead to worse performance for the investor compared to relying on the calibration test.
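
As a toy illustration - with made-up payoffs, not the model from the paper - external regret is simply the gap between the best fixed action in hindsight and what the investor actually earned:

```python
# Toy computation of external regret (numbers invented for illustration).
payoffs = {                      # realized payoff of each action on each day
    "follow_analyst": [1.0, -0.5, 1.0, -0.5, 1.0],
    "stay_in_cash":   [0.0,  0.0, 0.0,  0.0, 0.0],
}
chosen = ["follow_analyst", "stay_in_cash", "follow_analyst",
          "follow_analyst", "stay_in_cash"]

actual = sum(payoffs[action][day] for day, action in enumerate(chosen))
best_fixed = max(sum(series) for series in payoffs.values())
print(f"actual = {actual}, best fixed action = {best_fixed}, regret = {best_fixed - actual}")

# A no-regret algorithm keeps this gap small on average as the horizon grows, but as the
# research shows, a small regret does not guarantee good payoffs against a strategic forecaster.
```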

We found that agents can manipulate the data to serve their own interests, which can cause the algorithm to perform poorly. Therefore, it is essential to understand who is supplying the data and what their motivations are. This underscores the pressing need to create performance benchmarks for algorithms in strategic environments.

The Case for Responsible, Strategic AI Design 

This research examines how individuals strategically interact with AI algorithms, focusing on data manipulation. By analyzing its impact on predictions and recommendations, this research opens new paths at the intersection of economics and computer science. 

These findings align with the mission of the Hi! PARIS Center to advance responsible AI research and to design robust AI systems that promote trust and reliability.

This is particularly relevant in industries like finance and healthcare, where biased recommendations can lead to significant societal implications, such as economic disparities and unequal access to services.

"Calibrated Forecasting and Persuasion", by Atulya Jain (supervised by Professor Tristan Tomala) and Vianney Perchet. This research is funded by the Hi! PARIS’s Center. Learn more here
