
In AI We Trust?

Information Systems

New research by HEC Paris faculty reveals a paradox: people trust AI advice more than human guidance, even when they know it’s flawed. 


Key findings:

  • Algorithmic advice is preferred over human advice — even when the suggestions are identical.
  • People show high trust in AI despite known flaws, a behavioral bias the researchers term “algorithm appreciation.”
  • Too much transparency reduces trust: when users were given overly complex performance data, reliance on AI dropped.
  • AI-based advice can aid high-stakes decisions, but overconfidence and blind trust pose ethical risks.
  • Moderation is key: providing relevant, digestible information improves AI uptake without triggering cognitive overload. 

Would you take advice from an algorithm — even if you knew it made mistakes?

Surprisingly, most people would. In a series of behavioral experiments, HEC Paris Professors Cathy Liu Yang and Xitong Li, along with Sangseok You of Sungkyunkwan University, discovered that people tend to follow advice from AI more readily than from humans — even when the advice is identical, and even when the AI’s imperfections are made explicit.

This effect, known as “algorithm appreciation,” has powerful implications across domains — from consumer behavior to courtroom decisions, cancer diagnosis, and beyond. But it also raises critical questions about over-reliance, information overload, and accountability when things go wrong. 

Algorithms Now Guide Everyday and High-Stakes Decisions

Machine recommendations drive 80% of viewing decisions on Netflix, and more than a third of purchase decisions on Amazon are influenced by algorithms. In other words, algorithms increasingly shape the daily decisions people make in their lives.

It isn’t just consumer decision making that algorithms influence. As algorithms appear in more and more settings, people are using them to make more fundamental decisions. Recent field studies, for example, show that decision makers follow algorithmic advice when making business decisions, providing medical diagnoses, and even deciding whether to release offenders on parole.

Why People Trust Machines Over Humans

People regularly seek the advice of others when making decisions: we turn to experts when we are unsure, and their input gives us greater confidence in our choices. AI now plays this advisory role in much of real-life decision making, and algorithms are ever more intertwined with our everyday lives. What we wanted to find out is the extent to which people follow the advice AI offers.

To investigate, we conducted a series of experiments measuring how closely people follow AI advice. They showed that people are more likely to follow algorithmic advice than identical advice offered by a human advisor, because they place greater trust in algorithms than in other people. We call this phenomenon “algorithm appreciation”.

When Transparency Backfires

We then asked whether people would follow AI advice even when the AI is not perfect. Our second series of experiments explored the conditions under which people are more or less likely to take advice from AI, testing whether trust in algorithms survives explicit awareness of the underlying AI’s prediction errors.

Surprisingly, when we informed participants in our study of the algorithm prediction errors, they still showed higher trust in the AI predictions than in the human ones. In short, people are generally more comfortable trusting AI than other humans to make decisions for them, regardless of known and understood imperfections in the process.

 

People are generally more comfortable trusting AI than other humans to make decisions for them, regardless of known and understood imperfections in the process, except when there is too much information about the algorithm and its performance.

 

There was an exception to this rule. We found that when transparency about the AI’s prediction performance became very complex, algorithm appreciation declined. We believe this is because too much information about the algorithm and its performance overwhelms people (cognitive load): presented with more underlying detail than they are able or willing to internalize, they discount the predictions. Kept to a digestible level, however, information about an AI makes people more likely to rely on it.

The Risk of Overconfidence in Automation 

If algorithms can generally make better decisions than people, and people trust them, why not rely on them systematically? Our research raises potential issues of overconfidence in machine decision making. In some cases, the consequences of a bad decision recommended by an algorithm are minor: if a person chooses a boring film on Netflix, they can simply stop watching and try something else. For high-stakes decisions that an algorithm might get wrong, however, questions of accountability come into play for human decision makers. Consider the miscarriage of justice at the UK Post Office, where more than 700 Post Office workers were wrongfully convicted of theft, fraud and false accounting between 2000 and 2014 because of faults in a computer system.

AI in the Courtroom and the Clinic

These risks aside, our research has important implications for medical diagnosis. Algorithmic advice can help wherever there is patient data to examine. AI can estimate the likelihood that a patient has cancer, say 60% or 80%, and the healthcare professional can factor that estimate into decisions about treatment. This can prevent a patient’s elevated risk from being overlooked by a human, leading to more effective treatment and the potential for a better prognosis.

In wider society, algorithms can help judges in the court system make decisions that support a safer society. Judges can be given algorithmic predictions of the likelihood that an offender will commit another crime, and use them to decide on the length of a sentence.

Methodology

To explore how and why transparency about performance influences algorithm appreciation, we conducted five controlled behavioral experiments, each recruiting more than 400 participants via Amazon's Mechanical Turk. In each experiment, participants performed a prediction task: they predicted a target student’s standardized math score from nine pieces of information about the student, first on their own and then again after being presented with advice generated by an algorithmic prediction of that score.
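This pre-advice/post-advice design lends itself to a standard “shift toward advice” measure from the advice-taking literature. The sketch below is illustrative only: the function name is hypothetical and this is not necessarily the exact metric the authors used.

```python
def advice_taking(initial: float, advice: float, final: float) -> float:
    """Fraction of the distance toward the advice that a participant moved.

    0.0 means the advice was ignored entirely; 1.0 means it was adopted fully.
    Returns 0.0 when the initial estimate already equals the advice.
    """
    if initial == advice:
        return 0.0
    return (final - initial) / (advice - initial)

# Example: a participant first predicts a math score of 60, the algorithm
# advises 80, and the participant revises to 75 after seeing the advice:
# they moved 75% of the way toward the algorithm's prediction.
shift = advice_taking(initial=60, advice=80, final=75)
print(shift)  # 0.75
```

Averaging such a score across participants gives one simple way to compare reliance on algorithmic versus human advisors under different transparency conditions.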

Applications

Where firms need to make investment decisions, employees will trust AI to help inform those choices. With good data and solid, well-thought-out underlying algorithms, this has the potential to save businesses a lot of money.

Based on an interview with HEC Paris professors of Information Systems Cathy Liu Yang and Xitong Li on their paper “Algorithmic versus Human Advice: Does Presenting Prediction Performance Matter for Algorithm Appreciation,” co-written with Sangseok You, Assistant Professor of Information Systems at Sungkyunkwan University, South Korea, and published online in the Journal of Management Information Systems, 2022. This research work is partly supported by funding from the Hi! PARIS Fellowship and the French National Research Agency (ANR)'s Investissements d'Avenir LabEx Ecodec (Grant ANR-11-Labx-0047).
