

Why People Trust AI More Than Humans — Even When It’s Flawed

New research by Cathy Yang, Xitong Li and Sangseok You reveals a paradox: people trust AI advice more than human guidance, even when they know it’s flawed.

Key findings
  • People show high trust in AI despite known flaws, a behavioral bias the researchers term “algorithm appreciation.”
  • Too much transparency reduces trust: when users were given overly complex performance data, reliance on AI dropped.
  • AI-based advice can aid high-stakes decisions, but overconfidence and blind trust pose ethical risks.
  • Moderation is key: providing relevant, digestible information improves AI uptake without triggering cognitive overload.

Would you take advice from an algorithm — even if you knew it made mistakes? Surprisingly, most people would. In a series of behavioral experiments, we discovered that people tend to follow advice from AI more readily than from humans — even when the advice is identical, and even when the AI’s imperfections are made explicit.

This effect, known as “algorithm appreciation,” has powerful implications across domains — from consumer behavior to courtroom decisions, cancer diagnosis, and beyond. But it also raises critical questions about over-reliance, information overload, and accountability when things go wrong.

Algorithms Now Guide Everyday and High-Stakes Decisions

Machine recommendations drive 80% of viewing decisions on Netflix, and algorithms influence more than a third of purchase decisions on Amazon. In other words, algorithms increasingly shape the daily decisions that people make in their lives.

It isn’t just consumer decision making that algorithms influence. As algorithms appear in ever more contexts, people use them to make increasingly consequential decisions. Recent field studies, for example, show that decision makers follow algorithmic advice when making business decisions, providing medical diagnoses, and even deciding whether to release prisoners on parole.

Why People Trust Machines Over Humans

People regularly seek the advice of others when making decisions: we turn to experts when we are unsure, and their guidance gives us greater confidence in our choices. AI increasingly plays this advisory role as algorithms become ever more intertwined with our everyday lives. What we wanted to find out is the extent to which people follow the advice offered by AI.

To investigate the matter, we conducted a series of experiments. They showed that people are more likely to follow algorithmic advice than identical advice offered by a human advisor, because they place higher trust in algorithms than in other humans. We call this phenomenon “algorithm appreciation”.

When Transparency Backfires

We wanted to find out more: would people follow AI advice even if the AI is not perfect? Our second series of experiments explored the conditions under which people are more or less likely to take advice from AI. In particular, we tested whether people would still place greater trust in algorithms when they were aware of the underlying AI’s prediction errors.

Surprisingly, when we informed participants in our study of the algorithm’s prediction errors, they still showed higher trust in the AI predictions than in the human ones. In short, people are generally more comfortable trusting AI than other humans to make decisions for them, even when the imperfections of the process are known and understood.

People trust AI more than humans to decide for them, even knowing its flaws — unless they’re given too much information about how the algorithm works.

There was an exception to this rule. We found that when transparency about the AI’s prediction performance became very complex, algorithm appreciation declined. We believe this is because too much information about the algorithm and its performance can overwhelm a person (cognitive load), which impedes advice taking: people discount predictions when they are presented with more underlying detail than they are able or willing to internalize. As long as we do not overwhelm people with information about the AI, however, they are more likely to rely on it.

The Risk of Overconfidence in Automation

If algorithms can generally make better decisions than people, and people trust them, why not rely on them systematically? Our research raises the issue of overconfidence in machine decision making. In some cases, the consequences of a bad algorithmic recommendation are minor: a person who chooses a boring film on Netflix can simply stop watching and try something else. For high-stakes decisions that an algorithm might get wrong, however, questions of accountability come into play for human decision makers. Remember the miscarriage of justice at the UK Post Office, where more than 700 Post Office workers were wrongfully convicted of theft, fraud and false accounting between 2000 and 2014 because of faults in the Horizon computer system.

AI in the Courtroom and the Clinic

Our research also has important implications for medical diagnosis. Algorithmic advice can help wherever there is patient data to examine. AI can estimate the likelihood that a patient has cancer (say, 60% or 80%), and the healthcare professional can factor that estimate into decisions about treatment. This can prevent a patient’s elevated risk from being overlooked by a human, leading to more effective treatment and the potential for a better prognosis.

In wider society, algorithms can help judges in the court system make decisions that promote public safety. Judges can be given algorithmic predictions of the likelihood that an offender will reoffend, helping them decide the length of a sentence.

Methodology

To explore how and why transparency about performance influences algorithm appreciation, we conducted five controlled behavioral experiments, each recruiting more than 400 participants via Amazon's Mechanical Turk. In each experiment, participants performed a prediction task: estimating a target student’s standardized math score from nine pieces of information about the student, first on their own and then again after seeing the algorithm’s prediction of that score.
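For readers who want to see how advice taking is typically quantified in this kind of judge-advisor task, a standard measure is the “weight of advice” (WOA): the fraction of the distance between a participant’s initial estimate and the advisor’s prediction that the revised estimate covers. The paper’s exact measure is not reproduced here; the short Python sketch below is purely illustrative, and the function name and clipping convention are our assumptions.

```python
# Illustrative sketch (not the authors' code): "weight of advice" (WOA),
# a standard measure of advice taking in judge-advisor experiments.
# WOA = (final - initial) / (advice - initial):
#   0 -> the advice was ignored; 1 -> the advice was adopted fully.

def weight_of_advice(initial: float, advice: float, final: float):
    """Return WOA clipped to [0, 1], or None if the advice equals the initial estimate."""
    if advice == initial:
        return None  # no shift toward the advice is measurable in this case
    woa = (final - initial) / (advice - initial)
    return max(0.0, min(1.0, woa))  # clipping overshoots is a common convention

# Example: a participant first predicts a math score of 520, sees the
# algorithm's prediction of 600, then revises their estimate to 580.
print(weight_of_advice(initial=520, advice=600, final=580))  # prints 0.75
```

Averaged across participants, a measure like this lets researchers compare how much weight people give to advice labeled as coming from an algorithm versus identical advice labeled as coming from a human.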

Applications

Where firms need to make investment decisions, employees will trust AI to help inform those choices. With good data and solid, well-thought-out underlying algorithms, this has the potential to save businesses a lot of money.

Sources

Based on an interview with HEC Paris professors of Information Systems Cathy Liu Yang and Xitong Li on their paper “Algorithmic versus Human Advice: Does Presenting Prediction Performance Matter for Algorithm Appreciation?”, co-written with Sangseok You, Assistant Professor of Information Systems at Sungkyunkwan University, South Korea, and published online in the Journal of Management Information Systems, 2022. This research was partly supported by funding from the Hi! PARIS Fellowship and the French National Research Agency (ANR)'s Investissements d'Avenir LabEx Ecodec (Grant ANR-11-Labx-0047).

Meet the Author
Prof. Cathy Liu Yang
Assistant Professor - Information Systems

Cathy Liu Yang's research interests broadly lie in preference measurement and social networks. Her dissertation investigated the impact of incentives on consumers' information processing behaviors and preferences.

Meet the Author
Prof. Xitong Li
Professor - Information Systems

Xitong Li's research interests include the use of online data/information and the identification of the causal impacts of using online data/information or social media. Specifically, he has two research streams. One uses applied econometric (mainly reduced-form) methods for causal inference, and...

Meet the Author
Sangseok You
Assistant Professor - Information Systems

Sangseok You's research focuses on understanding how teams working with technologies operate and promote team outcomes. His research topics encompass human-robot collaboration, artificial intelligence, and virtual and distributed collaboration in open-source software contexts. Sangseok You was...
