
©Andrey Armyagov - Adobe Stock

How to Improve Decision Making

This in-depth dossier features the latest cutting-edge research findings on decision making from HEC Paris professors. We hope that the tools presented will help you think about your decision making from new angles and develop appropriate strategies for various situations, especially in these times of uncertainty.

Structure

Part 1
Yes, You Can Be Trained To Make Better Decisions
Mental distortions known as cognitive biases often shift our judgement away from rational prescriptions. While such biases are normal – it's just the way our brains are wired – they can lead to poor choices, sometimes with disastrous consequences. But new evidence shows how simple training can help us identify these biases and tremendously improve decision making.
Part 2
Decision Making: Do You Need a Decision Theorist… or a Shrink?
Human beings are notoriously bad at making rational decisions. Even theoretical models designed to help you find the “right” answer are limited in their applications. A trio of researchers calls for a re-appraisal of decision theory, arguing that basic tools can improve decision making by challenging underlying assumptions and uncovering psychological biases.
Part 3
How to Deal with Severe Uncertainty?
Severe uncertainty, deep uncertainty, radical uncertainty, ambiguity… different actors in a range of fields – decision scientists, risk analysts, climate scientists, central bankers – use a variety of phrases to talk of some extreme, important yet too often ignored form of uncertainty. But what is it? And how should we deal with this particular species of uncertainty: how should we characterise it, communicate it, and decide in the face of it? In this interview, CNRS Research Director and HEC Paris Research Professor Brian Hill explains the concept and unveils applicable tools based on theoretical models for guiding decisions in situations of severe uncertainty.
Part 4
The Uncertainty Across Disciplines Project
We, individuals and society, are faced today with many important decisions involving radical degrees of uncertainty. To better communicate the current state of knowledge about uncertainty, and incorporate it into decisions, Brian Hill, CNRS and HEC Paris Professor of Economics and Decision Sciences, initiated the Uncertainty Across Disciplines project.
Part 5
The Impact of Overconfidence and Attitudes towards Ambiguity on Market Entry
For many people who start their entrepreneurial adventure, the biggest challenge is believing in themselves. Yet for those who choose this path, confidence can also lead entrepreneurs to underestimate actual business risks, resulting in fatal decisions. Researchers from HEC Paris Business School and Bocconi University offer a new explanation for why decision makers often appear too confident, and shed light on the consequences of this characteristic.
Part 6
Thinking About Time Flying? It Can Affect Your Decision Making
When the clock in our minds ticks loudly, it changes not only our perspective of the time remaining in our lives, but also how we process information. A trio of researchers investigated how thinking about the concept of time can affect our decision making. This unique piece of research could explain biases in hiring, voting, and many other contexts.
Part 7
A New Theory in Economics Helps Predict Future Events
When will the next financial crisis occur? Who is going to win the next US presidential election? How do we form beliefs about such events? By understanding how probabilistic beliefs form, economic theorists can now explain and predict phenomena that depend on rational beliefs. The latest research by Rossella Argenziano and Itzhak Gilboa equips economic modeling with a theory and a set of tools of belief formation, based on statistics and psychology. Immediate applications include equilibrium selection in coordination games.
Part 8
Is It Rational to Stockpile in Times of Crisis?
The health crisis caused by COVID-19 has triggered an economic one. A significant portion of the population fears shortages of basic consumer goods, and marked stockpiling behavior can be observed. Because such behavior increases the risk of shortage, several stores have decided to ration some goods, and governments have had to make public announcements to reassure consumers that there would be no shortage. Avoiding consumer stockpiling is hence a key aspect of managing this crisis. But is it rational to stockpile in times of crisis? We review and discuss the rational and irrational aspects of such behavior.
Part 9
Decision Making That Reflects Your Strategy
Business decisions are not always in line with company strategy. Researchers Olivier Sibony et al. explore what lies behind counterproductive business decisions and outline guidelines for designing better strategic decision processes.

Part 1

Yes, You Can Be Trained To Make Better Decisions

Decision Sciences

Mental distortions known as cognitive biases often shift our judgement away from rational prescriptions. While such biases are normal – it's just the way our brains are wired – they can lead to poor choices, sometimes with disastrous consequences. But new evidence shows how simple training can help us identify these biases and tremendously improve decision making.


Despite its incredible abilities, our brain is often fooled into making seemingly irrational decisions because of certain biases in the way it processes information. Decision making is complex, so we take mental shortcuts based on our emotions, experience or just the way information is framed. We tend to see patterns where there aren't any (clustering illusion), be overly optimistic about our own abilities (overconfidence bias), follow the judgement of others (bandwagon effect) and so on. Scientists regularly remind us of the many ways cognitive biases interfere with the choices we make. 

How does cognitive bias affect decision making?

It can cloud our judgement and lead to disastrous choices. Cognitive bias has practical ramifications beyond private life, extending to professional domains including business, military operations, political policy, and medicine. Some of the clearest examples of the effects of bias on consequential decisions feature the influence of confirmation bias on military operations. Confirmation bias – that is, the tendency to conduct a biased search for and interpretation of evidence in support of our hypotheses and beliefs – has contributed to the downing of Iran Air Flight 655 in 1988 and the decision to invade Iraq in 2003.

So are we doomed to make terrible decisions? 

Daniel Kahneman, 2002 Nobel Memorial Prize in Economic Sciences

Ever since Daniel Kahneman and Amos Tversky formalized the concept of cognitive bias in 1972, most empirical evidence has given credence to the claim that we are incapable of improving our own decision-making abilities. However, our latest field study, published in Psychological Science in September 2019, suggests that a one-shot debiasing training can significantly reduce the deleterious influence of cognitive bias on decision making. We conducted our experiment in a field setting involving 290 graduate business students at HEC Paris. In our experiment, a single training intervention reduced biased decision making by almost a third.

How much does (or could) this improve decision making? 

The results of our paper – led by Professors Anne Laure Sellier (HEC Paris), Irene Scopelliti (City University of London) and Carey K. Morewedge (Boston University) – establish a clear link between cognitive bias reduction training and improved judgment/decision-making abilities in a high-risk managerial context. Our results could have far-reaching consequences for everyday choices, but also for crucial and high-stakes decisions. At a military level, it could help avoid some of the deadly errors the US Armed Forces committed in the past. As American educator Ben Yagoda pointed out in his compelling article in The Atlantic last year, without confirmation bias, the US might not have believed Iraq possessed weapons of mass destruction and decided to invade it in 2003. As the official 2005 report to George W. Bush put it: “The disciplined use of alternative hypotheses could have helped counter the natural cognitive tendency to force new information into existing paradigms.”

The results of our paper establish a clear link between cognitive bias reduction training and improved judgment/decision-making abilities in a high-risk managerial context.

Which particular biases can be attenuated and how?

Our research focuses on one particular training intervention, which had produced large and long-lasting reductions of confirmation bias, correspondence bias, and the bias blind spot in the laboratory. Our intervention was originally created for the Office of the Director of National Intelligence and was designed to reduce bias in US government intelligence analysts.

The intervention involved playing a serious game that gives players personalized feedback and coaching on their susceptibility to cognitive biases. The training elicited biases from players during game play, and then defined each bias. It gave examples of how each bias influenced decision making in professional contexts (e.g., intelligence and medicine), explained to participants how their choices may have been influenced by the biases, and provided participants with strategies to avoid bias and practice opportunities to apply their learning to new problems.

How exactly did you train the participants in your study?

Before or after they played the serious game, students from three Master’s programs at HEC Paris were asked to crack Carter Racing, a complex business case modelled on the fatal decision to launch the Space Shuttle Challenger, which disintegrated shortly after take-off in 1986. Each participant acted as the lead of an automotive racing team making a high-stakes, go/no-go decision: remain in a race or withdraw from it. The case is designed so that its surface features suggest the team should race, but careful analysis of the case evidence reveals that racing will have catastrophic consequences for the team. We measured the effects of cognitive bias reduction training to see if the intervention improved decision making in the case. Would trained participants decide to race, or not? Crucially, trainees were not aware that their decision making would be examined for bias.

Can such training truly improve judgement?

The results were promising. Participants trained before completing the case were 29% less likely to choose the inferior hypothesis-confirming solution (i.e., to race) than participants trained after completing the case. This result held when we controlled for individual differences including gender, work experience, GMAT scores, GPA, and even participants' propensity for cognitive reflection (i.e., their tendency to override an incorrect “gut” response and engage in further reflection leading up to a correct answer). Our analyses of participants’ justifications for their decisions suggest that their improved decision making was driven by a reduction in confirmatory hypothesis testing. Trained participants generated fewer arguments in support of racing—the inferior case solution—than did untrained participants.
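To make “controlled for individual differences” concrete, here is a minimal sketch of that kind of analysis: a logistic regression of the decision on the training condition plus covariates. It runs on simulated data; the variable names and effect sizes are hypothetical, not the study’s actual data or code.

```python
# Illustrative only: simulated data, hypothetical variable names and effects,
# not the study's dataset or analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 290  # same order of magnitude as the study's 290 participants

df = pd.DataFrame({
    "trained_first": rng.integers(0, 2, n),          # 1 = trained before the case
    "work_experience": rng.normal(3, 2, n).clip(0),  # years (hypothetical)
    "gmat": rng.normal(650, 50, n),
    "cognitive_reflection": rng.integers(0, 4, n),   # e.g., a 0-3 CRT-style score
})
# Assumed effect for the simulation: training lowers the odds of the biased,
# hypothesis-confirming choice ("race").
logit_p = 0.8 - 1.0 * df["trained_first"] + 0.002 * (df["gmat"] - 650)
df["chose_to_race"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Does training still predict the decision once individual differences
# are controlled for?
model = smf.logit(
    "chose_to_race ~ trained_first + work_experience + gmat + cognitive_reflection",
    data=df,
).fit(disp=False)
print(model.summary())
```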

Our results provide encouraging evidence that training can improve decision making in the field, generalizing to consequential decisions in professional and personal life. Trained participants were more likely to choose the optimal case solution, so training improved rather than impaired decision making.

How applicable are your (lab-tested) results in the wider world?

Of course, our findings are limited to a single field experiment. More research is needed to replicate the effect in other domains and to explain why this game-based training intervention transferred more effectively than other forms of training tested in past research. Games may be more engaging than lectures or written summaries of research findings. Another possibility is that the game provided intensive practice and personalized feedback. A third possibility is the way the intervention taught players about biases: training may be more effective when it describes cognitive biases and how to mitigate them at an abstract level, and then gives trainees immediate practice testing out their new knowledge on different problems and contexts.

Games may be more engaging than lectures or written summaries of research findings.

People were debating how to overcome the many ways in which we deviate from rationality well before the concept of cognitive bias was first coined decades ago. The general conclusion has been that decision making cannot be improved within persons, and that the only way to reduce bias is through changes to the environment, like nudges. In September 2018, Nobel laureate Daniel Kahneman said, “You can’t improve intuition. Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning… Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument, the rules go out the window.”

We believe our results show, fortunately, that this conclusion may be premature. Training appears to be a scalable and effective intervention that can improve decisions in professional and personal life.

Article based on an interview with Anne Laure Sellier of HEC Paris and on her paper, “Debiasing Training Improves Decision Making in the Field”, co-authored by Irene Scopelliti, of City University of London and Carey K. Morewedge of Boston University.

Part 2

Decision Making: Do You Need a Decision Theorist… or a Shrink?

Decision Sciences

Human beings are notoriously bad at making rational decisions. Even theoretical models designed to help you find the “right” answer are limited in their applications. A trio of researchers calls for a re-appraisal of decision theory, arguing that basic tools can improve decision making by challenging underlying assumptions and uncovering psychological biases.


©rudall30 on Adobe Stock

Is it worth insuring my house against hurricane damage? Which route will help me beat traffic? Should I invest in this stock? Latte, black, cappuccino, mocha or vanilla? Every day, we are faced with hundreds of decisions, some big, some small, some tough, some easy. Sometimes we follow our instinct, sometimes our intellect, sometimes we just go with habit. But more important than how we choose between various options is the question: how should we choose?

Decision theory offers a formal approach, often seen as a rational way to handle managerial decisions. While this theoretical framework has not lived up to early expectations, failing to provide the “right” answer in every case, a trio of researchers says not to throw the baby out with the bathwater just yet.

Decision making has been formalized and useful, but…

Decision making has been formalized since the Age of Enlightenment, a famous early example being Blaise Pascal's wager about the existence of God. Decision theory and its key concepts (utility, or the desirability of an outcome; states of the world, or possible scenarios; etc.) culminated in the mid-20th century with the invention of game theory and the development of mathematical tools of analysis.

“In the 1950s there was the idea that mathematical models could automate decisions,” says Itzhak Gilboa, professor of decision science at HEC Paris. “There has been a measure of success, with applications to logistics, or, for example, to route optimization with Google Maps.”

And yet, today, decision theory is all but dismissed, including in business circles. Olivier Sibony, who worked as a management consultant for 25 years before joining HEC Paris to teach strategy, says he literally never encountered decision theory in those 25 years, either in words or practice, the exception being within a minority of financial institutions. “It's shocking,” he muses, “because it is taught in business schools as a sensible way to make decisions.” 

…Decision theory has its limits 

The textbook model of decision theory, however enticing and elegant it may be, has a number of limitations that prevent it from being widely used by managers.

The theoretical model raises some very practical challenges. Probability is often hard to calculate due to a lack of data about identical past problems. Similarly, the desirability of an outcome, such as a career choice, is hard to quantify because of the wealth of criteria by which it is judged: income, prestige, work-life balance...

What’s more, behavioral psychology has shown that human beings, far from being the rational agents assumed by economic theory, are hopelessly irrational. Confirmation bias makes us prone to disregard negative data about the option we are considering; overconfidence makes us consistently overestimate our chances of success; mental accounting makes us value equivalent outcomes differently depending on the way they are framed; and on and on.

The list of psychological biases we suffer from is so long, it's a miracle that we haven't blundered ourselves into extinction, as a race. “But we are teetering on the brink of just that!” counters Olivier Sibony. 

 

The list of psychological biases we suffer from is so long, it's a miracle that we haven't blundered ourselves into extinction, as a race.

 

And just because the world functions relatively well doesn't mean we have been good at making decisions, including in business, where success often boils down to sheer luck. “Even a billionaire like Warren Buffett acknowledges the role of luck in his success,” adds Sibony. “We do observe a lot of failures; after all, millions of years of evolution have prepared us to recognize rotten food, but not rotten counterparties,” the two HEC professors joke.

For a rehabilitation of the basic tools of decision theory

Recognizing all the limitations of decision theory, the specialists nonetheless believe that certain tools can be helpful. 

The axioms of rational decision making are especially important in the context of strategic decisions made by managers and executives, who might need to present and justify decisions to their superiors or boards. 

 

Decision theory is not a magic wand for a final answer. It should be used as a conceptual framework, or tool, rather than as a theory that is directly applicable.

 

Decision theory is not a magic wand for a final answer. It should be used as a conceptual framework, or tool, rather than as a theory that is directly applicable. The researchers outline three different types of decisions and how decision theory can potentially serve in each of those cases:

1. In the first type of decision, outcomes and probabilities are clear and all relevant inputs are known or knowable, which means that finding the best solution is simply a matter of using mathematical analysis based on classical decision theory. Simple computing power can find the single best solution (optimize a route or, in the case developed in the research article, allocate sales reps to territories according to travel costs; see the sketch after this list). The decision maker need not even know the details of the algorithm that the software uses.

2. In the second type of decision, the desired outcome is clear but not all of the relevant inputs are known or knowable. In this case, decision theory cannot provide a single best answer but can test the consistency of the reasoning by formulating the decision maker’s goals, constraints, and so on, to check whether the reasoning makes sense. 

3. In the third type of decision, either because data is missing or because the logic of the proposed decision cannot be articulated, even the desired outcome is unclear. In such a case, the problem cannot be described in the language of decision theory. But, while theory cannot provide a “correct” answer, it can still serve to test the intuition and logic of the decision maker.

There may be no objective way to assign precise probabilities to different scenarios, or even to identify all the possibilities, but the theory can still potentially challenge underlying assumptions or processes. 
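For the first type, a short sketch shows how the decision reduces to computation once all inputs are quantified. The reps, territories, and travel costs below are hypothetical, not data from the research article; a brute-force search stands in for the solver a real application would use.

```python
# A minimal sketch of a "type 1" decision: every input is known, so software
# can simply optimize. All data here is made up for illustration.
from itertools import permutations

reps = ["Ana", "Ben", "Chloe"]
territories = ["North", "South", "East"]
# travel_cost[rep][territory]: known, quantified inputs.
travel_cost = {
    "Ana":   {"North": 4, "South": 9, "East": 6},
    "Ben":   {"North": 7, "South": 3, "East": 8},
    "Chloe": {"North": 5, "South": 6, "East": 2},
}

def total_cost(assignment):
    return sum(travel_cost[rep][terr] for rep, terr in assignment)

# Exhaustive search over all one-to-one assignments; fine for small inputs,
# while real solvers would use the Hungarian algorithm or linear programming.
best = min(
    (tuple(zip(reps, perm)) for perm in permutations(territories)),
    key=total_cost,
)
print(best, "-> cost:", total_cost(best))
```

As the text notes, the decision maker can treat this entirely as a black box: the interesting managerial questions only start with the second and third types.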

“If you want to be in a certain market just for your ego, fine, but it's my job to uncover it!” says Itzhak Gilboa, comparing the process to “sitting down with a shrink before you press the button”. The idea is simply to understand one's own motivations for a decision and to be comfortable enough with them to explain the rationale to one's own boss. The researcher likes to think of the approach as a “humanistic project”, improving decisions in a way that will ultimately be useful to society – “even if business decisions are rarely life and death matters!”

Applications

The researchers say the most important idea to retain is that of challenging decision making. When it comes to the second and third type of decisions, where an algorithm cannot simply identify the best solution for you, the researchers recommend collaborating with someone who has a firm grasp of decision theory – someone who knows, for example, what a utility function is, or desirability of outcome, and so on – to challenge your decision-making process. "The best thing you can do to improve the quality of a decision is to ask an outsider to challenge not the decision itself but the process and its logic,” says Olivier Sibony. “There are very practical ways of getting theory and practice to dialogue, by setting up routines and methods."

Methodology

The paper first reviews the main principles and concepts of decision theory and explores its limitations to explain why it is not currently used in business decision making. The researchers then make a case for decision theory as a conceptual framework whose tools can be used to support and refine intuition, and give examples of applications through three imaginary dialogues with executives faced with three different business cases.
Based on an interview with Itzhak Gilboa and Olivier Sibony on their research paper “Decision theory made relevant: Between the software and the shrink,” co-authored with former HEC PhD student Maria Rouziou (Research in Economics, 2018). To find out more about how to use decision theory to challenge your decision making, read the full paper here.

Part 3

How to Deal with Severe Uncertainty?

Decision Sciences

Severe uncertainty, deep uncertainty, radical uncertainty, ambiguity… different actors in a range of fields – decision scientists, risk analysts, climate scientists, central bankers – use a variety of phrases to talk of some extreme, important yet too often ignored form of uncertainty. But what is it? And how should we deal with this particular species of uncertainty: how should we characterise it, communicate it, and decide in the face of it? In this interview, CNRS Research Director and HEC Paris Research Professor Brian Hill explains the concept and unveils applicable tools based on theoretical models for guiding decisions in situations of severe uncertainty.


©icedmocha on Adobe Stock

What is severe uncertainty?

A central characteristic of severe uncertainty is the lack of justified probabilities. When tossing a coin, we know precisely the probability of heads. Economists standardly assume that all uncertainties are glorified coin tosses: we can come up with a precise probability for whatever might happen (even if we might not always be right about it). But clearly many real-life situations are just not like that. There are many cases where we don’t know something for sure, and, though that doesn’t necessarily mean that we know nothing at all, what we do know is not enough to justify a solid, precise probability.

 

A central characteristic of severe uncertainty is the lack of justified probabilities.

 

What’s the coronavirus mortality rate? We know that it’s worse than the flu, and below 15%, but beyond that? Can we give a number we are 90% sure about? How fast will the global economy recover to turn-of-the-year GDP levels, or the Dow Jones to its pre-Covid-19 levels? They will almost surely not be there by September, but beyond that? Can we put precise probabilities? What will happen to sea level in, say, New York over the next 30 years? Given our understanding of climate change, we know it will rise, and almost certainly by less than 4m, but beyond that?  

Why is severe uncertainty relevant now?

Severe uncertainty is especially relevant now because we increasingly face situations involving it. Examples abound, including climate mitigation policy, Coronavirus reaction, economic policy, and of course business decisions. I should also add that this is being increasingly recognized, with the ex-governor of the Bank of England, Lord King, having just published a book on Radical Uncertainty with John Kay.

 

These decisions don’t allow us the time to do that: we have to respond to the Coronavirus before fully understanding it.

 

What do all these examples have in common? Urgency. Since the problem is lack of knowledge, one instinctual response would be to go out and do (more) research. But these decisions don’t allow us the time to do that: we have to respond to the Coronavirus before fully understanding it; by the time we know the sea level in New York in 2050 it might be too late to save it from flooding; and so on.

"Can we put precise probabilities? What will happen to sea level in, say, New York over the next 30 years?" (Photo: South of Manhattan, New York City ©DiegoAransay on AdobeStock)

 

Why do most people in economics, finance and risk analysis continue to discount severe uncertainty, by assuming that all uncertainty can be fully captured by probabilities?

There are basically two reasons: one pragmatic and the other principled. First, it’s easier to work with precise probabilities, and the mathematical methods are familiar. Second, a bunch of philosophical, “axiom-based” arguments purport to show that, if you stray from precise probabilities, your decision making will violate some seemingly “rational” dynamic principles. These arguments have persuaded many over the years. If they were right, then these rationality principles would justify pretending that we always had precise probabilities (despite the egregiousness of the pretence).

 

In my research, I show that you can satisfy the rationality principles, even if you do not stick to precise probabilities.

 

In sum, beyond these arguments, the only barrier to a more refined, richer approach to uncertainty is inertia. In my research (1), I show that these arguments rest on a mistake: you can satisfy (properly formalised versions of) the rationality principles, even if you do not stick to precise probabilities. It thus removes the main hurdle to building an account of rational or sensible decision making that doesn’t need to assume precise probabilities.

How should we decide in the face of severe uncertainty, then? 

As I see it, severe uncertainty poses a double challenge. The first is to work out what we do know and how solid that knowledge is, avoiding two pitfalls: nihilism – assuming that because we can’t put probabilities, we don’t know anything at all – and self-deception – pretending or assuming that we know more or have more precise knowledge than we in fact do. The second is to work out how to harness what we know – and more importantly recognize what we don’t – in decision making. Good, responsible, and informed but not self-deceptive decision making.

In my research, "Confidence in Beliefs and Rational Decision Making" (2), I have developed an approach to decision under uncertainty that meets each of these challenges. It combines two ingredients:

1. Confidence

Forget pretending that you can always give a probability and:

a. Ask for your best guess. Then ask how confident you are of it. That might not be very confident at all (if so: don’t rely on it!)
b. Then ask: if you had to give a probability range that you were very confident in, what would it be? (For difficult cases, this range could be very large: that’s what makes the case difficult!)
c. Repeat, asking for ranges that you are more or less confident in, or sure of.
(Note that ranges are well-known ways of not having to give precise values. To take a topical example, often in discussions of Covid-19 (e.g. here), epidemiologists report ranges. Under the proposal, you don’t even need to settle on a single range, but just ask how confident you are in a given range – on the basis of what you know).

 

2. Confidence-based caution
a. For more important decisions, demand more confidence in the judgements on which you rely to take the decision. If you have lots of confidence in a judgement or an assessment, by all means base your decision on it. If not, perhaps you should fall back on the (weaker, more imprecise) judgements of which you are more sure – especially if the decision is very important.
b. Now these judgements may be so weak as not to support any option as best: you don’t know enough to categorically justify a single course of action. In such cases, acknowledging this is a crucial first step. In the face of it, it’s best to show caution and take an alternative that won’t lead to too bad a result, no matter which of the values in the range (of which you are sufficiently confident) turns out to be right.

 

Basically, this advice amounts to applying precaution when you are not confident enough for the importance of the decision, and choosing boldly when you are.
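As a rough illustration of these two ingredients, here is a sketch under our own simplified assumptions (illustrative numbers, not Brian Hill's formal model): nested probability ranges are held with increasing confidence, the stakes of the decision determine which range is consulted, and the chosen option is the one whose worst-case expected utility over that range is best.

```python
# A toy encoding of "confidence" plus "confidence-based caution".
# All ranges and payoffs are made up for illustration.

# Nested ranges for P(success): the wider the range, the more confidently we hold it.
confidence_ranges = {
    "best guess":  (0.60, 0.60),
    "fairly sure": (0.45, 0.75),
    "very sure":   (0.20, 0.90),
}

# Utility of each option if success / failure occurs.
options = {
    "bold":     {"success": 100, "failure": -80},
    "cautious": {"success": 40,  "failure": 0},
}

def worst_case_eu(option, p_range):
    """Minimum expected utility over the probability range.
    Expected utility is linear in p, so the minimum sits at an endpoint."""
    payoff = options[option]
    return min(
        p * payoff["success"] + (1 - p) * payoff["failure"]
        for p in p_range
    )

def decide(stakes):
    # Confidence-based caution: more important decisions demand judgements
    # held with more confidence, i.e. wider ranges.
    level = {"low": "best guess", "medium": "fairly sure", "high": "very sure"}[stakes]
    return max(options, key=lambda o: worst_case_eu(o, confidence_ranges[level]))

for stakes in ("low", "medium", "high"):
    print(stakes, "->", decide(stakes))
```

With these illustrative numbers, the bold option wins when stakes are low (the best guess suffices), while the cautious option wins as stakes rise and wider, more confidently held ranges must be consulted.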

This approach is not just common sense: in my research (2), I have shown that it can be defended by the sort of principled, “rationality” arguments used by some to defend the reducibility of all uncertainty to probabilities.

What about models? 

Criticism of, say, economic models often has a tendency, when attacking the use of probabilities to represent uncertainty, to throw the baby out with the bathwater. This is a case of what I previously called the pitfall of nihilism. By contrast, climate scientists have a relatively sophisticated use of models, which can serve as an example.

They realise that models are the input to an assessment or judgement about the question of interest (e.g. temperature in 2050, etc.), but no model – nor even all models – provide the whole picture.

 

In my research on climate uncertainty, uncertainty is reported as a form of confidence judgements on the probability assessments that come out, or could have come out, of the models.

 

Climate scientists (e.g. in IPCC reports) have to make a judgement, drawing on models, but also on other evidence, their experience and common sense. And these judgements do not generally come in the form of precise probabilities, although that’s what models produce. Rather, as I have discussed in my research with co-authors on climate uncertainty ((3) and (4)), they rightly report uncertainty in the form of confidence judgements on the probability assessments that come out, or could have come out, of the models. In other words, they adopt as reporting practice the approach I set out above.

 


 
1. Dynamic consistency and ambiguity: A reappraisal, Games and Economic Behavior, 120: 289-310, 2020.
2. Confidence in Beliefs and Rational Decision Making, Economics and Philosophy, 35(2): 223-258, 2019.
3. Climate Change Assessments: Confidence, Probability and Decision, Philosophy of Science, 84(3): 500-522, 2017 (with R. Bradley and C. Helgeson).
4. Combining probability with qualitative degree-of-certainty metrics in assessment, Climatic Change, 149(3-4): 517-525, 2018 (with R. Bradley and C. Helgeson).

 

Learn more on Brian Hill’s “Decision Making under Severe Uncertainty” website, including filmed interviews of experts on the “Uncertainty Across Disciplines” project.  
Brian Hill
CNRS Research Professor

Part 4

The Uncertainty Across Disciplines Project

Decision Sciences

We, individuals and society, are faced today with many important decisions involving radical degrees of uncertainty. To better communicate the current state of knowledge about uncertainty, and incorporate it into decisions, Brian Hill, CNRS and HEC Paris Professor of Economics and Decision Sciences, initiated the Uncertainty Across Disciplines project.


How should governments decide in the face of the sorts of uncertainties involved in climate change, energy policy, genetically modified organisms or nanotechnologies, to take a few examples? What role should scientists' current state of knowledge and uncertainty play, and how can this uncertainty best be represented, communicated and incorporated into decisions?

The Uncertainty Across Disciplines project aims to paint a portrait of the current state of the art across a wide range of scientific disciplines and professions regarding the study of (severe) uncertainties, and decisions in the face of them.

Through a series of 10 interviews with leading experts and actors, the project presents the perspectives, results and positions in these fields, as well as individual viewpoints on the current and future state of research and practice.

Find the 10 interviews on this page, the video playlist on YouTube here, as well as the podcast playlist and a special podcast on the COVID-19 case.

 

Prof. Brian Hill (on the left) interviewing Prof. Massimo Marinacci (on the right).

 

These interviews will hopefully allow comparison, stimulate discussion, and foster communication and collaboration among these various actors. 

- Brian Hill, CNRS Research Professor in the Economics and Decision Sciences Department at HEC Paris

 

 

Why is the Coronavirus pandemic a particularly challenging case for decision-makers today?

In this podcast, Brian Hill provides tools for appropriate and rational decision-making through the notion of confidence in judgments.

 

Cover Photo: Itzhak Gilboa, Professor of Economics and Decision Sciences at HEC Paris.
Brian Hill
CNRS Research Professor

Part 5

The Impact of Overconfidence and Attitudes towards Ambiguity on Market Entry

Decision Sciences

For many people who start their entrepreneurial adventure, the biggest challenge is believing in themselves. Yet for those who choose this path, confidence can also lead entrepreneurs to underestimate actual business risks, resulting in fatal decisions. Researchers from HEC Paris Business School and Bocconi University offer a new explanation for why decision makers often appear too confident, and shed light on the consequences of this characteristic.

Cover Photo Credits: ©lassedesignen on Adobe Stock

 

Many of the key strategic decisions made in businesses may result in wasteful allocation of resources or excess market entry. For example, close to 75% of those who choose careers in entrepreneurship would have been better off as wage workers, and almost 80% of angel investors never recoup their money, both indicating that too many (unskilled) people enter into these activities. Similarly, the average corporate acquisition is more likely to destroy value than to add it.

Many of the key strategic decisions made in businesses may result in wasteful allocation of resources or excess market entry.

Why does this happen? One possible answer, which we study, may lie in systematic biases that decision makers exhibit when making business entry decisions. We focus on the behavioral drivers of market entry in strategic business contexts with two characteristics that are virtually omnipresent. First, these settings are inherently ambiguous: we know what might happen, but we don't know the chances that it will happen. Ambiguous situations can be contrasted with risky ones, in which we know the chances of what will happen, for example when playing roulette. Second, the ambiguity in such settings, and the associated payoff, is likely to be perceived by decision makers as related to their own skills, often in comparison to rivals.

©Robert Kneschke on AdobeStock

These characteristics imply that at least two distinct behavioral mechanisms could explain entry into the ambiguous, skill-based markets on which we focus in this study: overconfidence – believing that one's chances of success are higher than they really are – and having a positive attitude toward ambiguity.
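To illustrate how the two mechanisms differ, here is a minimal sketch with made-up payoffs and an alpha-maxmin-style evaluation; it is our illustration, not the model estimated in the paper. Overconfidence inflates the believed chance of success, while a positive attitude toward ambiguity overweights the best case in a range of possible chances.

```python
# A toy sketch of two routes to market entry. All numbers are hypothetical.

def entry_value(p_low, p_high, optimism, payoff_win=100, payoff_lose=-60):
    """Alpha-maxmin style evaluation: an ambiguity-seeking decision maker
    (optimism near 1) puts more weight on the best case in the range."""
    def eu(p):
        return p * payoff_win + (1 - p) * payoff_lose
    return optimism * eu(p_high) + (1 - optimism) * eu(p_low)

true_p = 0.35      # actual chance of out-competing rivals
believed_p = 0.50  # overconfidence: believed chance exceeds the true chance
blur = 0.15        # skill-based settings blur the chance by +/- 0.15

# Precise-probability (risky) evaluations:
print("risk, calibrated:   ", entry_value(true_p, true_p, 0.5))          # -4.0
print("risk, overconfident:", entry_value(believed_p, believed_p, 0.5))  # 20.0

# Ambiguous evaluation without overconfidence: an ambiguity-seeking agent
# (optimism > 0.5) finds the same gamble attractive once the chance of
# success is blurred by unknown relative skill.
print("ambiguity-seeking:  ",
      entry_value(true_p - blur, true_p + blur, optimism=0.8))           # 10.4
```

With these numbers, the entry gamble is objectively unattractive, yet either mechanism alone is enough to make it look worthwhile.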

Like many before us, we use a laboratory setting to make more precise claims about causality. We rely on a novel experimental treatment in which we change the level of confidence that individuals have in their own skills, and the level of ambiguity.

Decision makers are ambiguity seeking when the result of the competition depends on their own and others’ skills.

We find that decision makers are ambiguity seeking when the result of the competition depends on their own and others’ skills. That is, decision makers are more willing to gamble with their money on competitions where the distribution of outcomes is shrouded by a lack of knowledge about what will happen, rather than when they have precise data on the chances of success. When outcomes of competitions are more unknown, having the opportunity to believe that your own ability affects results appears to make them more attractive. 

Similarly, we also show that overconfidence only affects entry in skill-based competitions and does not appear in games that are chance-based.

Both overconfidence and ambiguity seeking can therefore explain why individuals enter into entrepreneurship taking huge risks with their savings, or why mergers and acquisitions often do not pay off.

Article by Thomas Astebro, L’Oreal Professor of Entrepreneurship at the Economics and Decision Sciences Department at HEC Paris, based on the research paper, The Impact of Overconfidence and Ambiguity Attitude on Market Entry, co-authored by Cédric Gutierrez of Bocconi University and Tomasz Obloj of HEC Paris. Published in Organization Science (2020).

Part 6

Thinking About Time Flying? It Can Affect Your Decision Making

Decision Sciences

When the clock in our minds ticks loudly, it changes not only our perspective of the time remaining in our lives, but also how we process information. A trio of researchers investigated how thinking about the concept of time can affect our decision making. This unique piece of research could explain biases in hiring, voting, and many other contexts.

Cover Photo Credits: ©kirill_makarov on AdobeStock

What happens in our minds when time seems to pass by quickly?

Do you ever get the feeling that your time is running out? Perhaps you’ve been dwelling on the fact that we’ve reached the end of another decade and you’ve still not got life quite figured out. Maybe you’re questioning your life choices after seeing that your friends are all getting married, having children and buying houses, and you’re still stuck in the same job you had five years ago. We all get the feeling that the clock is ticking every now and then, but does this feeling change the way that we interpret new information? This is what we set out to investigate. Specifically, we wanted to see how this feeling that the clock is ticking impacts a phenomenon known as “information distortion”.

Information distortion is the idea that people tend to be biased towards their pre-existing beliefs when they hear new facts.

Information distortion is the idea that people tend to be biased towards their pre-existing beliefs when they hear new facts. For example, imagine you are a hiring manager at an accountancy firm and you must choose between two job applicants, Adam and Mark. You hear a series of pieces of information about them in sequence. The first piece of information you look at just so happens to be education. Adam has a first-class university degree but Mark only received a second-class degree. Next you learn that Adam has already received some experience working in another similar firm while Mark is fresh out of university. Information distortion occurs if you evaluate this second piece of information – Adam’s job experience – as favouring him more than you would have done if you hadn’t already seen that he received a first-class degree. This phenomenon has been shown to occur everywhere from legal decisions to medical diagnoses.
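A toy simulation (our own illustration, not the authors' task or model) makes the snowball mechanism concrete: if each new cue's perceived value is nudged toward whichever candidate currently leads, an early advantage colours everything that follows.

```python
# Toy model of information distortion. Cue values are hypothetical:
# positive numbers favour Adam, negative numbers favour Mark.

def evaluate(cues, distortion=0.0):
    """Sum up cues in sequence; `distortion` is the fraction of the current
    leaning added to each new cue (0 = unbiased evaluation)."""
    leaning = 0.0
    for cue in cues:
        perceived = cue + distortion * leaning  # bias toward prior beliefs
        leaning += perceived
    return leaning

cues = [2.0, 0.5, -1.0, -2.0]  # education, experience, then cues favouring Mark

print("unbiased evaluation: ", evaluate(cues))                 # -0.5: Mark ahead
print("distorted evaluation:", evaluate(cues, distortion=0.4)) # ~3.1: Adam ahead
```

With these illustrative numbers, the objective evidence slightly favours Mark, but a distorted evaluator ends up firmly favouring Adam because the early education cue inflates the perceived value of every later cue.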

(Photo Credits: weedezign on AdobeStock)

Manipulating people's time perspective

In order to test experimentally whether the feeling that time is running out, known as “limited time perspective”, impacts information distortion, we asked participants to describe a milestone in their life which they felt they had limited time left to achieve. They were given examples such as getting married or achieving their dream career. Participants in the control group were instead asked to report how long they spent each week completing surveys on Amazon Mechanical Turk, the platform where they were recruited. Next, we asked them how likely they would be to invest in a new business venture producing a new type of material for making furniture. We then presented four attributes of the material in sequence. After each feature of the material was presented, we asked the participants to rate whether the new information made them more likely to invest in the product.

Our finding could help us understand why in the world today facts seem to be becoming more and more distorted and political polarisation appears to be increasing.

As we predicted, we found that leading participants to have a limited time perspective made them more likely to distort information. In other words, thinking about the limited time left in their lives made them more likely to hold on to the beliefs they had before receiving new information.
Our finding could help us understand why, in the world today, facts seem to be becoming more and more distorted and political polarisation appears to be increasing. As facts become distorted, such as in the case of “fake news” websites that spread ideology-fuelled misinformation, people become more polarised, ebbing towards opposing ends of the political spectrum and rejecting evidence that doesn’t confirm their beliefs. Our results suggest that this could be linked to the fact that we are living in a society where we often feel we don’t have enough time; this feeling may be increasing political polarisation.

"Our results suggest that the fact to reject evidence that doesn’t confirm their beliefs could be linked to the fact that we are living in a society where we often feel we don’t have enough time."
(Photo by Reno Laithienne on Unsplash)

When age increases bias

Another aspect of the recent phenomenon of increasing political polarisation that is touched upon by our research is ageing. It’s well known that the elderly tend to vote differently from young people, and there has been much speculation that the gap is widening. In recent years, this gap in voting behaviour has been blamed for everything from Brexit to the election of Donald Trump.

Thinking about the limited time left in their lives made them more likely to hold on to the beliefs they had before receiving new information.

In order to assess whether age has an impact on information distortion, we repeated our study but instead of artificially leading participants to have a limited time perspective, we looked at age. To make our participants think about their age, we asked them to categorise themselves as 18-29, 30-50, or over 50 years of age. We then conducted our study as before and compared the results across the age groupings. As we expected, we found that ageing had the same impact as having a limited time perspective: our older participants were more likely to show bias towards their own pre-existing beliefs.

Ultimately, our work shows that the age-old phenomenon of age impacting information distortion can be artificially reproduced very easily in people of all ages by making them think about the time they have left in their lives. Our research provides the first evidence of such a phenomenon, so it should be treated with a healthy level of scepticism until it is supported by further studies; however, it may provide a fruitful avenue for further research.

Methodology

We used Amazon’s Mechanical Turk to recruit participants, then experimentally induced a limited time perspective. After that, we had them complete a decision task that involved imagining investing in a new business venture, in order to assess the impact of limited time perspective on information distortion.

Applications

Our research has implications for political scientists studying the causes of information distortion. It may also prove valuable for marketers as our work could have implications for subjects such as brand loyalty and consumer confidence. It could also benefit human resources professionals due to the implications our work has for understanding the decision process of older managers.
Based on an interview with Anne-Sophie Chaxel and on her article “The impact of a limited time perspective on information distortion”, co-written with Catherine Wiggins of Cornell University and Jieru Xie of Virginia Polytechnic Institute and State University, Organizational Behavior and Human Decision Processes, 149 (2018).

Part 7

A New Theory in Economics Helps Predict Future Events

Economics

When will the next financial crisis occur? Who is going to win the next US presidential election? How do we form beliefs about such events? By understanding how probabilistic beliefs form, economic theorists can now explain and predict phenomena that depend on rational beliefs. The latest research by Rossella Argenziano and Itzhak Gilboa equips economic modeling with a theory and a set of tools of belief formation, based on statistics and psychology. Immediate applications include equilibrium selection in coordination games.


©Tatiana on Adobe Stock

How can people predict future events? They create beliefs and probabilities based on the observation of similarities between past events and an ongoing event. Let’s understand this through three cases: the Obama election, the fall of the Soviet bloc, and the curbing of inflation.

1 - The Obama election

The election of Barack Obama triggered excitement and enthusiasm because a non-white candidate became President of the United States for the first time. Presidential elections are rare events, and no two are exactly alike. This makes the use of statistics tricky: which past events should be included in one’s sample? How do we describe present and past events? In particular, is "race" an important feature? We claim that the precedent of Obama’s election didn’t only change the statistics – with one non-white president as opposed to zero – but also changed the way we do statistics: it showed that "race" was not an important variable in judging the similarity between events.

 

People, especially economists, can predict future events by creating beliefs and probabilities based on the observation of similarities between past events and an ongoing event.

 

2 - The fall of the Soviet bloc

The Soviet bloc started collapsing with Poland, which was the first country in the Warsaw Pact to break free from the rule of the USSR. Once this was allowed by the USSR, practically all its satellites in Eastern Europe underwent democratic revolutions, culminating in the fall of the Berlin Wall in 1989. The single precedent of Poland generated a "domino effect." This paper suggests a belief formation process that explains how a single precedent can have such a dramatic effect even in the absence of informational spillovers and strategic dependency among games.

 

Fall of the Berlin Wall, November 1989. Author: Raphaël Thiémard

 

Revolution attempts are typically modeled as coordination games*: the expected utility derived from taking part in an uprising increases in the probability of its success, which in turn increases in the number of participants. For a citizen trying to decide whether to join such an attempt, it is crucial to predict the outcome of the uprising. A natural piece of information to use for such a prediction is the outcome of past revolutions in similar contexts. We suggest that the importance of the successful revolution in Poland didn't lie only in changing the relative frequency of successful revolutions, but also in changing the notion of which past revolution attempts were similar to current ones, hence relevant to predict their outcomes.

Specifically, the case of Poland was the first revolution attempt after the "Glasnost" policy was declared and implemented by the USSR. Pre-Glasnost attempts in Hungary in 1956 and in Czechoslovakia in 1968 had failed. In 1989, one might well wonder, has Glasnost made a difference? Is it a new era, where older cases of revolution attempts are no longer relevant to predict the outcome of a new one, or is it "Business as usual", and Glasnost doesn't change much more than does, say, a leader's proper name, leaving pre-Glasnost cases relevant for prediction?

So how can we learn that the revolution in Poland could help in attempting new, successful revolutions?

If the revolution attempt in Poland were to fail as did previous ones, it would seem that the variable "post-Glasnost" does not matter for prediction: with or without it, revolution attempts fail. As a result, when a person wonders what is the "right" way of judging similarity between past cases, she would likely be led to the conclusion that the variable "post-Glasnost" should be ignored, and that, consequently, the statistics are zero successes out of three revolution attempts. By contrast, because the revolution attempt in Poland succeeded, it had a double effect on the statistics. First, it increased the frequency of successful revolutions from 0/2 to 1/3. While 1/3 is larger than 0, it still leads to pessimistic predictions about successes of future attempts. However, if people also learn how to judge similarity, the single case of Poland leads them to the conclusion that "post-Glasnost" is an important variable. 

How can we learn to judge whether a past event is similar to a current one?

The theory presented in our latest research paper, "Similarity-Nash Equilibria* in Statistical Games", suggests people learn from past events not only what the frequencies are, but also what the relevant database is.

 

Our theory suggests people learn from past events not only what the frequencies are, but also what the relevant database is.

 

Indeed, if we use the Polish revolution as an example, the frequency of successes post-Glasnost, 1/1, differs dramatically from the pre-Glasnost frequency, 0/2. Once this is taken into account, pre-Glasnost events are not as relevant for prediction as they used to be. If we consider the somewhat extreme view that post-Glasnost attempts constitute a class apart, the relevant empirical frequency of success becomes 1/1 rather than 1/3. Correspondingly, other countries in the Soviet Bloc could be encouraged by this single precedent, and soon it wasn't single any more.
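The arithmetic of this “class apart” reasoning can be sketched as a similarity-weighted frequency, in the spirit of case-based prediction. The encoding below is our toy illustration, not the paper’s formal model.

```python
# Predicting success of a new revolution attempt as a similarity-weighted
# frequency over past cases. Encodings are illustrative.

# Past attempts: (post_glasnost, succeeded)
history = [
    (0, 0),  # Hungary 1956
    (0, 0),  # Czechoslovakia 1968
    (1, 1),  # Poland 1989
]
new_case = 1  # a post-Glasnost attempt (e.g., East Germany, 1989)

def predicted_success(history, new_case, weight_on_glasnost):
    """weight_on_glasnost = 0: the variable is ignored and every past case
    counts equally; weight_on_glasnost = 1: only matching cases count."""
    def similarity(case):
        match = 1.0 if case == new_case else 0.0
        return (1 - weight_on_glasnost) + weight_on_glasnost * match
    weighted_successes = sum(similarity(g) * s for g, s in history)
    total_weight = sum(similarity(g) for g, _ in history)
    return weighted_successes / total_weight

print(predicted_success(history, new_case, 0.0))  # 1/3: Glasnost judged irrelevant
print(predicted_success(history, new_case, 1.0))  # 1/1: post-Glasnost cases a class apart
```

Learning the similarity function then amounts to moving the weight on the post-Glasnost variable toward 1 once that variable proves predictive, shifting the forecast from 1/3 to 1/1.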

How to find the relevant variable among many others to judge if a past situation is similar to a current one?
In a previous paper (1), we show that the “empirically optimal similarity function” can be identified under certain conditions. In essence, many observations for few variables make learning easier.

3 - Curbing inflation

As another example, consider a central bank which redenominates* its currency in an attempt to restrain inflation. Inflation is an equilibrium phenomenon: an economic agent (or individual) who expects others to raise the prices of goods and services would be wise to do so herself. Thus, one can think of the inflation game as a price-setting game with multiple equilibria, and redenomination as an attempt to switch from a hyperinflation equilibrium to a low-inflation equilibrium (2). Will economic agents – consumers and firms, bankers and investors – use the new variable in their belief formation? Will firms assume that prices will no longer increase when pricing their own goods? Or will they dismiss the redenomination as a "cosmetic change" and believe that inflation will continue to run high? Our analysis suggests that the answer depends on the periods immediately following the redenomination: if in these periods inflation is low, the variable “new currency” will be used for prediction and a new, low-inflation equilibrium can be reached. So if something changes today, the coming year will be crucial for judging whether that change signals that similar events are coming.

 

©Andrey Popov on Adobe Stock

 

By contrast, if in the first periods the inflation rate continues to be high, agents will realize that it’s “business as usual”, and the variable will be judged irrelevant: people will see that, with and without the change, things look the same. As a result, the entire history will be used for prediction, making it very difficult to convince economic agents that the future will differ from the past. Israel switched from the Lira to the Shekel (worth 10 Liras) in 1980 and then to the New Shekel (worth 1,000 Shekels) in 1985. In 1980, the change was not accompanied by fiscal policy changes – the government didn’t cut expenses and kept financing the deficit by “printing money” – so inflation spiraled into hyperinflation. According to our account, people realized that, Shekel or Lira, inflation runs high, and then, of course, it did.

By contrast, the change in 1985 was accompanied by budget cuts, and inflation was curbed in the following years. We argue that the real change in fiscal policy gave meaning to the nominal change* of redenomination: the New Shekel, which was perceptibly different from its predecessor the Shekel, suddenly seemed to actually behave differently. Hence, rational economic agents asking themselves “which periods from the past are relevant for constructing beliefs to predict future events?” found that the older periods were not so relevant. This gave a chance to believe in a low-inflation equilibrium.

A word to the experts

For standard economic theory, with its perfectly rational agents, currency redenomination is hard to explain: it is a purely nominal exercise that all agents should view as irrelevant. Psychological accounts, on the other hand, can explain why people react differently to different nominal sums, but may struggle to explain the difference between successful and unsuccessful redenominations. Our account takes a middle ground: our agents may be perfectly rational, but, realizing that they are playing a coordination game with others, they take into account perceptions that may be used to select an equilibrium, even if those perceptions are, in and of themselves, economically irrelevant. Erasing three zeroes from all monetary sums is a noticeable change, but it will have economic meaning only if most agents think it has economic meaning. And here, we claim, comes the learning of the similarity function: if the perceptual change is accompanied by real policy changes, a new equilibrium may be selected.

 

Our account takes a middle ground: agents may be perfectly rational, but, realizing that they are playing a coordination game with others, they do take into account perceptions that may be used to select an equilibrium, even if they are economically irrelevant.

 

 

*Keywords:

Equilibrium: In economics, an equilibrium is a situation in which agents’ optimal actions and prices are such that supply and demand are equal.

Nash equilibrium: In game theory in economics, the Nash equilibrium is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. (Source: Osborne, Martin J. and Rubinstein, Ariel (1994). A Course in Game Theory. Cambridge, MA: MIT Press, p. 14)
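In symbols, using the standard textbook formulation (added here for illustration): a strategy profile s* is a Nash equilibrium if no player can gain by deviating unilaterally,

```latex
u_i(s_i^*, s_{-i}^*) \;\ge\; u_i(s_i, s_{-i}^*)
\quad \text{for every player } i \text{ and every alternative strategy } s_i,
% where u_i is player i's payoff function and s_{-i}^* denotes
% the equilibrium strategies of the other players.
```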

Coordination games: In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies. 

Nominal change: In economics, the nominal value, rate, or level of something is the one expressed in terms of current prices or figures, without taking into account general changes in prices that take place over time (Source: Reverso). A “nominal” change would be one where we say “from now on, one (new) euro is worth what 100 old euros used to be worth”. Economists call this “nominal” because there is no real change in the economy – it is just a change of name. If I used to earn 100,000 euros a month and spend 60,000 at the supermarket, and now I earn 1,000 euros and pay only 600, nothing “real” has changed. A “real” change would happen if, for instance, the government bought less on the market, or employed fewer workers, etc. (Itzhak Gilboa)

Redenomination: The process of exchanging old currency for new currency, or changing the face value of existing notes in circulation. 

 

(1) Argenziano, R. and I. Gilboa, "Second-Order Induction in Prediction Problems", PNAS, 116 (2019). Find the filmed interview of Itzhak Gilboa here.

(2) See Mosley (2005): "(...) redenominations often occur after economic crises, as governments attempt to convince citizens and markets that hyperinflation is a thing of the past. In some cases, the timing is correct, in that redenomination caps off high levels of inflation. In other cases, governments are not able to rein in inflation immediately after redenomination, and they may make multiple efforts (...)."

Article by HEC Paris Professor Itzhak Gilboa, based on the latest research publication, “Similarity-Nash Equilibria in Statistical Games” (full paper), by Rossella Argenziano of the University of Essex and Itzhak Gilboa. This research work has benefited from the support of the HEC Foundation through the "F Project".

Part 8

Is It Rational to Stockpile in Times of Crisis?

Decision Sciences

The health crisis caused by COVID-19 has triggered an economic one. A significant portion of the population fears shortages of primary consumption goods, and marked stockpiling behavior has been observed. Because such behavior increases the risk of shortage, several stores have decided to ration some goods, and governments have had to make public announcements to reassure consumers that there would be no shortage. Avoiding consumer stockpiling is hence one of the key aspects of the management of this crisis. But is it rational to stockpile in times of crisis? We review and discuss the rational and irrational aspects of such behavior.

Someone carrying a big pile of cardboard boxes

Photo Credits: ©BillionPhotos.com / Adobe Stock

Although over-purchasing in times of crisis might be considered irrational, scholars in economics, operations research and marketing have proposed theoretical models explaining when and how individuals rationally decide to stockpile. Besides rational motives, many behavioral aspects can also motivate over-purchase decisions.

Stockpiling as a rational decision involving risk and time

Decisions to purchase and store quantities in anticipation of future hazards are not infrequent, and they concern not only individuals but also states and companies. At the state level, decisions to stockpile goods such as oil, weapons, medical masks and drugs are highly strategic. It can also be in the interest of companies and consumers to stockpile primary consumption goods, as insurance against future price variations (as in the case of shortage risk).

It can be in the interest of companies and consumers to stockpile primary consumption goods, as insurance against future price variations.

In all these contexts, the decision can be analyzed using the same framework. Stockpiling is a safe but costly option: the costs relate to purchasing additional quantity at the present time rather than smoothing the expense across time, as well as to storage costs (e.g. warehouse space and guarding). Not stockpiling is a risky option that exposes the decision maker to future price variations. 

The best option, or optimal amount of stockpiling, is therefore a decision involving risk and time, and as such it depends on many factors: the perceived risk of price variations, the attitude towards time (how the decision maker values future consequences) and the attitude towards risk (how the decision maker values risky consequences). In rational decision making, these factors are combined using a model called “discounted expected utility”. Under this model, a consequence x received at a future time period t with perceived probability p is valued p · exp(−rt) · u(x), where r is a discount rate that captures attitudes towards the future, and u is a utility function that characterizes risk attitudes. Assuming that the decision maker has well-defined risk perceptions, a discount rate and risk attitudes, the model makes recommendations about how much to stockpile.

The decision to stockpile depends on the perceived probability of shortage, risk aversion, the discount rate and storage costs.

As one could expect, recommended stockpiling will increase with the perceived probability of shortage and risk aversion; it will decrease with the discount rate and storage costs. 
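As a minimal sketch of how these ingredients interact, the toy calculation below values the two options under the discounted expected utility model. All numbers (prices, shortage probability, discount rate, storage cost) and the square-root utility are assumptions made for the example, and the treatment of costs is deliberately simplified.

```python
import math

def u(x):
    """Assumed concave utility over money (square root), i.e. risk aversion."""
    return math.sqrt(x)

def value_of_waiting(p_shortage, r, t, price_now, price_shortage):
    """Each consequence x at time t with perceived probability p is valued
    p * exp(-r*t) * u(x). Treating the disutility of paying x as -u(x) is a
    crude simplification kept only for this sketch."""
    discount = math.exp(-r * t)
    return -discount * (p_shortage * u(price_shortage)
                        + (1 - p_shortage) * u(price_now))

def value_of_stockpiling(price_now, storage_cost):
    """Safe option: pay today's price plus storage costs, with no risk."""
    return -u(price_now + storage_cost)

stock = value_of_stockpiling(price_now=100, storage_cost=10)
wait = value_of_waiting(p_shortage=0.2, r=0.05, t=1.0,
                        price_now=100, price_shortage=300)
print(stock, wait)  # about -10.49 vs -10.90: stockpiling wins here

# A higher perceived probability of shortage makes waiting even less
# attractive; higher storage costs or discount rate work the other way.
print(value_of_waiting(p_shortage=0.6, r=0.05, t=1.0,
                       price_now=100, price_shortage=300))  # about -13.69
```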

The discounted expected utility model can be used to study many other decisions involving risk and time in various domains such as strategy, finance, marketing and industrial organization.

Deviations from the rational decision-making model

Despite its normative appeal, the model underlying such recommendations cannot satisfactorily describe observed behavior: see Machina (1987) for violations of this model in the context of risk, and Loewenstein and Prelec (1992) for the context of time. We investigated several of these anomalies in a recent laboratory experiment in which subjects had to make decisions involving both risk and time, with real possible gains. We observed systematic deviations from the predictions of the rational model. As previously observed, subjects did not exhibit stable risk attitudes: they took more risks in decisions involving small probabilities than in decisions involving medium or large probabilities. Another result regards the impact of time. Here again, time preferences were not constant.

empty shelf in a supermarket
"Observing that other people stockpile creates a social pressure" ©zephy p on AdobeStock

More impatience was observed towards the near future than towards periods further away in time. This pattern is responsible for several anomalies in decisions involving time, such as the reversal of preferences over time, or procrastination. Though the pattern is well documented in the literature, several scholars had hypothesized that it would disappear in decisions involving both risk and time. Our results, recently published in Games and Economic Behavior (1), show that it holds even in these more general contexts.
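To illustrate the pattern (this is a generic textbook contrast, not the experimental design of the paper), compare a constant, exponential discounter with a present-biased, quasi-hyperbolic one; all parameter values are assumed:

```python
import math

def exponential(t, r=0.10):
    """Constant impatience: discounting at the same rate everywhere,
    so choices between two dated rewards never reverse over time."""
    return math.exp(-r * t)

def quasi_hyperbolic(t, beta=0.7, r=0.10):
    """Present bias: anything beyond 'now' takes an extra hit (beta),
    producing more impatience for the near future than for remote periods."""
    return 1.0 if t == 0 else beta * math.exp(-r * t)

# 100 now vs 110 in a month: the present-biased agent grabs the 100...
print(100 * quasi_hyperbolic(0), 110 * quasi_hyperbolic(1 / 12))   # 100.0 vs ~76.4
# ...yet prefers 110 in 13 months to 100 in 12: a preference reversal.
print(100 * quasi_hyperbolic(1), 110 * quasi_hyperbolic(13 / 12))  # ~63.3 vs ~69.1
# The exponential discounter waits in both cases -- no reversal.
print(100 * exponential(0), 110 * exponential(1 / 12))             # 100.0 vs ~109.1
```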

Another source of irrational decisions regards the way people perceive risks when probabilities are not available (e.g. Tversky and Kahneman 1974). For example, when evaluating the likelihood of uncertain future events, people generally tend to overestimate rare events and underestimate frequent ones. In another recently published paper (2), we propose a method for measuring people's beliefs about uncertain events from simple choices. The method makes it possible to put beliefs into numbers and to test whether people's perceptions are accurate.
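The cited paper develops a full choice-based method; the sketch below only illustrates the general idea of a matching probability, with a hypothetical respondent: the point at which one stops preferring a bet on the event to an objective lottery approximates one's subjective probability of that event.

```python
def matching_probability(prefers_bet, step=0.01):
    """prefers_bet(q) is True while the respondent still prefers betting on
    the uncertain event to an objective lottery that pays with probability q.
    The switch point approximates the subjective probability of the event."""
    q = 0.0
    while q <= 1.0 and prefers_bet(q):
        q += step
    return round(q, 2)

# A simulated respondent whose subjective probability of the event is 0.35.
respondent = lambda q: q < 0.35
print(matching_probability(respondent))  # -> 0.35
```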

Another important research question in the decision sciences relates to how people formulate and update their beliefs in the light of available evidence. In the context of stockpiling, decision makers can also be influenced by the behavior of their peers. 

The social dimension of stockpiling: an analogy with bank runs

Stockpiling is an individual decision that can have dire social consequences. Indeed, in the context of shortage risk, individuals who decide to overpurchase effectively contribute to the risk. This kind of situation is called a “self-fulfilling prophecy”.

Like bank runs, stockpiling decisions show two equilibria: one where decision makers stay calm, one where they panic, leading to a catastrophic situation.

When considered as a game involving many players, the decision to stockpile can be studied with game theory and is analogous to bank-run games. These games have two equilibria: one where decision makers stay calm and do not overpurchase; another where decision makers panic and decide to overpurchase, leading to a catastrophic situation of real shortage. The first equilibrium is obviously better than the second. Nevertheless, in terms of individual rationality, both are “Nash equilibria”: when you see other people start to stockpile, individual rationality recommends that you stockpile too! In a social context, stockpiling can therefore be considered a rational but selfish decision.
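To see the two equilibria concretely, here is a toy two-player version of such a game; the payoff numbers are invented for illustration and are not taken from any model in the article.

```python
# Payoffs (row, column) for a two-player stockpiling game; numbers assumed.
# Staying calm together keeps shelves full; panicking alone beats staying
# calm while the other empties the shelves; joint panic means real shortage.
payoffs = {
    ("calm", "calm"): (3, 3),
    ("calm", "panic"): (0, 2),
    ("panic", "calm"): (2, 0),
    ("panic", "panic"): (1, 1),
}

def other(s):
    return "panic" if s == "calm" else "calm"

def is_nash(row, col):
    """A profile is a Nash equilibrium if neither player gains by a
    unilateral deviation."""
    return (payoffs[(row, col)][0] >= payoffs[(other(row), col)][0] and
            payoffs[(row, col)][1] >= payoffs[(row, other(col))][1])

print([p for p in payoffs if is_nash(*p)])
# -> [('calm', 'calm'), ('panic', 'panic')]: both are equilibria, yet the
#    calm one is better for everybody -- the coordination problem in a nutshell.
```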

The role of herding behavior

Considering stockpiling as a social game introduces the fact that each decision maker's beliefs and actions can be influenced by the actions of the other decision makers. Updating one's beliefs after observing the behavior of others can be rational; such situations are called information cascades. But behavioral studies reveal that people are sensitive to the behavior of others even when it is uninformative, or even misleading! In particular, people tend to conform to the dominant behavior, even in the absence of rational reasons to do so. In the present case of COVID-19, we can speculate that the sudden but notable stockpiling of toilet paper was due to herding.

People can probably easily convince themselves that, even if there were a major economic collapse, toilet paper is not the good that needs to be given the highest priority. However, observing that other people stockpile creates social pressure: “it is not possible that so many people behave so irrationally; there must be a good reason for them to do so”.

 

Decision science suggests that stockpiling can be rational from an individual perspective. But in practice, people do not stockpile optimally because of individual irrationality and group pressure.

 

Overall, decision science, drawing on both individual decision making and game theory, suggests that stockpiling can be rational from an individual perspective. However, in practice, there are many reasons to think that people do not stockpile optimally, because they violate the rules of rational individual decision making or are irrationally influenced by the behavior of others.

 

References 

(1) Abdellaoui, M., Kemel, E., Panin, A., & Vieider, F. M. (2019). Measuring time and risk preferences in an integrated framework. Games and Economic Behavior, 115, 459-469.
(2) Abdellaoui, M., Bleichrodt, H., Kemel, E., & L’Haridon, O. (2017). Measuring beliefs under ambiguity. Operations Research, in press.
Loewenstein, G., & Prelec, D. (1992). Anomalies in intertemporal choice: Evidence and an interpretation. The Quarterly Journal of Economics, 107(2), 573-597.
Machina, M. J. (1987). Decision-making in the presence of risk. Science, 236(4801), 537-543.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

 

Listen to Emmanuel Kemel in this podcast (in French):

 

Emmanuel Kemel
CNRS Research Professor, HEC Paris

Part 9

Decision Making That Reflects Your Strategy

Decision Sciences

Business decisions are not always in line with company strategy. Olivier Sibony and his co-authors explore what lies behind counterproductive business decisions and outline guidelines for designing better strategic decision processes.

Decision Making Business Strategy - Sibony - HEC Paris ©Rawpixel.com

Strategic decision-making is an integral part of running a business. And yet a company’s decisions often do not reflect the strategy laid out by those in charge. In some cases, a firm that wants to take risks and be highly entrepreneurial and innovative will find that its managers nevertheless make restrictive, conservative decisions. Conversely, a firm that has not explicitly decided to place big bets may make risky choices, such as a large capital investment or the launch of a new line of products. Professor Olivier Sibony asks, “Why is there often a disconnect between what a company wants to achieve, and the decisions it makes to achieve it?”

 

Why is there often a disconnect between what a company wants to achieve, and the decisions it makes to achieve it? 

 

To answer this question and provide solutions for business executives, Sibony et al. explored how behavioural strategy can help ensure the right business decisions are made. Noting that cognitive and behavioural biases often play a part in decisions that go against company strategy, the researchers define guidelines for designing decision-making processes that promote greater alignment with an organization’s overall business strategy.

Daily decisions drive the strategic direction of companies 

Some everyday decisions have, in aggregate, a big impact on how a business functions. Such decisions include, for example, a consumer goods firm deciding which products to launch or a pharmaceutical firm managing its drug development pipeline. Sibony et al. note that these decisions are not always considered as part of a company’s overall strategic plan, but viewed, instead, as mere functional routines.

However, these decisions shape the future of the company. “These processes make up the core strategic decision architecture of a firm,” he says. “Some processes are common to most companies, such as budget or investment processes. Others are specific. Decisions made during these processes can have a knock-on effect on other strategic processes and drive company strategy in a particular direction.”

Designing core strategic decision-making processes to reduce bias

As such, it is the decisions made during these core strategic processes that affect a company’s ability to achieve its goals. Sibony argues that if a company’s core strategic processes are identified and designed more intentionally, the company can minimise the risk of behavioural and cognitive biases. “Decision routines are not set in stone. Companies can and should design them actively to minimize bias and produce the outcomes they hope for,” he explains.

3 types of decision processes to redesign: investment, resource allocation and blue sky 

Sibony et al. identify 3 types of strategic decision-making processes and the biases that tend to emerge in each unless you design against them: 

1. Investment: In general, investment processes tend to result in too much risk-taking on big decisions and not enough on small ones. Effective design of investment processes addresses these two contradictory biases, encouraging more conservatism where it is needed and less where it is not.

2. Resource allocation: When it comes to the allocation of resources, such as budget or personnel, the natural tendency is to replicate past allocations – for instance, by continuing to over-resource declining businesses. Rather than making marginal adjustments to existing allocations, processes should be designed to start as much as possible from a blank slate.

3. Blue sky: To achieve breakthroughs, companies must foster creativity. But not all companies pursue radical innovation. Depending on the degree of innovation they want, the decision processes to achieve it will look very different. 

3 types of strategic decision processes - Sibony HEC Paris

7 design levers

Sibony et al. outline 7 levers that can be considered in designing strategic decision-making processes: formality; layering; information; participation; incentives; debate; and closure. “All 7 levers can be used to fine-tune a given decision process and achieve a company’s required level of risk taking, agility and innovation.”

7 levers to designing strategic decision processes

 

Applications

Ensuring alignment with overall company strategy requires intentional design of your core strategic decision-making processes. “A manager needs to think about how to apply the 7 levers to ensure high quality decision making outputs,” Sibony explains. “There are very practical steps to improve decision processes.” For example, in a strategy debate, a manager can ask: Who should be involved? Does everyone have an equal voice? How can a productive confrontation be orchestrated to get the most ideas? How can unproductive conflict be avoided?

At the executive level, Sibony encourages company leaders to think through the strategic decision architecture of their company. “Executives must design a decision architecture that will help them achieve their desired strategy.” To do this, he stresses the importance of thinking beyond a firm’s overall strategic plan. Instead, executives need to be aware of the core strategic decision-making processes on which the execution of that strategic plan depends. They must ask themselves whether they are happy with the output of these processes: Is the firm taking the right amount of risk? Are resources allocated effectively? Is the firm as agile and innovative as it should be? If not, they need to rethink the design of these processes and use the 7 levers as tools to achieve the desired outcome.
Based on an interview with Olivier Sibony, on his paper “Behavioural strategy and the strategic decision architecture of the firm” (California Management Review, 2017), co-authored with Dan Lovallo and Thomas C. Powell. To learn more about what behavioural strategy can tell us about strategic decision making and how to design processes to make the right decisions, read the full paper here. Find here the latest book published by Olivier Sibony and his colleagues Bernard Garrette (HEC Paris) and Corey Phelps (McGill University) on problem-solving: Cracked It! How to Solve Big Problems and Sell Solutions like Top Strategy Consultants.
