- AI applications in business often overlook ethical design considerations
- Poor data quality and human bias are frequently built into algorithmic decisions
- Algorithmic decisions can negatively impact trust, governance, and accountability
- Human intervention, including bias mitigation, is essential for AI success
- Design practices must shift to emphasize transparency and responsibility
Ethical Questions
As artificial intelligence is used in an increasing number of ways - from improving customer experience to helping business leaders make better-informed decisions - serious questions are being raised about its ethical implications.
This includes how decisions made by AI tools can impact people’s lives, often with little or no human involvement. In our research, we looked at how AI can be developed to increase transparency, responsibility and accountability, particularly in areas such as resource allocation, scheduling and procurement. We also examined the extent to which human biases are being built into AI, which can affect the quality of decisions it makes.
Why Algorithmic Decisions Carry Hidden Risks
Some companies are already using AI systems to help in areas such as HR, with AI tools assessing CVs and identifying suitable job candidates.
However, researchers have expressed concerns about how these systems are designed and how they behave in practice. Algorithmic decisions have been described as plagued by flawed assumptions, poor-quality data and bias.
In some cases, they are opaque and hard to understand, with little or no opportunity for redress. In an analysis of the use of AI in the recruitment process, we found that many of these systems are not neutral.
They can easily replicate the biases and limitations of their developers. In short, bias in, bias out. The ethical implications of AI decision-making also raise concerns around a lack of trust, governance and accountability. Organizations are often unclear about how the algorithms they use have been developed and how they work, which can limit their effectiveness.
Why Human Oversight Still Matters
Human intervention in algorithmic decision-making increases transparency, fairness and responsibility.
We found that algorithmic decision-making must be supported by human supervision and oversight, particularly in high-risk areas. Organizations using AI need to define who will be held responsible and accountable if things go wrong.
They also need to think carefully about how the data used by AI tools is collected and prepared. In the area of procurement, for example, we found that data was often entered by different people, which increased the risk of errors and poor-quality data. Ensuring the integrity of this data is essential if the AI system is to work effectively.
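To make the data-integrity point concrete, here is a minimal sketch of how a shared set of validation rules might catch entry errors before records reach an AI tool. The record schema and field names are hypothetical, chosen only for illustration; they are not taken from the study.

```python
from dataclasses import dataclass

# Hypothetical procurement record; the fields are illustrative, not from the study.
@dataclass
class ProcurementRecord:
    item_code: str
    quantity: int
    unit_price: float
    entered_by: str

def validate(record: ProcurementRecord) -> list[str]:
    """Return a list of data-quality problems found in one record."""
    problems = []
    if not record.item_code.strip():
        problems.append("missing item code")
    if record.quantity <= 0:
        problems.append("non-positive quantity")
    if record.unit_price <= 0:
        problems.append("non-positive unit price")
    if not record.entered_by:
        problems.append("no data-entry author recorded")  # needed for accountability
    return problems

# Records entered by different people are checked against one shared rule set.
records = [
    ProcurementRecord("MED-011", 40, 2.5, "analyst_a"),
    ProcurementRecord("", -3, 0.0, ""),
]
for r in records:
    issues = validate(r)
    if issues:
        print(f"{r.item_code or '<unknown>'}: {', '.join(issues)}")
```

The point of the sketch is that data quality is checked centrally and uniformly, rather than depending on the habits of whoever happens to enter each record.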
How We Can Design Algorithms More Ethically
We also found that algorithmic decisions must be designed with fairness and transparency in mind. This means establishing clear protocols about how they work and how they should be used. One organization we looked at had introduced rules that enabled supervisors to reject the outcome of an algorithmic decision if they believed it was unfair.
It is also vital that organizations provide training to staff so they understand how these systems work. They should be able to explain algorithmic decisions clearly and know how to act if things go wrong. Our research found that some organizations were already taking action to address the challenges around algorithmic decision-making.
One company had developed an AI system for scheduling workers, which included a button that allowed individuals to challenge a scheduling decision and ask for it to be reviewed by a supervisor. These types of examples show how AI systems can be designed in ways that promote transparency, accountability and fairness.
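The sketch below illustrates, in schematic form, how such a challenge-and-review mechanism could be wired into a scheduling system. It is an assumption-laden illustration, not the company's actual implementation; all class, method and status names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"      # produced by the algorithm
    CHALLENGED = "challenged"  # worker has asked for review
    UPHELD = "upheld"          # supervisor confirmed the algorithm
    OVERRIDDEN = "overridden"  # supervisor rejected the algorithm

@dataclass
class ScheduleDecision:
    worker: str
    shift: str
    rationale: str                      # shown to the worker for transparency
    status: Status = Status.PROPOSED
    audit_trail: list = field(default_factory=list)

    def challenge(self, reason: str) -> None:
        """Worker presses the 'challenge' button; the decision is queued for review."""
        self.status = Status.CHALLENGED
        self.audit_trail.append(("challenged", reason))

    def review(self, supervisor: str, uphold: bool, note: str) -> None:
        """A named human supervisor makes the final call and is recorded as accountable."""
        self.status = Status.UPHELD if uphold else Status.OVERRIDDEN
        self.audit_trail.append(("reviewed_by", supervisor, note))

# Example: the algorithm proposes a shift, the worker challenges it,
# and a named supervisor overrides the algorithmic outcome.
decision = ScheduleDecision("worker_17", "night shift, 22:00-06:00",
                            "lowest predicted staffing cost")
decision.challenge("assigned three night shifts in a row")
decision.review("supervisor_b", uphold=False, note="rest-time rule applies")
print(decision.status, decision.audit_trail)
```

Two design choices carry the ethical weight here: the rationale is always visible to the person affected, and the final status is attached to a named supervisor rather than to the algorithm.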
Why We Must Bridge Design and Practice
In our research, we argue that there is a disconnect between the way AI systems are designed and the way they are used in practice. We developed a framework that explains the role of human intervention in shaping algorithmic decisions, and which can be used to improve the design of future systems.
We believe that this framework could help companies and organizations develop AI systems that are more effective, more transparent and more aligned with human values. The future of AI depends on the ethical principles we bake into the systems we build now. As AI continues to reshape industries, we must ensure that the decision-making it enables does not repeat - or reinforce - our worst habits. Instead, responsible design must prioritize human accountability, fairness and trust.
Applications
This research applies to any organization or public agency that relies on algorithmic decision-making, particularly in areas like HR, logistics, procurement or finance. Practitioners should develop oversight systems that enable staff to question or intervene in AI recommendations, and they should ensure that training, protocols and audit mechanisms are in place to track algorithmic impact.
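As one illustration of an audit mechanism, the sketch below logs each algorithmic decision to an append-only file and computes how often humans overrode the algorithm. The file name, record fields and the override-rate metric are all assumptions made for this example, not prescriptions from the research.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("algorithmic_decisions.jsonl")  # hypothetical file name

def log_decision(system: str, decision: dict, overridden: bool) -> None:
    """Append one decision record so algorithmic impact can be audited later."""
    entry = {
        "timestamp": time.time(),
        "system": system,
        "decision": decision,
        "overridden_by_human": overridden,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def override_rate() -> float:
    """Share of logged decisions that humans overrode; one simple impact metric."""
    entries = [json.loads(line) for line in AUDIT_LOG.read_text().splitlines()]
    if not entries:
        return 0.0
    return sum(e["overridden_by_human"] for e in entries) / len(entries)

log_decision("scheduler", {"worker": "worker_17", "shift": "night"}, overridden=True)
log_decision("scheduler", {"worker": "worker_04", "shift": "day"}, overridden=False)
print(f"override rate: {override_rate():.0%}")
```

A rising override rate is one signal that a deployed system may be drifting away from what its human supervisors consider fair, which is exactly the kind of impact an audit mechanism should surface.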
Methodology
We conducted a multi-year qualitative study using data from a global humanitarian logistics organization. Our team conducted interviews, reviewed documentation, and observed operational practices involving algorithmic scheduling and resource allocation. Findings were synthesized into a framework that highlights how design choices and organizational structures interact with human biases and technological implementation.
Sources
Based on an interview with Professor Shirish C Srivastava on his HEC Foundation-funded research “To Be or Not to Be… Human? Theorizing the Role of Human-Like Competencies in Conversational Artificial Intelligence Agents,” co-written with Dr Shalini Chandra from the S P Jain School of Global Management in Singapore and Dr Anuragini Shirish from Institut Mines-Télécom Business School in Paris, published in the Journal of Management Information Systems, December 2022. This research has been selected to receive funding from the HEC Foundation's donors.