They’ve branded it the most widespread miscarriage of justice in the history of the United Kingdom – perhaps the world! The consequences for human lives have been catastrophic, and inquiries into how to repair them are ongoing. Experts say the Horizon affair could end up costing the British taxpayer almost €500 million in compensation payments. Nevertheless, after 20 years, the victims have won a legal battle to reverse the verdicts that had led to criminal convictions, prison sentences and fines. So, what does this teach us about the risks of faulty IT systems?
Doctor Wang, part of your research centers on developing AI tools to improve our understanding of accounting data and proposing intelligent solutions to real-world challenges in businesses undergoing huge changes in the digital world. We saw one of these challenges at the heart of the UK’s Post Office computer scandal. How do you respond to this affair, which stems from a computer system called Horizon that a High Court judge said was not “remotely robust”?
Aluna Wang: I was certainly shocked by this miscarriage of justice. First of all, we can see that hundreds of Post Office workers were falsely accused of theft and false accounting after Horizon was introduced and incorrectly showed shortfalls in their accounts. If the whole story were told as a movie, even the movie critics would think the plot was too implausible. It’s tough for me to fathom why the UK Post Office, which is partly owned by the British government, accused so many innocent employees of theft and misreporting rather than explore the possibility that the IT system was faulty and malfunctioning. Moreover, not a single high-placed representative from the Post Office, the IT supplier Fujitsu, or Parliament has been truly held accountable for the decisions based on the incorrect information provided by the Horizon system.
As you mentioned earlier, I have experience working with audit partners and banking executives in developing intelligent anomaly detection systems. Usually, they were highly concerned about the false positives generated by those systems. They know that if they rely on a detection system that raises too many false alarms, they waste a lot of resources investigating false-positive cases. In this sense, false positives can be very costly.
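The cost of false positives comes down to base rates: when genuine anomalies are rare, even a small false-positive rate swamps the true alarms. The numbers below are purely illustrative, not drawn from the interview:

```python
# Illustrative base-rate calculation: why false positives dominate
# investigation workload when true anomalies are rare.
# All figures are hypothetical.

def alarm_breakdown(n_transactions, anomaly_rate, tpr, fpr):
    """Return (true alarms, false alarms) for a detector with the
    given true-positive and false-positive rates."""
    anomalies = n_transactions * anomaly_rate
    normals = n_transactions - anomalies
    true_alarms = anomalies * tpr        # real cases the system catches
    false_alarms = normals * fpr         # normal activity wrongly flagged
    return true_alarms, false_alarms

# 1,000,000 transactions, 0.1% truly anomalous, a detector that
# catches 90% of anomalies but also flags 2% of normal activity.
true_alarms, false_alarms = alarm_breakdown(1_000_000, 0.001, 0.90, 0.02)
print(true_alarms)   # 900.0 genuine cases
print(false_alarms)  # 19980.0 false alarms -- roughly 22 investigations per real case
```

Under these hypothetical rates, investigators would chase about 22 false leads for every genuine case, which is why practitioners push so hard to keep false-positive rates down.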
But here, in this Post Office scandal, we see that without rigorous monitoring of the IT system, and serious investigations into the alarms raised by the IT system, there could be even more severe costs to society. More than 700 Post Office workers were wrongfully prosecuted. Their lives and the lives of thousands of others were torn apart. They were financially ruined, put out of work, locally shunned, driven into poor health, and saw their families destroyed. This whole incident made me think more about not only the design and deployment of IT systems and AI solutions, but also how to manage the risk of using those technological solutions and how to build accountability into those solutions.
With hindsight, what could have been done to prevent such errors?
There are undoubtedly many things that could have been done to prevent this scandal. I would like to speak more from the risk management perspective. The UK Post Office could have set a clear tone at the top regarding the transparency and integrity of the IT systems put into place. It could have conducted a thorough investigation of any potential defects in the Horizon system before signing the contract with Fujitsu and made a robust risk management and monitoring plan of the Horizon system after implementing it.
Moreover, the Post Office should have taken a whistleblower, Alan Bates, more seriously. Bates reported the problems linked to the Horizon system to the Post Office management team in the early 2000s. Unfortunately, not only were his reports dismissed, but his contract with the Post Office was terminated.
Given my field of research, I actually think one of the AI solutions I developed with my collaborators can be helpful in this case. We have been working on an anomaly detection system designed for internal audit, risk management, and compliance purposes.
When you put accounting data into the detection system, it assigns an anomaly score to each financial transaction. It tells you why certain transactions or patterns of transactions are anomalous based on the metadata and the accounting structure of the transactions. In this case, our detection system should be able to detect the changes in transaction patterns after the implementation of Horizon and flag many of the incorrect records generated by the system as highly anomalous. Furthermore, our algorithm can also generate explanations of how the anomaly scores were assigned, based on the algorithm’s logic.
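The interview does not describe the algorithm itself, so purely as a minimal sketch of the score-and-explain interface described above, here is a toy scorer that flags transactions by their largest z-score across features and names the offending feature (feature names, data, and the z-score approach are all hypothetical; a real system would use much richer models):

```python
# Toy "score and explain" anomaly detector: each transaction gets a
# score (its worst z-score across features) plus a human-readable
# reason. This only illustrates the interface, not the real system.
from statistics import mean, stdev

def score_transactions(rows, features):
    """Return one {id, score, reason} record per transaction."""
    stats = {f: (mean(r[f] for r in rows), stdev(r[f] for r in rows))
             for f in features}
    results = []
    for r in rows:
        zs = {f: (abs(r[f] - stats[f][0]) / stats[f][1]) if stats[f][1] else 0.0
              for f in features}
        worst = max(zs, key=zs.get)
        results.append({"id": r["id"], "score": zs[worst],
                        "reason": f"{worst} deviates {zs[worst]:.1f} sd from the mean"})
    return results

# Nine ordinary transactions plus one injected shortfall-like outlier.
txns = [{"id": i, "amount": 100.0, "daily_count": 5} for i in range(9)]
txns.append({"id": 9, "amount": 5000.0, "daily_count": 5})
flagged = max(score_transactions(txns, ["amount", "daily_count"]),
              key=lambda r: r["score"])
print(flagged["id"])  # 9
```

The key design point the interview makes is the explanation: an alarm that names *which* feature is off and by how much gives investigators something concrete to check before accusing anyone.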
But still, we would need the Post Office management to take the red flags of the Horizon system seriously and investigate accordingly. After all, this miscarriage of justice is not only about a flawed IT system, but also about how the Post Office deals with it.
Also, since this scandal concerns severe legal enforcement actions, I think there is also a lesson for Fujitsu and other tech companies. Fujitsu should not only be more effective in reducing product defects but also look at how its clients are using the output of its systems. Horizon is a point-of-sale system that records transactions, but the Post Office also uses the data output for prosecutions. More attention should have been paid to the data output at that point. Perhaps, Fujitsu should not have handed over data packs to the UK Post Office as court evidence.
Finally, Dr. Wang, could you share with us some of the latest research you are conducting at Hi! PARIS? After all, your explorations involve developing machine learning-based tools to improve our understanding of accounting data, research that seeks intelligent solutions to real-world challenges like the ones we saw in the UK Post Office affair…
Our Hi! PARIS center is a research center for science, business, and society. It aims to combine the expertise of people from different fields of specialization to address important questions at the intersection of science, technology, and business, while developing new education programs and fostering innovation.
I personally would like to put AI research into three categories: the first one is about “AI solutions”, which is what you called “intelligent solutions”. For this type of research, we engineer AI solutions addressing business and societal problems. For example, my collaborators and I have designed algorithm packages for risk management of financial institutions. Our graph-based machine learning algorithms can be used for anti-money laundering, email communication monitoring, and fraud detection purposes.
I would like to call the second category “AI for Social Science”. We can leverage AI techniques to better understand economic phenomena. For instance, my collaborators and I are currently working on using graph mining techniques to investigate knowledge spillovers in the open-source community.
And, finally, I call the third category of research “Social Science for AI”. For this type of research, we use social science research methods to examine how AI and digital transformation affect human behavior and business models. My collaborators and I are currently working on analyzing human-algorithm interactions on social platforms and figuring out how we can design algorithms to improve the information environment of those platforms.