SLATE VIII: Presumption vs Reality – Unpacking Legal Trust in AI Analysis

September 18, 2025 | In the News

On 16 September 2025, Professor Stephen Mason, Visiting Professor at NUS Law, delivered the 8th seminar in the Seminars on Law and Technology (“SLATE”) series with a timely talk titled “Presumption vs Reality: Unpacking Legal Trust in AI Analysis”. The seminar, moderated by Associate Professor Daniel Seng, examined the long-standing legal presumption that computers are reliable and asked whether this assumption still holds in an era increasingly shaped by artificial intelligence (AI).

In England and Wales, the evidential presumption of computer reliability can be traced to the Law Commission's 1997 Report and the repeal of section 69 of the Police and Criminal Evidence Act 1984. Courts often treated computers as trustworthy "mechanical instruments," placing the burden on parties to prove unreliability. Yet high-profile cases such as the Seema Misra prosecution (2010) and the wider Post Office Horizon IT scandal showed how misplaced trust in software could lead to grave miscarriages of justice.

To address these risks, Professor Mason urged clearer distinctions between evidence captured by digital systems and evidence generated by algorithms. He proposed a Code of Practice and a two-stage authentication process: first, agreement on undisputed facts; then, focused resolution of contested issues supported by technical disclosure and expert evaluation. He further highlighted the importance of integrating digital evidence training into legal curricula and judicial workshops to build competency in handling such cases.

Professor Mason also challenged the very use of the term "AI," suggesting it obscures the fact that technologies like neural networks and language models are essentially prediction tools. International case studies, such as the Tesla Autopilot lawsuits arising from fatal crashes in Europe and the United States, illustrated how courts worldwide are grappling with issues such as appropriate disclosure and the technical expertise required to evaluate software-based evidence.

Professor Mason closed with a sobering reminder: the challenges of trust in AI and digital systems are not just legal but also practical. The scarcity of experts, the expense of technical evaluations, and the evolving nature of digital evidence demand careful reconsideration of how courts approach software-based outputs.