Stephen Mason
Visiting Researcher

FULL BIOGRAPHY

In Residence

1 September 2023 to 31 August 2024

Stephen Mason was called to the Bar by the Honourable Society of the Middle Temple in 1988.

He is the joint editor, with Professor Daniel Seng, of Electronic Evidence and Electronic Signatures (5th edition, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2021), available open access at https://uolpress.co.uk/book/electronic-evidence-and-electronic-signatures/, and editor of International Electronic Evidence (British Institute of International and Comparative Law, 2008).

He founded the open-access international journal Digital Evidence and Electronic Signature Law Review, which has become a focal point for researchers in the field: http://journals.sas.ac.uk/deeslr/.

Stephen has acted as the external academic marker for postgraduate degrees dealing with electronic evidence: LLM, University of Oslo (2006); PhD, College of Social Sciences and International Studies, University of Exeter (2013); PhD, Law School, Queensland University of Technology (2015); PhD, School of Law, University of Aberdeen (2018).

He is interested in why lawyers and judges appear to continue to misunderstand technology, and in how to make it clear to the legal world that ‘artificial intelligence’ (AI) (software running on machines) is not conscious. When a person teaches other people, that is clearly an interaction between conscious people. AI models are trained by human beings, and the software then uses ‘weights’, the strengths of the connections between the different variables in the model, to produce an output. The methods used are in part statistical, although the AI in large language models (LLMs) is more accurately described as ‘generative’.
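To make this point concrete, the short Python sketch below (an illustrative example, not drawn from Stephen's own work; every name and number in it is hypothetical) shows that a ‘trained’ model is simply stored numbers, the weights, combined mechanically with an input to yield an output.

def model_output(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias: the elementary operation
    # that neural-network models repeat at vastly larger scale.
    # Nothing here is conscious; it is arithmetic on stored numbers.
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Hypothetical weights, as if fixed by an earlier training process.
weights = [0.8, -0.3, 0.5]
bias = 0.1

print(model_output([1.0, 2.0, 3.0], weights, bias))  # prints 1.8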

The problem remains: how and why should we trust computer systems, regardless of whether they are procedural, functional, or statistical? Calling them AI does not change the underlying issues, except to magnify them, because humans have decided to delegate decision-making to non-transparent machines.

Presentations

‘Autonomous vehicles and evidence’, Attorney-General’s Chambers, Thursday, 3 October 2024

‘Electronic evidence and the Post Office Horizon scandal’, Attorney-General’s Chambers, Tuesday, 5 March 2024