
Artificial Intelligence and the Problem of Autonomy

Year of Publication: 2019
Month of Publication: September
Author(s): Simon Chesterman
Research Area(s): Information Technology and the Law
Name of Working Paper Series: NUS Law Working Paper

WPS Paper Number: LAW-WPS-1916
Abstract:

Artificial intelligence (AI) systems are routinely said to operate autonomously, exposing gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention is given to precisely what is meant by "autonomy" and its relationship to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programs in the private or public sector. This article develops a novel typology of autonomy that distinguishes three discrete regulatory challenges posed by AI systems: the practical difficulties of managing risk associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap when public authorities delegate their powers to algorithms.
