Publications
Weapons of Mass Disruption: Artificial Intelligence and International Law
Cambridge International Law Journal (forthcoming)
NUS Law Working Paper No. 2021/009
The answers each political community finds to the law reform questions posed by artificial intelligence (AI) may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction; indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, but divergent national approaches will pose barriers to effective regulation, exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons, as well as moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires the active involvement of states. To coordinate those activities and enforce global ‘red lines’, this paper posits a hypothetical International Artificial Intelligence Agency (IAIA), modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy while deterring or containing its weaponization and other harmful effects.