
SLATE IX: Do we need a new International AI Bill of Human Rights?

January 20, 2026 | In the News

On 16 January 2026, Prof. Yuval Shany, Hersch Lauterpacht Chair in International Law at the Hebrew University, delivered the 9th seminar in the Seminars on Law and Technology (“SLATE”) series with a timely talk titled “Do we need a new International AI Bill of Human Rights?” The seminar was moderated by Professor Ernest Lim, the Chan Sek Keong Chair in Private Law at the NUS Faculty of Law, and examined whether a new International AI Bill of Human Rights is needed to address the challenges posed by artificial intelligence.

Professor Shany opened the lecture by stressing that AI should not be viewed as a technology that replaces or “swallows” human rights. Instead, its significance lies in the way it reshapes the environment in which rights are exercised. Because AI systems operate at unprecedented scale, speed, and scope, practices that were once limited and manageable, such as surveillance, profiling, or administrative decision-making, can now be deployed widely and automatically, often with far-reaching consequences.

He illustrated this tension through examples from healthcare and public services. AI-assisted medical diagnosis may outperform human doctors and significantly improve patient outcomes, potentially creating an obligation for states to make such technologies available. Yet the same systems may also produce serious errors, and when they do, it is often unclear who should be held responsible. Opaque “black box” systems sometimes cannot be fully understood even by their developers, which further complicates questions of accountability and effective remedies.

A central part of the lecture focused on equality and non-discrimination. Professor Shany explained how AI systems can reproduce and even intensify existing social inequalities, not only through biased training data, but also through design choices and the use of proxy variables that indirectly disadvantage certain groups. Because these forms of discrimination are often hidden within complex technical processes, they are harder to detect and challenge using traditional legal tools. Similar concerns arise in the area of privacy, where the growing appetite for data enables constant monitoring, blurs the line between data and metadata, and erodes the distinction between public and private life, facilitating new forms of social control.

Looking beyond existing rights, Professor Shany argued that AI also exposes gaps in current human rights frameworks. He highlighted the need for stronger claims to transparency and explainability, not as abstract technical ideals, but as practical rights for individuals to understand, challenge, and seek redress against AI decisions that affect their lives. He also discussed the importance of preserving meaningful human decision-making and interaction in the areas of adjudication, welfare, and healthcare, where empathy, dignity, and moral judgment remain central. Professor Shany then situated these concerns within current regulatory responses in Europe, the United States, and beyond. While these initiatives address many AI-related risks, he noted that they often remain fragmented, heavily qualified by exceptions, and framed in regulatory rather than human rights terms.

Lastly, Professor Shany advanced a more ambitious proposal: the development of an international Bill of Human Rights for AI. Envisaged, at least initially, as a soft-law instrument, such a framework could help articulate shared principles, clarify responsibilities among the multiple actors involved, and ensure that human rights remain central as AI increasingly shapes social and institutional life.