An Exploration of AI Evaluation Regimes in Singapore and Worldwide (Part 2)
ARTIFICIAL INTELLIGENCE & ROBOTS - June 2024

An Exploration of AI Evaluation Regimes in Singapore and Worldwide (Part 2)

By Dr Stanley Lai, SC, David Lim, Linda Shi, and Justin Tay (Allen & Gledhill LLP)

I.   Introduction

The unbridled and seemingly limitless potential of AI has led to many jurisdictions undertaking the necessary job of establishing guidelines and best practices for the development and deployment of AI, with the goal of minimising risks such as the spreading of misinformation and prejudice, invasions of privacy and other adverse consequences, whilst allowing room for innovation to flourish. Yet, just as AI technology is rapidly developing and ever-changing, the world of AI-regulation is complex and largely uncharted territory, and several jurisdictions have established unique approaches to dealing with this mercurial technology. These approaches are worthy of comparison, as they differ significantly in ways such as whether AI evaluation is mandatory or voluntary (including whether penalties are imposed for non-compliance), which industry players each regime seeks to target, and the nature of the evaluations proposed or imposed.

Having outlined the AI evaluation regime in Singapore in the first part of this article (published at https://law.nus.edu.sg/trail/exploration-ai-evaluation-regimes), the second part will broadly compare the Singapore regime with that of other major jurisdictions. In doing so, the authors will attempt to cover significant developments in the AI evaluation space in the United States, a world leader in the AI industry and home to prominent AI developers such as OpenAI, and in the European Union, which has for the past few years been developing and refining the EU AI Act, reported by media as the “first major act to regulate AI”[1] and the “most restrictive regime on [AI] development”.[2] Also noteworthy are the regimes in the United Kingdom, which has a relatively similar voluntary evaluation regime to that of Singapore, and China, which passed some of the world’s earliest regulations on generative AI and is seeking to compete with the US on AI primacy.

II.  Brief Overview of the AI Evaluation Regimes in Major Jurisdictions

1. United States (“US”)

The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (the “EO”) was issued by the Biden administration on 30 October 2023.[3]

In general, the EO directs federal agencies to develop guidelines in areas such as healthcare, employment, and government use of AI. Notably, the EO uses the definition of “AI” as set out in 15 U.S.C. 9401(3): “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” Accordingly, the scope of the EO is broad enough to include not only generative AI but other types of “machine-based system” that can “make predictions, recommendations or decisions”, i.e. traditional AI.

In particular, the specific directions in the EO which are targeted at generative AI, include:

  • The Department of Commerce will develop guidance on digital content authentication and AI-generated content detection measures. This will include best practices for detecting AI-generated content and authenticating official content, as well as labelling AI-generated content (e.g. watermarking) to protect consumers against fraud and deception, and will also include guidance on prevent generative AI from producing Child Sexual Abuse Material or non-consensual intimate imagery of real individuals.
  • The Secretary of Homeland Security is to make recommendations to agencies regarding safeguards for generative AI outputs, and reasonable steps to watermark or otherwise label generative AI output.
  • The US Patent and Trademark Office (“USPTO”) is to publish guidance to patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process.
  • The Copyright Office is to perform a 270-day study of the copyright issues raised by AI, including the “scope of protection for works produced using AI and the treatment of copyright works in AI training”.

On 29 April 2024, the Department of Commerce made several announcements relating to the EO. For instance, the Department’s National Institute of Standards and Technology (NIST) has released four draft publications intended to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems.[4] It was also announced on 29 April 2024 that the USPTO is publishing a request for public comment seeking feedback on how AI could affect evaluations of how the level of ordinary skills in the arts are made to determine if an invention is patentable under U.S. law.[5] This had shortly followed the USPTO’s publication of inventorship guidance for AI-assisted inventions earlier in the year.[6]

Separately, in accordance with the Defense Production Act (“DPA”), the EO also directs the imposition of mandatory reporting requirements for companies developing dual-use foundation AI models to notify the federal government when training the model, and to share the results of all red-team safety tests.[7]

2. European Union (“EU”)

The European Commission first issued its proposal for an EU AI Act[8] (the “Act”) in April 2021. On 13 March 2024, the Act was adopted by the European Parliament.[9] The Act was approved by the EU Council on 21 May 2024 and will be published in the EU’s Official Journal in the coming days, and enter into force 20 days after such publication,[10] i.e. sometime in June 2024. The Act will thereafter take effect in phases. In this regard, most provisions of the Act address “high-risk” AI systems and will become applicable two years after its entry into force.

Article 2 of the Act sets out its scope, which broadly applies to AI system providers (inside or outside the EU) that are placing AI systems on the EU market, AI system deployers located within the EU, providers and deployers (inside or outside the EU) where the AI system output is used in the EU, as well as importers, distributors or manufacturers of AI systems in the EU.

The Act regulates AI based on the AI’s capacity to cause harm to society following a risk-based approach. AI systems will have to comply with obligations applicable to their risk categorisation, as follows:[11]

  • Unacceptable risk: AI systems considered a clear threat to fundamental rights of people are prohibited (e.g. social scoring systems);
  • High risk: AI systems that affect safety or fundamental rights (e.g. non-banned biometrics, critical infrastructure) are required to comply with specific legal requirements and undergo a conformity assessment before the system is released on the market and after each time the system is substantially modified, and may also be required to be registered in the EU database for high-risk AI systems;
  • Limited / transparency risk: AI systems which directly interact with natural persons (e.g. chatbots) are required to comply with certain transparency requirements;
  • Minimal risk: The vast majority of AI applications as of 2021 (e.g. AI-enabled video games and spam filters) were perceived to be minimal risk and unregulated. However, such systems are still recommended to comply with voluntary codes of conduct.

In addition to risk-based obligations, the Act contains a dedicated chapter on general-purpose AI (“GPAI”) models, which was added in subsequent negotiations following the surge in generative AI use in recent years. The provisions concerning GPAI will apply 12 months after the Act’s entry into force.

GPAI models are defined as “an AI model… that displays significant generality and is capable of competently performing a wide range of distinct tasks… and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.[12] Generative AI systems such as GPT-4 therefore fall within this definition and are subject to additional obligations under the Act.

GPAI models are subject to a separate classification framework, which differentiates between GPAI models with systemic risk and other GPAI models. Generally, providers of GPAI models are required to:

  • Draw up technical documentation, including training and testing process and evaluation results;
  • Draw up information and documentation to supply to downstream providers that intend to integrate the GPAI model into their own AI system in order that the latter understands capabilities and limitations and is enabled to comply;
  • Put in place a policy to comply with EU copyright law;
  • Draw up and make publicly available a sufficiently detailed summary about the content used for training the GPAI model.

Further, GPAI models with systemic risk are subject to additional obligations. For example, a GPAI model is classified as “systemic risk” when it has high impact capabilities evaluated on the basis of appropriate technical tools. For example, a GPAI model is presumed to have high impact capabilities if the cumulative amount of computation used for its training, measured in floating point operations (“FLOPS”), is greater than 1025.

The additional obligations include:

  • Perform model evaluations (including conducting and documenting adversarial testing);
  • Assess and mitigate possible systemic risks, including their sources;
  • Track, document and report serious incidents and possible corrective measures without undue delay; and
  • Ensure an adequate level of cybersecurity protection.

Notably, the AI Office is also empowered to supervise, investigate, enforce and monitor providers of GPAI models. This includes the power to conduct evaluations of GPAI models to assess compliance with the Act when information otherwise gathered is insufficient, or to investigate systemic risks posed by a model particularly following a report from the scientific panel.[13] The AI Office may request access to a model through APIs or further appropriate technical means and tools, including source code,[14] and may appoint independent experts to conduct evaluations on its behalf.[15] The providers concerned are also required to grant access pursuant to such a request.[16]

The AI Office is also able to request GPAI model providers to take appropriate measures to comply with its obligations under the Act, implement mitigation measures where the evaluation has given rise to serious and substantiated concern of a systemic risk, and/or restrict the availability on the market, withdraw or recall the GPAI model.[17]

3. United Kingdom (“UK”)

On 6 February 2024, the UK Government issued its response to last year’s White Paper consultation on regulating AI, “A pro-innovation approach to AI regulation: government response” (the “Response”).[18]

Under the Response, existing regulators are to apply five cross-sectoral principles within their own domains to progress AI innovation, by applying existing laws and issuing supplementary regulatory guidance:

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance;
  • Contestability and redress.

The Response also notes that whilst a voluntary approach is useful at present, “all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development, and deployment of such systems to ensure risks are adequately addressed.”[19]

Whilst the regime is non-binding, the UK Government has directed various regulators within their industries to publish their strategic approach by 30 April 2024, including but not limited to the main digital regulators forming the Digital Regulation Cooperation Forum, along with key regulators responsible for healthcare, finance, education and the enforcement of human rights.[20] The UK Government has also issued an initial guidance to support regulators with developing tools and guidance to implement the principles.[21]

As of the date of this article, the regulators have published their respective updates, which the UK Government has said will “inform our adaptative approach to AI”.[22] The AI Safety Institute (“AISI”) also recently announced on 10 May 2024 the launch of an open-source AI safety evaluation platform called Inspect, which aims to enable groups to develop AI evaluations and boost collaboration with researchers and developers.[23]

4. China

China was possibly one of the first countries to regulate AI, and its regime encompasses two sets of earlier regulations known as the “Provisions on the Management of Algorithmic Recommendations in Internet Information Services”[24] (2021) and “Provisions on the Administration of Deep Synthesis Internet Information Services”[25] (2022).

Following on from the earlier regulations, the “Interim Measures for the Management of Generative Artificial Intelligence Services”[26] (the "Interim Measures") came into effect on 15 August 2023, which regulates generative AI services that are provided to the “public” in China. Some examples of the obligations imposed on service providers under the Interim Measures include:

  • Providers shall be responsible for AI-generated content, including ensuring that the content is in line laws and regulations, non-discriminatory, with accurate and respects IP rights.
  • Providers are required to suspend and take down illegal content, and take measures against users engaging in illegal activities using generative AI services.
  • Providers are required to ensure AI-generated content is labelled.
  • Specific compliance requirements for training generative AI models, e.g. ensuring that source data used for training does not infringe IP rights, and complies with the Cybersecurity Law, Data Security Law and the Personal Information Protection Law.
  • Providers are responsible for protecting users’ input information and usage records, and shall not collect unnecessary personal information or illegally retain input data and usage records that can infer the identity of a user.
  • Generative AI services “with public opinion properties or the capacity for social mobilization” are subject to a security assessment and record-filing of algorithms.

Subsequently, the National Technical Committee 260 on Cybersecurity of Standardization Administration of China issued the “Basic Safety Requirements for Generative Artificial Intelligence Services” (the “Basic Requirements”),[27] providing detailed guidance on how AI developers can implement the Interim Measures and evaluate AI models. The Basic Requirements are not enforceable, but some expect that regulators will treat them as law.[28]

Apart from requirements for the safety of generative AI services (such as training data corpus safety and model safety), the Basic Requirements also set out requirements for carrying out safety assessments on generative AI services.  Such safety assessments “may be carried out by the provider itself or may be entrusted to a third-party assessment agency”.[29] Interestingly, the Basic Requirements specifically identify 31 safety risks as set out in Appendix A, which includes not only well-recognised metrics such as data privacy, disclosure of confidential information and IP infringement, but also issues that are specific to China, such as “content that violates the socialist core values concept”.

To this end, Sections 9.2 to 9.4 provide specific methods for assessing training corpus safety, generated content safety, and question refusal. For example, one requirement is that an AI model must refuse to answer “questions that are obviously extreme” or would “induce the generation of illegal and unhealthy information”. [30] To assess this, service providers must create a test question bank built around questions which the model should refuse to answer, and conduct random sampling from the test question bank to ensure that the model refuses to answer not less than 95% of the aforesaid questions.[31]

III.  Comparisons to the Singapore AI Evaluation Regime

In the first part of this article, we stated that two of the main features of Singapore’s AI evaluation regime are (1) its non-binding, advisory nature; and (2) its split between traditional and generative AI. We address each aspect in turn.

1. Are there mandatory rules on AI?

The country with the most similar approach to Singapore appears to be the UK, as both regimes are voluntary and do not, as yet, impose penalties. The UK’s AISI plays a similar collaborative role to Singapore's AI Verify Foundation and works with major AI companies on a voluntary basis.[32] Various documents list evaluation principles that contain similarities with the approach adopted in Singapore, and both jurisdictions have developed a compendium of AI assurance techniques.[33] Furthermore, the UK likewise focuses on building understanding of and trust in AI systems, not on approving or banning them: the AISI is “not a regulator” and will not provide a “pass/fail test”.[34] Nonetheless, the chair of the UK’s AISI recently said that “where risks are found, we would expect [developers] to take any relevant action ahead of launching”.[35]

By contrast, the EU, China, and the US all apply a mixture of formal AI-specific rules and voluntary commitments such as guidelines and best practices. Of the three abovementioned regimes, the most prescriptive is the EU regime, as the Act imposes heavy fines for non-compliance. For example, failure to give the AI Office model access for an evaluation under Article 82 will result in a fine of up to EUR 15 million or 3% of worldwide turnover.[36]

On the other hand, the US and China regulations do not appear to contain prescribed penalties for non-compliance, although it is possible that penalties under other existing legislation may still be applicable (e.g. the reporting provisions in the US EO which specifically invoke the DPA).

2. How is AI categorised?

Both China and Singapore have established frameworks dealing specifically with generative AI, suggesting that there is a distinction between “traditional” and generative AI regulations. Conversely, the UK regime does not clearly differentiate between different kinds of AI, but instead focuses on allowing regulators to promulgate sector-specific approaches to AI. Likewise, the US EO is largely structured around federal agencies.

On the other hand, whilst the EU Act does contain a dedicated chapter for GPAI models, categorisation is based broadly on risk assessment based on use cases. In this regard, both the EU and US regimes consider factors such as AI model size to assess risk, with reference to computing power used in training and measured in FLOPS.[37]

IV.  Concluding Comments

The first part of this article found that Singapore’s AI evaluation regime has numerous distinctive characteristics, such as its non-binding nature, its split between “traditional” and generative AI, and its high degree of specificity and industry adoption. This article has shown that other significant regimes differ from Singapore’s regime as much as they converge with it.

The UK regime is the most similar - it is non-binding, focused on fostering international dialogue, and centred around a single evaluation institute which has already worked closely with industry. Nonetheless, it has important differences, including a lack of clear distinction between different kinds of AI, more evident national security concerns, and the possibility of legislating on AI sooner than expected.

China, like Singapore, made an early start on AI rulemaking, has published detailed evaluation metrics, and sets distinct rules for generative AI. However, China has focused on self-evaluation rather than on collaborative evaluation, and the scope of its regulations has been narrower, focused on content control and industry protection within China.

The US has no federal AI legislation, but the EO requires certain AI model developers to disclose safety information. The EO is also prompting reams of new AI rules to be written, including on evaluation and standards, such that the full picture of US AI evaluations will only become clear with time.

The EU’s AI Act may be the most widely discussed AI regime, but of all those discussed in this piece it is also the most different from Singapore’s. It features a distinctive risk-based segmentation of AI and requires developers to prepare extensive self-assessments, whilst the AI Office will generally only get involved in evaluations in cases of danger. Whether the EU regime achieves the smooth industry cooperation enjoyed by other jurisdictions remains to be seen.

Given the current proliferation of different regimes in the world, it would appear that we are still many years away from international harmonisation. In the meanwhile, AI continues to develop at an unabated pace.

AUTHOR INFORMATION:

Dr Stanley Lai, SC is a Partner at Allen & Gledhill, where he is also the Head of the Intellectual Property Practice and Co-Head of the Cybersecurity & Data Protection Practice.
Email: stanley.lai@agasia.law

David Lim is a Senior Associate in the Intellectual Property Practice of Allen & Gledhill.
Email: david.lim@agasia.law

Linda Shi is an Associate in the Intellectual Property Practice of Allen & Gledhill.
Email: linda.shi@agasia.law

Justin Tay is an Associate in the Intellectual Property Practice of Allen & Gledhill.
Email: justin.tay@agasia.law

* The authors would like to thank Joseph Court, a Visiting Practice Trainee at Allen & Gledhill, for his assistance in the production of this article.

REFERENCES

[1]                 https://www.cnbc.com/2024/03/13/european-lawmakers-endorse-worlds-first-major-act-to-regulate-ai.html#:~:text=World's%20first%20major%20act%20to%20regulate%20AI%20passed%20by%20European%20lawmakers,-Published%20Wed%2C%20Mar&text=The%20European%20Union's%20parliament%20on,the%20forefront%20of%20tech%20investment (this and all other website links accessed 22 May 2024).

[2]                 https://www.ft.com/content/d5bec462-d948-4437-aab1-e6505031a303.

[3]                 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/; see also https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

[4]                 https://www.commerce.gov/news/press-releases/2024/04/department-commerce-announces-new-actions-implement-president-bidens.

[5]                 https://www.uspto.gov/about-us/news-updates/uspto-publishes-request-comments-regarding-impact-ai-certain-patentability.

[6]                 https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions.

[7]                 Above note 3.

[8]                 https://artificialintelligenceact.eu/the-act/.

[9]                 https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law.

[10]                https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/#:~:text=Today%20the%20Council%20approved%20a,society%2C%20the%20stricter%20the%20rules.

[11]                https://artificialintelligenceact.eu/high-level-summary/.

[12]                Above note 8, Act, Article 3(63).

[13]                Above note 8, Act, Article 92(1).

[14]                Above note 8, Act, Article 92(3).

[15]                Above note 8, Act, Article 92(2).

[16]                Above note 8, Act, Article 92(5).

[17]                Above note 8, Act, Article 93(1).

[18]                https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response.

[19]                Above note 18, [5.2.3].

[20]                https://www.gov.uk/government/publications/request-for-regulators-to-publish-an-update-on-their-strategic-approach-to-ai-secretary-of-state-letters.

[21]                https://www.gov.uk/government/publications/implementing-the-uks-ai-regulatory-principles-initial-guidance-for-regulators.

[22]                https://www.gov.uk/government/publications/regulators-strategic-approaches-to-ai/regulators-strategic-approaches-to-ai.

[23]                https://www.gov.uk/government/news/ai-safety-institute-releases-new-ai-safety-evaluations-platform.

[24]                https://www.gov.cn/zhengce/zhengceku/2022-01/04/content_5666429.htm (Chinese); https://www.chinalawtranslate.com/en/algorithms/ (English).

[25]                https://www.gov.cn/zhengce/zhengceku/2022-12/12/content_5731431.htm (Chinese); https://www.chinalawtranslate.com/en/deep-synthesis/ (English).

[26]                http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (Chinese); https://www.chinalawtranslate.com/en/generative-ai-interim/ (English).

[27]                https://www.tc260.org.cn/upload/2024-03-01/1709282398070082466.pdf (Chinese); https://cset.georgetown.edu/publication/china-safety-requirements-for-generative-ai-final/ (English).

[28]                Above note 29.

[29]                Above note 27, Basic Requirements, [9.1(a)].

[30]                Above note 27, Basic Requirements, [7(g)(2)].

[31]                Above note 27, Basic Requirements, [9.4(b)].

[32]                Above note 18, [5.2.2].

[33]                https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques.

[34]                Above note 18, [5.2.2].

[35]                https://www.ft.com/content/105ef217-9cb2-4bd2-b843-823f79256a0e.

[36]                Above note 8, Act, Article 101(1)(d).

[37]                Above note 3, EO, Section 4.2(b)(i); above note 8, Act, Article 51(2).