
CBFL Seminar Series: The Use of Algorithms and Artificial Intelligence in Commercial Transactions: Guiding Principles for ‘Algorithmic Contracting’

February 9, 2023 | Programmes, Research

On 9 February 2023, the Centre for Banking & Finance Law (CBFL) organised a hybrid seminar on AI and the use of algorithms in commercial transactions, chaired by Associate Professor Dora Neo. The speaker was Teresa Rodríguez de las Heras Ballell, Associate Professor of Commercial Law at the University Carlos III of Madrid (Spain) and Visiting Associate Professor at NUS Law and CBFL. Prof Rodríguez de las Heras Ballell explored the intersection of technology and contract law, providing critical insights into some of the most pressing legal and ethical issues related to the use of algorithms and AI in commercial transactions. She presented her proposal for guiding legal principles to assist policymakers and legislators at the international and regional levels in establishing a robust, consistent, and predictable legal framework applicable to automated decision-making (ADM) in commercial transactions.

The increasing use of autonomous algorithmic and AI-based systems is a subject of significant policy debate in the digitalisation of the economy and society. Private organisations and public authorities employ AI systems for various tasks, including negotiating, concluding, and enforcing contractual agreements. While automation presents both risks and opportunities, legal systems struggle to keep pace with technological advances, as they lack a reliable and secure foundation to ensure legal certainty and predictability in AI-driven commercial transactions. From this perspective, the seminar explored the ability of current legal frameworks to address the fundamental legal issues and uncertainties caused by automation, where traditional legal concepts and rules may not suffice and adapting to ADM-based contracting poses challenges. It is necessary to examine the emerging legal gaps and to assess the need for new legal principles and norms that can better address the critical concerns introduced by automation, including legal situations where the role of humans is difficult to define and characterise.

Prof Rodríguez de las Heras Ballell’s presentation was structured in two main parts. The first part began by assessing the legal issues raised by delegating tasks to AI systems across the contract lifecycle. It examined the legal implications of contractually relevant ADM-based outcomes and decisions, focusing on business-to-consumer transactions and legal relationships. The discussion highlighted the additional consumer protection issues that arise with ADM-based contracting and asked whether the principles and rules of consumer law can be adapted to algorithm-driven contractual agreements and negotiations concluded with little or no human intervention by the contracting parties. In ADM-dominated markets, existing consumer protection laws may fail to address the specific risks and related legal issues posed by increasingly autonomous AI systems acting as agents, exposing consumers to unprecedented vulnerabilities. The analysis showed that the logic and concepts behind consumer protection law might need to be re-contextualised for a digital marketplace built on AI-based transactions. One illustration was how automation could circumvent established concepts and traditional rules governing contract formation and performance (such as contract validity, liability, etc.).

In the second part of the seminar, Prof Rodríguez de las Heras Ballell took a closer look at the contractual ecosystem in which ADM has to operate and at the range of contractually relevant interactions between the different actors involved. She emphasised that any assessment must start from a clear understanding of AI and the related technologies (eg, Internet of Things, Big Data, etc.) that underpin the functioning of ADM systems. Not all AI applications carry the same risks, and a ‘one-size-fits-all’ approach cannot provide a sound conceptual framework for addressing the technical and legal issues associated with ADM. Policymakers and lawmakers are therefore called upon to make fundamental decisions, grounded in the technical aspects of AI, that may have substantial implications for how AI is implemented and regulated. Given the complexity embedded in ADM in the contractual context, the law may need to find creative solutions to provide a safe and reliable regulatory framework. Prof Rodríguez de las Heras Ballell observed that a recent proposal, ‘Guiding Principles for Automated Decision-Making in the EU’, prepared under the auspices of the European Law Institute, aims to promote internationally recognised principles that respond to the ethical and legal issues mentioned above and guide legislators, practitioners, and developers of AI in algorithmic contracting.

As a starting point, ADM should not require any specific legal recognition beyond that accorded to non-ADM-based decisions with equivalent legal effects: ADM-based contracts should not be denied legal effect, validity, or enforceability merely because the decisions are automated (Principle 2). Without this principle, the law risks unjustifiably discouraging ADM; with it, legal concepts and rules can be extrapolated from traditional situations and applied to the ADM context. The central assumption is thus that the law must continue to function in the case of automation and ADM. However, additional requirements may be needed, including ensuring that ADM complies with all existing and applicable legislation (Principle 1). Legal requirements should also guide AI operators towards responsible and reliable use in the specific socio-economic context in which automated technologies are embedded, given the associated risks (Principle 11). The use of ADM should not allow parties to circumvent existing legislation but should instead promote good practices that ensure the responsible use of innovative technologies in specific application domains, based on the risks to be mitigated and in the light of the legal and societal values to be pursued. Prof Rodríguez de las Heras Ballell highlighted the critical role of operators as ‘gatekeepers’ in relation to vulnerable parties (ie, consumers). Operators should be held accountable for ADM-based decisions (Principle 3) and be liable for wrongdoing or harm caused by ADM (Principle 7). The underlying assumption is that the party who uses the ADM, is in the best position to control it, and otherwise benefits from it should be accountable and liable. The operator may in turn demonstrate that other relevant stakeholders in the AI production line (eg, system, data, and upgrade providers) share in that liability. This approach can therefore serve to better allocate the risks associated with ADM.

More generally, the use of ADM should always be disclosed to individuals who interact with, or are subject to decisions by, such systems, provided that disclosure enables them to exercise their rights in a meaningful way (Principle 4). To this end, ADM systems should be designed and operated so that their decisions can be traced (Principle 5). Traceability primarily concerns technical aspects and solutions, so the appropriate level of traceability depends on the complexity of the specific ADM system. As a common denominator, traceability requirements make it possible to identify human responsibility and to allocate risks between different stakeholders. The mere fact that ADM is used does not allow users to escape their obligation to provide reasons for ADM-based outcomes, especially given the complexity, opacity, and/or unpredictability that ML-based ADM systems may entail (Principle 6). The latter consideration becomes even more relevant when public authorities use AI and ADM to support or automate decision-making in the provision of public services. Another critical issue in the context of ADM is ensuring that any affected individual retains the ability to exercise their rights and has access to legal remedies. In this regard, individuals should have the same rights whether they face human-based decisions or ADM, and should be allowed to use human-based pathways in exercising their rights (Principle 8). Other critical safeguards must address the difficult exercise of balancing automation and the human component, including the role and actual scope of human oversight and intervention (cf Principle 9). In addition, the requirement for human review of ADM-based decisions warrants careful consideration, especially in view of specific use cases and their associated risks (cf Principle 10). Finally, a risk-based approach should be followed in applying these guiding principles (Principle 12).

The seminar concluded with an insightful and thought-provoking Q&A session moderated by Prof Dora Neo. Interesting and challenging questions arose, such as: How should ‘responsible’ AI be defined? How should the law deal with complex legal relationships when concepts such as AI ‘user’, ‘operator’, and ‘provider’ may not be easily distinguishable? What does the concept of ‘control’ in AI really mean? What is the interaction or trade-off between different principles, eg, ‘transparency’ and ‘traceability’? Can there be conflicts between them, and if so, how should the potentially competing legal interests be balanced?

View the event flyer here: https://law.nus.edu.sg/cbfl/events/cbflss230209/