TRAIL-CBFL Seminar: Regulating Autonomous Trading: Addressing AI‑Driven Market Manipulation in Australia

On 23 March 2026, the Centre for Technology, Robotics, Artificial Intelligence & the Law and the Centre for Banking and Finance Law jointly hosted a seminar by Professor Mimi Zou, Head of the School of Private and Commercial Law at the University of New South Wales. Professor Zou spoke on “Regulating Autonomous Trading: Addressing AI-Driven Market Manipulation in Australia.”
Professor Zou opened by setting her remarks against the backdrop of today’s financial markets, where algorithmic trading now accounts for the lion’s share of activity across major asset classes in Australia. From there, she posed a pointed question: as trading systems grow ever more sophisticated, could AI agents come to operate entirely on their own, and if so, what risks might follow?
The seminar’s central concern was what Professor Zou termed “learned market abuse.” She explained that advanced AI systems, particularly those built on reinforcement learning, may independently develop harmful trading strategies with no explicit direction from human operators. These include forms of learned collusion, where algorithms tacitly coordinate to move market prices, and learned manipulation, such as spoofing or benchmark distortion. Crucially, such behaviours arise from within the AI’s own optimisation processes rather than from any deliberate human design, and this poses a direct challenge to conventional assumptions about accountability.
Professor Zou then turned to the regulatory picture. She mapped out three broad stages of intervention available under existing legal frameworks: preventative measures aimed at stopping harmful trades before they occur; act-based interventions that operate in real time during trading; and harm-based interventions triggered after market distortions have already taken hold. While these mechanisms remain important, she was candid about their shortcomings in dealing with AI-driven trading, where harmful conduct can be opaque, emergent, and stubbornly difficult to attribute to any particular actor.
On the question of regulatory responses, Professor Zou examined the Australian Securities and Investments Commission’s Consultation Paper 386, which proposes revisions to market integrity rules in light of growing automation. The proposals include a clearer definition of trading algorithms, stronger requirements around testing and governance, and broader obligations on trading participants to ensure their systems do not undermine market integrity. These reforms reflect a continued preference for technology-neutral regulation — targeted adjustments rather than wholesale AI-specific overhaul.
Even so, Professor Zou was clear that formidable challenges remain. Regulators face a persistent detection gap, particularly where manipulative strategies are designed — or happen — to resemble legitimate trading. The framework continues to place heavy responsibility on human deployers, despite the fact that those deployers often have limited insight into opaque, self-modifying systems. And enforcement faces real evidentiary hurdles: tracing observed market outcomes back to a specific algorithmic strategy is no easy task when there are no requirements for explainability or auditability.

The seminar closed on a forward-looking note. AI-enabled trading brings genuine efficiency gains, Professor Zou acknowledged, but she stressed that regulators will need sharper tools to keep pace with emergent harms. She suggested that particular attention should be paid to system-level interactions and cross-market dynamics — the terrain where the most complex and unpredictable algorithmic behaviours are most likely to emerge.
