AI Trading and the Limits of the EU MAR Enforcement Regime

Alessio Azzutti (University of Hamburg)

Abstract

Thanks to technological and regulatory innovations, algorithmic trading (AT) has become a fundamental component of everyday trading activity in global financial markets. The financial trading industry was an early adopter of artificial intelligence (AI) solutions, and today the most innovative machine learning (ML) methods promise to revolutionise trading further, generating several efficiency gains for society. Nevertheless, financial innovation can also come with drawbacks. The technical specificities of certain ML methods, which make it possible to approximate truly autonomous trading agents (the so-called “black box” problem), give rise to new technology-related risks. If not properly regulated, these risks could ultimately impair the fair and smooth functioning of EU capital markets and undermine their stability.
Against this background, this study makes two fundamental claims about the implications of ML-powered AT strategies for market integrity. First, the current liability framework under MAR may fail to deter AI-driven market manipulation effectively. This highlights the inability of EU financial law to force wrongdoers to internalise the costs of their unlawful conduct, leaving the market to bear the negative externalities of market manipulation by AI trading. Second, the EU lacks a sound policy strategy for steering technological innovation in finance towards enhancing social welfare, as evidenced by an already outdated regime of market conduct supervision and other enforcement mechanisms. In light of this, the study questions the capacity of the EU MAR/MAD enforcement regime to ensure credible deterrence.
More generally, AI trading poses severe challenges for effective detection, not least given the cross-border and fragmented nature of EU capital markets. Moreover, the autonomous, self-learning and black-box nature of certain AI applications adds a further layer of complexity to the attribution of liability for AI misconduct.
Delegating agency to AI can frustrate the safe application of traditional legal concepts of liability. The law usually requires prosecutors to prove scienter (or another relevant mental state) for misconduct to count as a crime. With this in mind, the study explores and discusses the merits of a number of possible changes to the EU legal framework to adapt it to evolving market dynamics and to achieve legal certainty and credible deterrence:
1. Abandon the scienter-based assessment of market manipulation in favour of a new legal definition and test that emphasises market harm.
2. Adopt new liability rules and further harmonise enforcement regimes within the EU.
3. Revise existing supervisory arrangements towards an enhanced centralisation of powers in ESMA and introduce innovative market-based solutions to MAR enforcement (e.g. “bounty hunters”).
Overall, these proposals aim to reform the current EU enforcement regime so as to achieve credible deterrence vis-à-vis AI market manipulation, thereby safeguarding the integrity and stability of EU capital markets and supporting the effective attainment of the Capital Markets Union project.

©2024 Italian Society of Law and Economics. All rights reserved.