Abstract
This research considers the benefits of centralization and decentralization of the regulation of artificial intelligence (AI) from a comparative law and economics perspective. We consider the costs and benefits of AI regulation and the implications of convergence or divergence in the regulation of AI within federal systems. Using Van den Bergh's model of the benefits of centralization and decentralization within federal systems, we examine the US and EU approaches to the regulation of AI. This examination shows a distinct difference in risk preferences related to the regulation of AI between the EU and US, as well as a higher likelihood of negative externalities related to the regulation of AI emerging in the US than in the EU. At the same time, we recognize that the US approach of decentralized regulation of AI is more likely to lead to the identification of novel regulatory approaches, which may eventually generate positive externalities.
This examination focuses on the costs and benefits of centralization and decentralization within federal systems, i.e. systems where competencies to regulate are divided between the federal level and the state level. The US and EU provide useful examples of how federal approaches to regulating AI diverge. Even a brief comparative review of the regulation of AI reveals a divergence in approaches both between states inside a federal system (within the US) and across borders between different federal systems (between the EU and US). There are important distinctions between the EU and US approaches to federalism. While EU Member States (MS) maintain national sovereignty, and the EU itself is arguably not a fully sovereign state under international law given its limited competencies, states within the US cannot be considered sovereign nations. While individual US states maintain competency to regulate matters reserved to them under the US Constitution, they are subject to the supremacy of federal law and to limitations on regulating economic activities under the Commerce Clause. Narrowing our review to the US and EU allows us to identify the potential costs and benefits associated with each approach to regulating AI within the two largest federal systems in the world in terms of economic output. Considering the size of the EU and US economies and their role in developing and regulating AI, these two federal systems are the most significant jurisdictions to consider when examining how AI regulation within federal systems has developed.
The topic of AI regulation is at the forefront of legal scholarship, as the use of AI has the potential to change nearly every aspect of human society. This potential impact makes the regulation of AI particularly relevant to legal scholars, legal practitioners, and lawmakers. Along with the potential benefits of AI come potential costs of using AI. Potential benefits of AI include increased economic productivity, improved educational services, improved health outcomes, improved use of resources, benefits for scientific research, increased efficiency in government, and improvements to humans' everyday lives, among many others. Potential risks of AI include economic disruption, including job displacement; AI operating with bias; decreased privacy; the use of AI to produce misinformation; threats to state security from AI-empowered bad actors; overdependence on the use of AI; the high environmental costs of running AI; regulatory lag; and existential threats to humanity, among others.
While some uses of AI can be thought of as having a low probability of leading to negative outcomes, i.e. they are less risky, other uses of AI have a high probability of leading to negative outcomes, i.e. they are riskier. Thus, within this framework of centralized and decentralized regulation of AI, we can also consider the potential for risk preferences to play a role in the development of AI regulation. The EU approach, and the approach of some individual states within the US, to the regulation of AI is specifically designed to sort different uses of AI into risk categories. The reasoning behind this risk-based approach is closely related to the potential for specific uses of AI to create negative externalities; for example, the use of AI in financial markets creates a specific type of macroprudential or systemic risk related to the stability of financial markets. Importantly, negative externalities from the use of AI can cross borders, meaning the risk imposes costs on third parties, i.e. those who are neither users nor developers of AI within the state where the AI is created or used.
There is also a regulatory competition aspect to this analysis, concerning how states or nations compete over the provision of regulations to attract business incorporation. Given the likelihood of increased regulatory arbitrage by firms developing AI technology where regulation diverges, it is possible that such regulatory competition will result in negative externalities emanating from the regulation of AI.
Given the divergence in approaches to regulating AI, the potential for harm emanating from regulatory competition in this area must be addressed by legal scholars and researchers. This article seeks to address this unique type of risk related to the divergence of regulatory approaches in federal systems and builds on our previous work on federalism and regulation.