Abstract
The integration of generative artificial intelligence into digital platforms presents unprecedented challenges to traditional competition law frameworks, particularly the economic models of consumer behavior that underpin market definition and consumer welfare analysis. The Google AI Overview case, filed with the European Commission in June 2025, serves as a critical juncture for examining how behavioral economics can reshape our understanding of market competition in the artificial intelligence era.
In response, this research develops a comprehensive behavioral economics analysis of the antitrust implications of AI-driven search systems, demonstrating how generative AI exploits cognitive biases in ways that traditional economic assumptions about consumer substitution and market dynamics struggle to address. Through an interdisciplinary approach combining legal doctrinal analysis with empirical behavioral research, this study argues that the economic models of consumer behavior embedded in EU competition law frameworks – especially in market definition exercises and consumer welfare assessments – leave existing approaches inadequate to address the sophisticated behavioral manipulation enabled by AI systems. The research proposes novel regulatory approaches that explicitly account for cognitive biases and choice architecture manipulation in AI-mediated environments while maintaining legal certainty and practical enforceability.