Abstract
The growing use of algorithms in administrative procedures is profoundly reshaping the relationship between public authorities and citizens. While automation promises greater efficiency, it also raises significant legal concerns regarding transparency and intelligibility and, above all, the liability of the public entities that rely on such tools.
The current legal framework, both at the national and European level, already provides key reference points. Article 97 of the Italian Constitution enshrines the principles of good administration and impartiality, which must also be observed in automated procedures. The Digital Administration Code (Legislative Decree No. 82/2005) regulates the use of digital technologies by public administrations, imposing obligations of accessibility and transparency. At the supranational level, Article 22 of Regulation (EU) 2016/679 (GDPR) grants citizens the right not to be subjected to decisions based solely on automated processing, unless adequate safeguards and human oversight are in place.
A particularly problematic issue concerns administrative liability. Article 3 of Law No. 241/1990 requires administrative decisions to be properly reasoned, yet this principle risks being undermined by the opacity of so-called "black box" algorithms. Additional transparency obligations have been introduced by Legislative Decree No. 36/2023, implementing Directive (EU) 2019/1024, which emphasizes the publication and accessibility of data, including data relating to the automated decision-making systems employed by public administrations.
This contribution therefore aims to analyze the current regulatory framework, highlighting the challenges and legal responsibilities arising from the procedural use of artificial intelligence, with particular attention to the need to reconcile innovation with administrative fairness.