Abstract
This paper investigates the quality of judicial decision-making in the criminal justice system in light of the increasing incorporation of algorithmic tools, with a specific focus on the preliminary stage of criminal proceedings and, in particular, on decisions regarding pre-trial preventive detention in the Italian legal system. While the general debate on the use of Artificial Intelligence (AI) in the judiciary has emphasized the potential of such tools to enhance efficiency, reduce case backlogs, and support consistency in decision-making, the application of AI to precautionary measures raises a distinct set of normative, institutional, and constitutional concerns.
The use of algorithms in criminal justice is a clear manifestation of the broader digital transformation affecting legal institutions. The idea that judicial decisions could be supported, integrated, or even replaced by automated procedures has gained traction in both economic and legal debates. In particular, the use of algorithms in the context of preventive detention constitutes a benchmark for evaluating the impact of AI on judicial processes. Unlike the trial or adjudicative phase, the precautionary stage occurs at a point where guilt has not been established and the available evidence is often incomplete. Consequently, it becomes crucial to balance the defendant’s personal liberty against the general interest. This intermediate procedural position makes the precautionary phase a privileged vantage point for examining how criminal law may be affected by the introduction of algorithmic technologies.
From a legal standpoint, precautionary measures are exceptional and provisional: their purpose is not to punish, but to prevent specific risks, such as reoffending, flight, or tampering with evidence. Their application is governed by fundamental principles such as the rule of law, suitability, and proportionality. These principles imply that judges retain a degree of discretion in assessing the criteria underpinning the imposition of such measures. In this scenario, the use of algorithms may affect judicial discretion by producing recommendations based on statistical estimates of the likelihood of recidivism or flight.
We propose formalizing the decision as the maximization of an objective function subject to a set of legal and institutional constraints specific to the Italian criminal justice system, where orders of pre-trial detention must comply with strict standards of proportionality, necessity, and reasoned judicial motivation. In this context, the trade-off between efficiency and fairness becomes particularly acute, as algorithmic tools may expedite decision-making while also risking the erosion of individual rights and procedural safeguards.
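A minimal illustrative sketch of this formulation, in which the symbols are placeholders introduced here for exposition rather than quantities defined by the Italian code of criminal procedure, can be written as

\[
\max_{d \in \{0,1\}} \; W(d) \;=\; \mathbb{E}[\text{precautionary risk averted} \mid d] \;-\; \lambda \, c_L \, d,
\qquad \text{subject to proportionality, necessity, and reasoned motivation,}
\]

where $d = 1$ denotes ordering pre-trial detention, $c_L$ the non-punitive cost of restricting the liberty of a person not yet found guilty, and $\lambda$ the weight a liberal legal order attaches to that cost. The constraints are stated qualitatively, since their legal content cannot be reduced to a single numerical threshold.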
The potential advantages of algorithmic support are clear in terms of resource optimization, reduction of procedural delays, and improved consistency. For example, machine learning models trained on historical data may assist in evaluating flight risk or danger to society, and can support magistrates in applying legal standards more uniformly. However, the deployment of such systems in this sensitive procedural phase is fraught with systemic risks: biased training data may reproduce or exacerbate existing inequalities; the opacity of algorithmic reasoning may conflict with the duty to provide reasoned decisions; and excessive reliance on probabilistic assessments may result in unjustified restrictions on personal liberty.
The next step is to formalize these tensions within a model that assesses the optimal degree of algorithmic integration in judicial decision-making. Specifically, the model should account for false positives and false negatives in algorithmic risk assessments, their impact on legal safeguards, and their interaction with the presumption of innocence. By treating the pre-trial decision as a constrained optimization problem, we offer a structured approach to support policymakers and judicial authorities in defining regulatory boundaries and accountability mechanisms for the legitimate use of AI in criminal proceedings.
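To fix ideas, and again only as a sketch with notation assumed for exposition, the role of classification errors can be captured by an expected-loss term of the form

\[
\mathcal{L}(d) \;=\; c_{FP}\,\Pr(d = 1,\; r = 0) \;+\; c_{FN}\,\Pr(d = 0,\; r = 1),
\]

where $r = 1$ indicates that the feared event (flight, reoffending, or tampering with evidence) would actually materialize if the defendant were released, $c_{FP}$ is the cost of detaining a person who poses no such risk, and $c_{FN}$ the cost of an unaddressed risk. The presumption of innocence can then enter either as a strong asymmetry, $c_{FP} \gg c_{FN}$, or as an explicit cap on the false-positive rate, $\Pr(d = 1 \mid r = 0) \le \alpha$.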
Our contribution provides a conceptual and analytical foundation for a normative and quantitative reflection on the compatibility between AI-assisted decision-making and the fundamental principles of criminal justice in a liberal democracy. We also suggest directions for further empirical research aimed at evaluating the actual performance of algorithmic systems in the Italian judiciary and their implications for legal culture, institutional trust, and the protection of fundamental rights.