Guilty or Innocent? The Use of Algorithms in Bail Reform
By Rachel Kramer | March 10, 2019

Efforts are underway across the country to reform our criminal justice system: to incarcerate fewer people of color, to revise or remove the system of bail, and to change how drug and other non-violent offenses are treated in our courts. One major path of reform many states are traveling is technological: implementing risk assessment algorithms as a way to mitigate human bias, error, and our inability to systematically weigh many pieces of information about a person in order to statistically infer specific outcomes. These risk assessment tools vary considerably, but they all perform the same basic function: they take past information on a defendant and, using machine learning, predict the likelihood of a future event the court wants to prevent, such as fleeing the state, not showing up to court dates, or being arrested for a violent or non-violent crime after pretrial release. Judges use these risk scores to decide a variety of outcomes for the defendant at every stage of the criminal justice system, including the focus of this post: pretrial decisions such as whether to set bail and how the defendant is monitored before sentencing.
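To make the mechanics concrete, here is a minimal sketch, using synthetic data and hypothetical feature names, of the basic function such a tool performs: fit a model to past defendants' records and output a probability of a pretrial failure, which the court then maps onto a release, monitoring, or bail decision. Real tools differ widely in their features, models, and score scales; this is only an illustration.

```python
# Minimal sketch of a pretrial risk score (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

# Hypothetical historical records: features plus whether each defendant failed to appear.
X_history = np.column_stack([
    rng.poisson(1.5, n),       # number of prior arrests
    rng.integers(18, 70, n),   # age
    rng.integers(0, 2, n),     # currently employed (0/1)
])
failed_to_appear = rng.integers(0, 2, n)  # observed outcome in the training data

# Train on past records, then score a new defendant.
risk_model = LogisticRegression().fit(X_history, failed_to_appear)
new_defendant = np.array([[2, 24, 0]])
risk_score = risk_model.predict_proba(new_defendant)[0, 1]
print(f"Predicted probability of failure to appear: {risk_score:.2f}")
```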

The purpose of bail as it stands now is to assess the dangerousness of the defendant to the public if they are released back into society, and to set bail in line with that dangerousness. In extreme cases, the court can withhold bail and mandate pretrial imprisonment. The original purpose of bail was to incentivize defendants to show up to their court dates and to discourage the accused from fleeing the jurisdiction. Over the years, however, the purpose of bail has shifted. The civil rights movement ushered in a brief period of bail reform, meant to remedy the poverty-driven high pretrial incarceration rates among populations unable to make bail, but sentiments reversed during the conservative Nixon and Reagan eras, landing us back at assessing a defendant's dangerousness to the public as the primary goal of bail hearings.

Assessing the threat a defendant poses to the general population is a highly subjective matter, so it is no surprise that most states are beginning to favor a system that is statistical and backed by data. Machine learning seems to promise an objective and neutral decision-maker, or so goes the prevailing sentiment in many industries at the moment. But if the criminal justice system is desperate for reform, why should we turn to a decision-maker trained on data from the very system we are trying to reform? John Logan Koepke and David G. Robinson, both researchers at a technology think tank, ask this question in their comprehensive article, “Danger Ahead: Risk Assessment and the Future of Bail Reform.”

As every industry moves toward machine learning and AI applications, the question should be not only how we use these algorithms, but whether we should use them at all. It is well documented that our criminal justice system is biased against black and/or impoverished communities. Risk assessment algorithms can repeat and amplify these biases because they learn patterns inherent to the larger justice system, including patterns we are not yet aware enough of to name or address. Most risk assessment programs don't use race as an input, but so many other features of our lives and communities correlate with race that the system learns to discriminate by race even without the explicit information. Algorithms trained on data from a broken system are likely to produce what the authors call “zombie predictions”: predictions that reanimate old biases, overestimating the risk posed by certain defendants (usually black) and underestimating it for others (usually white). Even if the bias in the training data could be alleviated or worked around through data science procedures such as bootstrapping or feature weighting, the fix is not strong enough for many reformers, including Koepke and Robinson: making our punishment systems more efficient ultimately does little to reform the system as a whole.
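To see how race can leak in even when it is excluded as an input, consider a minimal sketch with synthetic data and hypothetical features: if a classifier can recover the protected attribute from the remaining “race-blind” features, then any model trained on those features has access to race-correlated signal, whatever its stated inputs.

```python
# Sketch of proxy leakage: recovering an excluded attribute from "race-blind" features.
# Synthetic data and hypothetical feature names; correlations are assumed for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
race = rng.integers(0, 2, n)                       # excluded protected attribute
zip_code_risk = race * 0.8 + rng.normal(0, 1, n)   # neighborhood correlates with race
prior_arrests = rng.poisson(1 + race, n)           # arrest counts reflect biased policing
age = rng.normal(30, 8, n)

# The feature matrix does NOT contain race itself.
X = np.column_stack([zip_code_risk, prior_arrests, age])
X_train, X_test, r_train, r_test = train_test_split(X, race, random_state=0)

proxy_model = LogisticRegression().fit(X_train, r_train)
auc = roc_auc_score(r_test, proxy_model.predict_proba(X_test)[:, 1])
print(f"AUC for recovering race from 'race-blind' features: {auc:.2f}")
# An AUC well above 0.5 means the feature set still encodes the protected attribute.
```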

Koepke and Robinson suggest that the system can and should be reformed without algorithms. Such reforms include automatic pretrial release for certain categories of crime, different standards for imposing pretrial detention, and replacing or cabining money bail entirely, like California's recent law eliminating cash bail. Many arrests during the pretrial period stem from violations of the conditions set at bail hearings, and failures to appear in court often happen because the defendant cannot miss work, find childcare, or access transportation. Simple measures can alleviate these problems, such as court-funded ride services or text-message reminders about court dates. Reforms at the police level are also vital, though outside the scope of this post.

If machine learning algorithms are here to stay in our justice system, which seems likely, there are actionable ways to improve their performance and reduce the harm and injustice their use can cause. Appallingly, many of the algorithms in use have never been externally validated or audited. Beyond guaranteeing accountability in the software itself, courts could follow up on defendants to compare the system's predictions against real outcomes in their own jurisdiction, a check that is especially important to repeat after any bail reforms take effect. The algorithms need to be trained on the most recent local data available and, crucially, on data coming from an already reformed or reforming system. Recently in New York, the city's Criminal Justice Agency announced it would train an algorithm to improve the flawed bail system, but the training data came from the stop-and-frisk era of policing, a practice since ruled unconstitutional. Egregious oversights like these can further marginalize already vulnerable populations.
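Here is a minimal sketch, with hypothetical column names and made-up data, of the kind of local follow-up audit described above: once a jurisdiction observes real outcomes, it can compare them against the tool's scores, overall and by group, and repeat the check after any reform changes the underlying population.

```python
# Sketch of a local audit: compare predicted risk buckets against observed outcomes by group.
# Column names and data are hypothetical placeholders for a court's follow-up records.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
followup = pd.DataFrame({
    "risk_score": rng.uniform(0, 1, 1000),    # score the tool produced at the hearing
    "rearrested": rng.integers(0, 2, 1000),   # observed outcome after release
    "group": rng.choice(["A", "B"], 1000),    # demographic group for disparity checks
})

# Bucket scores and compare predicted vs. observed rates: a locally valid, well-calibrated
# tool should show observed rates that track the buckets similarly for every group.
followup["bucket"] = pd.cut(followup["risk_score"], bins=[0, 0.33, 0.66, 1.0],
                            labels=["low", "medium", "high"])
audit = followup.groupby(["group", "bucket"], observed=True)["rearrested"].agg(["mean", "count"])
print(audit)
```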

Our focus in data science has generally been on improving and refining the tools of our trade. This paper, along with other reform movements, invites data science and its associated fields to take a step back in the implementation process. We need to ask ourselves what consequences an algorithmic or machine learning application could engender, and whether there are alternative ways to bring about change in a field before leaning on technologies whose impacts we are only just beginning to understand.

——-

Source:
(1) John Logan Koepke & David G. Robinson, “Danger Ahead: Risk Assessment and the Future of Bail Reform,” 93 Wash. L. Rev. 1725 (2018), http://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1849/93WLR1725.pdf
