As distrust divides citizens and the police, developers have been working on software that predicts an individual’s likelihood of committing a crime based on an algorithmic score.
COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, is an analytical tool developed by Northpointe Inc.—a branch of the software company Volaris Group—to evaluate a subject’s risk of recidivism, or relapsing into criminal behavior after receiving sanctions or undergoing a crime-related intervention. The goal is for COMPAS to produce a more accurate prediction than a person could.
The way it works seems simple: defendants answer a special set of questions, and their answers are fed into COMPAS, which uses them to generate individual scores. The computerized system then offers guidance regarding the placement, supervision, and case management of the subject.
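In rough terms, that pipeline amounts to turning questionnaire answers into a weighted score and mapping the score onto a risk band. The sketch below is purely illustrative: the questions, weights, and cutoffs are invented, since Northpointe’s actual COMPAS model is proprietary.

```python
# Hypothetical sketch of a questionnaire-based risk score.
# The questions, weights, and cutoffs here are invented for
# illustration; the real COMPAS model is proprietary.

def risk_score(answers, weights):
    """Combine questionnaire answers into a single weighted score."""
    return sum(weights[q] * answers[q] for q in weights)

def risk_band(score):
    """Map a raw score onto low/medium/high bands."""
    if score < 4:
        return "low"
    if score < 7:
        return "medium"
    return "high"

# Example: three made-up questions, each answered on a 0-3 scale.
weights = {"prior_arrests": 1.0, "age_at_first_offense": 0.8, "employment": 0.5}
answers = {"prior_arrests": 3, "age_at_first_offense": 2, "employment": 1}

score = risk_score(answers, weights)  # 3.0 + 1.6 + 0.5 = 5.1
print(risk_band(score))               # prints "medium"
```

The point of the sketch is only that the output is a number produced by fixed arithmetic over the answers; everything downstream, including a judge’s impression of the defendant, hangs on that single figure.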
In an evaluation of COMPAS’s effectiveness, Center for Public Policy Research researchers Jennifer L. Skeem and Jennifer Eno Louden found that the tool is easy to use and appears to assess characteristics relevant to the probability of re-offense, comparing a subject against other offenders in the same jurisdiction to produce its score.
However, Skeem and Louden identified three main drawbacks. First, they doubted the legitimacy and validity of the assessment scales used. Second, there was not enough evidence of COMPAS’s actual ability to predict relapse into crime; it does not combine essential components, such as risk factors, into a single final score. Third, they doubted whether COMPAS can track “change over time in criminogenic needs,” the individual traits that may lead to recidivism.
Meanwhile, a New York State COMPAS-Probation Risk and Need Assessment Study prepared by Dr. Sharon Lansing found the COMPAS Recidivism Scale effective overall in its predictions, stating that it “achieved an acceptable level of predictive accuracy” and “worked effectively with respect to study cases overall.”
Another team of researchers, Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin of the non-profit newsroom ProPublica, tested the software’s algorithm for racial bias. The researchers examined scores for more than 10,000 defendants in Broward County, Florida. They found that black defendants “were twice as likely as white defendants to be misclassified as a higher risk of violent recidivism,” while “white recidivists were misclassified as low risk 63.2 percent more often than black defendants.”
The ProPublica study also found that only 20 percent of those flagged as “high risk” for committing violent crimes within two years actually did, meaning the remaining 80 percent did not re-offend as predicted.
These varying results and conclusions raise the question: can we trust computers to make statistical inferences about human behavior?
Currently, use of COMPAS is not mandated by the federal government; each state chooses whether to implement the tool. Some precincts already use PredPol, a similar predictive-policing software.
While COMPAS may estimate a person’s likelihood of recidivism, it cannot take into account a person’s character, the circumstances they are under, or the psychological aspects that can’t be reduced to a number. Algorithms help us only to a certain extent. Humans cannot be “scored” or captured by a single number.
By Cesar Zafra