Algorithms and Future Crimes: Welcome to the Racial Profiling of the Future
Across the country, large police departments have been developing their ability to track where crime will happen next using predictive software. Known as “predictive policing,” the practice has made waves in the media over the last few years, capturing the imagination of futurists and tough-on-crime zealots, while offending the sensibilities of basically everyone else.
Proponents describe the program in techno-pragmatist terms, arguing that it uses data to make smart inferences about the future in much the same way meteorologists do. Opponents compare the idea to hellishly dystopian stories like The Minority Report, where innocent people are rounded up because a computer said there was a chance they would break the law in the future.
There is one major feature of predictive policing that the libertarian critique often glosses over: it's unmistakably racist.
Any attempt to predict future criminality will be based on the crime rates of the past. It's well known that blacks and Hispanics are arrested at a higher rate than whites and comprise the majority of the prison population. If that's the reality that is supposed to inform who we criminalize in the future, won't initiatives like predictive policing just perpetuate the racist criminal justice policies and practices of the present?
The Verge took these questions to Chicago to examine the most developed and well-financed iteration of predictive policing in the country. The Chicago Police Department uses data on past crimes, disturbance calls, and calls regarding suspicious persons to create a crime map that "highlights neighborhoods of the city that might soon be at risk of an uptick in crime."
In keeping with the dry, data-babbling sell, the predictive analyst behind Chicago's program, Dr. Miles Wernick, compares it to his previous work in weather forecasting. "The recommendations of the mapping system will not replace the expertise of police officers, but instead [will] highlight potential concerns so police can take them into account," he says.
CPD has also created a "heat list" of around 400 Chicagoans who are "most likely to be involved in violent crime." Police have already visited the homes of 60 people on the list, warning them like a schoolteacher warns a class clown: if they screw up, the law will be watching, and there will be serious consequences.
Hanni Fakhoury, a staff attorney from the Electronic Frontier Foundation, summed up concerns about CPD's use of predictive policing:
Are people ending up on this list simply because they live in a crappy part of town and know people who have been troublemakers? How many people of color are on this heat list? Is the list all black kids? Is this list all kids from Chicago’s South Side? If so, are we just closing ourselves off to this small subset of people?
For the moment, those questions cannot be answered because the CPD blocked an attempt by The Verge to access the heat list through a request filed under the Freedom of Information Act.
Wernick insists, delusionally, that predictive policing "evaluates the risk of violence in an unbiased, quantitative way," reaching for a smoking analogy to justify his claim:
[It works in a] similar manner to how the medical field has identified statistically that smoking is a risk factor for lung cancer. Of course, everybody who smokes doesn't get lung cancer, but it demonstrably increases the risk dramatically. The same is true of violent crime.
Wernick and the CPD want to put already blighted communities in their crosshairs for enhanced police presence. Imagine if, instead of being targeted for more patrolling, those communities were targeted for more schools, social workers, and community-building resources.
Surely, that too would have an impact on the future of crime.