Discussion Prompt: When, if ever, is predictive policing effective, fair, and legitimate? What is the role of data reliability in this?


Trial after trial shows that there is no clear relationship between the use of predictive policing tools and crime reduction. Yet many police forces still jump on the bandwagon: they do not want to appear outdated, they are curious about new policing technologies, or they simply fall prey to sales pitches. Across Europe, predictive policing has also been the product of political crises that gave rise to more interventionist, and at times racialised, notions of security. Given the risk of increased stigmatisation and over-policing of specific communities, we must therefore examine not only the tools themselves but the legitimacy of such forms of state intervention as a whole.


In the last decade, predictive policing tools have increasingly been tested and deployed by police forces across Europe. These tools come with the promise that analysing historical and real-time data makes it possible to predict when and where a crime is most likely to occur, or who is most likely to engage in or become a victim of criminal activity in the near future. In theory, this should allow police to deploy their limited resources more efficiently and to intervene pre-emptively, taking action before a crime has occurred and thereby reducing crime more effectively.
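To make that premise concrete, the sketch below is a minimal, purely illustrative example of place-based prediction: it counts recent recorded incidents per map grid cell and ranks the cells as "hotspots" for patrol planning. The incident data, grid size, and time window are hypothetical, and this is not the method of Predpol, Precobs, CAS, or any other vendor discussed in this article.

```python
# Illustrative sketch only: a toy place-based "hotspot" ranking, not any
# vendor's actual algorithm. It counts historical incidents per grid cell
# within a recent time window and ranks cells by count, which is roughly
# the premise such tools rest on: past concentrations signal near-future risk.
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical incident records: (timestamp, x, y) in arbitrary map units.
incidents = [
    (datetime(2021, 3, 1, 22, 15), 1250.0, 830.0),
    (datetime(2021, 3, 2, 23, 40), 1260.0, 845.0),
    (datetime(2021, 3, 3, 1, 5), 400.0, 2100.0),
    (datetime(2021, 3, 5, 21, 30), 1255.0, 838.0),
]

CELL_SIZE = 250.0            # grid cell edge length (assumed, in map units)
WINDOW = timedelta(days=28)  # how far back the "historical" data reaches

def rank_hotspots(records, now, top_n=3):
    """Return the top_n grid cells with the most incidents in the window."""
    counts = Counter()
    for ts, x, y in records:
        if now - ts <= WINDOW:
            cell = (int(x // CELL_SIZE), int(y // CELL_SIZE))
            counts[cell] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    for cell, n in rank_hotspots(incidents, now=datetime(2021, 3, 6)):
        print(f"cell {cell}: {n} recorded incidents in the last 28 days")
```

Even this toy version makes one structural issue visible: the ranking is driven by *recorded* incidents, so where police already look shapes what the data shows, which is one way such tools can reproduce existing patterns of attention.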

The belief that technology will improve existing decision-making is at the heart of the argument that predictive policing can be used effectively, fairly, and legitimately. Critical research has indicated, however, that predictive policing tools perpetuate, reinforce, and obscure existing inequalities, and ethics discussions have explored how to assess these technologies and under which conditions these tools can be deployed. What is often missing in these debates — and what this article will outline — is the context in which police are turning to these tools and insight into the reasons why they invest in them.

The Covid-19 pandemic, a global crisis unfolding on top of existing structural discrimination and inequality in society, has emphasised the urgency of understanding state intervention and tech solutions in their context. Contextualising predictive policing means understanding the conditions in which these tools are advanced and recognising that judging a tool solely on its effectiveness, fairness, and legitimacy rests on a false premise. Instead, this judgment should also be informed by the larger police structures and incentives at play.


Ignoring the seeming ineffectiveness of predictive policing

Predictive policing has been sold as a tool that promises to make policing more ‘effective’, but this claim is increasingly disputed, particularly if we equate effectiveness with a reduction in crime. While a range of tools aimed at predicting crime are still being developed and deployed across Europe, several police units are discontinuing their programs for this reason. Take, for example, Kent Police in the UK, which has cancelled its commercial contract with the predictive policing software vendor Predpol. Explaining the decision, the superintendent of Kent Police was quoted as saying: “Predpol had a good record of predicting where crimes are likely to take place. What is more challenging is to show that we have been able to reduce crime with that information”.

In Germany, too, there are clear signs that predictive policing tools are delivering disappointing results. In Stuttgart, the State Minister of the Interior explained that, after experimenting with Precobs, the police found that actual crime rates were too low for the instrument to make predictions. At the same time, he stressed that there were clear benefits to using new technologies and that the police should not shy away from them in the future. This sentiment appears to be common across German policing: in-depth research found that the most significant change for police forces testing predictive policing tools was related neither to the effectiveness of these tools nor to how they transformed the nature of policing from reactive to pre-emptive. Instead, the use of these tools primarily reinforced the police's belief in, and desire to work with, data.

In the Netherlands, High Impact Crime (HIC) rates are also considered by some to be too low for these tools to be used in real time. At best, the tools function alongside a range of other support tools, as input for police planning for the coming days and weeks. What is striking is that even after the Dutch Police Academy concluded in 2017 that it could not find evidence that using the predictive policing tool Crime Anticipation System (CAS) led to a decrease in crime, the Dutch police still decided to roll it out across 90 local teams.

The contradiction between European police forces' continued interest and investment in predictive policing tools and the growing questions about their actual effectiveness in reducing crime suggests that there may be other reasons to favour their use. When organisations such as national and regional police forces adopt new technology, they are often driven by a combination of curiosity and a fear of falling behind and failing to modernise, especially when they see other countries using it. Anglo-Saxon police culture, as well as smaller police units with a dedicated IT budget, has also historically been more susceptible to the promises of commercial sales pitches.


The full force of the state

Beyond trying to predict the location and time of future crime, another strand of predictive policing tools seeks to determine the likelihood that someone will become a perpetrator or victim of crime. There is a range of such programs. For example, the Top X lists in the Netherlands focus on identifying the most prolific HIC offenders, the Integrated Offender Management (IOM) model in the UK aims to predict which perpetrators will escalate from low- to high-harm crime, and models like RADAR-iTE in Germany aim to predict which potentially dangerous persons are most likely to commit a violent Islamist terrorist attack.
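For readers unfamiliar with the mechanics, the sketch below shows the general shape of such person-based lists in the simplest possible form: score individuals on a handful of features and keep the highest-scoring few. The features, weights, and threshold here are hypothetical, and this is not the actual logic of the Top X lists, IOM, or RADAR-iTE.

```python
# Illustrative sketch only: a toy person-based risk list. Features and
# weights are invented for the example and do not reflect any real program.
from dataclasses import dataclass

@dataclass
class Record:
    person_id: str
    prior_offences: int      # hypothetical feature
    months_since_last: int   # hypothetical feature

def risk_score(r: Record) -> float:
    """Combine the toy features into a single score (arbitrary weights)."""
    recency = max(0, 24 - r.months_since_last) / 24  # more recent = higher
    return 2.0 * r.prior_offences + 10.0 * recency

def top_x(records, x):
    """Return the x highest-scoring individuals, i.e. the 'list'."""
    return sorted(records, key=risk_score, reverse=True)[:x]

if __name__ == "__main__":
    people = [
        Record("A", prior_offences=6, months_since_last=2),
        Record("B", prior_offences=1, months_since_last=20),
        Record("C", prior_offences=4, months_since_last=1),
    ]
    for r in top_x(people, x=2):
        print(r.person_id, round(risk_score(r), 1))
```

The point of the sketch is not the arithmetic but what follows from it: whoever defines the features, weights, and cut-off decides who ends up on the list and is subjected to the interventions described below.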

Exploring the origins of some of these programs shows that they are often a result of political priorities and choices that happen after a moment of crisis. The UK Metropolitan Police’s use of the Gang Matrix, a database through which political leaders constructed the notion of gang violence and offered the police a mandate to engage in preventative actions against “gang nominals”, was the result of a “highly-politicised response to the 2011 London riots”. The Top600 in Amsterdam started in 2011 as a political response to a number of violent HIC incidents in the city. RADAR-iTE was the response to a number of Islamist terrorist attacks in Germany.

What is unique about the European predictive policing programs that focus on crime, not terrorism, is that they aim to identify individuals who fit the characteristics of a predefined target group. These individuals are then subjected to the collective intervention of a range of public authorities, which can include police, municipalities, educational institutions, public health services, and parole officers. As such, the intervention to prevent an individual from continuing down a path of crime is, in theory, aimed at combining care and control. This is in line with what critical criminologists describe as the move away from understanding crime as the flaw of an individual and towards seeing criminal behaviour as the result of an unequal distribution of power, material resources, and life chances in society.

It is important to recognise that being on these lists is often stigmatising for individuals and their families and generates additional state interference in their personal lives. The coordinated approach means that the full force of the state is directed towards an individual and/or their family, which deepens the asymmetry of power between them. The question of the legitimacy of predictive policing should therefore not only focus on the technology but also weigh whether this state interference in an individual's private life is proportionate to the severity of the crimes that person has committed and the impact those crimes have on society.


Fairness by whose standards?

Racial justice scholars critique these interventions on the grounds that they risk creating constructed identities, such as ‘gang member’ or ‘terrorist’, that tie specific racialised communities to a complex process of criminalisation. Here, ‘the young black men’, ‘the young Muslim men’, and ‘the young Roma’ are identities tied to crime phenomena in political discourse, by the police, in schools, and on the streets.

The White Collar Crime Risk Zones project flips the predictive policing narrative and raises the million-dollar question: is it socially acceptable for police and other public authorities to take the preventative care and enforcement actions currently applied to individuals flagged by predictive policing tools against white-collar criminals and their families? This could include increased stop-and-search actions in financial districts targeting white, middle-aged men in suits, the individuals who would fit the constructed profile of a white-collar criminal. Most likely, the answer is no.

Context matters: the types of crime to which these predictive models are applied are the result of contemporary and historical police priorities, which are politically and socially informed. This is an important reflection. By choosing to profile specific crime areas but not others, and to group offenders based on data characteristics, society runs the risk of criminalising individuals, and the families of individuals, who belong to a specific racialised or lower socio-economic community.

As such, this article argues that decisions about whether or not to use predictive policing must look beyond the issues of the tool itself and critically reflect on its perceived added value and on the fairness and legitimacy of the entire intervention. In debating the issue, we must ask whether the desire to innovate or to tackle a specific security problem can come at the expense of individual and collective fundamental human rights. Furthermore, when challenging these technologies, it is equally important to understand the incentives that drive police to turn to them in the first place.