Discussion Prompt: When, if ever, is predictive policing effective, fair, and legitimate? What is the role of data reliability in this?


The rollout of GDPR in Europe and scandals like Cambridge Analytica bear witness to a widespread public mistrust of ‘data’ and what it is being used for. However, companies like Chorus Intelligence believe the answer is not to shy away from data-based policing tools, nor to think of them as a silver bullet, but to use them smartly to augment the nuanced judgements of experienced law enforcement officers. Police have a responsibility to ensure that the data and tools they use possess enough integrity to stand up in court and to win back the public’s trust.


“With great power comes great responsibility” is the famous line uttered by Spider-Man’s Uncle Ben. Despite its comic-book origins, it rings true for the technology now available to law enforcement. It also sounds a warning to those developing technology that claims to be able to predict future offending and thereby enable police to prevent crime before it happens.

It’s important at this point to define what we mean by ‘predictive policing’, because it means different things to different people. Traditionally, policing professionals have sought to predict where and when crime may be committed, and who is most likely to commit it, using a combination of experience, local knowledge, data, and that most mysterious of things: ‘gut instinct’. This combination has generally served policing well: whilst it is never 100% accurate, an experienced, multi-disciplinary team will correctly identify those places, times, and people more often than not.

However, the term ‘predictive policing’ is increasingly used in a second sense, describing a technological solution fed by large volumes of data and built by teams of data scientists using the latest machine-learning methods, which lets the ‘machine do the thinking’ and supply the answers to policing questions.

Chorus supports the former definition. Every experienced police officer knows that past behaviour is a good predictor of future behaviour, and Chorus products paint the picture of past behaviour better than anything else. Our software can support traditional predictive policing by showing the patterns in communications, the movement of vehicles and electronic devices, and the relationships between groups of offenders and their victims. The products generated from Chorus tools then support targeted, intelligence-led operations to locate and arrest suspects quickly, and thus protect the public before crimes and terror attacks occur.

It is almost impossible to commit crime in the 21st century without leaving an indelible electronic footprint. Offenders will try to cover their tracks by limiting their use of devices, swapping and discarding devices, or using coded language, but they almost always make mistakes or get lazy. This is where our products help the police to make sense of what would otherwise be an impossible, confusing blizzard of data.
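Chorus’s actual analytics are proprietary and not described here, but as a loose, hypothetical illustration of the kind of pattern-surfacing this refers to, consider counting which pairs of devices contact each other most often in a batch of call records (the schema and numbers below are invented):

```python
from collections import Counter
from itertools import islice

# Hypothetical call records: (caller, callee, timestamp).
# Real call data records carry far more fields; this is invented for illustration.
cdrs = [
    ("07700900001", "07700900002", "2019-03-01T09:12"),
    ("07700900002", "07700900001", "2019-03-01T09:40"),
    ("07700900001", "07700900002", "2019-03-02T18:05"),
    ("07700900003", "07700900004", "2019-03-02T20:30"),
]

# Treat each call as an undirected link between two devices and
# count how often every pair is in contact.
pair_counts = Counter(frozenset((caller, callee)) for caller, callee, _ in cdrs)

# Surface the most frequent pairings for an analyst to review; the
# judgement about what they mean stays with the human.
for pair, n in islice(pair_counts.most_common(), 3):
    print(sorted(pair), n)
```

Even a trivial count like this turns thousands of rows into a handful of leads; the point is that the tool surfaces candidates, while interpretation remains a human decision.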


A Bad Rep

The second definition of predictive policing has received bad press recently, partly because some uses have been perceived as potentially discriminatory. For example, Durham Constabulary’s HART model was designed to assist in making custody decisions, but it was criticised for potentially penalising detainees from poorer communities. The Metropolitan Police Service’s and South Wales Police’s use of facial recognition technology was legally challenged for generating too many ‘false positives’, and the use of PredPol software by US law enforcement agencies drew criticism on the grounds that it simply reinforced geographic bias.

At Chorus we believe that, now more than ever, data-driven methodologies are critical to the law enforcement world. Using data and powerful analytics tools to work smarter and make better decisions with scarce resources must be part of the response to the growth in serious and organised crime in the UK and abroad. However, we need to ensure that users understand how these tools work and can easily explain them to the general public or to members of a jury.


Building Public Trust

Before addressing the questions around effectiveness and legitimacy, a larger piece of work is first required from law enforcement and technology providers to improve public trust in the police’s use of data. Winning the public’s full support means ensuring that these methods are non-discriminatory and that people know which types of data are being used, how, and why. In the wake of scandals such as Cambridge Analytica, and with GDPR sharpening public attention to how personal data is handled, mistrust of data and what it is being used for has increased. However, it would be a mistake to shy away from the opportunities afforded by data exploitation in policing, which is why policing needs to fully explain its use of personal data and demonstrate that it is unambiguously done for the benefit of the public.

Every day of the week, law enforcement professionals make ‘predictions’ based on a combination of data and professional judgement to prevent bad things from happening; it is often the cases that don’t make the news where these predictions have paid off. Commissioner of the Metropolitan Police Service Cressida Dick and former MI5 boss Andrew Parker both spoke out last year on the very necessary role that data and data sharing play in tackling terror-related situations. Using intelligence to predict where attacks might occur, and who is behind them, in order to prevent them can only be a good thing.

However, if police forces are going to embrace artificial intelligence (AI) and machine-learning technologies, they will need to rigorously test them first to ensure that they don’t reinforce bias against specific groups of people or locations. These technologies can look attractive, like a ‘silver bullet’ for your policing problems; however, they should only ever be seen as an aid to human decision-making, never as a ‘black box’ solution.
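The article doesn’t prescribe how such testing should work, but one simple and widely used check is to compare a model’s false-positive rate across demographic groups on held-out data: of the people who did not go on to offend, how many did the model flag anyway? A minimal sketch, with invented records and group names:

```python
from collections import defaultdict

# Hypothetical held-out records: (group, model_flagged, actually_offended).
# All names and values here are invented for illustration.
records = [
    ("group_1", True,  False), ("group_1", False, False),
    ("group_1", True,  True),  ("group_1", False, False),
    ("group_2", True,  False), ("group_2", True,  False),
    ("group_2", True,  True),  ("group_2", False, False),
]

false_pos = defaultdict(int)      # non-offenders wrongly flagged
non_offenders = defaultdict(int)  # all non-offenders

for group, flagged, offended in records:
    if not offended:
        non_offenders[group] += 1
        if flagged:
            false_pos[group] += 1

# A large gap between groups is a warning sign worth investigating.
for group in sorted(non_offenders):
    rate = false_pos[group] / non_offenders[group]
    print(f"{group}: false-positive rate = {rate:.2f}")
```

No single metric settles whether a model is fair, but concrete checks like this are what turn ‘rigorously test’ from a slogan into a repeatable procedure.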


Ethical Frameworks Required

Establishing a framework for ethical practice is key to ensuring the legitimacy of predictive policing. Some forces, such as West Midlands Police with their National Data Analytics Solution (NDAS) project, have already started on this journey, with data insights and methods overseen by the West Midlands PCC Ethics Committee. NDAS received further funding from the Home Office in July 2019 to continue testing innovative technology and having it independently scrutinised for ethical soundness. The UK government’s Centre for Data Ethics and Innovation (CDEI) has an important role to play in this scrutiny and in setting national standards.

Organisations such as the Royal United Services Institute (RUSI), the Committee on Standards in Public Life, and the Alan Turing Institute have reported extensively on this issue and are offering forces guidance on AI and data ethics. There is also a wealth of private-sector knowledge and experience that can be leveraged to support law enforcement in this endeavour.


The Importance of Integrity

Data reliability is paramount for any operation or investigation, especially one that leads to a criminal justice outcome. It is the main reason Chorus exists as a company: helping law enforcement cleanse their data of the noise and misleading events that could see a case overturned.
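The article doesn’t detail Chorus’s cleansing pipeline, so purely as an illustration of what ‘cleansing’ can involve in practice, here is one common step: normalising phone numbers to a single format so that the same call, exported twice in different notations, isn’t counted as two events (the normalisation rule below is a toy, invented for this sketch):

```python
import re

def normalise_msisdn(number: str) -> str:
    """Toy normalisation: strip formatting and force UK numbers
    into international form. Real-world rules are far more involved."""
    digits = re.sub(r"\D", "", number)  # keep digits only
    if digits.startswith("0"):          # 07700... -> 447700...
        digits = "44" + digits[1:]
    return digits

# The same call exported twice with different number formatting:
raw = [
    ("+44 7700 900001", "07700 900002", "2019-03-01T09:12"),
    ("07700900001", "0770-090-0002", "2019-03-01T09:12"),
]

# Normalise both parties, then deduplicate on the full record.
cleaned = {(normalise_msisdn(a), normalise_msisdn(b), ts) for a, b, ts in raw}
print(cleaned)  # one record survives instead of two
```

Left uncleansed, the duplicate would double the apparent contact between those two numbers, exactly the kind of misleading event that could undermine a case.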

Any intelligence gathered from analysing data will only be as good as the raw data itself, so it is in the police’s and the public’s interest to ensure its integrity. Sharing data between departments and forces across the UK helps to build a stronger, more reliable picture of what is going on before anyone acts on that intelligence. More needs to be done to allow forces to share data and overlay it, especially for cross-border crime such as ‘county lines’ drug gangs and terrorism. Forces must be careful not to fall into the trap of using unreliable data, which causes intelligence-led blunders, missed opportunities, and unsafe convictions. Likewise, when more resources are sent to a geographical area or targeted at certain demographics, police will uncover more crime there simply because they are looking harder; if that recorded crime is fed back into decisions about deployment, it reinforces bias and the stereotyping of certain communities, as the sketch below illustrates.
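That feedback loop is worth making concrete. In the deliberately toy simulation below (not any force’s real model), two areas have identical underlying offending, but patrols are allocated according to previously recorded crime, and recorded crime depends on where the patrols are:

```python
# Two areas with IDENTICAL underlying offending.
TRUE_OFFENCES = {"area_A": 100, "area_B": 100}

# An initial, historically biased deployment: 60% of patrols in A.
patrol_share = {"area_A": 0.6, "area_B": 0.4}

for week in range(5):
    # Police mostly record the crime they are present to observe:
    recorded = {a: TRUE_OFFENCES[a] * patrol_share[a] for a in TRUE_OFFENCES}
    total = sum(recorded.values())
    # Next week's patrols are allocated in proportion to recorded crime:
    patrol_share = {a: recorded[a] / total for a in recorded}
    print(f"week {week}: recorded={recorded}")
```

Every week, area A is recorded as 50% more ‘criminal’ than area B (60 offences against 40) even though the true rates are identical, and nothing in the recorded data will ever correct the initial bias. Breaking the loop requires information the loop itself cannot supply, which is why data reliability has to be checked against sources outside the model.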


Augmenting Decision Making

The position of predictive policing technology is currently very finely balanced. Some forces’ uses have been deemed discriminatory, while other forces are going the extra mile to establish ethics committees and ensure that their use of innovative technology is independently validated.

One element in the equation that must not be overlooked is the role of the experienced police officers, analysts, and investigators on the receiving end of intelligence. Technology will never replace people in law enforcement, but its ability to augment their decision-making, quickly and based on fact, will undoubtedly help prevent crime and protect the public.

The reality, however, is that there is currently very little evidence that ‘predictive’ technologies are capable of replacing the nuanced judgements of intelligent, experienced human beings. Life is messy. People are extraordinarily complex and unpredictable. Most of all, citizens don’t want to find themselves targeted by the police because of the colour of their skin, their past mistakes, or their postcode. Fairness and justice demand that they be judged on the basis of what they have actually done, and proving this is what Chorus does best.