Racialised policing and ethnic profiling are an everyday experience for many people, groups and communities across Europe. The physical, emotional and social harm is significant. The introduction of new technologies by police forces in recent years raises serious questions about safety, privacy, and the extent to which these tools further entrench racial discrimination.
Across Europe, police and law enforcement agencies increasingly use technologies to support their work. Yet very little consideration is given to the potential misuses of these technologies and their impact on racialised communities.[1] In a context where racialised communities are already over-policed and under-protected, resorting to data-driven police technology may further entrench existing discriminatory practices, such as racial profiling and the construction of ‘suspicious’ communities, as a new report published by the European Network Against Racism (ENAR) and the Open Society Justice Initiative shows.
The use of systems to profile, to surveil, and to provide a logic to discrimination is not new. What is new is the sense of neutrality afforded to data-driven policing. The report shows that law enforcement agencies present technology as ‘race’-neutral, independent of bias, and objective in their endeavour to prevent crime and offending behaviour. But such claims overlook the overwhelming evidence of discriminatory policing against racialised minority and migrant communities across Europe. For people of African, Arab, Asian, and Roma descent, and for religious minority communities, encounters with law enforcement agencies in many European countries are more frequent than for majority white populations. European criminal justice systems police minority groups according to myths and stereotypes about the level of ‘risk’ they pose, rather than their behaviour.
Surveillance technologies and racialised criminalisation
In this context, racialised communities, which are already over-policed, will disproportionately feel the impact of new technologies used to identify, surveil, and analyse, such as crime analytics, mobile fingerprinting scanners, social media monitoring and mobile phone extraction.
Indeed, we must consider how data is used to construct and further embed ideas of ‘suspicion’ and ‘risk’ for racialised communities. For example, law enforcement agencies across Europe are using technology to support and justify the collection of ‘non-criminal’ information about individuals and their associates (friends, family members, romantic partners, etc.) who may engage in behaviours which, in isolation, are ‘non-criminal’ but are viewed with suspicion by law enforcement (e.g. appearing in a rap video or belonging to a gang). Police increasingly use this data to develop priority or suspect lists that involve identifying and surveilling racialised individuals who have committed no crime. Practices such as social media monitoring, mobile phone extraction, facial recognition technology online and/or in public spaces, and police body-worn cameras all contribute data to the development of such lists.
In the United Kingdom, for instance, social media is used to keep track of ‘gang-associated individuals’ within the ‘Gangs Matrix’. If a person shares content on social media that refers to a gang name, or to certain colours, flags or attire linked to a gang, they may be added to this database, according to research by Amnesty International. Given the racialisation of gangs, it is likely that such technology will be deployed against racialised people and groups.
Another technology, automatic number plate recognition (ANPR) cameras, raises concerns that cars can be ‘marked’, leading to increased stop and search. In a recently leaked internal evaluation of the system, the Brandenburg police in Germany used the example of looking for “motorhomes or caravans with Polish license plates”. Searching for number plates associated with a particular nationality and for ‘motorhomes or caravans’ suggests a discriminatory focus on Travellers or Roma.
Similarly, mobile fingerprint technology enables police to check fingerprints against existing police and government databases (including immigration records). This disproportionately affects racialised communities, given the racial disparities in who is stopped and searched, as well as the disparities already present in biometric databases such as DNA databases.
Biased algorithms and predictive policing
New technologies also negatively affect racialised communities because many algorithmically driven identification technologies, such as automated facial recognition, disproportionately misidentify people from black and other minority ethnic groups, and in particular black and brown women. This means that police are more likely to wrongfully stop, question, and possibly arrest them.
Finally, predictive policing systems are likely to flag geographic areas and communities with a high proportion of minority ethnic people as ‘risky’ and, consequently, as a focus for police attention. Research shows that data-driven technologies informing predictive policing led to a 30% increase in levels of arrest for racialised communities. Indeed, place-based predictive tools draw on police records that already reflect the over-policing of certain communities and, because those records show higher rates of police intervention in those areas, forecast that the same areas should be policed even more intensively.
The problem is that predictive policing systems rely heavily on historical data held by the police, which can contain biases. When a system is trained on biased data, any subsequent police method or strategy based on that data is likely to reproduce those biases in its results. Worse, the distortion can compound each year if police services rely on last year’s data to set the following year’s targets.
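To see why this feedback loop can compound rather than correct itself, consider the following minimal sketch. It is purely illustrative and rests on assumptions made for the example, not on any real predictive policing system: two areas with identical underlying offending, a fixed patrol capacity (TOTAL_PATROLS), an initial recording skew caused by past over-policing, and an assumed amplification factor (AMPLIFICATION) reflecting that heavier patrol presence tends to generate more recorded incidents through discretionary stops.

```python
# A minimal, hypothetical simulation of the feedback loop described above.
# Every number and modelling choice is an illustrative assumption,
# not a description of any real predictive policing system.

TOTAL_PATROLS = 1000   # fixed patrol capacity split between two areas
AMPLIFICATION = 1.2    # assumption: recorded incidents grow slightly faster
                       # than linearly with patrol presence, because more
                       # presence means more discretionary stops and records

# Both areas have the same underlying level of offending, but area "A"
# starts with more recorded incidents because it was over-policed before.
recorded = {"A": 120.0, "B": 80.0}

for year in range(1, 7):
    # "Predictive" step: allocate this year's patrols in proportion to
    # last year's recorded incidents.
    total = sum(recorded.values())
    patrols = {area: TOTAL_PATROLS * n / total for area, n in recorded.items()}

    # Data-generation step: what gets recorded depends on where police look,
    # so the skew in patrols feeds straight back into next year's data.
    recorded = {area: p ** AMPLIFICATION for area, p in patrols.items()}

    share_a = patrols["A"] / TOTAL_PATROLS
    print(f"Year {year}: {patrols['A']:.0f} patrols in A, "
          f"{patrols['B']:.0f} in B ({share_a:.0%} of capacity in A)")
```

Under these assumptions, the share of patrols directed at the already over-policed area grows year on year, even though the two areas are, by construction, identical in behaviour.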
Ways forward
We often discuss the ethical considerations of new technologies, but we also urgently need to consider how they affect and target racialised communities, particularly in a broader context of over-policing.
We need to challenge these injustices through collective resistance and organising, building coalitions between anti-racist and digital rights activists, academics and lawyers to improve our data security and raise awareness of how police are using technologies. We also need to address the current lack of public scrutiny and accountability: governments and policymakers must develop rigorous monitoring processes that hold law enforcement agencies and technology companies accountable for the consequences and effects of technology-driven policing.
All of this should be informed by a Europe-wide understanding of the utility and impact of police technology on minority groups. As we have seen, technology is not neutral or objective and therefore, unless guarded against, it will exacerbate racial, ethnic, and religious disparities in European justice systems.
[1] We use the term ‘racialised people or communities’ to include people of colour, minorities (racial, ethnic and religious) and all those affected by a process of racialisation, i.e. the process of attributing negative characteristics to groups based on their belonging to a specific ethnic or racial group. Racialisation recognises power relations as a historical socio-political feature of any given society and therefore helps us to understand why different groups at different times are portrayed as problematic in different European countries.