
The EU Council’s 2008 Prüm Decisions outline rules for the exchange of data between police in different EU Member States, namely DNA, fingerprint, and vehicle registration data. The regulation’s next iteration is currently being drafted and will add facial images to the exchanged data. This inclusion brings a slew of accuracy challenges relating to image quality, image age, and database size, as well as the problematic establishment of national databases through backdoor means – without parliamentary debate or scrutiny.

Facial recognition has increasingly become an additional biometric tool in forensics, owing to its potential added value in criminal investigations for the identification of unknown perpetrators. As the next generation Prüm framework — which regulates exchanges of DNA, fingerprint, and vehicle registration data (VRD) among EU Member States — is currently being designed, it is proposed that facial images be included among the exchanged data, so that national law enforcement authorities may apply facial recognition technology.

In this article, I examine the current state of play regarding the comparison of facial images at EU level and reflect on privacy and non-discrimination challenges stemming from this forthcoming development in the Prüm framework. In a nutshell, these include:

  1. setting up databases through the backdoor (i.e. with limited national parliamentary debate and scrutiny);
  2. the number of both false positive and false negative image matches, especially as linked to the size and categorisation of image databases as well as the age and quality of images;
  3. and racially biased algorithms.

Next generation Prüm: a sketch

In 2005, seven EU Member States signed the so-called Prüm Convention on the “stepping up of cross-border co-operation, particularly in combating terrorism, cross-border crime and illegal immigration”. According to Council Decisions 2008/615/JHA and 2008/616/JHA, which incorporated the Convention into the EU acquis, Member States must ensure the availability of DNA, fingerprints, and vehicle registration data from their national databases for automated searches by authorities in other Member States. In case of a match (hit), traditional channels of mutual legal assistance are activated, which are not part of the Prüm regime. The implementation of the Prüm Decisions has been a prolonged process, mainly because it necessitated the establishment of dedicated databases at the national level, including enacting national legislation to that end.

Facial images in the next generation Prüm

With the implementation of Prüm coming to an end, the aspiration for a next generation Prüm — with a broadened scope and updated technical and legal requirements — has come to the forefront. The emergence of new technologies and investigation tools has led to calls for the introduction of new data categories into Prüm, particularly facial images. According to a focus group created in 2019[1] within the Council of the EU, this addition would enable law enforcement authorities to check images of unknown criminal offence perpetrators (for example, taken by surveillance cameras near crime scenes) against national reference image databases, as provided for and governed by national legislation. A feasibility study on the next generation Prüm found that, as with DNA analysis files and fingerprints, not all Member States currently hold national electronic image databases with reference images or national facial recognition software. However, the focus group noted that an unspecified number of Member States are in the process of implementing such databases and facial recognition software.

As for trace images, i.e. images of unknown perpetrators, no Member State has yet set up such a database, which could serve as a search data pool in addition to a reference picture data pool, although some Member States plan to set up such a gallery of unidentified offenders. Should the revised Prüm legal framework be adopted before all Member States have implemented a central electronic image database and facial recognition software at the national level — as is expected — then the setting up of databases containing facial images will become mandatory, as was the case with the first generation Prüm. This practice of establishing national databases through the back door, i.e. with limited parliamentary debate and scrutiny, whereby Member States export national issues so that they are regulated at EU level, is common, yet highly problematic.

Accuracy & image quality

Furthermore, facial images constitute biometric data, a special category of personal data under Article 10 of the Law Enforcement Directive and within the remit of Article 8 ECHR (see, for example, Peck v UK and Gaughran v UK). A high degree of accuracy in facial recognition technology is vital to minimise the risk of false positive matches, namely results that may be unrelated to the investigation, and of false negative results, where the facial recognition algorithm fails to identify correct matches. This is crucial because facial recognition technology will be used in the course of criminal investigations to identify unknown perpetrators. National authorities will therefore perform 1:N searches: a facial image (a ‘mugshot’ or a probe retrieved from a camera) is compared against the full content of other national databases, and the top results are then ranked. False positive matches in particular may have serious consequences for individuals, who may be approached by the police because of an incorrect match, or be subjected to criminal investigation and even discriminatory practices by national authorities.
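To make the 1:N logic concrete, the following is a minimal, purely illustrative sketch: a probe image is reduced to a numeric embedding, compared against every entry in a gallery, and the top-ranked candidates are returned. All names, embeddings, and the use of cosine similarity are assumptions for illustration, not a description of any Member State’s actual system.

```python
# Illustrative sketch of a 1:N search: a probe face embedding is compared
# against every embedding in a national gallery, and the top-ranked
# candidates are returned. All identifiers and vectors are hypothetical.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def one_to_n_search(probe, gallery, top_n=20):
    """Rank every gallery entry by similarity to the probe (1:N search)."""
    scores = [(person_id, cosine_similarity(probe, emb))
              for person_id, emb in gallery.items()]
    # Highest similarity first; only the top_n candidates are returned.
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]

gallery = {
    "record-001": [0.9, 0.1, 0.3],
    "record-002": [0.2, 0.8, 0.5],
    "record-003": [0.85, 0.15, 0.35],
}
probe = [0.88, 0.12, 0.32]
candidates = one_to_n_search(probe, gallery, top_n=2)
print(candidates)  # two (id, score) pairs, best match first
```

Note that a 1:N search always returns a ranked list rather than a yes/no answer, which is why a poor-quality probe can still surface plausible-looking but incorrect candidates.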

Accuracy depends on the quality of the facial images, and ensuring that images are of high quality is also in line with the Prüm framework (Article 28 of Decision 2008/615/JHA) and Article 4(1)(d) of the Law Enforcement Directive. To ensure a minimum level of accuracy across Member States, facial images must be of the highest possible quality. A ‘mugshot’-style image, for example, which is subject to certain quality standards, will allow high-confidence matching, whereas probe images (latent, wild, or trace images of unidentified persons) will be of lower quality. Overall, low-quality image data in the national gallery will affect all requests, whereas a lower-quality probe image will impact only that specific request.

If a Member State operates a high-quality database, the results can be expected to be more reliable. At the same time, the quality of facial images already collected by Member States may not be high enough, affecting the reliability of results. The National Institute of Standards and Technology (NIST) has found that the risk of false negative matches when using databases storing up to 1.6 million ‘mugshots’ and ranking the top 20 results is very low (0.15%). However, this testing concerned high-quality images that followed specific technical standards. Indeed, unrefined or ‘wild’ data sourced from various places and contained in a database of 1.1 million datasets produces a much higher false negative rate of 4%. To mitigate this challenge, the feasibility study rightly suggests that images already held in existing galleries, as well as images from surveillance cameras, be kept separate from the ‘primary’ database of higher, mugshot-quality images. Despite this suggestion, the focus group on facial recognition asserts that “splitting the database in different qualities would require disproportionate technical effort and has no apparent added value”. However, since Member States have yet to establish their own trace image galleries, and in view of the risk of false positive matches, the idea of separating images of varying quality should not be discarded.
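The separation the feasibility study suggests can be sketched as a simple routing rule: images meeting a quality standard go into the primary reference gallery, everything else into a separate trace gallery. The quality scores and the 0.7 threshold below are hypothetical assumptions for illustration; the study prescribes no such numbers.

```python
# Sketch of keeping lower-quality trace/wild images separate from the
# 'primary' gallery of mugshot-quality reference images. The quality
# scores and the 0.7 threshold are hypothetical, not official values.

MUGSHOT_QUALITY_THRESHOLD = 0.7  # assumed cut-off for illustration only

def route_image(image_id, quality_score, primary_gallery, trace_gallery):
    """Place an image in the primary or the trace gallery by quality."""
    if quality_score >= MUGSHOT_QUALITY_THRESHOLD:
        primary_gallery[image_id] = quality_score
    else:
        trace_gallery[image_id] = quality_score

primary, trace = {}, {}
for image_id, quality in [("img-a", 0.95), ("img-b", 0.40), ("img-c", 0.82)]:
    route_image(image_id, quality, primary, trace)

print(sorted(primary))  # ['img-a', 'img-c']
print(sorted(trace))    # ['img-b']
```

The point of such a split is that low-quality data degrades every search made against a gallery, so quarantining it limits the damage to requests that deliberately target the trace pool.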

Accuracy & database size

The size of national databases may also impact accurate identification: the larger the number of images, which may be of insufficient quality, the higher the possibility of false matches. In the present case, this possibility may increase considering that the stock of images (i.e. image galleries) available to national law enforcement authorities is larger than that for fingerprints and DNA. The feasibility study notes that for databases storing up to 12 million ‘mugshots’, current technology is relatively reliable, but, as mentioned, the next generation Prüm will also allow searches of facial images that do not conform to specific technical standards. As a result, the reliability and accuracy of comparisons will suffer. Another factor that may impact the accuracy of results is the age of the facial image: the possibility of a false match gradually increases as the years since the capture of a facial image pass. The feasibility study suggests a series of safeguards to minimise the risk of false matches, namely:

  1. laying down a maximum limit of 50 results for all requests;
  2. requiring non-matched data to be deleted within a limited timeframe;
  3. and allowing Member States to lower the number of candidate matches at their request.

The focus group has nevertheless pointed out that the number of required candidates depends on the quality of the images and that in cases of terrorism or other serious crimes more results may have to be displayed, namely up to 100 results.
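The interplay of these limits — a default cap of 50 candidates, an extended cap of up to 100 for terrorism or other serious crimes, and a Member State’s option to request fewer — can be sketched as follows. The function name and the mock scores are hypothetical; only the 50 and 100 limits come from the source.

```python
# Sketch of the candidate-list safeguards discussed in the feasibility
# study and by the focus group: a default cap of 50 candidates, a higher
# cap of 100 for serious-crime cases, and the option for a requesting
# Member State to lower the cap. Names and scores are hypothetical.

DEFAULT_MAX_CANDIDATES = 50   # limit for ordinary requests
SERIOUS_CRIME_MAX = 100       # extended limit, e.g. terrorism cases

def cap_candidates(ranked_matches, serious_crime=False, requested_max=None):
    """Trim a ranked candidate list according to the proposed limits."""
    cap = SERIOUS_CRIME_MAX if serious_crime else DEFAULT_MAX_CANDIDATES
    if requested_max is not None:
        # A Member State may ask for fewer candidates, never more.
        cap = min(cap, requested_max)
    return ranked_matches[:cap]

# A mock ranked result list of (record id, similarity score) pairs.
ranked = [(f"record-{i:03d}", 1.0 - i / 200) for i in range(120)]

print(len(cap_candidates(ranked)))                      # 50
print(len(cap_candidates(ranked, serious_crime=True)))  # 100
print(len(cap_candidates(ranked, requested_max=20)))    # 20
```

A hard cap of this kind limits how many individuals are exposed per request, but, as the focus group notes, the number of candidates genuinely needed still depends on the quality of the images involved.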

Limitations of facial recognition

Finally, the inherent limitations of facial recognition should also be underlined. Research by NIST demonstrates that the algorithms embedded in facial recognition systems produce high rates of false positives for black people, particularly black women. In the course of investigations, people of colour may therefore find themselves disproportionately often troubled by police authorities. To counter algorithmic bias, human intervention in establishing a hit must be ensured at all times, and any automated exchange of data based on image matching should not be allowed.


Due to strong pressure from Member States, it is foreseen that facial images will be introduced in the next generation Prüm. This article has aimed to show that facial recognition technology is far from infallible: the addition of facial images to the Prüm framework may have significant implications for the protection of the right to private life and the principle of non-discrimination. It is certain that factors such as the quality and age of facial images or the size of databases will significantly impact accurate identification, a problem exacerbated by the algorithmic bias of facial recognition. Although facial recognition technology is not yet mature, a Commission proposal for the next generation Prüm is forthcoming. We can only hope that the proposed legislation will embed strong data quality safeguards, for example by separating high- and low-quality images in databases, and prevent automated exchange of additional data without expert verification.

[1] This focus group comprises 10 Member States, including Italy and the UK. As these two did not operate Prüm at the time, it is doubtful whether they should have participated at all. This is because the weight of the focus group’s recommendations seems to be quite high; among other things, it scrutinised the findings of a feasibility study on the next generation Prüm with the aim of providing a clear framework within which a revision of Prüm should take place.