Discussion Prompt: Will AI solve the "information overload" challenge for intelligence agencies?


Artificial intelligence might be good at finding empirical patterns in data points. However, not only do we not fully understand how it works (which creates significant accountability problems), it also inherently lacks a faculty that is essential to our liberal civilisation: normative judgment and, thereby, a concept of justice. As the case of predictive policing exemplifies, a complex phenomenon such as crime cannot and should not be left to machines alone.


While security and intelligence services (SIS) as well as law enforcement agencies (LEAs) all over the world complain about criminals “going dark” due to encryption and other privacy-enhancing technologies, this complaint comes at a time when virtually the entirety of our lives is digitally recorded by default. While it might be impossible to intercept phone calls with the same technical methods that were effective over thirty years ago, the people in these ‘hidden’ conversations produce unprecedented amounts of metadata and other ‘data exhaust’ that can be used for investigative purposes. The success of investigative outfits such as Bellingcat, which relies predominantly on open-source information for its investigations into politically highly sensitive areas and subjects, proves this point. But what is, and should be, the role of artificial intelligence in the face of data abundance?


The case of predictive policing

Since more digital information is available than ever before, the actual challenge is not to find it but to manage and analyse it. While it is overwhelming for humans to sort through massive amounts of data and identify relevant themes, actors, and aspects, AI-driven systems are typically incapable of understanding context. One good example to illustrate this is Predictive Policing (PP), which we studied in the EU-funded research project Cutting Crime Impact. PP is an innovative tool that uses data and statistical methods to forecast the probability of crime and deploy resources more effectively. However, it rests on many underpinning assumptions that reduce the complex phenomenon of crime. In essence, PP systems offer ‘another perspective on crime’, which is certainly neither the only one nor the ‘real’ one. Many assumptions are made when deciding which data to train the system on, how to visualise predictions, how to interpret them, how to train the SIS and LEAs in charge of that interpretation, and ultimately how to put them into action; the sketch below illustrates where such assumptions enter. In other words, the trail from ‘raw data’ to action is neither self-evident nor objective. To date, it remains empirically unproven that the use of PP reduces crime rates at all. At the same time, and as we discuss in more detail in a paper on the subject, several ethical, legal, and social issues remain surrounding data selection and machine bias, visualisation and interpretation of forecasts, transparency and accountability, time and effectiveness, as well as the stigmatisation of individuals, environments, and community areas.
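To make this concrete, here is a minimal sketch of a forecasting step of this kind. It is purely illustrative and does not represent any real PP system: the grid cells, incident data, and hotspot threshold are all hypothetical, and each step embeds one of the assumptions just described.

```python
# Minimal, purely illustrative sketch of a predictive-policing-style forecast.
# Not a real system: the grid, the incident data, and the hotspot rule are
# hypothetical, and every step embeds an assumption.
from collections import Counter

# Hypothetical historical incidents, already reduced to (grid_cell, offence) pairs.
# Assumption 1: only recorded incidents exist; unreported crime is invisible.
incidents = [
    ("cell_A", "burglary"), ("cell_A", "burglary"), ("cell_B", "vehicle_theft"),
    ("cell_A", "vehicle_theft"), ("cell_C", "burglary"), ("cell_A", "burglary"),
]

# Assumption 2: the future resembles the recorded past (simple frequency model).
counts = Counter(cell for cell, _ in incidents)
total = sum(counts.values())
risk = {cell: n / total for cell, n in counts.items()}

# Assumption 3: an arbitrary threshold converts a score into a patrol decision.
HOTSPOT_THRESHOLD = 0.5
for cell, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    action = "HOTSPOT: deploy patrol" if score >= HOTSPOT_THRESHOLD else "no action"
    print(f"{cell}: forecast risk {score:.2f} ({action})")
```

Note how a cell is flagged solely because it is over-represented in the recorded data; if patrols are then concentrated there and generate further records, the forecast reinforces itself, which is precisely the machine-bias and stigmatisation concern raised above.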

If these concerns are not sufficiently addressed through a sound data management culture and corresponding training, they could create a severe lack of trust in government institutions. The lack of transparency in deploying PP in Los Angeles, where it was first implemented in 2011, recently led to demands from civil society to stop its use. Some cities have already abandoned PP for lack of effectiveness or rejected it altogether over concerns that it might constitute yet another form of racial profiling. In dozens of other cities across the US and Europe, however, predictive policing contracts are still active, sometimes in secrecy, and their number continues to grow.


AI alone does not create security

The designers and users of AI systems frequently do not understand why their systems work. Success is usually defined by the ‘right’ or ‘expected’ outcome. In other words, as long as AI-driven image recognition, for instance, can determine with a high degree of confidence that a picture of a cat does indeed show a cat, it is considered useful for many applications. But the system does not understand the essence or meaning of ‘cat’, a task with which even a human developer might struggle. Similarly, a system predicting the likelihood of crime will sometimes succeed in identifying the circumstances in which security might be threatened, but it will not predict crime as such. It will identify situations potentially allowing for burglaries, vehicle theft, or maybe even violent crime. It neither understands nor addresses the motivations for such behaviour. From an ethical perspective this is deeply troubling. Just as a good mathematics teacher will want to see the student’s working, and not just the result, accountable decision-making is predicated on an intimate understanding of the process that informs it. While for a student, failure to show one’s working makes for a lower grade, a law enforcement or intelligence agency that cannot explain why it reached decisions with significant human consequences runs the risk of forfeiting democratic legitimacy.
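The gap between a confident output and an accountable explanation can be illustrated with a minimal, hypothetical sketch; no real image-recognition API is shown, and the model, label, and score are invented for illustration.

```python
# Illustrative only: a stand-in for an opaque classifier that returns a label
# and a confidence score, but nothing resembling the 'working' a teacher
# (or an oversight body) would ask to see.
from typing import NamedTuple, Optional

class Prediction(NamedTuple):
    label: str                 # what the system claims to see
    confidence: float          # how sure it is
    rationale: Optional[str]   # the missing part: why it is sure

def classify(image_bytes: bytes) -> Prediction:
    # Hypothetical opaque model: the answer may well be correct,
    # but no account of the process behind it is available.
    return Prediction(label="cat", confidence=0.97, rationale=None)

prediction = classify(b"<image data>")
print(f"{prediction.label} ({prediction.confidence:.0%}), rationale: {prediction.rationale}")
# Prints: cat (97%), rationale: None
```

A correct answer with no accessible reasoning may be acceptable for labelling cat pictures; for decisions with significant human consequences, as argued above, it is not.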

There is yet another reason why the use of AI is inherently problematic, and it reaches back to the long-standing debate in legal philosophy between legality (“Rechtmäßigkeit”) and justice (“Gerechtigkeit”). As the legal philosopher Hans Kelsen emphasised in his pure theory of law, it is possible to envision the legal system as purely positivist, working self-sufficiently on the basis of a hierarchy of applicable laws. This enables seemingly clear decisions, disregarding ‘blurry’ notions such as morality, rationality, evidence, and justice. Jean-Jacques Rousseau, on the other hand, holds that the ultimate purpose of the legal system is not to ‘correctly’ apply norms but to achieve justice, embodied by individuals and civil authority fulfilling their moral obligation to each other. Legality is only a means to achieving justice; it is not the end goal of the legal system as such.

Seeking justice does not follow strictly from how the world is, or seems to be; it also requires a normative consideration of how the world ought to be. A healthy, fair, and kind democracy depends on these difficult, subjective, and non-quantifiable deliberations. Even if we understood complex machine-learning algorithms, and all the more so given that we do not, we should not leave important decisions in the security and intelligence space to them alone. The statements they produce are based exclusively on how the world (probably) is, and not on how it should be. As such they are inherently blind to how to make it better.


Law and policy are not empirical disciplines, nor will they ever be

Once we acknowledge the existence of a gap between empirical analysis and normative justice, the essential question is: how could an AI-driven system produce ‘legitimacy by default’? My answer is: it won’t. This gap appears to be persistent. It is the task of law- and policymakers to fill it through discourse, which takes empirical findings and translates them into a world of values and normative aspirations. If, as a society, we transfer too much power over this process into the empirical domain (i.e. the actors developing AI systems), we will live in a world where autonomous systems address complex concepts such as security, justice, freedom, and dignity on the basis of simplified and reduced assumptions. In such a world, “code is law”.


Security enhanced (not replaced) by AI

We all want to be safe, and we all have a duty to contribute to safety and security in society. However, the rapid pace at which technologies such as PP are being implemented at mass scale often makes it impossible to consider in what kind of society we want to be safe. If this is a society in which individuals are entitled to thrive and co-exist in a dignified manner, it must also be clear that stability cannot trump and justify every measure. Since the Second World War, Western societies have managed to have both: freedom and security. If AI can become the basis for tools capable of enhancing such a complex concept of security, it is a welcome addition to the existing toolkit. However, this would require the actors developing and deploying such systems to embrace complexity, as well as to accept that better understanding how the world is says little about how it should be.
