Discussion

Integrating cross-sectoral and transnational perspectives is vital when discussing intelligence in Europe (our mission statement makes the case for why we believe that). To facilitate a conversation in that spirit, this section curates contributions as responses to key discussion questions: experts from a diverse array of backgrounds are invited to comment on the same discussion question for a given topic, so as to emulate a conversation that is both issue-driven and heterogeneous.

Discussion Prompt: Should we ban the use of automated video-surveillance?

Facial recognition technology has captured both the imagination and the concern of many. But it is only one form of biometric surveillance, which itself is merely one application of automated video-surveillance. The field of video analytics is vast, and can include nearly every type of action and occurrence imaginable, even human sentiment. The pair of human eyes once tasked with passively watching CCTV footage has been replaced with artificial intelligence programmes. Law enforcement and other security agencies, which increasingly resort to automated video-surveillance, tout the technology’s aid in reducing crime and increasing public safety, but critics have long raised the alarm. They may highlight its supply-driven market background or point to all the problems AI itself is fraught with — racial biases, false positives, and algorithmic inscrutability, among others. Fundamentally, they worry that full-scale automated video-surveillance in public spaces will create a point of no return, after which we will be unable to live our lives anonymously and assert our essential civil liberties, such as freedom of assembly or freedom of speech, with dire consequences for democracy. With the spotlight usually being on how to regulate technology after it has already been introduced, we want to take a step back and ask: Do we even want to allow this kind of technology? How can we enforce our democratic will in the face of ever-faster technological change?

Discussion Prompt: What existing national security legislation, new bulk analysis efforts, and emergency measures have different states deployed to curb the spread of Covid-19?

We are facing a global health emergency. Governments around the world have responded by making use of existing contingency plans in their public health and emergency laws, but have also turned to new measures to slow the spread of the pandemic. What changes to the legal framework, policy, and use of surveillance has Covid-19 triggered across different states? Are these changes necessary, and what can be said about their democratic legitimacy? What legal and technical safeguards are being addressed, or should become the norm, to render surveillance-based government responses to the health crisis proportionate?

Discussion Prompt: When, if ever, is predictive policing effective, fair, and legitimate? What is the role of data reliability in this?

Police departments worldwide are increasingly exploring predictive policing tools. Whether this is because they are trying to harness new technology to provide a 21st century police service, trying to cut costs and do more with less, or perhaps merely jumping on the shiny tech bandwagon — algorithmic analysis tools are proliferating. While they vary in method, many seek to anticipate future crimes (types, geographic areas, and time windows) and identify possible victims and offenders. Despite contested evidence for the effectiveness of the approach, more and more police departments around Europe are adopting predictive policing tools, often in the absence of clear regulation on the use of data analytics in our criminal justice systems. While proponents claim algorithmic tools eliminate human bias, voices flagging the self-fulfilling nature of using historical crime data, which can lead to over-policing and profiling of racial minorities, are growing louder. How these tools sit with the presumption of innocence and civil liberties is yet to be determined.

Discussion Prompt: To what extent can and should surveillance technology be subject to export control?

For tyrants, and those who aspire to become one, digital technology has ushered in a new era of social and political control. Around the world, authoritarian regimes persecute journalists and dissidents and violate the rights of minorities with sophisticated surveillance tools, from government malware to facial recognition to IMSI catchers such as Stingrays. European countries, particularly the UK, Germany, and France, are among the key suppliers of these technologies. Despite the implementation of stricter export controls for European companies since 2011, after the Arab Spring exposed the degree to which European technology aided the crackdown on protests, much government oppression continues to be “Made in the EU”. With global trade in AI-enabled surveillance flourishing, what are the regulatory options for ensuring that surveillance technology produced in Europe will not be used to assail fundamental human rights elsewhere? And what are the practical obstacles to their effective implementation and enforcement?

Discussion Prompt: Is productive engagement on intelligence law, policy, and oversight possible between the secret and the civilian world, and what can be gained from it? Reflections on best practice, lessons learned, and plans for the future.

The issue of intelligence in public debate has arrived at a noteworthy conundrum: on the one hand, we are experiencing a normalisation of intelligence politics (the Snowden revelations and the subsequent response by parliaments, governments, and agencies have had their share in that); on the other hand, many countries still treat intelligence politics as a special, if not unique, realm of policy, one that necessitates secrecy by default. This prerogative leads to the exclusion of large swaths of institutionalised public life (from civil society to business, and from academia to the tech industry) from the political and legislative process around intelligence. Weighing national security against civil liberties should not be left to one sector alone. Rather than preclude the input of different stakeholders, we posit that sound intelligence policy and practice require a plurality of cross-disciplinary inputs and partnerships. This discussion question seeks to investigate the practical possibility and the potential of reaching across the ‘aisle of secrecy’ by hearing from experts who have done just that.

Sitting on the steel fence: my dialogue with the intelligence world

Professor Peter Sommer combines academic and public policy work with commercial cyber security consultancy, with a strong focus on legal issues. His first degree is in law, from Oxford University. He is currently a part-time Professor of Digital Evidence at Birmingham City University and a Visiting Professor at De Montfort University. Until 2011 he was a Visiting Professor in the Department of Management at the London School of Economics. He has consulted for the OECD, the UN, the European Commission, the UK Cabinet Office Scientific Advisory Panel on Emergency Response, the UK National Audit Office, the Audit Commission, and the Home Office. He has carried out external audits of the Internet Watch Foundation hotline. The OECD work, written with Ian Brown, addressed the cyber aspects of Future Global Threats. He has further given evidence to the Home Affairs and Science & Technology Select Committees, the Joint Committee on the Communications Data Bill, and the Intelligence and Security Committee. He was a Specialist Advisor to the old Trade and Industry Select Committee and to the Joint Committee on the Draft Investigatory Powers Bill (now an Act). During its existence, Peter was the joint lead assessor for the digital speciality at the UK Home Office-sponsored Council for the Registration of Forensic Practitioners, and he has advised the UK Forensic Science Regulator and the Home Office on communications data. He has acted as an expert in many important criminal and civil proceedings in UK and international courts, usually where digital evidence has been an issue, including Official Secrets, terrorism, state corruption, assassination, global hacking, DDoS attacks, murder, corporate fraud, privacy, defamation, breach of contract, professional regulatory proceedings, harassment, allegations against the UK military in Iraq, “revenge porn” on social media, and child sexual abuse.
Particular themes have been situations where technologies need to be interpreted in legal terms, and assessments of the quantum and extent of damage. Peter is the author, pseudonymously, of The Hacker's Handbook, DataTheft, and The Industrial Espionage Handbook, and, under his own name, of Digital Evidence, Digital Investigations and E-Disclosure (IAAC), now in its 4th edition, and the Digital Evidence Handbook. He is a Fellow of the British Computer Society and a Fellow of the Royal Society of Arts. http://www.pmsommer.com

Discussion Prompt: Will AI solve the “information overload” challenge for intelligence agencies?

In a 1993 white paper, the US Scientific and Technical Intelligence Committee (STIC) spelled out the need for analytical “paradigm shifts” to cope with the rapidly expanding “global production of information”, widely dubbed ‘the information overload’. Twenty-five years later, self-learning algorithms, commonly referred to as Artificial Intelligence, are often heralded as that very technological revolution, suited to deliver either a “new kind of security” or a new age of government surveillance, or both. Against this backdrop, we try to understand the current and projected role of artificial intelligence in the work of intelligence agencies. Is it the technological breakthrough agencies have been seeking? Has it propelled us into a new dimension of intelligence governance, one that requires specific and updated regulation and control? Or is it merely a continuation of technology support systems that simply aid decision makers?

Discussion Prompt: Why don’t intelligence oversight bodies cooperate as well as intelligence agencies? And is there reason to believe that could be changing?

A series of terror attacks, most notably 9/11 and the 2015 Paris attacks, has led to ever-closer cooperation among European intelligence agencies. The bodies tasked with monitoring these agencies, however, rarely engage in direct cooperation, let alone conduct joint investigations into intelligence cooperation. This discrepancy engenders an oversight gap, whereby intelligence data and activities elude nationally mandated review as they cross national borders. Simply put, there has historically been an accountability mismatch between transnational intelligence practice and national oversight, which in turn undermines the democratic legitimacy of intelligence agencies and their work. This discussion question interrogates why oversight bodies don’t have similarly extensive international relationships, and what the likely trajectory is for intelligence oversight in an increasingly transnational security context.