Discussion

Integrating cross-sectoral and transnational perspectives is vital when discussing intelligence in Europe (we make the case for why in our mission statement). To facilitate a conversation in that spirit, this section curates contributions as responses to key discussion questions: experts from a diverse array of backgrounds are invited to comment on the same question for any given topic, emulating a conversation that is both issue-driven and heterogeneous.

Discussion Prompt: Are civil liberties and democratic accountability sufficiently incorporated into the priorities of surveillance research programs? 

Outputs of science and innovation policies are bound to profoundly affect our societies. This is particularly true of surveillance technologies, which can have an adverse impact on citizens’ rights and freedoms. Yet, it is often only at a later stage — when these technologies are market-ready — that any meaningful public debate actually takes place.

Think of automated video-surveillance and applications like “facial recognition”, for instance: the technology is now being rolled out and is stirring controversy in many European countries, but for years such technologies have been developed and tested by public and private organisations in the context of publicly-funded research projects. How, and by whom, are such research agendas decided upon? On what basis and according to which priorities? Are fundamental rights aspects actually taken into account by the consortia researching surveillance technologies, and if so, how exactly?

These questions appear all the more pressing considering that the European Union’s commitment to the research and development (R&D) of surveillance technology has risen steadily in the context of the Horizon 2020 research program, which represents 50% of the overall public funding for security research in the EU. Overall, EU funding of security-related technologies more than doubled in recent years, from about 3.8 billion euros for the 2007-2013 budget cycle to 8 billion euros for 2014-2020. As for the 2021-2027 period, recent budget discussions secured a 30% increase for current research and innovation programmes.

In view of the fast-paced growth of public R&D programmes for surveillance and security-related technologies, and given the many pressing questions regarding potential rights infringements by such research products, what should be done to ensure the proper democratic legitimacy of future security research programs? What oversight mechanisms need to be in place to ensure that research in surveillance technologies can be reconciled with fundamental rights?

The EU’s R&D process: unaccountable, unethical, even illegal?

Dr. Gemma Galdon-Clavell is a tech policy analyst working on the social, ethical, and legal impact of data-intensive technologies and algorithmic auditing. She is the Founder and Director of Eticas Consulting and was a 2017 EU Women Innovators Prize finalist. She has ongoing research contracts and grants from the European Commission (FP7 and H2020 programs), the European Agency for Fundamental Rights and the Open Society Foundation, among others. Dr. Galdon-Clavell has led research as a Principal Investigator in more than 10 large projects. She is a scientific and ethics expert at the Directorate General for Research and Innovation at the European Commission and sits on the board of Privacy International and Data & Ethics. She was recently shortlisted for the Booking.com Technology Playmakers Award.

Her work is focused on building socio-technical data architectures that incorporate legal, social, and ethical concerns in their conception, production, and implementation. She is a policy analyst by training and has worked on projects relating to Artificial Intelligence and human rights and values, the societal impact of technology, smart cities, privacy, and crisis management tech. Her recent academic publications tackle issues related to the impact of COVID on digitalisation and society, AI and the future of work, the proliferation of data-intensive technologies in urban settings, security and mega-events, and the relationship between privacy, ethics and technology, and smart cities.

She completed her PhD on surveillance, security, and urban policy at the Universitat Autònoma de Barcelona, where she also received an MSc on Policy Management, and was later appointed Director of the Security Policy Programme at the Universitat Oberta de Catalunya (UOC). Previously, she worked at the Transnational Institute, the United Nations’ Institute for Training and Research (UNITAR) and the Catalan Institute for Public Security.
She teaches topics related to her research at several foreign universities and is a member of the IDRC-funded Latin-American Surveillance Studies Network. Additionally, she is a regular analyst on TV, radio, and print media.

Previous posts (selected):
- Universitat Oberta de Catalunya (UOC)
- Institut de Govern i Polítiques Públiques (IGOP-UAB)
- United Nations (UNITAR)
- Catalan Institute for Public Security (ISPC)
- Transnational Institute (TNI)
- Department of Applied Economics (UAB)

Teaching:
- Security and Technology (Universitat de Girona)
- Technology and Privacy (Universitat de Girona)
- Public policy (Universidad Autónoma de Ciudad Juárez, Mexico)
- Urban Management (Erasmus Universiteit, Rotterdam)

Media:
- Contributor at El País: http://elpais.com/autor/gemma_galdon_clavell/a/
- Contributor at Eldiario.es: http://www.eldiario.es/autores/gemma_galdon_clavell/
- Contributor at PrivacySurgeon.org

Discussion Prompt: To what extent does Germany’s new BND draft bill provide a rights-based and modern framework for foreign intelligence?  

In May 2020, the German Constitutional Court ruled that key provisions in the current legal framework on the German foreign intelligence service (BND Act) are unconstitutional and that the Bundestag has until December 2021 to rectify a long list of deficits. The basic premise of the Court’s judgement is that the right to private communication and the right to press freedom under Germany’s Basic Law are rights against state interference that ought to extend to foreigners in other countries, too. In its new foreign intelligence bill, the German state must therefore honor these rights not just with respect to its own citizens and residents but also with regard to non-nationals the world over.

Drafting a new legal framework for Germany’s foreign intelligence collection requires a substantial overhaul of the provisions on the surveillance of foreign telecommunications, on the sharing of intelligence thus obtained with other bodies, and on the cooperation with foreign intelligence services as well as the design of effective judicial and administrative oversight. Many legal, technical and political decisions that now need to be made are open questions in other countries, too. This concerns, for example, the mandate for bulk collection, oversight requirements, the rights and protections afforded to non-nationals, or special protections for journalists. 

Does this BND reform 2.0 manage to protect both fundamental rights and security? And will it therefore see Germany enter the small club of liberal democracies paving the way for better rights-based intelligence conduct in the world, or will it only be a matter of time before this reform, like its predecessor, is quashed in court? With much at stake, the variety of perspectives featured in this panel is intended to provide answers to these questions and help both the members of the Bundestag and the general European public to form their opinion on a truly consequential piece of security legislation.

Discussion Prompt: Should we ban the use of automated video-surveillance?

Facial recognition technology has captured both the imagination and the concern of many. But it is only one form of biometric surveillance, which itself is merely one application of automated video-surveillance. The field of video analytics is vast, and can include nearly every type of action and occurrence imaginable, even human sentiment. The pair of human eyes once tasked with passively watching CCTV footage has been replaced with artificial intelligence programmes. Law enforcement and other security agencies, which increasingly resort to automated video-surveillance, tout the technology’s aid in reducing crime and increasing public safety, but critics have long raised the alarm. They may highlight its supply-driven market background or point to all the problems AI itself is fraught with — racial biases, false positives, and algorithmic inscrutability, among others. Fundamentally, they worry that full-scale automated video-surveillance in public spaces will create a point of no return, after which we will be unable to live our lives anonymously and assert our essential civil liberties, such as freedom of assembly or freedom of speech, with dire consequences for democracy. With the spotlight usually being on how to regulate technology after it has already been introduced, we want to take a step back and ask: Do we even want to allow this kind of technology? How can we enforce our democratic will in the face of ever-faster technological change?

Discussion Prompt: What existing national security legislation, new bulk analysis efforts, and emergency measures have different states deployed to curb the spread of Covid-19?

We are facing a global health emergency. Governments around the world have responded by making use of existing contingency plans in their public health and emergency laws but have also turned to new measures to slow the spread of the pandemic. What changes to the legal framework, policy, and use of surveillance has Covid-19 triggered across different states? Are these changes necessary, and what can be said about their democratic legitimacy? What legal and technical safeguards are being adopted, or should become the norm, to render surveillance-based government responses to the health crisis proportionate?

Discussion Prompt: When, if ever, is predictive policing effective, fair, and legitimate? What is the role of data reliability in this?

Police departments worldwide are increasingly exploring predictive policing tools. Whether they are trying to harness new technology to provide a 21st-century police service, to cut costs and do more with less, or merely to jump on the shiny tech bandwagon, algorithmic analysis tools are proliferating. While they vary in method, many seek to anticipate future crimes (types, geographic areas, and time windows) and identify possible victims and offenders. Despite contested evidence for the effectiveness of the approach, more and more police departments around Europe are adopting predictive policing tools, often in the absence of clear regulation on the use of data analytics in our criminal justice systems. While proponents claim algorithmic tools eliminate human bias, voices flagging the self-fulfilling nature of using historical crime data, which can lead to over-policing and profiling of racial minorities, are growing louder. How these tools sit with the presumption of innocence and civil liberties is yet to be determined.

Discussion Prompt: To what extent can and should surveillance technology be subject to export control?

For tyrants, and those who would like to be one, digital technology has ushered in a new era of social and political control. Around the world, authoritarian regimes persecute journalists and dissidents and violate the rights of minorities with sophisticated surveillance tools, from government malware to facial recognition and from Stingrays to IMSI catchers. European countries, particularly the UK, Germany, and France, are among the key suppliers of these technologies. Despite the implementation of stricter export controls for European companies since 2011, after the Arab Spring exposed the degree to which European technology aided the crackdown on protests, much government oppression continues to be “Made in the EU”. With global trade in AI-enabled surveillance flourishing, what are the regulatory options for ensuring that surveillance technology produced in Europe will not be used to assail fundamental human rights elsewhere? And what are the practical obstacles to their effective implementation and enforcement?

Discussion Prompt: Is productive engagement on intelligence law, policy and oversight possible between the secret and civilian world and what can be gained from it? Reflections on best practice, lessons learned, and plans for the future.

The issue of intelligence in public debate has arrived at a noteworthy conundrum: on the one hand, we are experiencing a normalisation of intelligence politics (the Snowden revelations and the subsequent response by parliaments, governments, and agencies have had their share in that); on the other hand, many countries still treat intelligence politics as a special, if not unique, realm of policy, one that necessitates secrecy by default. This prerogative leads to the exclusion of large swaths of institutionalised public life (from civil society to business, and from academia to the tech industry) from the political and legislative process around intelligence. Weighing national security against civil liberties should not be left to one sector alone. Rather than preclude the input of different stakeholders, we posit that sound intelligence policy and practice require a plurality of cross-disciplinary inputs and partnerships. This discussion question investigates the practical possibility and potential of reaching across the ‘aisle of secrecy’ by hearing from experts who have done just that.

Sitting on the steel fence: my dialogue with the intelligence world

Professor Peter Sommer combines academic and public policy work with commercial cyber security consultancy, with a strong focus on legal issues. His first degree is in law, from Oxford University. He is currently a part-time Professor of Digital Evidence at Birmingham City University and a Visiting Professor at De Montfort University. Until 2011 he was a Visiting Professor in the Department of Management at the London School of Economics.

He has consulted for the OECD, UN, European Commission, UK Cabinet Office Scientific Advisory Panel on Emergency Response, UK National Audit Office, Audit Commission, and the Home Office. He has carried out external audits of the Internet Watch Foundation hotline. The OECD work, written with Ian Brown, addressed the cyber aspects of Future Global Threats. He has further given evidence to the Home Affairs and Science & Technology Select Committees, the Joint Committee on the Communications Data Bill, and the Intelligence and Security Committee. He was a Specialist Advisor to the old Trade and Industry Select Committee and to the Joint Committee on the Draft Investigatory Powers Bill (now an Act). During its existence, Peter was the joint lead assessor for the digital speciality at the UK Home Office-sponsored Council for the Registration of Forensic Practitioners, and he has advised the UK Forensic Science Regulator and the Home Office on communications data.

He has acted as an expert in many important criminal and civil court proceedings in the UK and international courts, usually where digital evidence has been an issue, including Official Secrets, terrorism, state corruption, assassination, global hacking, DDoS attacks, murder, corporate fraud, privacy, defamation, breach of contract, professional regulatory proceedings, harassment, allegations against the UK military in Iraq, “revenge porn” on social media, and child sexual abuse.
Particular themes have been situations where technologies need to be interpreted in legal terms and assessments of the quantum and extent of damage. Peter is the author, pseudonymously, of The Hacker's Handbook, DataTheft and The Industrial Espionage Handbook, and, under his own name, of Digital Evidence, Digital Investigations and E-Disclosure (IAAC), now in its 4th edition, and the Digital Evidence Handbook. He is a Fellow of the British Computer Society and a Fellow of the Royal Society of Arts. http://www.pmsommer.com

Discussion Prompt: Will AI solve the “information overload” challenge for intelligence agencies?

In a 1993 white paper, the US Scientific and Technical Intelligence Committee (STIC) spelled out the need for analytical “paradigm shifts” to cope with the rapidly expanding “global production of information”, widely dubbed ‘the information overload’. Twenty-five years later, self-learning algorithms, commonly referred to as Artificial Intelligence, are often heralded as that very technological revolution, suited to provide either a “new kind of security” or a new age of government surveillance, or both. Against this backdrop, we try to understand the current and projected role of artificial intelligence in the work of intelligence agencies. Is it the technological breakthrough agencies have been seeking? Has it propelled us into a new intelligence governance dimension, one that requires specific and updated regulation and control? Or is it a continuation of technology support systems that simply aid decision makers?

Discussion Prompt: Why don’t intelligence oversight bodies cooperate as well as intelligence agencies? And is there reason to believe that could be changing?

A series of terror attacks, most notably 9/11 and the 2015 Paris attacks, has led to ever-closer cooperation among European intelligence agencies. The bodies tasked with monitoring these agencies, however, rarely engage in direct cooperation, let alone conduct joint investigations into intelligence cooperation. This discrepancy engenders an oversight gap, whereby intelligence data and activity elude nationally mandated review as they cross national borders. Simply put, transnational intelligence practice and national oversight have historically been an accountability mismatch, which in turn undermines the democratic legitimacy of intelligence agencies and their work. This discussion question interrogates why oversight bodies don’t have similarly extensive international relationships and what the likely trajectory is for intelligence oversight in an increasingly transnational security context.