Discussion Prompt: Are civil liberties and democratic accountability sufficiently incorporated into the priorities of surveillance research programs?


While most R&D requires some level of experimental latitude and trade-secret discretion, how much should be tolerated in publicly funded technological development? In the European Commission's R&D process, ethical review of legally questionable topics comes far too late: only at a point where funding is too advanced for such projects to be rejected. The civil liberty and democratic accountability concerns stemming from this are vast, and lead us to wonder whether this lack of oversight is merely a fixable 'bug' or an intentional feature.


In early 2021, a German MEP filed a court complaint about a project funded under the European Commission's research and development (R&D) programme. The project, iBorderCtrl, received 4.5 million euros to develop technology that detects from facial “micro-expressions” whether somebody is lying while answering questions, and to test it at EU borders. According to MEP Patrick Breyer, “the European Union is funding illegal technology that violates fundamental rights and is unethical”, and in this case amounts to “pseudo-scientific security hocus pocus” with potentially discriminatory impacts on vulnerable communities.


Ethics oversight

This is not the first time that a project funded with the EU's R&D funds has drawn attention both for its apparent ethical risks and for questions about the robustness of the system established to oversee what is funded with EU taxpayers' money. In 2009-10, the INDECT project prompted a debate in the European Parliament about the secrecy of EC-funded research, the effectiveness of its ethics oversight, and the overall aims of some projects funded in the security domain. Funded with almost 11 million euros, INDECT aimed to research an “intelligent information system supporting observation, searching and detection for security of citizens in urban environments”, but was described by some media as the “‘Orwellian’ artificial intelligence plan to monitor public for ‘abnormal behaviour’”.

Examples such as these raise concerns about the role that civil liberties and democratic accountability play in deciding which technologies get funded and implemented by EU actors. The EC and its relevant bodies (the Research Executive Agency and the European Research Council Executive Agency) have been developing oversight mechanisms for years, which have translated into a mandatory ethics review of all projects awarded funding under the Horizon 2020 framework and the upcoming Horizon Europe framework.

While the ethics oversight process is a robust mechanism that allows the funder and a set of experts to assess whether an intended technological development may raise ethical issues (linked to the participation of humans and the use of their data, misuse and dual use, staff safety, etc.), this assessment only takes place once projects have already been selected for funding. There is therefore room to question what happens before projects reach this stage.


The process of prioritisation

In the EU funding programmes, consortia of public and private organisations apply for research grants under a list of pre-set ‘topics’. These topics are brief descriptions of priorities established after a long period of consultation with Member States and other actors, who are provided with early drafts of the proposed topics in order to comment, suggest ideas, propose changes, and so on. The process is fairly open, but also demanding in terms of time for those interested in making specific proposals.

Moreover, there is no clarity about what happens to suggestions once they are submitted, so well-justified changes or proposals may come to nothing, which disincentivises further engagement in future calls. While this shapes the kind of actors that can devote the time and effort to pursue their contributions to the programme (mostly corporations and Member States), it is nonetheless welcome that the consultation process exists.

Because of this process, the final programme and the wording of the topics are the result of multiple conversations and contributions at different levels, as well as of necessary adjustments to the available funding and to regional priorities. The selected research areas therefore emerge from a complex process involving various actors and negotiation spaces.

What the process does not include is a final ethics review before the programme is published. This means that, down the line, ethics reviewers may assess the social, legal, and ethical impact of actual projects responding to legally questionable topics, at a point where the funding process is too advanced for such projects to be rejected. An ex-ante evaluation would allow experts to assess whether a topic might promote the development of mass surveillance tools, or of technologies that can be deployed without consent or transparency.


Justified exceptions?

However, it is unclear whether this is a feature or a bug. What scope for unregulated, experimental innovation should be provided in publicly funded technological development? There are multiple voices, and indeed a legal framework, that allow for a level of secrecy and experimentation in security research not afforded to other domains, where the precautionary principle rules and strict oversight mechanisms apply (think of medical research, for instance).

Incorporating the impact of security technologies on civil liberties, or allowing for democratic accountability procedures during innovation processes, would require removing technological innovation in the field of security from its regulatory exceptionality and asking it to provide the transparency and accept the oversight required of other innovation sectors.

If the lack of robust oversight ensuring that the existing legal framework is respected, necessary precautions are taken, and accountability mechanisms are in place is a bug rather than a feature, R&D programmes would only need to move the ethics review procedure to earlier in the process. But if the lack of accountability is an intentional feature, designed to afford technological innovation a freedom from oversight and compliance unheard of in other innovation fields for reasons that are not clear to the public, then a broader debate is needed. We would need to assess whether the exceptions afforded to security-related innovation are justified in advanced democratic settings, and in a world where new technologies increasingly make citizens vulnerable to unaccountable uses of their private information.