Discussion Prompt: Are civil liberties and democratic accountability sufficiently incorporated into the priorities of surveillance research programs?


In his transparency lawsuit against the EU’s Research Executive Agency, MEP Patrick Breyer aims for a landmark ruling that would allow public scrutiny and debate on unethical publicly funded research. The contentious technology for analysing facial microexpressions, deployed at EU borders as part of the EU’s ‘iBorderCtrl’ project, is but one example in a series of highly problematic EU research projects that prioritise private profit interests over those of the general public. Civil liberties and democratic accountability are incorporated into surveillance research programmes in theory, but certainly not in reality.


There is no need to resort to Orwellian analogies and prophecies when describing surveillance research programmes funded by the European Union. At this very moment, several projects utilising Artificial Intelligence (AI) are in different stages of development and threaten citizens’ fundamental rights. Under the pretext of protecting the security of the European public, EU funds are distributed to initiatives aimed at exploiting personal data and analysing facial expressions, general appearance, and online activities in order to create profiles and categorisations. The website Open Security Data Europe maintains a long list of such projects.

However, these projects really only benefit the corporations involved, which can subsequently sell unethical technology to the private sector or even to authoritarian governments abroad, unchecked. Surveillance technologies based on the analysis of our individual body characteristics, such as facial features or movement patterns, turn us into walking barcodes that can be scanned anytime and anywhere. Where conspicuous behaviour is automatically reported to the authorities, such systems also create a pressure to conform that is incompatible with our fundamental rights. By way of example, let me elaborate on the case of the “video lie detector” component used in the project iBorderCtrl, whose lack of transparency I have challenged before the Court of Justice of the European Union (CJEU).


iBorderCtrl violates the Fundamental Rights Charter

The project iBorderCtrl was awarded 3.5 million euros from the EU’s Horizon 2020 (H2020) research programme with the aim of detecting deception by automatically analysing facial micro-gestures. Travellers to the EU would be questioned by a virtual avatar. During this “interview”, artificial intelligence or machine learning software would analyse non-verbal behaviour to detect knowingly wrong answers. Simultaneously, the software would check law enforcement databases and collect information about the prospective traveller available on their Twitter accounts in order to calculate a risk score. Research and development of iBorderCtrl concluded in 2020, and the pilot system was tested at border crossing points in Greece, Hungary, and Latvia.
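To make concrete what such a “risk score” implies, here is a minimal sketch in Python of how heterogeneous signals might be aggregated into a single gate-keeping number. All names, weights, and thresholds are invented for illustration; the actual iBorderCtrl scoring logic has never been published – which is precisely the problem.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the real iBorderCtrl scoring logic is not
# public. This sketch shows the *kind* of aggregation such a system implies:
# opaque per-signal scores combined into one number that gates a traveller.

@dataclass
class TravellerSignals:
    deception_score: float    # 0..1, output of the "video lie detector" classifier
    database_hits: int        # matches in law enforcement databases
    social_media_risk: float  # 0..1, score derived from public social media data

def risk_score(s: TravellerSignals) -> float:
    """Combine heterogeneous signals into one risk number (weights invented)."""
    score = 0.5 * s.deception_score              # micro-gesture analysis dominates
    score += 0.3 * min(s.database_hits, 3) / 3   # cap the database influence
    score += 0.2 * s.social_media_risk
    return score

def flag_for_secondary_check(s: TravellerSignals, threshold: float = 0.45) -> bool:
    # A stressed or tired traveller with a noisy deception_score can cross
    # the threshold without any verifiable wrongdoing.
    return risk_score(s) >= threshold

if __name__ == "__main__":
    nervous_traveller = TravellerSignals(deception_score=0.9, database_hits=0,
                                         social_media_risk=0.1)
    print(flag_for_secondary_check(nervous_traveller))  # True: flagged on gestures alone
```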

Invading people’s privacy by analysing their facial gestures and ultimately letting computer software make assumptions about their criminal potential violates the EU Charter of Fundamental Rights – more precisely, Article 1 on human dignity, Article 7 on respect for private and family life, and Article 8 on the protection of personal data. It is clear that civil liberties are not sufficiently reflected in the design of the EU research programmes. They also lack democratic accountability, as the European Research Executive Agency (REA) of the European Commission still casts a veil of silence over most of the projects and treats them as the private property of the consortia.


The EU Commission refuses transparency over data flows and technology

Whether video lie detection technology works at all is scientifically highly controversial, which is probably why an ethics consultant took a closer look at the project. However, the REA refuses to give the public access to the ethics report, the assessment of the legality of the technology, much of the project’s public relations strategy, or the project’s results – despite the fact that all of these are financed with taxpayers’ money. To allow for a scientifically independent assessment of the technology, this information needs to be accessible. The only “scientific” assessments of the technology so far come from Manchester Metropolitan University (MMU), which is part of the iBorderCtrl consortium. MMU scientists have patented the technology and are selling it commercially via a company called Silent Talker Ltd. Since the technology relies on black-box machine learning, even the inventors themselves do not know which purported signs of deception it relies on.
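This black-box problem can be illustrated with a toy example that assumes nothing about the actual Silent Talker system: a generic neural network classifier (here scikit-learn’s MLPClassifier, trained on invented data) produces verdicts, but the only inspectable artefacts are raw weight matrices, so not even its developers can point to the “signs of deception” it has learned.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy illustration of the black-box problem; the data, feature count, and
# model are invented and have nothing to do with the actual Silent Talker system.
rng = np.random.default_rng(0)

# 38 hypothetical "micro-gesture channels" per interview, with random labels:
# the network will happily fit them even though there is nothing to learn.
X = rng.normal(size=(200, 38))
y = rng.integers(0, 2, size=200)  # 0 = "truthful", 1 = "deceptive"

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X, y)

traveller = rng.normal(size=(1, 38))
verdict = model.predict(traveller)[0]
print("verdict:", "deceptive" if verdict else "truthful")

# The only artefacts available for inspection are raw weight matrices:
# thousands of numbers with no mapping to any "sign of deception".
print([w.shape for w in model.coefs_])  # e.g. [(38, 64), (64, 32), (32, 1)]
```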

For stressed, nervous, or tired people, such a suspicion-generating machine can easily become a nightmare. Lie detectors are not admissible as evidence in court precisely because they do not work. The widespread deployment of systems for detecting conspicuous behaviour risks gradually creating a uniform society of passive people careful not to attract any attention. All of these implications warrant in-depth scrutiny.

In January 2019, my request to access these documents was rejected on the grounds that the documents are “commercial information” of the companies involved and of “commercial value”. This justification demonstrates that EU research funding is all about economic profit. It led me to sue the EU (Case T-158/19). I hope the judgment will redefine the purpose and transparency of EU research funding. When it comes to developing highly dangerous and unethical technologies, the transparency interests of the scientific community and the general public must take precedence over private commercial interests.


H2020 ethics reveal gap between theory and reality

On paper, the Horizon 2020 research and innovation programme prided itself on satisfying the highest standards of ethics and integrity. Nevertheless, the gap between H2020’s intentions regarding its Responsible Research & Innovation (RRI) approach and its actual implementation is well documented. Running from 2014 to 2020 with nearly 80 billion euros of funds available, the initiative aimed at driving economic growth, creating jobs, and securing Europe’s global competitiveness. In 2014, the adoption of the Rome Declaration was intended to incorporate human rights and societal values into all decisions regarding RRI. The main findings of the conference held under the Italian Council Presidency were that technology acceptance cannot be achieved by good marketing alone and that early and continuous engagement of all stakeholders is essential for sustainable, desirable and acceptable innovation.

In order to be accepted into the Horizon 2020 research programme, applicants were supposed to undergo a thorough two- or three-step examination process designed to prevent unethical research and funding. First, an applicant’s “Ethics Self-Assessment” was required in the preparatory phase. At the second stage, an “Ethics Review” by ethics experts was conducted for “proposals above threshold and considered for funding”. This review theoretically included an “Ethics Pre-Screening” as well as an “Ethics Screening” performed by independent ethics experts. Lastly, if these experts deemed it necessary, “Ethics Checks” were set in motion as a third step, carried out while the research project was running. In theory, experts were able to reject a proposal on ethical grounds. However, no information is published about the review process, the reviewers, or the results of the reviews.

In 2020, a study by Novitzky et al. analysed 13,644 H2020 projects over six years and found that societal values and ethics translate very poorly to the operational level. Among other things, this was attributed to insufficient training for researchers, who did not understand RRI well enough, and to the fact that the framework competes with other H2020 objectives, such as economic value. Novitzky concludes that “multiple agendas led to indecision and compromises that resulted in a failure to consistently integrate societal values into Horizon 2020 operations and research projects”.


CJEU judges question REA’s arguments

Unsurprisingly, in March 2020, Home Affairs Commissioner Ylva Johansson left my parliamentary questions on the reliability and discriminatory effects of the video lie detection technology used in iBorderCtrl unanswered. In the CJEU hearing in February 2021, the REA lawyer argued that while the case raises questions of fundamental importance for EU research funding, “democratic control of research funding is not necessary”. He explained that EU research programmes deliberately did not pursue an open access approach in order to protect competitive advantages of participating companies. Disclosure of the iBorderCtrl project would jeopardise the commercial interests and reputation of the participating companies and institutions. 

I countered that this research project is emblematic of a whole series of highly problematic EU research projects developing surveillance technology, and that this precedent would determine whether private sales and profit interests prevail over the public’s interest in transparency. The judges questioned the agency’s lawyer intensively and critically for over an hour, for example about whether the ethical and legal assessments contained internal business know-how in their entirety or only in parts. In the end, the presiding judge raised the question of whether it would not also be in the interest of the Executive Agency itself to demonstrate that it had nothing to hide.


New mechanisms are needed to prevent more problematic projects

While the court is yet to decide, the next EU research programme, Horizon Europe, is already making the news for withholding documents about its 95.5 billion euro fund. Richard L. Hudson, Editor-in-Chief of Science|Business, called the leaks distributed to the media and academics “The Horizon Papers”. Furthermore, Horizon Europe sponsors an even more problematic follow-up project to iBorderCtrl called TRESPASS with 8 million euros. The “robusT Risk basEd Screening and alert System for PASSengers and luggage” uses lie detector software and observes passengers crossing borders with real-time behavioural analysis and RFID chips. To find out whether a traveller “poses a threat to the internal security of the EU”, TRESPASS accesses several EU databases and gathers information from social media as well as the “dark web”. When research and testing conclude on 30 November 2021, presumably few documents will be made available to the public.
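To illustrate what continuous RFID-based observation implies, consider a minimal sketch with invented checkpoint names and data, since the TRESPASS internals are not public: every antenna read silently extends a per-traveller movement profile, without any consent step.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sketch of RFID-based passenger tracking; the TRESPASS
# internals are not public, so all names and data here are invented.
movement_profiles: dict[str, list[tuple[str, datetime]]] = defaultdict(list)

def record_read(tag_id: str, checkpoint: str, when: datetime) -> None:
    """One antenna read is enough to extend a traveller's profile."""
    movement_profiles[tag_id].append((checkpoint, when))

record_read("TAG-4711", "security-lane-3", datetime(2021, 6, 1, 9, 2))
record_read("TAG-4711", "duty-free-exit", datetime(2021, 6, 1, 9, 40))
record_read("TAG-4711", "gate-B17", datetime(2021, 6, 1, 10, 5))

# A full, timestamped path through the border area emerges as a by-product.
print(movement_profiles["TAG-4711"])
```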

To put an end to the disregard of ethics in EU-funded research and development, Novitzky urges the EU to do more at the legislative level. The Commission has announced that it will propose legislation on AI in April 2021, which is a chance to heed the numerous warnings of civil society and human rights organisations. However, this legislation will focus on the deployment of AI rather than its development and sale. It is also unlikely that unethical uses of AI, such as biometric mass surveillance, will be defined and banned. To increase the pressure, the European Citizens’ Initiative “Reclaim Your Face” is calling on the Commission to ban biometric mass surveillance technologies. The initiative is open for signatures for one year and can be supported via the Reclaim Your Face website.

All in all, the EU funding for surveillance and control technologies raises fundamental questions: 

  • Should the EU be allowed to fund the development of technology, the use of which would be unlawful in Europe (such as iBorderCtrl)? 
  • Should there be an exclusion for the development of mass surveillance technology or projects that aim at indiscriminately processing everybody’s personal data without consent? 
  • Who should decide on which technologies are developed with EU funds and which are not?
  • Should the programming and decision-making bodies include only government representatives, or equally representatives of parliamentary groups, scientists, and non-governmental organisations with expertise in civil liberties and privacy? 
  • Should requests for tender be published without an impact assessment and consultation of the EU Fundamental Rights Agency? 
  • Should the purpose of EU research funding be to help industry generate profits, or should the funding aim at advancing science, EU policies, and European values?

These unresolved issues are aggravated by the new EU Defence Fund, with which the bloc is extending its funding to the development of weapons, i.e. lethal technology.


My lawsuit aims to be a landmark ruling for transparency

In conclusion, civil liberties and democratic accountability are incorporated in surveillance research programmes in theory, but certainly not in reality. The EU continues to have dangerous surveillance and control technology developed that threatens civil liberties and violates democratic standards it pretends to hold dear. But let me end with a spark of hope, because it is never too late for change. With my transparency lawsuit, I aim for a landmark ruling that will allow public scrutiny and debate on unethical publicly funded research in the service of private profit interests. I want the court to rule once and for all that taxpayers, scientists, media, and Members of Parliament have a right to information on publicly funded research – especially in the case of pseudoscientific and Orwellian technology like the iBorderCtrl video lie detector. Once the public is able to see what is going on, there may be sufficient pressure to put an end to the EU-funded development of repressive technologies for monitoring and controlling law-abiding citizens ever more closely.