Discussion Prompt: Are civil liberties and democratic accountability sufficiently incorporated into the priorities of surveillance research programs?


When it comes to surveillance research, ethics can only be ensured if researchers and end-users have a good grasp of fundamental rights and are subject to both oversight mechanisms and transparency requirements.


Most research programmes – especially those co-funded by the European Union, and particularly when the research aims to develop surveillance technology – include strong ethics requirements. Why, then, do the ethical safeguards designed to regulate a given technology so often prove unable, once a research project has concluded, to address the concrete risks that technology poses?

This situation has a variety of causes, but one of them is undoubtedly a phenomenon also observed in other areas – particularly in the public debate: a manifest lack of a fundamental rights culture, which affects a large number of people across all sectors of activity, including many lawyers. This lack can lead to an incomplete or inaccurate assessment of research outcomes and to a poor implementation of ethical safeguards, especially in terms of supervision.


Ethics in research: a gap between theory and practice

In theory, strong consideration is given to ethics in most research programmes, and in particular in European ones. As detailed in the H2020 Online Manual, ethical impacts are officially taken into account from the beginning to the end of each research project. As in the previous FP7 research programme, this includes self-assessment, external evaluation and, where needed, dedicated work-tasks. The latter are generally carried out by a team of lawyers because, although the notion of ethics rarely has clear boundaries in research programmes, it covers at the very least the preservation of fundamental rights (see art. 19 of Regulation (EU) 1291/2013), also known as legal ethics.

At a minimum, this work on ethics is supposed to identify risks to fundamental rights, which, according to the state of the art, should ideally be done via a Privacy Impact Assessment (PIA). Impact assessments should aim to propose corrective actions and safeguards, to be embedded in the developed technology as privacy-by-design features and/or provided as guidelines for its use. Whether following the consortium's own proposal or EU requirements imposed during the grant preparation phase, PIAs are frequently reviewed by a person or team of ethics experts who are independent of the consortium, in addition to the review conducted by the European Commission's project officer.

In practice, however, several EU co-funded projects deliver technologies that may have an impact on fundamental rights without accompanying them with sufficient ethical safeguards. There are several causes for this discrepancy between theory and practice, but the main one clearly lies in a lack of understanding of ethics requirements, which often leads researchers to underestimate both ethical risks and the importance of implementing safeguards to mitigate them.


Underestimating risks

Firstly, while EU projects are supposed to produce ethics deliverables (such as those I authored for the Mandola Project, see D2.1 to D2.4b), it is not unusual for the ethics-related issues and stakes raised by an R&D project to be ill-understood by the members of the consortium. This may result in ethics deliverables – and in legal support provided to engineers and developers – that fall short of ethical requirements. Indeed, a good grasp of privacy ethics implies understanding the fundamental requirements – in other words, the spirit – of the legal principles that underlie more specific protections of fundamental rights, such as the General Data Protection Regulation (GDPR), and that infuse all areas of law at the national level, including the general rules of civil liability. Owing to the training they received, not all lawyers or data protection experts have this skill, or the ability to apply it transversally across all the disciplines mobilised in a given research project, even when they know the letter of the GDPR.


A limited implementation of safeguards

Another extremely frequent challenge is the low level of legal and ethical culture among both the technologists and the end-users involved in research projects, even when the end-users are law enforcement representatives. As a result, even brilliant legal experts face difficulties in ensuring that technology developers properly take ethical challenges and requirements into account.

This hurdle is further exacerbated by the usual working methodology, which often fails to ensure a sustained presence of ethics experts alongside computer scientists. Day-to-day interaction would be necessary to enable specialists from different fields, who use different vocabularies and notions, to understand each other – all the more so when teams are spread across different countries.

Finally, this rather widespread lack of ethics-related knowledge leads to shortcomings in oversight mechanisms, even though the ethical character of a technology depends entirely on the correct implementation and maintenance of the ethical patches and safeguards designed to regulate its use. Instead, very often, ethical and legal assessments are based on the project's written deliverables rather than on empirical testing. After a project's conclusion, there is no monitoring of what happens to the developed technology, and therefore no guarantee that ethical patches and safeguards are actually implemented.


Ensuring a culture of fundamental rights

To remedy these oversight gaps, a first crucial step would be to train research teams and end-users in the principles of ethics, which is also a culture of fundamental rights and the rule of law. This could be done during the grant preparation phase, by ensuring that members of the consortium and end-users either have the necessary knowledge or will acquire it through dedicated training sessions before the research begins.

At a more general level, all citizens should have the chance to learn and experience a culture of fundamental rights and the rule of law, through the dissemination of educational information as well as during initial and vocational training. The lack of such a democratic culture – where democracy is understood as a society governed by the rule of law – is becoming increasingly clear in several areas, and in particular in the public debate in France, where political representatives have not hesitated to call for the suppression of the right of appeal or for a "right of oversight" for law enforcement over judicial decisions. In recent weeks, we have also heard a political journalist state that "the rule of law is obviously a concept to relearn", while members of the French Senate talked about expanding the state's data collection practices and claimed that data protection had led to a "French taboo".

It is important to remember that what we consider to be ethics in a democracy was enshrined in 1950 in the principles set out in the European Convention on Human Rights (taken up in the EU Charter of Fundamental Rights). These principles are meant to prevent the re-emergence of totalitarianism. Complying with them is both crucial and mandatory in the 47 States bound by the Convention – including all EU Member States – and should of course govern the development of new technologies that are likely to limit people's rights, such as surveillance technologies.

Citizens' knowledge of these principles is equally fundamental, since it provides a civic culture that enables any person to understand the law and the challenges posed by digital technologies and, where needed, to contest them before parliament, before a court or in the public debate. Without this, the very idea of the rule of law as conceived after the Second World War would lose its essence.

To a certain extent, spreading a culture of fundamental rights involves a desacralisation of law. Often seen and presented as a complicated and technical subject, legal matters are usually dreaded by non-lawyers. Yet, despite the technicalities of legal language, the protection of fundamental rights follows a very simple logic, which can be expressed in a few lines: any initiative or technology that may restrict freedoms must have a specific purpose, must be capable of achieving that purpose effectively, and must be proportionate. It takes no more than thirty minutes to explain the meaning of each of these notions, and a few hours to become able to assess a practical case study in their light. Further training makes it possible to link this knowledge to the provisions of the GDPR, then to the interpretations of judges and data protection authorities (which sometimes fall short of fundamental requirements), and to the analysis of the content of important rights. Ideally, whenever possible, exchanges between lawyers and computer scientists should be encouraged to foster mutual understanding.


A need for ethics oversight and governance

Research cannot be qualified as "ethical" on the sole ground that it includes an analysis of the ethical issues arising from technological innovation. The qualifier also implies that equal importance is given to the outcomes of both the ethics-related and the technology-related research. The two are interconnected, and both should be enforceable. In practice, this means that where the ethical analysis concludes, for example, that the developed technology must be excluded from any use in public spaces, that prohibition should be enforced. This can only be ensured if transparency and oversight mechanisms are in place.

The existence of a sound monitoring and transparency regime should be a condition for research funding, and failure to satisfy ethical safeguards should be sanctioned. Failing that, the ethical compliance of surveillance technologies that threaten fundamental rights cannot be ensured.


Conclusion

Research should be considered ethical only if the two minimal conditions I have outlined – a culture of fundamental rights, and effective oversight mechanisms and transparency requirements – are met. These imperatives should take precedence over any economic or other considerations. Otherwise, the ethical nature of research is a mere rhetorical statement, and the public funding of such research unjustified, since it would breach the positive obligation of states and European Union institutions to enforce, in their territory, the European Convention on Human Rights.