Discussion Prompt: Will AI solve the "information overload" challenge for intelligence agencies?


Using AI in a national security context does not need to be contentious. Once the buzzword of ‘predictive intelligence’ is left behind, the outputs of an AI system can be seen as simply another source of information that human decision-makers factor into their own professional judgements. Sector-specific guidance, regular review and re-assessment of the necessity and proportionality of any potential intrusion, and inclusive oversight are key to ensuring that this form of ‘augmented intelligence’ is ethical and accountable.


With data overload being the biggest technical challenge facing the UK’s intelligence community, automated data analysis has a significant role to play in national security. The question that persists, however, is how to strike a balance between the opportunities and risks associated with artificial intelligence (AI). While intelligence agencies see it as a helpful tool for dealing with a surplus of data and making more targeted and reliable decisions, critics are concerned about greater privacy invasions and less accountability.

The UK’s Royal United Services Institute (RUSI) recently conducted a research study into the use of AI for national security purposes. We found that the role of AI in national security doesn’t have to be contentious. The key is that intelligence agencies use AI to ‘augment’ the intelligence analysis work done by humans, not to replace it. To ensure the ethical use of this technology, three things are needed: 1) additional sector-specific guidance for the use of AI in national security; 2) human rights tests of necessity and proportionality as the yardstick for intelligence agencies’ use of AI; and 3) agile oversight that allows a range of perspectives beyond that of the security community to be heard.


AI in the real world

The use of data to help solve societal challenges is again in the spotlight as governments around the world try to deal with the impact of Covid-19. But while traditional human-led data analysis is playing a critical part in monitoring the disease’s spread and informing strategic responses, the role of AI has been comparatively muted in this debate.

AI is far from being a pandemic panacea capable of spotting outbreaks before they occur; suggestions that it could be used to develop cough-prediction apps, for example, have been greeted with a high degree of scepticism. And rightly so – there is a clear gulf between the headlines about AI and its application in the real world.


A human touch 

This gulf is mirrored in our own research into AI and national security, an independent project commissioned by GCHQ. Our findings suggest that it is not the ultra-experimental ‘predictive intelligence’ that offers the greatest potential benefit to national security, but the less glamorous application of machine learning.

Applying machine learning in areas like document analysis, natural language processing and audio-visual processing offers the potential to identify patterns and flag anomalies. It can also pinpoint correlations between different bulk datasets. In all cases, however, the object should be not to deliver judgements but to provide suggestions and raise queries for experienced human review.
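As a purely illustrative sketch of this ‘suggest, don’t judge’ pattern, the snippet below scores a handful of documents with an unsupervised anomaly detector and ranks them for analyst review rather than acting on them. The toy corpus, the TF-IDF/isolation-forest combination and the contamination setting are assumptions made up for the example, not a description of any system used by the intelligence community.

```python
# Illustrative sketch only: score documents for anomalies and queue the
# results for human review, rather than making any automated decision.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import IsolationForest

documents = [
    "routine logistics report, nothing unusual",
    "weekly status update from regional office",
    "weekly status update from regional office, minor delays",
    "encrypted payload routed through unregistered relay at 03:00",
]

# Represent documents as TF-IDF vectors.
features = TfidfVectorizer().fit_transform(documents).toarray()

# Fit an unsupervised anomaly detector; 'contamination' is a tunable guess
# about how much of the corpus might be unusual (assumed value here).
detector = IsolationForest(contamination=0.25, random_state=0)
detector.fit(features)
scores = detector.decision_function(features)  # lower = more anomalous

# Rank everything and pass it on; nothing is discarded or acted upon here.
for score, doc in sorted(zip(scores, documents)):
    print(f"anomaly score {score:+.3f} -> flag for analyst review: {doc!r}")
```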

There is no question that automated data analysis should have a role to play in national security. Our conversations with practitioners and policymakers told us that the single biggest technical challenge facing the UK’s intelligence community (UKIC) is data overload. Confronted by the hostile use of AI in cyber attacks and disinformation campaigns, UKIC therefore has a pressing “obligation to innovate”.


‘Augmented intelligence’

Though there has been significant interest in algorithmic prediction in public environments such as policing or social care, our research suggests it will be of limited value in carrying out national security threat assessments. In counter-terrorism, for example, there is no consistent profile of a terrorist, and the appetite to take risks in this field is understandably limited. Rather than trying to predict individual behaviour, innovation should focus on ‘augmented intelligence’ systems which collate information from multiple sources and flag significant data for human review.

Even here, there are risks. The consequences of error in national security assessments could be wide-ranging, particularly if an AI system is integrated into a decision-making process which leads directly to action against an individual. It is therefore particularly important to ensure that relevant information is not screened out by a system simply because it appears statistically insignificant.

Equally important is ensuring that a process informed by AI analysis maintains human accountability. It must therefore be designed so that non-technical specialists are able to interpret its results while understanding margins of error and uncertainty. The outputs of an AI system are simply another source of information for decision-makers to factor into their own professional judgements, and internal oversight procedures should emphasise its supporting role.
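A minimal sketch of what this might look like in practice is given below: every flagged item carries a plain-language score and margin of error, and items falling below the attention threshold are routed to a routine queue rather than silently discarded. The threshold, field names and two-queue split are hypothetical choices for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    item_id: str
    score: float            # model output in [0, 1]
    margin_of_error: float  # e.g. width of a confidence interval

def triage(flags, attention_threshold=0.7):
    """Route every flag to a human queue; nothing is silently discarded."""
    for f in flags:
        low, high = f.score - f.margin_of_error, f.score + f.margin_of_error
        # Items whose plausible range reaches the threshold get priority;
        # everything else is retained for routine review, not screened out.
        queue = "priority review" if high >= attention_threshold else "routine review"
        yield (f.item_id, queue,
               f"score {f.score:.2f} (plausible range {low:.2f}-{high:.2f})")

# Hypothetical usage with two invented items.
for record in triage([Flag("doc-17", 0.62, 0.15), Flag("doc-42", 0.30, 0.05)]):
    print(*record, sep=" | ")
```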


A yardstick of human rights 

Crucially, the yardstick for intelligence agencies’ use of AI must be human rights tests of necessity and proportionality. These tests cannot be static or limited to the time of data acquisition; internal processes must also assess whether it is appropriate to use AI to analyse data that has already been obtained.

These assessments will inevitably fuel the ongoing debate about whether the use of AI increases intrusion or reduces it. Whilst AI has the potential to minimise the wider examination of personal data through more precise targeting, review by machine or by human still amounts to an intrusion into personal information. Indeed, automated processes may flag up data which was not previously subject to human assessment.

There is also the cumulative risk of AI systems interacting with each other. As the Anderson bulk powers review put it: “…intrusions into privacy have been compared, compellingly, to environmental damage: individually, their impact may be hard to detect, but their cumulative effect may be very significant”.


‘Ethical AI’

Though there is a clear need for internal processes to monitor cumulative impacts — and plenty of noise about ‘ethical AI’ principles — our review found uncertainties about what this meant in operational terms. This suggests that additional sector-specific guidance is needed for the use of AI in national security.

It also points to the need for an agile approach to oversight. UKIC has to understand the opportunities and risks associated with the application of AI in national security while avoiding stifling necessary innovation. Without this balance, the ability of agencies to respond to evolving technological threats could be undermined.

As with AI itself, our review suggests policy and guidance can go only so far. Complex, case-specific decisions will have to be made by individuals. This places an additional burden on those decision-makers. Fostering a culture which empowers them to make such judgements whilst enabling discussion and accepting challenge is an important piece in the operational jigsaw.

The Investigatory Powers Commissioner’s Office has a central role to play in this oversight. It should engage with agencies in a way which enables ongoing review of the development and deployment of AI, and considers the views of a wide range of stakeholders: not just the security community itself, but civil society organisations and other public interest groups.

The fundamental responsibility of all the organisations across this landscape is to understand each other’s perspectives. This is the best route to a practical application of AI that reflects its true potential in a national security context, enables it to be effective and to evolve, and prevents unseen and unwarranted intrusion into people’s lives.