Discussion Prompt: Will AI solve the "information overload" challenge for intelligence agencies?

In an increasingly congested data environment, AI — advanced algorithms that can learn to find complex patterns in data — will be a tremendous asset, both to Western intelligence agencies and their adversaries. For GCHQ it is essential that AI is used ethically and is subject to effective oversight. AI can augment human creativity and judgment in intelligence operations, helping to keep society safe, but only if the inherent risks, including issues of bias and transparency, are adequately addressed.


Our society is going through revolutionary change: the development of advanced data technologies such as Artificial Intelligence (AI) is transforming how we think about our economy, our daily lives, and our national security. It should be no surprise that GCHQ is embracing AI: we have spent the last hundred years at the cutting edge of data science and security, stretching from the pioneering Colossus computers of Bletchley Park through to the new National Cyber Security Centre. AI will be central to our future, as will the ethical decisions that surround its use.

What is AI? Ask any two academics and you will receive two subtly different answers. But for the purposes of this article, AI can be considered a form of machine learning: algorithms that learn to find complex patterns in data, from which predictive rules can be formed. AI takes those learned rules and uses them to automate or augment some part of a process, perhaps triaging information, filtering out options, or flagging up a new security risk.
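
To make that definition concrete, the sketch below (written in Python with the open-source scikit-learn library, using invented placeholder data rather than any operational system or dataset) shows what learning a simple predictive rule for triage might look like:

```python
# A minimal, illustrative sketch of "learning a predictive rule" for triage.
# The messages and labels are invented placeholders, not operational examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: items previously labelled by human analysts.
messages = [
    "routine status report, nothing to flag",
    "scheduled maintenance window confirmed",
    "credential dump offered for sale on forum",
    "exploit kit targeting unpatched servers advertised",
]
labels = [0, 0, 1, 1]  # 0 = low priority, 1 = flag for an analyst

# Learn a simple predictive rule from patterns in the text.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Apply the learned rule to a new, unseen item.
new_message = ["new exploit advertised against web servers"]
priority = model.predict_proba(new_message)[0, 1]
print(f"Probability this needs analyst attention: {priority:.2f}")
```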

AI requires three key things to be successful: substantial amounts of data, necessary to train algorithms before they are used for actual analysis; significant processing or ‘compute’ power; and exceptionally good data science. While the mathematics that underpins most AI has been well understood since the 1970s, it is the exponential increases in processing power and data availability that have now made AI a reality. The rise of the modern data economy and AI are intertwined: the new 5G networks currently being rolled out around the world are managed using AI, and the growth in the new sorts of data they will transmit will fuel the development of many more applications of AI.

Against this backdrop, many intelligence professionals will refer to the challenges of a modern ‘information overload’. As David Anderson, the former Independent Reviewer of Terrorism Legislation, reported during the passage of the UK’s Investigatory Powers Act, the intelligence agencies are charged not only with countering a wide variety of threats, from cyber-attacks to child sexual exploitation, but also with operating in an increasingly congested data environment. Finding the information needed for effective intelligence has never been so difficult.

This challenge is no longer just a human one. Online automated systems began to outnumber human actors over a decade ago. The ‘deep web’ of machine data that supports our society is already several orders of magnitude greater than the internet seen and used by humans. The rise of the Internet of Things, connecting many billions of everyday appliances and sensors around the globe to the internet, will accelerate this trend still further. Our digital homeland continues to grow exponentially.

In response, GCHQ is working out how to augment or automate some of our current processes at scale. This will free us humans to do what we are best at: applying creative thought, engaging with partners, taking decisions, and making value judgements. And it will allow our AI systems to do what they are best at: solving well-defined, narrow problems where the necessary data and feedback are fully available to the algorithm.

When faced with this kind of task (for example, from the perspective of an intelligence agency, triaging seized media for child sexual exploitation content, optimising cooling systems in a data centre, or identifying anomalous network activity), AI systems are typically much faster and often more accurate than humans. Indeed, AI systems can now perform tasks that would be so time-consuming for a human as to be otherwise impossible.
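
As a hedged illustration of the last of those examples, the sketch below uses an off-the-shelf unsupervised model from scikit-learn, over invented network-flow features, to flag unusual activity for a human to examine. It is a toy under assumed features, not a description of any deployed capability:

```python
# A sketch of flagging anomalous network activity with an unsupervised model.
# The flow features (bytes transferred, duration) are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical baseline of routine flows: [bytes transferred, duration in seconds].
normal_traffic = rng.normal(loc=[5_000, 2.0], scale=[1_000, 0.5], size=(500, 2))

# Learn what "routine" looks like, assuming roughly 1% of flows are unusual.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new flows: -1 marks an outlier worth a human look, 1 looks routine.
new_flows = np.array([[5_200, 2.1],      # resembles ordinary traffic
                      [900_000, 45.0]])  # a large, long transfer stands out
print(detector.predict(new_flows))       # e.g. [ 1 -1]
```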

We live in a dangerous world, however, one in which our adversaries seek to use these same techniques against the UK and our global interests. The technical press is alive with reports of hackers using AI to automatically find vulnerabilities in networks and software. Researchers have expressed concern that adversaries are deploying AI to automate their disinformation campaigns, and privacy campaigners have suggested that illiberal states are using it to suppress ethnic minorities. Unlike Western intelligence agencies, such hostile actors will not be bound by the rule of law, by our adherence to human rights principles such as proportionality, or by our ethical standards.

Modern-day intelligence analysts may sometimes complain of information overload, of course, but the problem is hardly a new one. Examples of the phenomenon date back to the middle of the 19th century, fuelled by the huge growth in European telegraphic communications at the time. Many of the incredible breakthroughs at Bletchley Park in the 1940s were driven by the need to deal with the soaring quantities of wartime electronic communications. We recall many similar conversations at the start of the digital revolution in the 1990s, and we suspect that our successors will be having them throughout the next century.

Often what we refer to as ‘information overload’ is not simply the sheer volume of data we face but rather the pressure of constantly adapting to the pace and complexity of the technological and social change around us. New risks, new economic models, new opportunities, all increasingly realised at internet speed. This is the real challenge we are facing: how to pioneer new forms of security to keep the UK safe into the future, using all aspects of the UK’s cyber power, from our offensive and defensive capabilities through to our contributions to international law and ethics.

We are confident that AI will be at the heart of that response. The successful intelligence organisation of the future will be built around a combination of brilliant diverse minds, well-curated data sets, cloud processing, and cutting-edge data science. The building blocks of our future will typically be AI applications, constantly helping to connect our people with the necessary data and insights.

But there will be limitations. AI conclusions will remain poor whenever wider context is essential to understanding a problem, or where the past does not predict the future well. Unfortunately, our fast-changing security environment is often all about strategic context and unexpected discontinuities. We also expect to place controls on permitting AI to learn independently as autonomous ‘black boxes’, however analytically powerful that might be. Taking the challenge of prioritising which seized media an analyst should examine, for example, we will want not only to be able to explain the algorithmic process itself, the detailed mathematics and data that help the software advise the analyst, but also to justify the operational outcome of the triage: was the process as a whole transparent, ethical, and reasonable?[1]
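
One hypothetical route to that kind of explainability is to favour inherently interpretable models whose learned rules can be printed and audited. The toy sketch below, with invented seized-media features, illustrates the idea; it is not a statement of how any real triage system works:

```python
# A small sketch of explainability via an inherently interpretable model
# whose learned decision rule can be shown to an analyst or an overseer.
# The features and data are hypothetical placeholders.
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["file_count", "known_hash_matches"]
X = [[10, 0], [200, 0], [15, 3], [400, 12]]  # invented seized-media statistics
y = [0, 0, 1, 1]                             # 1 = prioritise for review

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The full decision process is available for inspection and audit.
print(export_text(tree, feature_names=feature_names))
```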

We can all benefit from the power of AI systems, but most of us would feel nervous if we could not interpret their results, identify their occasional mistakes, or correct their inevitable biases. The right sort of human intervention, strong oversight and accountability will be ever more critical — an area in which the UK is rightly regarded as a world leader.

At the end of the day, AI — however good — will not provide us with the leadership or the imagination to find a path through the confusion of information overload. There will remain no alternative to recruiting the very brightest, most diverse thinkers; building deep relationships with the best academic and industry minds; and upholding strong codes of ethics, backed by good laws and transparent debate. Our data-filled future will likely continue to hold a distinctively human streak.


[1] GCHQ is currently sponsoring the second phase of a research project by the Royal United Services Institute, in collaboration with the UK’s Centre for Data Ethics and Innovation, to develop principles and guidelines for this and other similar national security challenges: https://rusi.org/commentary/new-generation-intelligence-national-security-and-surveillance-age-ai. For more on the UK’s general approach to transparency and other AI ethical issues, see https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf.


