Discussion Prompt: Will AI solve the "information overload" challenge for intelligence agencies?


AI is overrated. The role of machines is not to replace but to facilitate human reasoning. Augmented intelligence (AuI) can help intelligence agencies navigate the data deluge by enabling human analysts to make data-driven decisions in a more transparent and accountable way.


AI has moved to the forefront of the political agenda. While some consider it the solution to society’s most pressing problems, others fear it as the harbinger of socio-technical doom. The following contribution will argue that AI is overrated: what will help intelligence agencies navigate the data deluge is not artificial but augmented intelligence.

The term AI has become ubiquitous. But while many passionately opine on it, few rigorously define it. What do we actually talk about when we talk about AI? The goals that are pursued when investing in AI typically fall along a spectrum. On one end, organizations may strive to build information systems that function like human minds (“strong AI”). Examples of strong AI are still strictly confined to the realm of science fiction, and arguably include the robots in Westworld or the virtual assistant Samantha in Her. On the other end, organizations may be satisfied with building task-focused information systems that believably emulate human reasoning within one narrow context (“weak AI”). Examples of weak AI include IBM Watson and Apple’s virtual assistant Siri. The underlying hope is always the same: machines that (pretend to) think like humans will be better thinkers than humans themselves.

But what if the debate around the extent to which machines can think like humans is misguided? What if the entire point of machines is precisely not to compete with but to complement human thinking in fundamentally non-human ways? According to this third way, also known as augmented intelligence (AuI), the role of machines is not to replace but to facilitate human decision-making. The question we need to ask, then, is no longer whether AI will “solve” the information overload challenge for intelligence agencies but in what ways AuI can assist human analysts in navigating the data deluge.


A critique of artificial reason

To address this question, we first need to assess how human reasoning differs from the ways in which machines “think.” In his seminal 1980 paper “Minds, Brains, and Programs,” the American philosopher John R. Searle links human reasoning to intentionality and causality. Put (very) simply, intentionality is the ability to assign meaning to things; causality is the ability to understand relationships between things.

Now, on the face of it, one could argue that even the simplest computer programs already “achieve” intentionality and causality through something like definitions and if-then-else statements. But as Searle’s famous Chinese Room thought experiment suggests, functioning is not the same as understanding. That a computer is programmed to correlate one set of symbols with another does not imply that it “understands” what these symbols mean or how they relate to each other. Computer programs execute commands; humans make decisions based on their experience, values, and emotions. Similarly, while advocates of machine learning (ML) might argue that computers can build up “knowledge” by being trained to identify patterns in masses of data, correlation does not imply causation. Computers do not “understand” data the way humans do; they can, however, make data more understandable to humans. They can also help humans focus their attention, for instance by flagging correlations worth examining as possible causations.
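Searle’s point can be made concrete with a toy sketch (the rulebook entries below are invented for illustration): a program that returns fluent replies by pure symbol lookup, without any grasp of what the symbols mean.

```python
# A toy "Chinese Room": the program produces plausible Chinese replies by pure
# symbol lookup, with no grasp of what either symbol string means.
# (Rulebook entries are invented for this illustration.)

RULEBOOK = {
    "你好吗": "我很好",    # "How are you?" -> "I am fine."
    "今天天气好": "是的",  # "Nice weather today." -> "Yes."
}

def respond(symbols: str) -> str:
    """Correlate one set of symbols with another, Searle-style."""
    return RULEBOOK.get(symbols, "我不明白")  # fallback: "I don't understand."

# The program "functions" in Chinese yet understands nothing:
print(respond("你好吗"))  # prints 我很好
```

The program passes a crude behavioral test for “speaking Chinese,” which is exactly why behavior alone cannot settle whether anything is understood.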


Augmented intelligence and the information overload

Assuming, then, that the focus on AI is misguided, to what extent might AuI help intelligence analysts make sense of the overwhelming amount of information they receive? We’ll begin with the basic units that make up information, that is, data. We have never had access to more data than we do now. For intelligence agencies, this data might come from different sources, such as signals intelligence and human intelligence, and in different formats, such as telecommunication records, email accounts, and financial data. The biggest challenge for intelligence agencies, and indeed for any organization, is how to harmonize and filter that data and how to glean actionable information from it.

A computer program can effectively augment human intelligence by providing analysts with a unified data landscape, presented in a way that makes intuitive sense. For an analyst working at an intelligence agency, this might mean turning data stored in documents, reports, and tables into persons, objects, and events, and graphically visualizing the relationships between them. AuI does not aim to provide answers but to enable subject-matter experts to ask the right questions. Asking the right questions in turn enables human analysts to sift efficiently through a morass of data and find the information that actually matters.
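A minimal sketch of what such a unified view might look like (all names and records below are invented): flat table rows become an entity graph whose overlaps surface leads for a human analyst to investigate.

```python
from collections import defaultdict

# Toy records as they might appear in flat source tables (values invented).
records = [
    {"person": "A. Smith", "event": "Meeting-42", "object": "Phone-7"},
    {"person": "B. Jones", "event": "Meeting-42", "object": "Car-3"},
]

# Build a simple entity graph: each person is linked to the events and
# objects mentioned in the same record.
graph = defaultdict(set)
for rec in records:
    graph[rec["person"]].update({rec["event"], rec["object"]})

# Two persons are indirectly related if they share an event or object.
shared = graph["A. Smith"] & graph["B. Jones"]
print(shared)  # the shared event is a lead for the analyst to examine
```

The program does not decide anything; it only restructures the data so that a human can ask the next question.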


The human element

And those analysts should indeed remain human, for both practical and normative reasons. Intelligence agencies are responsible for some of the most sophisticated, high-stakes analyses in the world. These analyses involve targets and adversaries who are often extraordinarily well-resourced, savvy, and motivated to evade detection. Analytical approaches that seek to wholly supplant the expert intuitions and tradecraft of professional analysts will prove inadequate to the task at best, not only because of the sudden occurrence of unexpected variables that were not included in their models, but also because of a lack of “machine awareness,” so to speak, that there might be unexpected variables to begin with. Unlike machines, humans can make contextual decisions in scenarios where something just does not feel quite right. Sometimes it just takes a human to know one.

Of course, human analysts may still misinterpret the data or make poor decisions; AuI does not solve the problem of human error. At Palantir we recognise that all phases of AuI deployment are subject to the inherent limitations (e.g., personal bias, limited understanding) and to the extrinsic failings (e.g., poor engineering practices) of their designers and operators. When employing AuI/ML tools, we therefore examine not only the viability and efficacy of the technologies themselves but also the fidelity and fairness of the data on which the relevant models are trained, and the human purposes for which those models are deployed.

At the same time, AuI can go a long way toward ensuring that humans make decisions based on the most comprehensive and comprehensible data available. Representing data in a more humanly intuitive fashion, in turn, can also increase the transparency and accountability around such decision-making processes. For instance, Palantir’s Privacy and Civil Liberties (PCL) team has built a feature that translates audit logs that are otherwise inaccessible to non-technical audiences into an interface that represents user and administrator activity in an intuitive and searchable way. Compliance and oversight teams with no programming experience can thus easily review how Palantir is used by analysts in practice to help determine whether individuals are inappropriately accessing, using or exporting information, or engaging in other activities in possible violation of privacy and civil liberties.
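A minimal sketch of this kind of audit-log translation (the event schema and templates below are invented for illustration, not Palantir's actual format) might map machine-readable events to plain-English sentences a compliance reviewer can read and search:

```python
import json

# Raw audit events as a system might emit them (schema invented for this sketch).
raw_log = [
    '{"user": "analyst1", "action": "export", "resource": "report_17", "ts": "2023-05-01T09:30:00Z"}',
    '{"user": "admin2", "action": "grant_access", "resource": "case_9", "ts": "2023-05-01T10:05:00Z"}',
]

# One human-readable template per audit action type.
TEMPLATES = {
    "export": "{user} exported {resource} at {ts}",
    "grant_access": "{user} granted access to {resource} at {ts}",
}

def humanize(line: str) -> str:
    """Translate one machine-readable audit event into a plain sentence."""
    event = json.loads(line)
    return TEMPLATES[event["action"]].format(**event)

for line in raw_log:
    print(humanize(line))
```

Nothing about the underlying log changes; the translation layer only makes existing records legible to oversight teams without programming experience.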


AuI as the basis for greater transparency and accountability

To conclude, the infatuation with AI comes and goes. Information systems that are both practically sustainable and normatively defensible, by contrast, do not compete with but complement human decision-making. What will help intelligence agencies navigate the data deluge is thus not artificial but augmented intelligence, which lets human analysts make data-driven decisions in a more targeted, transparent, and accountable way.