Automated Video-Surveillance

Discussion Prompt: Should we ban the use of automated video-surveillance?

Facial recognition technology has captured both the imagination and the concern of many. But it is only one form of biometric surveillance, which itself is merely one application of automated video-surveillance. The field of video analytics is vast and can cover nearly every type of action and occurrence imaginable, even human sentiment. The pair of human eyes once tasked with passively watching CCTV footage has been replaced by artificial intelligence programmes. Law enforcement and other security agencies, which increasingly resort to automated video-surveillance, tout its ability to reduce crime and increase public safety, but critics have long raised the alarm. They may highlight the supply-driven nature of its market or point to the problems AI itself is fraught with: racial biases, false positives, and algorithmic inscrutability, among others. Fundamentally, they worry that full-scale automated video-surveillance in public spaces will create a point of no return, after which we will be unable to live our lives anonymously or to assert essential civil liberties such as freedom of assembly and freedom of speech, with dire consequences for democracy.

The spotlight is usually on how to regulate technology after it has already been introduced. We want to take a step back and ask: Do we even want to allow this kind of technology? And how can we enforce our democratic will in the face of ever-faster technological change?