On the 20th anniversary of what is known as the worst terrorist attack on United States (U.S.) soil, Americans still vividly remember September 11, 2001 as if it were yesterday. With respect to technology and data, the events of that day have transformed the global community and its approach to counterterrorism. For both domestic and international travel within the U.S., the Transportation Security Administration (TSA) implemented additional security protocols, including expanding the list of what could and could not be brought onto a plane, limiting airport access for non-ticketed travelers, and reinforcing cockpit doors on airplanes. While these manual, policy-centric reforms have been instrumental in increasing security presence, technological advancements alongside the rise of data science have further strengthened the security apparatus. Technologies such as artificial intelligence (AI) and machine learning (ML) have enabled everything from facial and speech recognition to self-driving cars and automated financial investing, and thus present an even greater opportunity to protect our communities. How they can be used to address terrorism and related threats is a question that is constantly being raised. And how are they already being used to keep us safe and prevent the destructive chaos of terrorist attacks around the globe?
Case Study: Modern Applications for AI/ML in Counterterrorism
There are two common methods to counter potential attacks: deterrence and prevention. Deterrence seeks to make carrying out a successful attack increasingly difficult through infrastructure changes; examples include generally improved internal security coupled with policy changes. Prevention seeks to stop an attack one step earlier by prioritizing the early identification of terrorists; examples include identifying provocative or dangerous content online, limiting recruitment opportunities, and continuous surveillance. With improved technology that produces vast amounts of continuous data, AI/ML experts have ventured to harness this information and provide digital solutions in support of both deterrence- and prevention-based counterterrorism strategies. These solutions are designed to provide predictive capabilities that ultimately allow for automated, efficient, near real-time decision making that far exceeds what humans have the capacity to accomplish manually.
Predictive biometric systems have played a crucial role in counterterrorism efforts, providing improved security measures for both deterrence- and prevention-related methods. Project FIRST (Facial, Imaging, Recognition, Searching and Tracking), led by INTERPOL, serves as an informative example of this technology in action. Project FIRST is an international initiative that provides a framework for countries to share biometric data (such as facial images and fingerprints) on terrorists, "thereby improving efforts to locate terrorists and carry out successful investigations and prosecutions". According to INTERPOL, local law enforcement officers gather biometric data on prison inmates convicted of terrorism-related offenses. Once accurately recorded, this data is stored in INTERPOL databases accessible by governments around the world.

In productionized facial recognition systems, deep learning models are applied to these databases of facial images to capture key information, namely facial geometry. In layman's terms, there are a multitude of key points on a person's face. As humans, when looking at another person we identify features such as their eyes, nose, mouth, and other structural cues to determine who they are. Deep learning systems have been trained to accomplish this same task algorithmically. When a facial image in INTERPOL's database is processed, the system identifies these facial landmarks (typically 60-100+ per face) and computes the distances between them. These distances numerically represent a person's facial signature and are ultimately stored in a database alongside other terrorists' facial signatures. When these systems are integrated with cameras or CCTV networks, people's facial images, and ultimately their facial signatures, are captured and compared against the terrorist database to identify potential matches.

As a product of Project FIRST and similar initiatives, facial recognition technology has been successfully used to positively identify terrorists and track their movements under a prevention mindset, while also being deployed directly in airports to identify individuals as part of security protocol under a deterrence mindset. And while the human brain can perhaps identify and compare only a few faces at a time, many of these systems can autonomously and accurately complete this process for several hundred faces per second, significantly bolstering our digital security in public spaces. When considering high-traffic areas such as airports especially, this combination of efficiency and accuracy becomes increasingly important for detecting individual terrorists among broader crowds.
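To make the landmark-and-distance idea above concrete, here is a minimal sketch in Python. The landmark coordinates, identities, and matching threshold are all made-up placeholders, and this is illustrative only, not INTERPOL's actual pipeline; production systems typically rely on learned deep embeddings rather than raw landmark distances.

```python
# A minimal sketch of the landmark-distance approach described above.
# All coordinates, identities, and the threshold are hypothetical.
import numpy as np

def facial_signature(landmarks):
    """Turn an (n, 2) array of facial landmark coordinates into a
    signature vector of pairwise distances between every landmark pair."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]   # (n, n, 2)
    dists = np.linalg.norm(diffs, axis=-1)                  # (n, n)
    upper = np.triu_indices(len(landmarks), k=1)            # unique pairs only
    return dists[upper]

def best_match(query, database, threshold=5.0):
    """Compare a query signature against stored signatures and return the
    closest identity if it falls within a (hypothetical) distance threshold."""
    best_id, best_dist = None, float("inf")
    for identity, signature in database.items():
        dist = np.linalg.norm(query - signature)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist < threshold else None

# Example usage with made-up landmark coordinates: two stored faces and
# one face "captured" from a camera frame.
rng = np.random.default_rng(0)
stored = {name: facial_signature(rng.uniform(0, 100, size=(68, 2)))
          for name in ("subject_A", "subject_B")}
captured = facial_signature(rng.uniform(0, 100, size=(68, 2)))
print(best_match(captured, stored))  # None when no signature is close enough
```

The same structure applies at scale: each camera frame is reduced to a compact numeric signature, and matching becomes a fast nearest-neighbor lookup rather than a slow visual comparison.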
Impact to the Digital Battlefield
The global response to 9/11 has accelerated both technological innovation and our reliance on technology to solve our toughest challenges, such as applying facial recognition to counterterrorism. Technological advancements, increasing research, and improved computational power have made facial recognition a viable counterterrorism tool, enabling faster autonomous decision making. With such diverse applications, facial recognition technology for security has even been commercialized as a feature for logging onto phones and computers.
But there are risks, and this technology is not perfect. Even today, we see facial recognition falling victim to algorithmic bias, where structural and systemic biases in data collection are learned by the associated models. Many people have seen these biases in practice, including popular examples such as flaws in Twitter's picture-preview cropping algorithm, which identifies the most prominent person or region in a picture before cropping the preview around it. While this may not be the most impactful example in comparison to counterterrorism, the same risk factors hold true when considering facial recognition in a security setting. We want an accurate product, but we also want one that promotes the common good while minimizing harm. This means, for example, a system that does not unfairly flag specific demographic groups as inherent risks, and avoiding these sorts of flaws means paying careful attention during the data collection process in order to build a truly generalizable model. These forms of bias are no secret, however, and significant steps have been taken in industry to improve and augment datasets in order to build more ethical models.
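One simple way to surface this kind of disparity is to compare error rates across demographic groups. The sketch below shows one such audit; the group labels, match outcomes, and numbers are purely illustrative placeholders, not results from any real system.

```python
# A minimal sketch of auditing a face-matching model for demographic bias
# by comparing false match rates across groups. The data is a toy example.
import pandas as pd

# Each row: the demographic group of the probe image, whether the model
# declared a match, and whether that declared match was actually correct.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "A"],
    "matched": [True, False, True, True, True, False, True, False],
    "correct": [True, False, False, False, True, False, False, False],
})

# False match rate per group: fraction of declared matches that were wrong.
false_matches = results[results["matched"] & ~results["correct"]]
fmr = (false_matches.groupby("group").size()
       / results[results["matched"]].groupby("group").size()).fillna(0)

print(fmr)  # a large gap between groups signals biased behavior
```

In a security context, a gap like this translates directly into one group being stopped, searched, or investigated more often through no fault of its own, which is why these audits belong in the evaluation loop rather than after deployment.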
Facial recognition is just one technology applied to counterterrorism. Another common example is natural language processing (NLP) for monitoring social media data, a practice that has even been adopted by many school systems around the United States. As the amount of data grows, the risk of cyber-terrorist attacks has increased in parallel, alongside AI/ML-based solutions to counter them. As the risk on the physical battlefield remains great, the growing risk on the digital battlefield highlights the need for responsible technology solutions as a key player in our security measures across all levels of potential engagement.
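As a simplified illustration of the NLP-based monitoring mentioned above, the sketch below trains a basic text classifier that flags posts for human review. The training posts, labels, and threshold behavior are toy placeholders, not a real moderation dataset or any agency's actual system.

```python
# A minimal sketch of flagging social media posts for human review with a
# TF-IDF + logistic regression classifier. All examples below are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "had a great time at the game tonight",
    "check out my new recipe blog",
    "instructions for building the device are attached",
    "we will strike the target at dawn",
]
train_labels = [0, 0, 1, 1]  # 1 = flag for human review (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

new_posts = ["meet at the usual place before dawn"]
print(model.predict_proba(new_posts)[:, 1])  # probability a post gets flagged
```

Even in this toy form, the design choice matters: the model only prioritizes content for human reviewers, rather than taking automated action, which is one way such systems keep a person in the loop.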