Artificial intelligence has drawn significant attention in society, offering us the opportunity to exploit untapped information in data to reveal previously unknown, and in some cases, unexpected insights. Sheldon H. Jacobson discusses opportunities for artificial intelligence in aviation security, exploring where it may be most effective and some of the pitfalls of its adoption.
Artificial intelligence with its applications in such diverse areas as transportation (autonomous vehicles), medicine (healthcare diagnosis and treatment planning), finance (high-speed trading), and marketing (consumer buying patterns) is ubiquitous and growing. Numerous industries are jumping onto the artificial intelligence bandwagon, seeking competitive advantages that can be unlocked by making sense of the plethora of large (big) data sets being collected and made available.
The term ‘artificial intelligence’ connotes the ability of a computer to think and respond like a human mind. There are in fact many types, levels, and shades of artificial intelligence, including computer vision, pattern recognition, and speech interpretation, as well as reasoning, decision-making, and optimal planning. The aspirational goal is general intelligence, which includes critical thinking and intuitive decision-making that meet or exceed the capabilities of the human brain. When IBM’s Watson computer system played the game Jeopardy against two recognised Jeopardy champions in 2011, Watson won convincingly. This led some to believe (albeit incorrectly) that artificial intelligence had truly arrived. What this exhibition demonstrated was that given a highly focused objective (i.e., recalling answers to questions that require information without insight), an artificial intelligence algorithm can outperform the human mind in both accuracy and speed of response. However, building artificial intelligence algorithms that exploit computing speed and computer memory does not directly translate into capturing the complex functions and decision-making that human brains are capable of performing.
Artificial intelligence methodologies are quite broad, with many in use for several decades, even though they have not always been classified as artificial intelligence. For example, meta-heuristic optimisation methods like simulated annealing (a probabilistic search method inspired by the annealing process in metallurgy) and genetic algorithms (a search method inspired by natural selection in biology) have been effective in solving hard problems that human solvers are incapable of addressing in a realistic amount of time. More recently, machine learning and deep learning algorithms have drawn attention by how they are able to use massive data sets to be trained, providing human-like insights. The models employed by such algorithms, based on probability, statistics and neural networks, have been in existence for decades, yet only recently have they become feasible and practical to apply, with a long and growing list of impressive results in the field.
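To make the meta-heuristic idea concrete, the following is a minimal simulated annealing sketch. The toy objective, cooling schedule, and parameter values are illustrative assumptions, not any deployed system; the point is only the mechanism of probabilistically accepting worse solutions while the ‘temperature’ cools:

```python
import math
import random

def simulated_annealing(objective, start, neighbour, t0=10.0, cooling=0.95, steps=500):
    """Minimise `objective` by probabilistically accepting worse moves.

    A worse candidate is accepted with probability exp(-delta / T), which
    lets the search escape local minima; T shrinks each step (the 'cooling'),
    so the search gradually becomes greedy.
    """
    current, best = start, start
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = objective(candidate) - objective(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if objective(current) < objective(best):
                best = current
        t *= cooling
    return best

# Toy example: minimise (x - 3)^2, so the search should end near x = 3.
random.seed(0)
result = simulated_annealing(
    objective=lambda x: (x - 3) ** 2,
    start=0.0,
    neighbour=lambda x: x + random.uniform(-1, 1),
)
```

The same accept-or-reject skeleton applies to genetic algorithms and other meta-heuristics; only the move-generation and selection rules change.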
One aspect of artificial intelligence that has been demonstrated to be highly effective is identifying patterns in data that humans are challenged to efficiently identify, and using such patterns to create insights (or, informally, connect the dots). The belief that data is information is flawed, since data represents facts, while information is insights extracted from the analysis of data. Indeed, data often contains useful information; procedures to extract such information are the value-added for artificial intelligence. Machine learning and deep learning algorithms go one step further by using models trained on massive data sets to create connections that can be exploited to predict, forecast, and behave in manners similar to human intelligence. If done well, systems can be created whose performance is superior to humans, reducing human error in decision-making and improving the entire decision-making process.
Aviation security presents an intriguing opportunity for us to apply artificial intelligence to gain advantages to thwart would-be intruders and enhance the protection of the aviation system. First, there is a tremendous amount of data available on various types of threats, passengers, baggage, and the aviation system as a whole. Second, although threats can and do evolve over time, there are numerous threat signatures (in shape and composition) that can be classified for training artificial intelligence algorithms. Third, human decision-making plays a critical role in aviation security operations. As such, artificial intelligence has the potential to reduce, if not eliminate, the human error component in passenger and baggage threat identification decisions, effectively reducing both false alarm and false clear rates that are attributable to human error. These factors position artificial intelligence as a viable target for application to enhance aviation security system performance.
Threat detection relies heavily on human operators interpreting information from some type of screening device – often based on X-ray, computed tomography (CT), or millimetre wave technology – that projects images onto a monitor for interpretation. Automated threat recognition systems are designed to reduce false alarm and false clear rates by improving the interpretation of such images and assisting the human decision-maker in assessing the risk of the items being screened. This is an area where artificial intelligence has the potential to improve threat detection, given that the training used by human security operators can be magnified and enhanced in the training of artificial intelligence algorithms to make threat detection decisions with more sensitivity (true alarm) and specificity (true clear). Moreover, with artificial intelligence, monitors are superfluous (they are only needed for human visual interpretation), since the data generated to create the monitor images is what artificial intelligence requires. The benefit of artificial intelligence algorithms is that significantly larger data sets can be used to train them, far beyond what a human security officer can digest and interpret in a realistic amount of time.
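The two metrics named above can be computed directly from screening outcomes. The sketch below uses hypothetical labels (1 = threat, 0 = benign) purely for illustration:

```python
def sensitivity_specificity(actual, predicted):
    """Compute detection metrics from binary outcomes (1 = threat, 0 = benign).

    Sensitivity (true-alarm rate): fraction of threats correctly flagged.
    Specificity (true-clear rate): fraction of benign items correctly cleared.
    """
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcomes: one missed threat, one false alarm.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(actual, predicted)
# sens = 3/4 = 0.75, spec = 5/6 ≈ 0.833
```

Raising sensitivity without sacrificing specificity (and vice versa) is exactly the trade-off that larger training sets are meant to improve.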
“…systems can be created whose performance is superior to humans, reducing human error in decision-making and improving the entire decision-making process…”
Another area where artificial intelligence can be useful is in detecting features and characteristics of potential terrorists – the human and behavioural aspect of security screening. Identifying terrorists among a large pool of passengers is akin to finding needles in haystacks, since most travellers are benign, with no nefarious intent. Given that there is significant information collected about all travellers, such massive amounts of data can be used to train artificial intelligence algorithms to identify anomalies that alert security officers to passengers warranting additional attention and scrutiny. Artificial intelligence can also be employed to enhance behavioural detection systems to create a more complete analysis of all passengers, and further partition the haystack so that a smaller group of passengers require enhanced security screening and attention. In the United States, artificial intelligence may breathe new life into the Transportation Security Administration’s (TSA) SPOT (Screening of Passengers by Observation Techniques) programme, improving its overall effectiveness and justifying its implementation. Of course, when using passenger information, issues related to privacy and fairness may become a concern. Addressing such ethical issues requires a thorough analysis of the costs and benefits of using such information, which can only be resolved based on laws and standards for sharing data.
Artificial intelligence can also be effective in support of risk-based security strategies. Given that the majority of travellers pose no risk to the air system, risk-based security manages risk and resources so that they are appropriately aligned. The TSA’s PreCheck is a highly visible risk-based security programme. PreCheck offers passengers the opportunity to be voluntarily vetted, and if approved, subjected to a lower level of security screening (referred to as expedited screening). Those not vetted are required to undergo standard screening (referred to as enhanced screening). Artificial intelligence has the potential to be used to create multiple classes of security vetting, resulting in multiple classes of physical screening at airports, effectively enlarging the enhanced and expedited screening lanes into multiple screening options. Given that the majority of travellers are not terrorists, this will result in a larger proportion of passengers being assigned to one of the expedited screening classes, leaving the most unknown travellers (among whom terrorists are most likely to be found) in the enhanced (or even, super-enhanced) screening lanes. Any system that more carefully partitions passengers based on their risk results in a better utilisation of security resources, and provides security officers the ability to direct more attention to the riskiest class of passengers, or those passengers for which little or no information is available. Such an aggressive risk-based enhancement may also provide an enhanced deterrence effect, since would-be terrorists may perceive the bar to be raised in circumventing security screening protocols. To actuate such a transformation will require enhanced identity verification, such as biometrics, which also reduces the human error element in assessing each passenger’s identity.
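At its core, the multi-class idea is a mapping from a vetted risk score to a screening lane. The class names echo those above, but the thresholds and scores below are illustrative assumptions only; a real system would calibrate them against lane capacity, threat intelligence, and measured error rates:

```python
def assign_screening_class(risk_score):
    """Map a vetted risk score in [0, 1] to a physical screening class.

    Thresholds are illustrative; lower scores reflect more complete,
    favourable vetting information about the passenger.
    """
    if risk_score < 0.2:
        return "expedited"
    if risk_score < 0.5:
        return "standard"
    if risk_score < 0.8:
        return "enhanced"
    return "super-enhanced"  # little or no information available

# Hypothetical passengers with vetted risk scores.
passengers = {"A": 0.05, "B": 0.35, "C": 0.65, "D": 0.95}
lanes = {name: assign_screening_class(s) for name, s in passengers.items()}
```

Because most travellers would score low, most land in the expedited or standard classes, concentrating security attention on the few passengers about whom the least is known.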
Indeed, artificial intelligence algorithms can efficiently analyse large amounts of disparate data to make passenger risk assessments more efficient, effective, and accurate. Given that terrorists will work to gain access to any of the expedited screening classes, artificial intelligence algorithms will be the cornerstone of the vetting process, to ensure that such efforts are thwarted.
Technologies and procedures that can improve aviation security, while reducing inconvenience to travellers, are of significant benefit to secure the entire air system. Artificial intelligence is poised to take the lead in enhancing aviation security beyond the limitations posed by human decision-makers. However, like all good things, more is not always better. Indeed, the optimal security strategy is to provide minimal levels of security commensurate with passenger risk. A strategic view of aviation security suggests that just because artificial intelligence can be used to enhance a particular security tactic does not mean that it should be applied. For example, artificial intelligence may be effective in reducing false alarm rates for a particular threat, but the cost in doing so in terms of time spent screening and its impact on passenger satisfaction may not justify such an application. Indeed, artificial intelligence should not be used as a hammer in search of nails, but rather as a carefully crafted tool for precision enhancements in security. Artificial intelligence may also be useful in creating opportunities that are outside of the traditional security paradigm. For example, it may be possible to sequester some travellers such that they require no physical airport screening, a privilege typically afforded only to a small group of travellers (such as, in some locations, airline pilots). With enhanced vetting and biometric identity verification, such a transformation may be possible. In addition, traditional screening lanes may be replaced with security transit hallways whereby passengers pass on their way to their boarding gate, with artificial intelligence algorithms providing sufficient information to assess and identify only those passengers that need to be directed into more formal physical security screening lanes.
“…artificial intelligence should not be used as a hammer in search of nails, but rather as a carefully crafted tool for precision enhancements…”
Given the enormous investment made to harden the airside of airports, recent attacks have focused on the landside. Such non-sterile areas are highly vulnerable, due to the large volume of people who can gain access to them. The widespread use of cameras and monitoring devices provides real-time surveillance of such areas. Artificial intelligence may deepen the analysis of such data, gleaning insights that provide better deterrence, protection, and response to mitigate terrorist attacks.
One challenge in developing effective uses for artificial intelligence to improve aviation security is accessing the human capital talent to make such advances. There is a severe (and growing) shortage of artificial intelligence researchers and developers available to meet all the demands for such skills in industry and government, creating a highly competitive environment for such talent. As such, aviation security needs may have to be met with commercial off-the-shelf products rather than customised artificial intelligence tools. This creates its own set of problems, since, to be effective, aviation security requires its procedures to be standardised (for the benefit of passengers and security officers) and unpredictable (to keep would-be terrorists off balance). This delicate trade-off may make it difficult for commercial off-the-shelf products to be successfully deployed. The sensitivity of aviation security strategies in general also makes such an approach even more challenging.
Given that artificial intelligence may augment or even replace human decision-making in the field, testing such algorithms will also be challenging. Evaluation conducted in controlled environments, or using red teams, is common in today’s aviation security environment, yet never fully captures the nuances faced in actual aviation security operations. This may impede the implementation of artificial intelligence algorithms and even limit their use in the field.
Another concern is that artificial intelligence may be exploited by terrorists in their own approaches to circumvent and breach aviation security operations. This creates an adversarial environment whereby the very tools used to enhance aviation security operations may also be used to penetrate it. Moreover, given that machine learning algorithms require large data sets to train, infiltration of such data sets by perpetrators, or benign errors in how such data is chosen and collected, can lead to such algorithms being ‘mis-trained’, resulting in flawed field performance. Along these lines, continuously updated data sets will need to be used to retrain such algorithms, as terrorist threats evolve. Such possibilities mean that the testing and evaluation of artificial intelligence algorithms in the field may also require numerous adversarial scenarios to be evaluated to identify potential security holes and how security breaches can be prevented.
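The effect of a ‘mis-trained’ algorithm can be illustrated with a deliberately tiny model. The nearest-centroid classifier, the one-dimensional data, and the flipped labels below are all hypothetical stand-ins, chosen only to show how a few corrupted training labels shift a model’s decision boundary:

```python
import statistics

def nearest_centroid_predict(train, labels, x):
    """Classify x by its nearest class mean (a minimal stand-in for a trained model)."""
    centroids = {}
    for cls in set(labels):
        pts = [v for v, lbl in zip(train, labels) if lbl == cls]
        centroids[cls] = statistics.mean(pts)
    return min(centroids, key=lambda c: abs(x - centroids[c]))

# Clean training data: benign items cluster near 0, threats near 10.
train = [0.1, 0.3, 0.2, 9.8, 10.1, 9.9]
clean = ["clear", "clear", "clear", "alarm", "alarm", "alarm"]

# Poisoned labels: two threat examples are relabelled as benign,
# dragging the 'clear' centroid toward the threat cluster.
poisoned = ["clear", "clear", "clear", "clear", "clear", "alarm"]

borderline = 6.0  # an item partway between the two clusters
with_clean = nearest_centroid_predict(train, clean, borderline)     # "alarm"
with_poison = nearest_centroid_predict(train, poisoned, borderline)  # "clear"
```

The same borderline item that a cleanly trained model flags is waved through by the poisoned one, which is why both data provenance and periodic retraining against vetted, current data matter.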
“…artificial intelligence may be exploited by terrorists in their own approaches to circumvent and breach aviation security operations…”
In spite of such challenges, artificial intelligence algorithms have already been effective in improving aviation security operations. Mathematical models employing artificial intelligence, including game theory, real-time decision-making under uncertainty, and optimisation, have been effective in allocating security resources, partitioning passengers into security classes, and optimally assigning security technologies to airports. Academic researchers have contributed much of this work, with industry and government agencies the willing and eager consumers of such knowledge and methods. As the breadth of artificial intelligence algorithms grows, and data becomes available to support such algorithms, an ever-expanding frontier of aviation security opportunities will present themselves. Successful applications of artificial intelligence will also create a deterrence effect among would-be perpetrators, since such systems may be perceived to be more difficult to ‘game’ and breach. Clearly, the future of artificial intelligence in aviation security is bright. The only limitations are the innovations of researchers to identify new ways to exploit the surfeit of data available, and the willingness of decision-makers to take bold steps forward in realising its full potential.
Sheldon H. Jacobson, Ph.D. is a founder professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign (USA). Dr. Jacobson has been working on the design and analysis of aviation security systems using optimisation-based artificial intelligence models since 1995. He has received numerous awards for his research, including a John Simon Guggenheim Memorial Foundation Fellowship, the IISE Award for Technical Innovation in Industrial Engineering, and the INFORMS Impact Prize for his contributions to risk-based security. He has published over 260 refereed journal articles, book chapters, professional publications, and conference proceedings, and delivered over 480 presentations/seminars/panels/posters at conferences, research labs, workshops, and universities around the world. He is an elected Fellow of the Institute of Industrial and Systems Engineers (IISE) and the Institute for Operations Research and the Management Sciences (INFORMS).