“We’re at the beginning of a golden age of AI. Recent advancements have already led to inventions that previously lived in the realm of science fiction — and
we’ve only scratched the surface of what’s possible.”
— Jeff Bezos, Amazon founder and executive chairman
The use of artificial intelligence (AI) in the domain of aviation security (AvSec) is growing but still limited. For this feature, we reached out to industry experts for an update on the implementation of AI in AvSec, the threats it may pose and the precautions it requires, as well as a look at future applications.
Growing Use and Limited Applications
Richard Mayne, lead data scientist at Osprey Flight Solutions, affirms that the use of AI is growing in the AvSec field but is still limited in application. “Machine learning particularly is used across the screening and scanning functions, applying biometric identification capabilities, improving detection of unauthorized objects in baggage and other related activities,” he says.
As to the current extent and applications of AI in AvSec, Andrew Cox, an R&D systems analyst at Sandia National Laboratories, observes that the U.S. Transportation Security Administration (TSA) uses machine learning algorithms specializing in object detection for threat detection in baggage and on-person screening, as well as in identity verification systems.
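To make the idea concrete, the short Python sketch below trains a toy threat-detection classifier on synthetic, hypothetical "scan features". It is purely illustrative: real screening systems such as the TSA's operate on X-ray and CT imagery with far more sophisticated models, and nothing here describes those systems.

```python
# Minimal, illustrative sketch of a threat-detection classifier.
# All data is synthetic and the features are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Pretend each bag scan has been reduced to a feature vector
# (e.g. material density, shape descriptors, metallic content).
n_scans = 2000
X = rng.normal(size=(n_scans, 6))
# Synthetic ground truth: a small fraction of scans contain a prohibited item.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_scans) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Report detection performance on held-out scans.
print(classification_report(y_test, clf.predict(X_test), digits=3))
```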
Indeed, there is still a way to go to realize the power that AI can bring to keeping the aviation sector safe and secure, according to Mayne. “And there is a fundamental gap, an area in which AI is enabling game-changing capabilities, but for which the uptake in aviation is relatively small,” he affirms. “Understanding the risks the system faces, monitoring those risks and making efficient operational decisions that are accurately informed by broad spectrum and highly specific data sets is a far more critical function.”
It is risk management that accurately judges the need for specific controls or mitigation measures, such as improved scanning and identity detection capabilities, Mayne observes. “The industry is gradually adapting to using AI, predominantly for optimizing the process of gathering, cleaning and structuring open-source data, in efforts to reduce the monetary and time investment required for this most fundamental of processes,” he says. “As a forerunner in applying data science techniques to the AvSec industry, we believe that such applications only scratch the surface of what is possible and do not yet offer any meaningful analysis. More specifically, in addition to simply improving workflow efficiencies, AI has the potential to offer advanced pattern identification, identification of new metrics for risk, forecasting of adverse events and behavioral analysis of key actors, to name but a few of the applications that the industry could benefit from.”
Threats and Precautions
The implementation of AI in AvSec comes with unique challenges: improper use of AI can introduce threats that call for specific precautions.
Cox affirms that a wide variety of challenges can arise from applications of AI, and that the best way to manage them is careful, comprehensive data collection and curation, together with an algorithm testing regime specifically designed to discover and correct any flaws. “One of the advantages of TSA’s open architecture approach is that it can more easily monitor the performance of algorithms to catch potential challenges before they become operational challenges,” he says.
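The kind of performance monitoring Cox describes can be sketched very simply: score an algorithm against a labelled audit batch at regular intervals and flag it for review when agreed floors are breached. The thresholds, data and function names below are invented for illustration and are not TSA figures or procedures.

```python
# Illustrative sketch of periodic algorithm performance monitoring.
# Floors and audit data are hypothetical.
import numpy as np
from sklearn.metrics import recall_score, precision_score

DETECTION_FLOOR = 0.90   # hypothetical minimum acceptable detection rate
PRECISION_FLOOR = 0.80   # hypothetical minimum acceptable precision

def review_algorithm(y_true: np.ndarray, y_pred: np.ndarray) -> list[str]:
    """Return warnings if the algorithm falls below the agreed floors."""
    warnings = []
    detection_rate = recall_score(y_true, y_pred)
    precision = precision_score(y_true, y_pred, zero_division=0)
    if detection_rate < DETECTION_FLOOR:
        warnings.append(f"Detection rate {detection_rate:.2f} below floor {DETECTION_FLOOR}")
    if precision < PRECISION_FLOOR:
        warnings.append(f"Precision {precision:.2f} below floor {PRECISION_FLOOR}")
    return warnings

# Example: a labelled audit batch (synthetic values).
y_true = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 0, 1, 0])
for warning in review_algorithm(y_true, y_pred):
    print("REVIEW:", warning)
```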
It is partially due to the perceived threat of improper use that the industry has not adopted AI more wholeheartedly, according to Mayne. “It is true that it takes significant investment to convert abstruse mathematical computing techniques into a scientifically validated product that risk management professionals can begin to put faith in; indeed, media coverage of the AI field has been almost universally negative in recent years, which only exacerbates this issue. The level of risk implied in using AI specifically for enhancing AvSec analysis is often conflated with societal and ethical considerations which are not necessarily relevant,” he says.
One should not assume that there is no risk, but perhaps it is best to consider improper use of AI in this context to be roughly equivalent to inept handling of conventional statistics, points out Mayne. “A badly implemented machine or deep learning model may be as dangerous as manipulating a crucial statistic when the insights gathered therein are used to inform policies on which lives depend,” he says. “It is our opinion, however, that with a full understanding that AI may only do one of two basic things (predict or classify), and that the analytical outputs therein are produced by potentially fallible models, the potential for miscalculation is dramatically reduced.”
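Mayne's "predict or classify" distinction can be illustrated with two toy models on synthetic data: one estimates a continuous quantity, the other assigns a discrete label. Neither is an AvSec model; the data and tasks are invented for the example.

```python
# Two toy models to illustrate the "predict or classify" distinction.
# Everything here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))

# 1. Predict: estimate a continuous quantity (e.g. a risk score).
y_continuous = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)
regressor = LinearRegression().fit(X, y_continuous)
print("Predicted value for a new observation:", regressor.predict(X[:1])[0])

# 2. Classify: assign a discrete label (e.g. threat / no threat).
y_label = (y_continuous > 0).astype(int)
classifier = LogisticRegression().fit(X, y_label)
print("Predicted class for a new observation:", classifier.predict(X[:1])[0])
```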
Future Applications
As to the future applications of AI in AvSec as the technology evolves, and the time window to be expected before they are deployed, Cox points out that the most important application of AI is threat-object detection algorithms for baggage and on-person screening. “While the TSA has already implemented such algorithms, as the TSA transitions to an open architecture technology strategy, it will more easily and rapidly update those algorithms to perform more effectively,” he says.
According to Mayne, the application of AI to the fundamental requirements of risk management (identifying, analyzing and monitoring risk) offers a huge step forward for the AvSec industry. “One can have the most advanced baggage screening capability in the world, but if the risk of someone carrying illicit or illegal goods using this method is vanishingly small, the expense and resource required to implement and manage such capabilities is not worth it,” he says. “Such a wealth of open-source data is now available that advanced analytical techniques are not only possible, but are also more feasible than more conventional, statistical techniques.”
As the scale of available data becomes truly ‘big’, conventional techniques for cleaning data and generating statistics and syntheses from them, whether manual or otherwise, will gradually be phased out in favor of AI techniques that are truly scalable, Mayne affirms. “Examples of such activities from our offerings include anomaly detection for early reporting, automated risk scoring and generation of machine-written reports. The transformation has begun, but we predict that the next five years will be a crucial milestone in the mass acceptance of these technologies,” he concludes.
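As an illustration of the sort of anomaly detection for early reporting that Mayne mentions, the sketch below flags a spike in synthetic daily report counts using a simple z-score rule. It is not a description of Osprey's actual methodology; the data and threshold are invented.

```python
# Illustrative anomaly detection on a synthetic series of daily report counts.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily counts of security-related reports for one region.
daily_reports = rng.poisson(lam=4, size=90).astype(float)
daily_reports[-1] = 15  # inject an unusual spike on the most recent day

baseline = daily_reports[:-1]
z_score = (daily_reports[-1] - baseline.mean()) / baseline.std(ddof=1)

# Flag the latest day if it deviates strongly from the historical baseline.
if z_score > 3.0:
    print(f"Anomaly flagged for early reporting (z = {z_score:.1f})")
else:
    print(f"No anomaly (z = {z_score:.1f})")
```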