The Challenges in AI for Aviation Security Intelligence

Richard Mayne’s role as lead data scientist is to help the company get more from its data and to automate some of its laborious data processes. He is involved in writing software that applies statistical and/or artificial intelligence techniques to business problems, communicating whatever insights are uncovered and, if necessary, helping to deploy the software at scale. He also regularly surveys new developments in the field to ensure that Osprey remains at the cutting edge of machine and deep learning technologies. Mayne started his career as a biomedical scientist specializing in histopathology before retraining in computer science. He spent five years as a postdoctoral researcher in unconventional computing and a brief period as a statistics lecturer, before moving into data science in 2019 and joining Osprey in 2021.

As a data scientist, the bulk of my work involves gathering, structuring and interrogating data on large scales, usually through the use of artificial intelligence (AI) techniques.

Our end goal is to find hidden mathematical features which we then convert into useful insights and products; we tend to prefer using AI techniques over conventional statistics because the former excel when applied to very large and/or complicated data sets.

It is with regret that I now regularly encounter articles in popular media sources decrying the use of AI technologies. A cursory search in the technology section of my chosen national newspaper’s website reveals six articles written within the last eight weeks, all of which have derisory titles and no small amount of hyperbole in their body text.

What riles me about the aforementioned newspaper articles is that I don’t get the sense that their authors accurately represent the true challenges in our field, of which there are many. A year or two ago, I might have posited that our industry’s challenges were primarily mechanical: writing computer software is, in my opinion, a challenging career choice for the mathematicians and scientists who choose to specialize in data science. Communicating complex numerical output in a format that non-data scientists may easily interpret is also tricky, as is keeping pace with the rapid move towards cloud-based big data applications for our craft. I only hear about these challenges from my colleagues, however, and hardly ever from media sources or lay persons.

Instead, I more commonly read about the notoriety generated by AI in recent years which, I cannot deny, is partly justified following revelations that many unscrupulous organizations have used machine and deep learning algorithms to snoop, spy, attack and target individuals with malicious intent. Mass public awareness of these events has, understandably, damaged the credibility of AI and those who employ it.

This presents an enormous challenge to those of us who are attempting to leverage AI for purposes we can demonstrate to be helpful and necessary, be it accelerating vaccine development, writing simulations of molecular interactions inside next-generation batteries or generating intelligence from publicly available sources to help air operators reduce the risk to their flights.

I will argue here that we as an industry are now facing a moral imperative to adopt, trust and champion the use of AI for making transport safer for all.

Why Hasn’t the Industry Adopted AI Already?

Practitioners in AI ethics, which is an enormous and fascinating academic field, have published extensively on the factors affecting the rate at which AI technologies are rolled out into our society. A huge number of variables come into play, ranging from neo-Luddite claims that AI removes human jobs from the economy to more mechanical issues, such as the resource-hungry nature of deep learning algorithms, which until recent years necessitated prohibitively expensive computing power.

My own perception is that the slow uptake of these technologies within our industry is primarily a matter of percolation: just as it takes time to brew coffee by letting gravity draw water through grounds, so too will it take time for new ideas to filter into our daily practice. Surely, then, is it not just a matter of time before we’re all talking about neural networks and sitting back while our machines do all of the boring, manual work we’ve been itching to be rid of? At the risk of torturing my own metaphor, I believe the answer to this hypothetical is “no”: we data scientists must be the gravity, working against the opposing forces of public distrust and a general lack of awareness.

It is to rebutting the most popular anti-AI arguments that I devote the remainder of this article.

Arguments for AI

We now generate too much data not to use AI.
As I’m sure all are now aware, the scale of data generation and storage nowadays is utterly staggering. Even focusing on the small corner of this vast resource that I work with day to day, i.e. aviation data, it is unthinkable that we could now track the sheer volume of flights, and of events pertaining to their safety, without the aid of data science.

For example, at Osprey we use deep learning techniques to read all of our incoming text data streams and automatically interpret them, for the purposes of cleaning (e.g. nullifying the effects of spelling mistakes and synonyms), structuring (detecting entities such as countries and weapons) and classification into logical groups. This allows us to vastly increase the rate of data influx, automatically identify trends and reduce the burden of manual work on our team of expert security analysts.
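Osprey’s pipeline is proprietary, but the flavor of this structuring step can be sketched in a few lines of Python. The example below uses the open-source spaCy library for entity detection; the synonym map and report text are hypothetical illustrations, not our production rules.

```python
# A minimal, illustrative sketch of the kind of text-structuring step
# described above -- NOT Osprey's actual pipeline. Assumes spaCy and its
# small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical synonym map for cleaning: collapse variant spellings of
# the same concept onto one canonical token.
SYNONYMS = {"a/c": "aircraft", "acft": "aircraft", "uav": "drone"}

def structure_report(text: str) -> dict:
    """Clean a raw incident report and extract simple entities."""
    cleaned = " ".join(SYNONYMS.get(tok.lower(), tok) for tok in text.split())
    doc = nlp(cleaned)
    return {
        "text": cleaned,
        # GPE = countries, cities, states in spaCy's entity label scheme.
        "locations": [ent.text for ent in doc.ents if ent.label_ == "GPE"],
        "dates": [ent.text for ent in doc.ents if ent.label_ == "DATE"],
    }

print(structure_report("UAV sighted near the airport in Libya on 12 March"))
```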

We have little recourse but to use our new toolbox of AI software, which is designed to handle data at this scale. Failing to do so would deny our clients access to the highest-quality tools for making decisions on which human lives depend.

AI Does Not Replace Human Intelligence

The role of the data science team at Osprey is not to replace analysts with software, but to assist them through automating cumbersome, simple tasks and hence allowing them to better apply their talents to interpretation, calculation and communication of risk to our clients.

More generally speaking, our AI can only do what humans do: one could, in theory, calculate information flows through machine and deep learning models by hand because the calculations underlying anything a computer does are, at base, extremely simple, although this would of course be grossly impractical.
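To make this concrete, consider a single artificial neuron, the basic building block of a deep learning model. Its entire computation is a weighted sum followed by a simple comparison, all of which could be done with pen and paper; the numbers below are, of course, purely illustrative.

```python
# A single artificial neuron, computed with nothing but arithmetic
# anyone could do by hand. All values are illustrative.
inputs  = [0.5, 0.2, 0.1]   # e.g. three numeric features of an event
weights = [0.4, 0.3, 0.9]   # learned parameters
bias    = 0.1

# Weighted sum: 0.5*0.4 + 0.2*0.3 + 0.1*0.9 + 0.1 = 0.45
z = sum(x * w for x, w in zip(inputs, weights)) + bias

# A common "activation" function: 0 for negative z, z otherwise (ReLU).
output = max(0.0, z)
print(output)  # 0.45
```

A deep learning model is millions of these operations chained together; the scale is what makes hand calculation impractical, not any mystery in the individual steps.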

I do not wish to patronize readers by clearly demarcating the boundaries of science fiction and reality, but increasingly I am given the impression that there is a very real fear that our AI is “intelligent” in a way that can be equated with human intelligence. Authors may speak of “AI taking over,” or otherwise ruefully report that new software can do something scary, such as generate realistic synthetic text, pictures or music. This example wasn’t picked at random: these are real technologies, but all such applications are entirely guided by their programmer; there is no machine volition behind this behavior.

Artificial intelligence refers to mimicry of the information processing that living organisms (or their components) do. Machine learning is a set of algorithms (mathematical “recipes”) whereby the computer keeps track of how well it is doing on successive repeats of the same calculations (usually in comparison with some example data or other mathematical standard provided by a human) and “learns” successful strategies for increasing its score by minutely, randomly changing its operating parameters. Deep learning is the same thing, but uses conceptual building blocks that mimic the structure and function of the human brain. We have chosen to equate “skill with mathematical operations” here with “intelligence”; emulation of consciousness, conversely, is still very much the preserve of fiction for now.
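That score-keeping loop is easier to grasp in code. Below is a deliberately toy version of the idea, assuming nothing beyond the Python standard library: the program fits a single parameter to example data by keeping small random changes that improve its score. (Real systems typically follow gradients rather than pure random search, but the principle of iterative score improvement is the same.)

```python
# A toy "learning" loop in the spirit described above: the program
# scores itself against example data and keeps small random parameter
# changes that improve the score. Illustrative only.
import random

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and targets (y = 2x)

def score(w):
    """Negative squared error: higher is better."""
    return -sum((w * x - y) ** 2 for x, y in examples)

w = 0.0
for _ in range(10_000):
    candidate = w + random.uniform(-0.01, 0.01)  # a minute, random change
    if score(candidate) > score(w):              # keep it only if it helps
        w = candidate

print(round(w, 2))  # converges towards 2.0
```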

In summary, an AI entity might process a great deal of aviation data comparatively quickly, but it cannot truly understand that data without human supervision.

AI Is Literally Everywhere, and Not Always Deployed by the Good Guys

My argument above on the nonexistence of thinking, feeling artificial constructs is not to diminish the risks of AI that is deployed with intent to harm, such as self-evolving spyware, weaponized robots and mass surveillance.

I am continually surprised at the lack of public awareness of AI. This is partly down to us data scientists — I mentioned that one of our profession’s biggest challenges is communication — and partly because large companies and governments don’t wish to invite public scrutiny of how they use their data. It is therefore with some amusement that I read tweets from people casting aspersions on public health policies on “privacy” grounds, when the device used to post said tweet logs phone calls, text messages, internet search history, spending habits, geographical location, fingerprints, iris scans, voice patterns and more, all of which is being constantly uploaded and assessed by hundreds of machine and deep learning models.

Let us consider how we may use AI more constructively. At Osprey, one of our flagship data science products (Squawk) is software that reads all of our incoming data in chunks: not just text is interpreted, but also its metadata, such as location, risk classification, frequency of alerts for this region, and so on. This model excels at detecting events that are out of the ordinary: an unusual increase in activity, a single event in isolation that is extremely rare, or even a combination of factors. Crucially, this system can be configured to examine whole countries, airspaces or even individual airports, sending alerts whenever an anomalous event is detected. We have found this system to be particularly useful for helping to identify noteworthy events, both among seas of non-remarkable information and hidden within tempestuous, rapidly changing data, particularly in conflict zones. Using AI to glean beneficial use from data that so frequently arises from violence, catastrophe and unrest is exactly what all companies in our field should aspire to.
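Squawk itself is proprietary, so the sketch below is only a hedged illustration of the core idea: flag a region whose latest alert frequency deviates sharply from its own recent baseline. The window, threshold and counts are hypothetical assumptions, not Osprey parameters.

```python
# A hedged sketch of frequency-based anomaly detection of the kind
# described above -- NOT Squawk itself. Flags a region whose latest
# daily event count sits far outside its own recent baseline.
from statistics import mean, stdev

def is_anomalous(daily_counts: list, threshold: float = 3.0) -> bool:
    """True if the most recent count is a `threshold`-sigma outlier."""
    *history, latest = daily_counts
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:                      # flat history: any change is notable
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Hypothetical counts of security alerts for one airspace over 8 days.
counts = [2, 3, 2, 4, 3, 2, 3, 11]
print(is_anomalous(counts))  # True -- the spike on day 8 stands out
```

A production system layers far more on top of this (text, metadata, risk classifications, combinations of factors), but the principle of comparing the present against a learned baseline is the same.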

Conclusion

I have summarized here the major challenges faced by our industry on the road to mass adoption of AI technologies, which include misconceptions, entrenched opinion and substandard communication. The facts that we must work with vast quantities of data on a daily basis, that AI will never replace humans and that we must adapt to a technology-laden world are, in my opinion, essential reasons for building our industry around these technologies, as we are now aware that we simply cannot provide superlative security or risk intelligence without them.

About Osprey Flight Solutions

Founded in 2017, Osprey fuses real-time information, technology and industry-leading expertise to deliver the most advanced aviation risk analysis available anywhere. Its data-driven approach provides instant situational intelligence to power dynamic decision-making. Being able to see, understand and react to threats as they emerge sets a new standard for ensuring the safety and security of passengers, crew and aircraft. Because risk isn’t static in a fast-moving, turbulent world.