Philip Baum is the Managing Director of Green Light Limited, Visiting Professor of Aviation Security at Coventry University and the former Editor-in-Chief of Aviation Security International. Green Light runs courses in T.R.A.P. (Tactical Risk Assessment of People). He can be contacted at email@example.com
For those engaged in the aviation security industry, the implementation of behavioural analysis has been a long-standing subject of debate. Whilst most recognise its value as a security technique, and even encourage members of the public to report their concerns about suspicious activity they observe, they hesitate to fully embrace it. Technology and automation, regardless of their fallibilities, prevail and serve as our first line of defence. Objections raised in respect of behavioural analysis include fears of racial profiling, the inability of regulators to test its effectiveness and an alleged absence of academic proof of concept. Yet, despite these reservations, other modes of transport are looking to the aviation industry – the purveyors of best practice in terms of transport security and certainly the sector with the greatest experience of responding to mass-casualty terrorist activity given the historic challenges it has faced – for advice and mentorship. So what, in respect of behavioural analysis implementation, are the takeaways?
Profiling, as we used to call it, was introduced to the aviation security industry in the early 1970s. The authorities responsible for screening those boarding Israeli carriers started questioning passengers in the aftermath of a series of incidents, commencing with the hijacking of an El Al flight to Algiers in 1968 and then, more significantly from a profiling perspective, the attempted bombing of three El Al flights in 1971 and 1972 (and the smuggling of a toy gun onto a flight in a cake!) through the use of naïve women who had been duped into carrying bombs on board. In the 1972 incident, the device did detonate, but it did not destroy the airliner.
It is important to note that the unwitting bomb carriers were women from Holland, Peru and the United Kingdom – not Middle Eastern men. The profiling system was created to respond to such incidents, not to focus on those who matched the stereotypical image of a terrorist. That said, many security agencies did utilise a passenger’s ethnicity, and perceived religion, as means to discriminate, and even US carriers, who adopted the Israeli profiling system in the late 1980s, did resort to focussing on passengers holding passports from Middle Eastern countries, those bearing Arabic-sounding names and female passengers travelling alone. Yet, nowadays, as the threat has evolved and our adversaries have become more sophisticated, so too has profiling become an art form that can address the threats of the future, as well as the past, specifically because it does not focus on a single attribute. It is reliant on a well-trained workforce who know that focussing on any one group is not only politically incorrect, but devoid of security sense. It is based on common sense rather than suspicious signs lists, and on an appreciation of baseline expectations rather than assessing to what extent a person matches a pre-determined profile of an individual with criminal intent.
I prefer the use of the word ‘profiling’ to phrases such as ‘behaviour detection’ or ‘behavioural analysis’, because the technique does not only require an analysis of how an individual behaves, but rather demands that the security operative assess all available information, including, but not exclusively focussing upon, behavioural indicators. The semantic debate has, however, been partially responsible for the reluctance of the aviation industry, like many others, to truly embrace it as an ICAO Standard. It is now, however, and much to ICAO’s credit, a Recommended Practice.
To those who continue to argue the case against profiling as being discriminatory, I would remind them that customs, immigration and quarantine agencies all blatantly differentiate at airports – after passengers have disembarked flights – with little opposition. We do not question the fact that certain passengers are pulled aside for further inspection in the green channel at customs or that the line we stand in at passport control may depend upon our nationality.
ICAO has, in this ‘Year of Security Culture’, been actively promoting the use of behaviour detection and encouraging States, airports and airlines to share best practice in this field. The desire is to create a security culture throughout the aviation industry, not just amongst the screeners, guards and flight attendants who perform the core frontline security roles. And that is certainly something that other modes of transport can attempt to incorporate into their operations.
Yet there are tendencies prevalent within the aviation industry which should be avoided. Let’s start with the fact that, whilst we do encourage the general public and other airport-based employees to report suspicious behaviour that they might witness in the terminal or, indeed, even in the passenger cabin, there remains a reluctance to prioritise the concerns a trained member of the security team might express; such concerns are actually more important than the alarms generated by an X-ray machine, explosive trace detection system or archway metal detector. All too often a concern raised by a screener will be ‘resolved’ based on whether the available technologies support the decision of the screener – if you can’t prove that the passenger (or employee) was carrying something prohibited or restricted, then they are normally cleared for travel or access to a security restricted area. Rarely is law enforcement summoned to address behavioural concerns and rarer still is a passenger denied boarding because they are deemed, but not proven, to be a threat to a flight.
The aviation industry is still reluctant to accept that not all explosives — in particular homemade explosives — can be detected by ETD systems, that not all weapons are metallic, and that no airport is routinely testing for the presence of chemical and biological agents, let alone the fact that not all attacks even require a weapon or IED be used in the first place. Technologies are usually certified on their ability to detect certain quantities of explosives, even though smaller quantities might successfully bring down an airliner. And, despite the fact that narco-traffickers are prepared to smuggle drugs onto aircraft in body cavities and, in extreme cases, even have them surgically implanted, and, as a result, customs agencies are prepared to use transmission X-ray when they have serious cause for concern, such technology is almost never used in pre-board screening. It’s a question of mindset. We don’t actually believe that a perpetrator will smuggle component parts of a weapon or IED through an airport checkpoint inside their body.
Those employed by the prison service — an environment where customer service, facilitation, throughput rates and time-on-task are not primary concerns — will attest to the fact that they, in what should be extremely security sensitive settings, cannot completely prevent the infiltration of prohibited and restricted articles and substances. Nor, when we quickly screen passengers en masse, can we guarantee, using technology alone, that the aviation industry is completely secure. Consequently, we must embrace an arsenal of different tactics — including technology.
However, if we are not prepared to recognise the limitations of technology, which can be addressed in part by emphasising the added value the workforce can bring to the table, a fundamental aspect of that drive towards creating a security culture is missing. Rail, maritime, mass transport, listen up — encouraging people to treat behavioural concerns seriously is key to developing a security culture and a security culture is key to security effectiveness.
The real reason the aviation industry has dragged its feet on the issue for so long is not actually the fear of its misuse — that could actually just be the excuse — but rather the challenge it poses to regulators who are understandably keen to test the effectiveness of their security operations. One can run hundreds of X-ray tests every month if one wishes, and ensure that every X-ray operator be tested periodically, but it is far more complicated to organise frequent penetration tests where screeners are expected to identify abnormal behaviours or activities. It may be more difficult, but is that a reasonable excuse not to deploy what many would argue is the most effective weapon in our security arsenal — human instinct? Knowing when something just isn’t right.
One regional airport (it cannot be named) conducts a minimum of eight penetration tests every day and ensures that all security staff are the target of a security drill, based on behavioural analysis, on a regular basis. The airport has a division dedicated to that task. As a result, staff are not worried that they might miss a terrorist, rather that they might miss a drill! And that’s great; that constant state of alertness and thinking outside-the-box encourages people to report their concerns and take action. Yet the airport in question is an exception. Most fear the impact on throughput rates or that customers might be unreasonably delayed by overly officious security operatives. That is where training comes in.
On 22 May 2017, a terrorist attack took place at the Manchester Arena in the United Kingdom. 22 people lost their lives, the majority as they were about to leave the venue at the end of an Ariana Grande concert. A public inquiry was established by the British Home Secretary in 2019 and the testimonies of all witnesses, staff on duty and management were heard. In respect of behavioural analysis, however, it was the testimonies of a few members of the general public, and two individuals employed at the venue with security responsibilities, that were the most telling. The perpetrator of the suicide bombing, Salman Abedi, was identified by numerous people as behaving suspiciously. A member of the general public was so worried that “he might let a bomb off” that he even approached Abedi himself to question him and, unsatisfied with the response, reported his concern to a security-trained steward. That steward was already worried about Abedi — and had noted Abedi’s presence over an extended period of time — but he failed to act in a timely fashion. The steward did eventually manage to discuss Abedi with a colleague who admitted at the inquiry that one of the reasons that he did not react was that he feared that he would be accused of racial profiling. The two stewards apparently even discussed, jokingly, what they would do if Abedi took a weapon out. Suffice to say, they identified Abedi as a threat but failed to take appropriate action. Abedi succeeded in carrying out his deadly attack.
The incident may have been at a music venue, but the lessons learned can equally be applied to the transportation industry. The first report was released in June 2021 and, in respect of behavioural analysis, it concludes that, “Any and all suspicious behaviour by event goers or members of the public close to a venue must be noted. It must be reported promptly so that investigations can be made and action taken, if appropriate.” Transportation industry employees need to ensure that they do not end up giving testimony to an inquiry in which they are forced to admit that they thought something was wrong, but were frightened to act due to concerns that they, or their employer, might be sued, or accused of being prejudiced.
In that vein, the report continues, “When this [aforementioned] recommendation is followed there will be false alarms. While this may be frustrating, it is important that the way false alarms are dealt with does not discourage the reporting of suspicious behaviour. If unnecessary reports are made, the remedy should be to train staff better to recognise suspicious behaviour, rather than criticise them for making the report.”
Generally, when security personnel identify somebody as a cause for concern, they will not find prohibited or restricted items on their person, or in their baggage. This does not mean that they were wrong to single the person out for further inspection, nor does it mean that the person they stopped had positive intent. An individual performing hostile reconnaissance for a future attack is unlikely to be carrying anything threatening; a person reliant on an insider to pass them a prohibited item after the checkpoint would be ‘clean’ during screening; and victims of human trafficking, suicidal pilots and intoxicated passengers are just the start of a list of reasons why somebody could have negative intent, or pose a risk, yet not be carrying anything dangerous.
The Manchester Inquiry report goes on to say, “Bomb hoaxes can cause great inconvenience, but no one can safely ignore bomb warnings. So it is with suspicious behaviour. It should only be ignored when an innocent explanation has been verified.”
Identifying suspicious behaviours, and promoting ‘see something, say something’, ‘see it, say it, sorted’ or ‘see it, hear it, report it’ style campaigns, is a great start and one to be applauded. But the real skill, and therefore challenge, is the next stage: resolving concerns.
I often ask management (in any industry) what they would like staff to do if they have concerns about an individual. “Report it,” is the standard response. That may well be an appropriate course of action in most cases, but our security personnel are there to deal with needles in haystacks — the exceptionally rare event — and are expected to respond accordingly. If faced with a marauding firearms attack or if suspicious that an individual might be, like Abedi in Manchester, about to embark on a suicidal attack against the facility, reporting it is not necessarily enough, especially if reporting something just creates an eternal chain of referrals, from team leader to supervisor to duty manager to terminal manager to, at some point, law enforcement; by then it may be too late. All security staff should, at times, be empowered to take matters into their own hands and short-circuit the reporting chain by, for example, calling law enforcement before reporting their concerns to their superiors if they deem the threat of an attack to be imminent.
If we don’t trust our staff to act appropriately, we have to question whether we are employing the right calibre staff in the first place. If we can’t even bring ourselves to address the possibility of an imminent attack, and address that challenge in training, then we have to question whether we are truly creating a security culture commensurate with the threats our societies face.
If, however, we can support the decisions made by our frontline personnel, even when they may seemingly be mistaken, because they have applied common sense, and if we can encourage outside-the-box thinking, thereby better protecting ourselves against the threats of the future, then we will have created what we term a tactical risk assessment of people system that is the very essence of security culture.
Perhaps one of the biggest lessons we can learn from aviation is that we must avoid tick-box practices. Passengers are often asked to answer questions when they check in for flights online — “Did you pack your bags?” or “Has anybody given you anything to take?”, for example. These now closed questions (which can only be answered ‘yes’ or ‘no’) were originally developed as open questions, where the way a person answered was as important as the answer itself. The move to automated check-in processes, where passengers may even be asked whether they packed their bags themselves before they have even commenced packing, makes a mockery of a sophisticated system designed to address threats.
Profiling is, for any mode of transport, most successful when we observe people on the move rather than at checkpoints, when we target those people who we feel do not meet our baseline expectations rather than treating everybody equally, and when we understand the most appropriate ways to resolve concerns rather than waiting for an alarm to flash red or green. It is sophisticated, but, as every other security agency would confirm, highly effective at identifying those with negative intent. And that is our goal.