Red teaming has recently been demonstrated to be an effective method of testing the efficacy of security screening processes. But, as with any testing method, questions arise when vulnerabilities in operations are revealed. Amir Neeman and Dr. Erin Gallagher discuss the benefits and challenges of developing effective red teaming programmes, as well as the potential risks of making the results of such exercises public.
On 1st June 2015, ABC News reported that TSA airport screeners failed to detect simulated explosives and weapons in 67 of 70 red team tests conducted by undercover Department of Homeland Security Inspector General (IG) auditors. The IG auditors (as well as internal TSA red team units) periodically test airport screening in an effort to expose security gaps and allow TSA to address them. The stunning failure rate of over 95%, however, led to immediate leadership changes at TSA, sharp Congressional focus on TSA's vulnerabilities, rapid revisions to TSA's Standard Operating Procedures (SOP) for screening, and an extensive screener retraining programme.
In response to the news, many voices were eager to opine publicly on the matter, including Senators John Thune and Bill Nelson of the Senate Commerce, Science and Transportation Committee, who said, “Terrorist groups like ISIS take notice when TSA fails to intercept 67 out of 70 attempts by undercover investigators to penetrate airport checkpoints with simulated weapons and explosives.”
The IG inspectors attributed the lapses they discovered to a combination of inattentive TSA screeners and poorly designed or malfunctioning equipment; however, they did not disclose the specific weaknesses they found, since this information was classified.
Red team inspectors (both external and internal to TSA and to similar organisations worldwide) conduct thousands of tests at airport checkpoints, access control systems, checked baggage screening, cargo screening, and other sites to repeatedly expose shortcomings in the security system. Aviation security professionals consider this work a form of quality assurance: the results help identify systematic lapses and help security personnel and their management learn from the mistakes.
But red teaming raises important questions. For example, if a red team publicly exposes (intentionally or unintentionally) vulnerabilities within the aviation security system, does it not increase the appetite of adversaries to attack a system they now perceive as more vulnerable? How does public knowledge of red teaming failures affect screeners’ morale and motivation? How does it affect the risk awareness and potential stress levels of passengers and of other airport and airline employees? And how effective can red teaming be, given that security personnel are often aware they are being tested?
In this article we address these questions and others, and explain the role, methodology, benefits and challenges of red teaming.
What is Red Teaming?
Red teaming is the practice of analysing a security system from the standpoint of an external attacker or adversary. A red team is typically a group of third-party penetration testers (often experts in the security systems they are testing) who plan and execute scenarios that mimic an adversary’s attack. As important as execution, if not more so, is the development of best practices, corrective action plans, and continuous improvement; the ultimate purpose of red teaming is to harden security systems against real-world attacks. Common examples include white hat attackers penetrating IT networks to test cyber security mechanisms, and red team units simulating adversaries in military exercises (sometimes referred to as war gaming or simulations of war). In our context, the systems under test are aviation security systems — a combination of technology, security personnel and their security procedures — with red team personnel mimicking adversaries attempting to penetrate the system.

There should always be a firewall between red teams and the security apparatus they are testing. True red teams test based on information gleaned from open source intelligence gathering or from direction given by intermediaries; they do not plan scenarios based on insider information. The group that acts as the intermediary, the white cell, should communicate with both the red team and the leadership of the organisation being tested prior to the assessment in order to develop scenarios.
With respect to aviation security, a red team is typically a group of Subject Matter Experts (SMEs), with appropriate security, operational, technological and other relevant backgrounds, that provides an independent peer review of technologies and processes, develops relevant scenarios, acts as a devil’s advocate, and knowledgeably role-plays the adversary using an iterative, interactive process. These events are used to test the systems, develop corrective action plans, determine best practices, and continuously improve the security apparatus. Red teaming gives security managers an independent capability to explore alternatives in plans, operations, concepts, organisations and capabilities in the true operational environment of an airport, and from the perspectives of aviation stakeholders, adversaries and others.