Traditional security solutions still aren’t helping companies detect, recognize, and respond to rapidly changing relationships and threats. This problem is underscored by three main factors:
We have the opportunity to turn this situation around by applying 70+ years of research, practices, and lessons learned from the field of Safety Engineering to Cyber Security, largely informed by Sidney Dekker’s Safety Differently lecture series. In his model, the earlier factors of current security practices should be inverted:
“Human error” is the favored bugaboo of security incidents, often listed as the “cause” of an incident. This is fueled by a belief that humans are unreliable, and that solutions to human error problems should involve changing the people or their role in the system. This is a dangerously narrow and misguided view. First, the notion of human error relies on hindsight, which lends credence to treating a person’s otherwise normal predictions, actions, and assessments as problematic. Second, people operating in complex environments are necessarily aware of potential paths to failure, and develop strategies to avoid or forestall those negative outcomes.
It’s time for the security world to start taking human performance, instead of human error, seriously. The people building and securing modern, complex systems tackle messy daily work filled with trade-offs, goal conflicts, time pressure, uncertainty, ambiguity, vague goals, high stakes, organizational constraints, and team coordination requirements. They are directly responsible for the vast majority of the time that your systems work as intended and remain secure and available.
If, as described above, you acknowledge that the people who build, maintain, and secure your systems have the most knowledge about them and are therefore poised to deal with perturbations, you must ensure they are able to act and adapt.
“Adaptation is not about always changing the plan or previous approaches, but about the potential to modify plans to fit situations—being poised to adapt.” —David Woods, Graceful Extensibility
Reactive countermeasures that create more guardrails and hinder security teams’ ability to act will only increase the brittleness of your systems, and thereby degrade their security as well.
Focusing on positives rather than counting negatives can shift the culture and mindsets within security teams toward looking for what is working well, so they can better inform the rest of the organization on how to improve. The current practices of tracking, counting, and measuring the presence of negative events don’t help us understand what is going right.
For example, industry standards and frameworks such as NIST, PCI DSS, or HITRUST recommend that security practitioners track system vulnerabilities and classify them based upon risk level. If all we focus on is the tracking of negative outcomes, we miss opportunities for proactive improvement, such as building the competency and judgment of the engineers doing the actual work.
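To make the tracking practice concrete, here is a minimal sketch of what vulnerability classification by risk level typically reduces to. The severity bands follow the common CVSS v3 qualitative scale; the `Vulnerability` type and sample identifiers are hypothetical illustrations, not part of any particular standard:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    # Hypothetical record type for illustration only.
    identifier: str
    score: float  # CVSS-style base score, 0.0 to 10.0

def risk_level(score: float) -> str:
    """Map a base score to a severity band (CVSS v3 qualitative scale)."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    if score > 0.0:
        return "low"
    return "none"

findings = [
    Vulnerability("CVE-XXXX-0001", 9.8),
    Vulnerability("CVE-XXXX-0002", 5.3),
]

for v in findings:
    print(v.identifier, risk_level(v.score))
```

Note what this kind of tally cannot tell you: a count of open “critical” findings says nothing about the adaptations and expertise that kept the other vulnerabilities from ever shipping.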
Our industry is beginning to recognize that learning from other domains like safety can make a difference. There are decades of lessons, proven practices, and research in the fields of Nuclear Engineering, Safety Engineering, Resilience Engineering, Medicine, and Cognitive Science that could help the security industry turn the tide on cybercrime. If we don’t continue to evolve our practices and learn from others, we will continue to see breaches, outages, and bad headlines climb.