Flipping the Script on Security

Traditional security solutions still aren’t helping companies detect, recognize, and respond to rapidly changing relationships and threats. This problem is underscored by three main factors:

  1. Managers view workers as the cause of poor security. Workers make mistakes and violate rules; they are treated as a problem the organization needs to solve, rather than as a symptom of the underlying conditions that give rise to security issues.
  2. Because of this, organizations intervene to try to influence workers’ behavior. Managers develop strict rules and guidelines that control what workers do, on the assumption that workers cannot be trusted to operate securely on their own.
  3. Organizations measure their security success through the absence of negative events.

We have the opportunity to turn this situation around by applying 70+ years of research, practices, and lessons learned from the field of Safety Engineering to Cyber Security, largely informed by Sidney Dekker’s Safety Differently lecture series. In his model, each of the three factors above is inverted:

  1. People are not a problem to control; workers are the solution. Learn how your people create success on a daily basis and harness their skills and competencies to build more secure workplaces and systems.
  2. Rather than intervening in worker behavior, intervene in the conditions of people’s work. This involves collaborating with front-line staff and providing them with the right tools, environment, and culture to get the job done securely. 
  3. Measure security as the presence of positive capacities. If you want to stop things from going wrong, enhance the capacities that make things go right.

People Are the Source of Solutions, Not Problems

“Human error” is the favored bugaboo of security incidents, often listed as the “cause” of an incident. This labeling is fueled by a belief that humans are unreliable and that the fix for human error is to change the people or their role in the system. That view is dangerously narrow and misguided. First, the notion of human error relies on hindsight: only once the outcome is known do a person’s otherwise normal predictions, actions, and assessments come to look problematic. Second, people operating in complex environments are typically well aware of potential paths to failure, and develop strategies to avoid or forestall those negative outcomes.

It’s time for the security world to start taking human performance, rather than human error, seriously. The people building and securing modern, complex systems do messy daily work filled with trade-offs, goal conflicts, time pressure, uncertainty, ambiguity, vague goals, high stakes, organizational constraints, and team coordination requirements. They are the reason that, the vast majority of the time, your systems work as intended and remain secure and available.

Competency and Trust Instead of Control and Compliance

If, as described above, you acknowledge that the people who build, maintain, and secure your systems have the most knowledge about them and are therefore best poised to deal with perturbations, you must ensure they are able to act and adapt.

“Adaptation is not about always changing the plan or previous approaches, but about the potential to modify plans to fit situations—being poised to adapt.” —David Woods, Graceful Extensibility 

Reactive countermeasures that pile on guardrails and hinder security teams’ ability to act will only increase the brittleness of your systems, and with it degrade their security.

Focus on Positives Instead of Counting Negatives

Focusing on positives rather than counting negatives can shift the culture and mindset within security teams toward looking for what is working well, so they can better inform the rest of the organization on how to improve. The current practices of tracking, counting, and measuring negative events don’t help us understand what is going right.

For example, regulatory standards and frameworks such as NIST’s guidance, PCI DSS, or HITRUST recommend that security practitioners track system vulnerabilities and classify them by risk level. If all we focus on is tracking negative outcomes, we miss opportunities for proactive improvement: building the competency and judgment of the engineers doing the actual work.
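As a minimal sketch of what measuring positive capacities alongside negative counts might look like, consider the snippet below. The metric names here (chaos experiments run, blameless review coverage) are illustrative assumptions, not drawn from any standard or from the author’s own tooling:

```python
from dataclasses import dataclass


@dataclass
class SecurityPosture:
    """A hypothetical snapshot mixing a traditional negative count
    with positive-capacity signals a team could track instead."""

    open_vulns: int                       # the traditional negative metric
    chaos_experiments_run: int            # proactive verification of controls
    incidents_with_blameless_review: int  # incidents that produced learning
    incidents_total: int

    def review_coverage(self) -> float:
        """Fraction of incidents that led to a blameless learning review."""
        if self.incidents_total == 0:
            return 1.0  # no incidents: nothing left unreviewed
        return self.incidents_with_blameless_review / self.incidents_total


posture = SecurityPosture(
    open_vulns=42,
    chaos_experiments_run=12,
    incidents_with_blameless_review=3,
    incidents_total=4,
)
print(f"Learning review coverage: {posture.review_coverage():.0%}")
```

The point is not these particular fields but the shape of the measurement: capacities that make things go right sit next to, and eventually outweigh, the tally of things that went wrong.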

Our industry is beginning to recognize that learning from other domains such as safety can make a difference. There are decades of lessons, proven practices, and research in the fields of Nuclear Engineering, Safety Engineering, Resilience Engineering, Medicine, and Cognitive Science that could help the security industry turn the tide on cybercrime. If we don’t continue to evolve our practices and learn from others, we will keep watching breaches, outages, and bad headlines climb.

Author's Bio

Aaron Rinehart

Co-founder & CTO of Verica