Over the last few months, I've had the pleasure of speaking at a few events about some of my ideas on red teaming at the highest levels of the business. To be clear, this is not about finding more vulnerabilities (a la pen testing), but rather about challenging our assumptions about what it means to protect an organization. I want to summarize some of the key points from this talk in a short series; this first installment focuses on connecting with peer executives around risk.
The talk itself is titled "Red Teaming the Board" and focuses on three main themes:
- Connecting with peers to advance the visibility of information security.
- Helping others do their jobs more effectively by applying security techniques, building valuable rapport and awareness along the way.
- Framing security as a competitive advantage instead of a cost center.
Connecting with Our Peers
I am convinced that one of the biggest sources of friction between security and non-security professionals is our lack of a shared language for risk. Our industry craves and fuels the discussion of risk through the lens of ordinal, qualitative scales. Our organizational peers, however, often prefer to talk about risk in quantitative terms that are relatable without the jargon.
Consider, for example, a finance and operations team discussing something like this:
There is a 35% chance that a major project will fall behind. If that project deadline slips, then the opportunity cost of both sunk employee time and lost sales could range between $5M and $10M.
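Notice how the figures in that statement translate directly into an expected-loss range. Here's a minimal sketch of that arithmetic, using only the numbers from the example above:

```python
# Hypothetical figures from the project-slip example above.
p_slip = 0.35                      # estimated probability the deadline slips
loss_low, loss_high = 5e6, 10e6    # estimated loss range in dollars if it does

# Expected (probability-weighted) loss range.
expected_low = p_slip * loss_low
expected_high = p_slip * loss_high
print(f"Expected loss: ${expected_low / 1e6:.2f}M - ${expected_high / 1e6:.2f}M")
# → Expected loss: $1.75M - $3.50M
```

That probability-weighted range is something a CFO can compare directly against other business risks.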
Alternatively, security professionals will take an approach like this:
Our three flagship applications have a total of 3 critical, 26 high, and 72 medium risk vulnerabilities. We need to start mitigating some of these vulnerabilities before we end up getting breached.
While everything in the ordinal-scale lens may be exactly correct, it's incredibly hard to understand what these designations mean and why 26 highs are or aren't more important than three criticals. I would argue that if you brought three security professionals into a room together, there wouldn't be a solid consensus among them on this question; and if we can't agree, how can we expect our non-security peers to agree with us?
I believe one big reason many professionals avoid a more quantitative approach is fear of being wrong or not 100% accurate in an estimation of loss or likelihood. The secret is, our peers aren't 100% accurate either; they are applying informed assessments to a situation in the same way we are, but phrasing the output (i.e., the declaration of risk) with numbers instead of vague words.
For the next two weeks, every time you discuss risk, challenge yourself to talk about the likelihood and impact in a quantitative way. Do this with people on the security team and outside of it, and gauge how people receive the results and the dialogue that follows.
For those who learn through examples, turn this:
We just did a penetration test and found three instances of SQL injection on this web application. These are high-risk vulnerabilities that need to be fixed right away.
into this:
We just did a penetration test and found three instances of SQL injection vulnerabilities. Given the nature of these vulnerabilities, we estimate that within the next six months there is a 75% chance these issues will be found and exploited. If that happens, all of the customer PII in database X would be at risk of being stolen and exposed. Between fines and brand damage, that could cost us between $2.5M and $5M.
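If you want to sanity-check an estimate like that, a tiny Monte Carlo simulation does the job. This is a sketch under the assumptions stated in the example above: a 75% chance of exploitation in six months, and a loss drawn uniformly from $2.5M–$5M when exploitation occurs.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

p_exploit = 0.75                    # estimated chance of exploitation in six months
loss_low, loss_high = 2.5e6, 5e6    # estimated fine + brand-damage range in dollars

# Each trial: the breach either happens or it doesn't; if it does,
# draw a loss uniformly from the estimated range.
trials = 100_000
total = 0.0
for _ in range(trials):
    if random.random() < p_exploit:
        total += random.uniform(loss_low, loss_high)

expected = total / trials
print(f"Simulated expected loss: ${expected / 1e6:.2f}M")
# Analytically: 0.75 * midpoint of $3.75M ≈ $2.81M, and the simulation lands close to that.
```

The point isn't precision; it's that even rough probability and loss ranges produce a number your peers can weigh against other investments.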
Give it a shot! And let me know on Twitter how it goes.