The Problem Of Relying On Point-In-Time Vulnerabilities In Red Teaming

Red teaming in the context of information security is an assessment strategy centered on adversary modeling, which is very useful for identifying many types of vulnerabilities (both directly and indirectly exploitable). Most approaches to red teaming covered in articles, journals, conferences, etc., focus on identifying point-in-time vulnerabilities which are subsequently strung together to form an attack chain. These issues may be identified in a number of ways, including network or application penetration testing, social engineering, physical security testing, and more.

A point-in-time vulnerability could be any one of the following:

  • An unpatched server on the network
  • An employee who responds to phishing or phone-based attacks and provides sensitive information, such as credentials, to an unknown, untrusted party
  • An overlooked network service that uses a default or weak password to protect sensitive data
  • An employee who plugs in and runs content from an unknown removable media device found in the parking lot
  • A physical barrier, such as a door that isn’t properly locked at the time of entry or is held open for somebody to tailgate

Point-in-time vulnerabilities in this context are not limited to any one of the items outlined above. Rather, they can be any issue that depends on a very specific condition present at the time of the assessment (often an oversight, such as the improperly shut door), does not affect the entire organizational threat model, and will be completely mitigated once that condition (a node in an attack graph) is removed or changed. It is worth pointing out that point-in-time vulnerabilities are a legitimate cause for concern in any organization, and as an industry we should not move away from identifying and subsequently helping to remediate the risk these issues introduce. However, as red teamers, we must start to consider the bigger picture if we are going to provide real, long-term value as a result of our hard work.
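The attack-graph framing can be made concrete with a small sketch. The node names below are hypothetical illustrations, not from any particular engagement: each edge represents a step an attacker can take once the point-in-time condition at the source node holds, and remediating one condition simply removes a node and its edges.

```python
from collections import deque

# Hypothetical attack graph: edges are attacker steps enabled by the
# point-in-time condition at the source node (names are illustrative).
attack_graph = {
    "phishing_email": ["workstation_foothold"],
    "unpatched_server": ["workstation_foothold"],
    "workstation_foothold": ["domain_admin"],
    "domain_admin": ["sensitive_data"],
}

def path_exists(graph, start, goal):
    """Breadth-first search: can the attacker reach `goal` from `start`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def without(graph, removed):
    """Return the graph with one remediated node (and its edges) removed."""
    return {n: [m for m in edges if m != removed]
            for n, edges in graph.items() if n != removed}

# Remediating a single point-in-time condition severs this chain:
print(path_exists(attack_graph, "phishing_email", "sensitive_data"))  # True
print(path_exists(without(attack_graph, "workstation_foothold"),
                  "phishing_email", "sensitive_data"))                # False
```

Note that fixing only the phishing issue would leave the parallel path through the unpatched server intact, which is the point-in-time limitation the surrounding discussion describes.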

Ultimately, a point-in-time vulnerability is only valid when the enabling conditions are present, which cannot be guaranteed or consistently reproduced. To ensure that an organization can derive value from a red team assessment, I suggest that the process incorporate a hybrid approach of point-in-time vulnerability discovery and an analysis of security-related themes relevant to the modeled threat actor’s motivations and capabilities. Value in this context is the ability to:

  • Generate actionable threat intelligence
  • Remediate systemic issues that can address many potential point-in-time vulnerabilities
  • Measure results, return on investment, and relative risk
  • Apply mitigation strategies around business process design flaws
  • Improve the organization's ability to self-check and correct its own security posture

A security theme in this ongoing discussion represents a slice of the bigger picture, such as a business process design flaw or other systemic condition. An example of what this looks like in practice is an unpatched, vulnerable operating system (one outside the organization's typical technology stack) identified running on one of the in-scope systems. Identifying the bug is important, yes, as is remediating it. But asking bigger thematic questions, such as those below, will prompt a more strategic discussion:

  • How did the development and/or operations team get an unsupported operating system into production?
  • Why wasn't it seen until now?
  • Who was responsible for maintaining this system, and what else about it may violate policy?

A red team assessment is never going to cover every single shred of attack surface, regardless of the organization's size. So, using the point-in-time vulnerabilities that are found as a prompt to step back and assess emerging themes and policy violations opens the door to more strategic intervention (and, ideally, proactive prevention) of security issues.

About Robert Wood