Episode 25 — Turn Threat Vectors, Impact, and Probability Into Testable Design Requirements
When you start learning security architecture, it can feel like risk conversations stay stuck at the level of general worries: a system could be hacked, data could be stolen, services could go down. Those statements are not wrong, but they are too fuzzy to guide real design work, because architecture needs decisions you can build and later verify. The real skill is taking a threat vector, thinking through what it could impact, estimating how likely it is in your context, and then turning that thinking into requirements that describe what the system must do or must not do. A good requirement is not a vague promise like be secure; it is a statement that can be tested by observing system behavior. This episode is about bridging that gap so your threat analysis does not end as a scary story, but becomes concrete design requirements that teams can implement and validate. You will learn how to express requirements in a way that connects directly to architecture decisions such as boundaries, access control, data handling, and resilience. When you can do this, risk management becomes a design discipline rather than a guessing game.
Before we continue, a quick note: this audio course is a companion to our two companion books. The first book covers the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A threat vector is simply the path or method by which an attacker, accident, or misuse could cause harm, and it is useful because it describes how the problem could happen. It might be an internet-facing interface that accepts untrusted input, a weak authentication process that allows impersonation, a privileged administrative function that is too broadly accessible, or a dependency that can be tampered with. Beginners sometimes think threat vectors are only about clever attacks, but many of the most common vectors are ordinary design mistakes, like trusting the wrong boundary or exposing too much. The important part is to describe the vector in a way that includes what the attacker interacts with, what condition makes it possible, and what the attacker gains. For example, instead of saying attackers might steal data, you might describe a vector where an attacker can query a search function and retrieve records they do not own because authorization is applied inconsistently. Once you have that clear vector, you can connect it to impact and probability in a structured way. This clarity is what allows you to write requirements that are specific rather than generic.
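To make this concrete, here is a minimal sketch of how you might capture a threat vector as structured data rather than a loose worry. The field names and example values are hypothetical illustrations of the pattern, not part of any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThreatVector:
    """One way harm could occur, described concretely enough to act on."""
    surface: str             # what the attacker interacts with
    enabling_condition: str  # what makes the vector possible
    attacker_gain: str       # what success gives the attacker

# Vague worry: "attackers might steal data."
# Concrete vector, following the search-function example above:
vector = ThreatVector(
    surface="public search endpoint accepting user-supplied queries",
    enabling_condition="authorization filtering applied inconsistently across query paths",
    attacker_gain="retrieval of records owned by other users",
)
print(vector)
```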
Impact is about the harm that occurs if the threat vector succeeds, and for architecture you usually think about impact in terms of confidentiality, integrity, and availability. Confidentiality impact means information is exposed to someone who should not have it, which can harm individuals, organizations, or both. Integrity impact means data or behavior is changed incorrectly, which can cause fraud, incorrect decisions, or unsafe outcomes. Availability impact means services are disrupted, which can halt business processes and sometimes create secondary security problems when people bypass controls to get work done. Impact can also include accountability impact, where you cannot prove what happened, making response and recovery harder. When you connect a threat vector to impact, you want to describe what would be lost, changed, or interrupted in plain language. For example, if unauthorized users can alter account settings, the integrity impact is not just that data changes, but that trust in the system is damaged and users can be harmed. Impact thinking helps you decide how strong requirements should be and where to place controls in the architecture. Higher impact should generally mean stricter, more layered requirements.
Probability is about how likely the threat vector is to be used successfully, and it is the part that often feels uncomfortable because it involves uncertainty. You do not need perfect prediction to use probability in design; you need reasonable judgment based on exposure and attacker effort. Factors that raise probability include broad exposure to untrusted users, easy-to-exploit conditions, common attack patterns, and the presence of valuable targets. Factors that lower probability include strong boundaries, limited access, high complexity for the attacker, and effective detection that discourages repeated attempts. A beginner-friendly way to think about probability is to ask whether the vector is plausible for a typical attacker, not just a highly skilled one, and whether the system’s design makes the vector easy to reach. For example, an internet-facing login form has a high probability of being attacked because attackers can reach it easily, even if its controls are strong. An internal-only interface with strict segmentation may have a lower probability, but it is not zero, because attackers can sometimes reach internal systems through stolen credentials or compromised devices. Probability helps you prioritize which requirements must be enforced everywhere versus which can be narrower.
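One lightweight way to combine the impact and probability judgments from the last two paragraphs is an ordinal rating. The three-level scale and the priority thresholds below are illustrative assumptions, not a prescribed method; real programs calibrate them to their own context.

```python
from enum import IntEnum

class Rating(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def requirement_priority(impact: Rating, probability: Rating) -> str:
    """Map impact x probability to how strict and broad the requirement
    should be. Thresholds here are illustrative only."""
    score = impact * probability
    if score >= 6:
        return "strict, layered, enforced everywhere"
    if score >= 3:
        return "enforced at the relevant boundary"
    return "narrower scope, or detection-focused"

# Internet-facing login form: easy to reach, account takeover is costly.
print(requirement_priority(Rating.HIGH, Rating.HIGH))  # strict, layered, enforced everywhere
# Internal-only interface behind strict segmentation: lower, but not zero.
print(requirement_priority(Rating.HIGH, Rating.LOW))   # enforced at the relevant boundary
```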
Now take the threat vector, the impact, and the probability, and turn them into testable requirements by focusing on behaviors. A design requirement should describe a condition that must hold, a boundary that must be enforced, or an outcome that must occur when an action is attempted. Instead of writing requirements as aspirations, you write them as statements that a tester can confirm by performing an action and observing the system response. For example, if the threat vector is spoofing identity through weak sessions, the requirement might be that access to protected resources must require a valid authenticated session tied to the correct identity and must be denied when the session is invalid or expired. If the threat vector is data disclosure through overbroad queries, the requirement might be that results must be filtered based on user authorization and ownership rules, and the system must not return unauthorized records. If the threat vector is tampering with configuration, the requirement might be that configuration changes must be restricted to authorized roles and must be recorded in an audit trail. The key is that each requirement includes something you can test: a gate, a denial, a filter, or a recorded event. This turns risk reasoning into design intent that can be verified.
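Here is what testable can look like in practice: a sketch of automated checks for the session requirement just described. The client object and endpoint path are hypothetical stand-ins for whatever your system actually exposes.

```python
# Requirement: access to protected resources must require a valid authenticated
# session tied to the correct identity, and must be denied otherwise.
# FakeClient is a hypothetical test double so the sketch runs on its own;
# substitute your real test client.

class FakeClient:
    VALID = {"alice-session": "alice"}  # sessions the system would recognize

    def get(self, path: str, session: str | None = None) -> int:
        user = self.VALID.get(session or "")
        if user is None:
            return 401  # invalid, expired, or missing session: denied
        return 200      # valid session tied to a known identity

def test_valid_session_is_accepted():
    assert FakeClient().get("/account", session="alice-session") == 200

def test_missing_session_is_denied():
    assert FakeClient().get("/account") == 401

def test_unknown_session_is_denied():
    assert FakeClient().get("/account", session="expired-or-forged") == 401

for test in (test_valid_session_is_accepted,
             test_missing_session_is_denied,
             test_unknown_session_is_denied):
    test()
print("all requirement checks passed")
```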
A helpful technique is to phrase requirements in terms of who, what, and under what conditions, because that keeps them grounded in real interactions. Who refers to the identity, which could be a user role, a service identity, or an administrator. What refers to the action or resource, such as viewing a record, changing a setting, exporting data, or invoking an operation. Under what conditions refers to context like authentication state, role membership, ownership, time limits, or approval workflows. A requirement might say that only users with a specific role can approve a transaction, and that approval must be linked to the initiating request and logged. Another requirement might say that only a specific service identity can call a sensitive internal endpoint, and that the endpoint must reject calls without that identity. Even when you avoid tool specifics, you can describe the behavior clearly. This structure also helps you avoid the trap of writing requirements that sound strong but are impossible to test, like the system must prevent all unauthorized access. Instead, you define the boundaries and expected behavior when boundaries are tested. A requirement that can be tested is a requirement that can be trusted.
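The who, what, and under-what-conditions structure maps naturally onto a small policy check. The roles, actions, and rule below are hypothetical examples of the pattern, not a real policy engine.

```python
from dataclasses import dataclass

@dataclass
class Request:
    who: str          # identity: a user role or service identity
    what: str         # the action or resource being requested
    conditions: dict  # context: authentication state, ownership, approvals

def is_allowed(req: Request) -> bool:
    """Only users with the 'approver' role may approve a transaction,
    and the approval must be linked to the initiating request."""
    return (
        req.what == "approve_transaction"
        and req.who == "approver"
        and req.conditions.get("authenticated") is True
        and req.conditions.get("linked_request_id") is not None
    )

ok = Request("approver", "approve_transaction",
             {"authenticated": True, "linked_request_id": "req-123"})
missing_link = Request("approver", "approve_transaction", {"authenticated": True})
print(is_allowed(ok))            # True: every condition holds
print(is_allowed(missing_link))  # False: approval not tied to an initiating request
```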
You also want to connect requirements to the architecture layers where enforcement should happen, because a requirement that has no clear enforcement point is easy to miss. Enforcement might occur at the user interface, at an application service boundary, at a data access layer, or at an infrastructure boundary, depending on the design. For example, an authorization requirement should not rely only on the user interface hiding a button, because that is a weak enforcement point. A stronger requirement is that the service handling the action must enforce authorization regardless of how the request arrives. Similarly, data filtering requirements are often best enforced near the data access layer so every query path gets the same controls. Requirements about secure communication are enforced at connection boundaries, where data crosses trust zones. Requirements about logging are enforced where the action occurs, so the right context can be captured. Thinking about enforcement points helps you write requirements that are actionable and reduces the chance the system passes superficial checks while failing deeper security behavior. This is architecture thinking because you are deciding where the system must be strong.
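A sketch of why the enforcement point matters: the UI layer below hides the button, but the service layer enforces the rule no matter how the call arrives. All names here are hypothetical.

```python
ADMIN_ROLES = {"admin"}

def service_delete_record(caller_role: str, record_id: str) -> str:
    """Service boundary: the enforcement point. Checks authorization on
    every call, regardless of which path the request took to get here."""
    if caller_role not in ADMIN_ROLES:
        return "denied"
    return f"deleted {record_id}"

def ui_render(caller_role: str) -> list[str]:
    """UI layer: hiding the delete button is usability, not enforcement."""
    return ["view", "delete"] if caller_role in ADMIN_ROLES else ["view"]

# A normal user never sees the button...
print(ui_render("user"))                        # ['view']
# ...but if they call the service directly, the boundary still holds.
print(service_delete_record("user", "rec-42"))  # denied
print(service_delete_record("admin", "rec-42")) # deleted rec-42
```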
An important part of making requirements testable is defining expected failure behavior, because security often shows up when the system refuses. When an unauthorized action is attempted, the system should deny it consistently, and it should do so without leaking sensitive details. When an invalid input is provided, the system should reject it or handle it safely without crashing or entering an undefined state. When a dependency is unavailable, the system should fail in a way that does not grant extra access or bypass important checks. These are all requirements that can be tested through normal functional testing by attempting disallowed actions or inducing common failure conditions. Beginners sometimes avoid writing failure requirements because they feel negative, but failure behavior is where security boundaries become visible. If you do not specify how the system should fail, teams may implement inconsistent behavior, and inconsistency is often exploitable. A well-written requirement includes both what must happen when conditions are met and what must happen when conditions are not met. That makes testing clearer and reduces ambiguity.
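Failure behavior can be tested as directly as success behavior. This sketch asserts that a denial is generic, with no sensitive detail in the message, and that invalid input is rejected rather than half-processed. The handler is a hypothetical stand-in.

```python
def handle_export(role: str, payload: str) -> tuple[int, str]:
    """Hypothetical handler. Denials are uniform and unrevealing;
    bad input is rejected before any processing happens."""
    if role != "analyst":
        return 403, "access denied"   # no hint about what exists or why
    if not payload.isdigit():
        return 400, "invalid request" # rejected safely, no crash
    return 200, f"export {payload} queued"

# Requirement: unauthorized attempts are denied without leaking details.
status, body = handle_export("guest", "123")
assert status == 403 and body == "access denied"
assert "record" not in body and "123" not in body  # nothing sensitive echoed back

# Requirement: malformed input is rejected, not partially executed.
status, body = handle_export("analyst", "123; DROP TABLE exports")
assert status == 400

print("failure-behavior checks passed")
```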
To ensure your requirements reflect impact and probability, you can adjust their strength and scope based on risk. High-impact, high-probability vectors often justify requirements that are strict, layered, and applied broadly. For example, if credential theft is likely and the impact of account compromise is high, you might require stronger authentication and additional checks for sensitive actions, not just for logins. If denial of service is likely and availability impact is high, you might require rate limiting and resilience behaviors at exposed interfaces. If data disclosure is high impact, you might require encryption for certain data flows and strict data minimization in outputs, even if probability is lower. Lower-risk vectors may still require mitigation, but the requirements might be narrower or focused on detection rather than prevention. The point is not to write weaker requirements for lower risk because you do not care, but to match effort and complexity to value. Overly strict requirements everywhere can make systems hard to use and hard to build, which can cause people to bypass controls. Risk-based requirement writing helps you strike the balance that keeps security strong and sustainable.
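As one example of matching control strength to exposure, here is a minimal token-bucket rate limiter of the kind a requirement might demand at an internet-facing interface. The capacity and refill numbers are arbitrary illustrations, and the class is a sketch rather than a production limiter.

```python
import time

class TokenBucket:
    """Minimal token bucket: allow bursts up to `capacity`, then
    throttle to roughly `refill_rate` requests per second."""
    def __init__(self, capacity: float = 5, refill_rate: float = 1.0):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket()
results = [bucket.allow() for _ in range(8)]  # burst of 8 rapid requests
print(results)  # first 5 pass, the rest are throttled: the requirement is observable
```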
Another common misconception is thinking that requirements must be written in highly formal language to be valid. Clarity matters more than formality, especially for new learners and cross-functional teams. A good requirement should be understandable by the people implementing it and by the people testing it, and it should not depend on hidden assumptions. If a requirement relies on an assumption, such as the trustworthiness of an external identity provider, you should state that assumption or convert it into a requirement about validation. If a requirement depends on roles, you should define what those roles mean in terms of permissions, at least at a high level. If a requirement involves data sensitivity, you should indicate which types of data are sensitive in the context of the system. When requirements are vague, teams interpret them differently, and security behavior becomes inconsistent. When requirements are specific, teams converge on the same intended behavior. This is how threat modeling becomes real: the risks drive requirements, and the requirements drive consistent design and implementation.
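Stating what a role means in permission terms can be as simple as a table that implementers and testers share. The roles and permissions below are hypothetical placeholders for whatever your system defines.

```python
# Hypothetical role definitions; the point is that "approver" is not left
# to interpretation but pinned to specific permissions that tests can check.
ROLE_PERMISSIONS = {
    "viewer":   {"read_record"},
    "editor":   {"read_record", "update_record"},
    "approver": {"read_record", "approve_transaction"},
}

def has_permission(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

assert has_permission("approver", "approve_transaction")
assert not has_permission("editor", "approve_transaction")
print("role definitions are explicit and checkable")
```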
As you get comfortable with turning vectors, impact, and probability into requirements, you will notice that you are building a bridge between risk language and design language. Threat vectors describe how harm could occur, impact describes what would be harmed, and probability describes how likely the path is given exposure and attacker effort. Testable design requirements then describe what the system must do to reduce likelihood, reduce impact, or improve detection and response in a way you can validate. When this is done well, you can trace a requirement back to a risk and forward to a design decision and a test, which makes security work more disciplined. It also makes conversations with stakeholders easier because you can explain why a requirement exists in terms of real outcomes rather than fear. Over time, this approach helps you avoid both extremes: ignoring risk because it feels abstract, and overreacting with controls that are not tied to specific threats. The core lesson is that security architecture becomes trustworthy when its risk reasoning produces behaviors you can observe, confirm, and repeat. That is what it means to turn threat vectors, impact, and probability into testable design requirements.
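Finally, traceability can itself be made concrete: each requirement carries a pointer back to the risk it mitigates and forward to the test that verifies it. The record layout below is one hypothetical way to keep that chain visible.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TraceableRequirement:
    risk: str         # backward link: the threat vector this mitigates
    requirement: str  # the testable behavior the system must exhibit
    test: str         # forward link: the check that verifies the behavior

catalog = [
    TraceableRequirement(
        risk="unauthorized record retrieval via inconsistent query authorization",
        requirement="results must be filtered by authorization and ownership rules",
        test="test_search_never_returns_unowned_records",
    ),
    TraceableRequirement(
        risk="configuration tampering through overly broad admin access",
        requirement="config changes restricted to authorized roles and audit-logged",
        test="test_config_change_requires_role_and_emits_audit_event",
    ),
]

for item in catalog:
    print(f"{item.risk} -> {item.requirement} -> {item.test}")
```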