Episode 40 — Define Infrastructure and System Cryptography Requirements That Avoid Fragile Designs
When beginners hear cryptography, they often imagine it as a special ingredient that automatically makes a system secure, like sprinkling a protective layer over data and communications. In reality, cryptography is a powerful tool, but it is also easy to misuse in ways that create fragile designs, meaning designs that look strong until one assumption fails and then security collapses. A fragile design might rely on one shared secret everywhere, treat encryption as a substitute for access control, or assume keys will always be managed perfectly without defining how that happens. Good security architecture uses cryptography to strengthen boundaries and protect sensitive assets, but it also treats cryptography as part of a larger trust model that includes identity, authorization, monitoring, and safe failure. The goal in this episode is to help you define cryptography requirements at the infrastructure and system level that remain durable as systems change, scale, and integrate with other environments. You will learn how to express requirements in behavior-focused language, how to think about key management and trust, and how to avoid common patterns that create hidden single points of failure. By the end, cryptography should feel less like a mystery and more like a disciplined set of decisions that support clear security outcomes.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed information on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
The first step in defining cryptography requirements is to be clear about what security properties you need, because different cryptographic mechanisms support different goals. Encryption is often associated with confidentiality, meaning keeping data secret from unauthorized parties, but cryptography also supports integrity, meaning detecting or preventing unauthorized changes, and authenticity, meaning proving that a message or object came from a legitimate source. Many beginners fixate on confidentiality alone and forget that integrity and authenticity are often what prevents attackers from tampering with systems or impersonating trusted services. For example, data that is encrypted but not integrity-protected may still be altered in ways that cause harmful behavior. Likewise, a service that accepts data without authenticating its source may be vulnerable to spoofing even if the data is encrypted in transit. Good requirements therefore specify what properties must be achieved for a given data flow or asset, such as confidentiality plus integrity for sensitive transactions, or authenticity plus integrity for control messages. They also specify where those properties must hold, such as across untrusted networks, between services, or within storage systems. When you start from properties, cryptography becomes an architectural control that supports specific outcomes rather than a vague promise. This property-first mindset is the foundation of avoiding fragile designs.
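To make the property-first idea concrete, here is a minimal sketch of how such requirements could be expressed as data rather than prose, so each flow states which properties must hold and where. All of the names and flows here are illustrative, not from any standard.

```python
from dataclasses import dataclass

# Hypothetical sketch: property-first cryptography requirements as data.
# Each entry names a data flow, the required security properties, and
# the boundary where those properties must hold.

@dataclass(frozen=True)
class CryptoRequirement:
    flow: str              # the data flow or asset being protected
    properties: frozenset  # e.g. {"confidentiality", "integrity", "authenticity"}
    boundary: str          # where the properties must hold

REQUIREMENTS = [
    CryptoRequirement("payment-transactions",
                      frozenset({"confidentiality", "integrity"}),
                      "across untrusted networks"),
    CryptoRequirement("control-messages",
                      frozenset({"authenticity", "integrity"}),
                      "between services"),
]

def required_properties(flow: str) -> frozenset:
    """Look up which security properties a given flow must provide."""
    for req in REQUIREMENTS:
        if req.flow == flow:
            return req.properties
    raise KeyError(f"no requirement defined for flow: {flow}")
```

Writing requirements this way forces the question "which properties, at which boundary?" to be answered per flow instead of assumed globally.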
A major category of cryptography requirements concerns data in transit, which is any time information moves between components, users, or environments. The architecture goal is to ensure that data crossing an untrusted boundary is protected so attackers cannot read it, modify it, or impersonate endpoints. Requirements should state that communications carrying sensitive data or sensitive commands must provide confidentiality and integrity protection, and that endpoints must authenticate each other appropriately for the risk. A beginner misunderstanding is thinking that encryption in transit is only needed when traffic crosses the public internet, but many internal networks are not truly trusted, and attackers frequently gain internal footholds. Durable requirements therefore avoid relying on location as the only trust factor and instead require cryptographic protection based on sensitivity and boundary conditions. Requirements should also address downgrade resistance, meaning the system should not silently fall back to weaker protection if strong protection is unavailable, because attackers can exploit downgrade behavior. Another important aspect is certificate and identity validation, because encrypted traffic that does not validate endpoints can still be vulnerable to man-in-the-middle attacks. While the details of protocols are implementation choices, the requirement should specify that endpoints must authenticate each other in a way that prevents impersonation. This keeps the design from being fragile, because it does not depend on “the network is safe” as an unspoken assumption.
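As a small illustration of what "downgrade resistance plus endpoint validation" can look like in practice, here is a sketch using Python's standard `ssl` module to build a client-side TLS context that refuses old protocol versions and always validates the server's certificate and hostname. The function name is ours; it assumes Python 3.7 or later for `ssl.TLSVersion`.

```python
import ssl

def make_strict_client_context() -> ssl.SSLContext:
    """A TLS client context that resists downgrade and validates endpoints."""
    ctx = ssl.create_default_context()            # secure defaults: cert verification on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # no silent fallback to older TLS
    ctx.check_hostname = True                     # endpoint identity must match the cert
    ctx.verify_mode = ssl.CERT_REQUIRED           # unauthenticated peers are rejected
    return ctx
```

The requirement language ("must not silently fall back to weaker protection", "must authenticate endpoints") maps directly onto these three settings.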
Data at rest is the next major area, and requirements here must avoid the common mistake of treating encryption at rest as a magical shield. Encrypting stored data helps protect against theft of storage media or unauthorized access to raw storage, but it does not automatically control who can access data through the application. A fragile design is one that encrypts a database and then assumes data access is solved, while the application still allows unauthorized queries due to weak authorization. Requirements should therefore separate concerns: encryption at rest must protect stored data from unauthorized access at the storage layer, and authorization must still control access at the application layer. Requirements should state which data classes must be encrypted at rest and should also define how encryption keys are protected so encryption is meaningful. They should specify that encrypted backups are also protected, because backups often contain the most concentrated copy of sensitive data. Another requirement area is access auditing, because encryption at rest does not tell you who accessed the data, so monitoring and auditability are still needed. For beginners, the key lesson is that encryption at rest is a boundary control for storage compromise scenarios, not a replacement for good access control. When requirements reflect that, the design becomes layered rather than fragile.
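The layering point can be sketched in a few lines: even a caller who can reach the ciphertext and the key must first pass an application-layer authorization check. The `decrypt` stand-in below is a toy XOR purely for demonstration, not a real cipher, and the authorization table is illustrative.

```python
# Layered access sketch: storage-layer encryption and application-layer
# authorization are separate controls. Decryption happens only after
# the caller is authorized.

AUTHORIZED = {("alice", "customer-records")}   # (principal, data class) pairs

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # Toy XOR stand-in for a real cipher (XOR is symmetric, so this
    # same function also "encrypts" in the test below).
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def read_record(principal: str, data_class: str,
                ciphertext: bytes, key: bytes) -> bytes:
    # Layer 1: application authorization — holding the key is not enough.
    if (principal, data_class) not in AUTHORIZED:
        raise PermissionError(f"{principal} is not authorized for {data_class}")
    # Layer 2: storage-layer decryption, only after authorization passes.
    return decrypt(ciphertext, key)
```

The fragile version of this code would skip layer 1 entirely and assume encryption at rest already "solved" access control.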
Key management is where many cryptographic designs become fragile, because keys are the real power behind cryptography, and poor key handling can nullify strong algorithms. Requirements must define who can access keys, how keys are stored, how keys are rotated, and how key use is logged, because these decisions determine whether cryptography remains trustworthy over time. A fragile design often uses shared keys across many systems, which means compromise of one area compromises everything. A more durable requirement is to separate keys by environment, by application, and by sensitivity, so that compromise has limited blast radius. Requirements should also define rotation expectations, because long-lived keys increase the risk that a stolen key remains useful for too long. Rotation requirements should be realistic and tied to risk, with stronger requirements for high-value assets and privileged services. Another important requirement is key recovery and continuity, because losing keys can cause availability failures and data loss, which can be just as damaging as key theft. Requirements should therefore specify that key storage must be resilient and that key access must be controlled but not so fragile that normal operations cannot continue. For beginners, the most helpful mental model is to treat keys like master keys to a building, which must be protected, tracked, and changed when compromised, and you should avoid having one master key open every door. When key management is explicit, cryptographic controls become durable rather than brittle.
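Here is a hypothetical sketch of two of those key-management requirements in code: keys scoped by environment and application so compromise has a limited blast radius, and creation timestamps checked against a risk-based rotation interval. The intervals and naming scheme are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone
import secrets

# Illustrative rotation policy: high-value keys rotate sooner.
ROTATION_INTERVALS = {
    "high": timedelta(days=90),
    "standard": timedelta(days=365),
}

def new_key(environment: str, application: str, sensitivity: str) -> dict:
    """Generate a key scoped to one environment and application."""
    return {
        "id": f"{environment}/{application}/{secrets.token_hex(4)}",  # never shared across scopes
        "material": secrets.token_bytes(32),
        "created": datetime.now(timezone.utc),
        "sensitivity": sensitivity,
    }

def rotation_due(key: dict, now: datetime) -> bool:
    """True when the key has outlived its risk-based rotation interval."""
    return now - key["created"] > ROTATION_INTERVALS[key["sensitivity"]]
```

A monitoring job that calls `rotation_due` across the inventory turns "rotation expectations" from a policy sentence into an observable condition.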
Identity and trust requirements are closely connected to cryptography because cryptography often underpins authentication and secure session behavior. Many systems use cryptographic tokens, signatures, or certificates to assert identity, and requirements must ensure those identity assertions are trustworthy. A fragile design might accept identity claims without verifying signatures, or it might allow tokens to be reused or replayed without context checks. Requirements should specify that identity tokens must be validated for authenticity, integrity, and freshness, and that token lifetimes and renewal behavior must be designed to limit misuse. They should also specify that sensitive actions may require stronger re-verification, because a long-lived session can become a liability if a device is compromised or if a token is stolen. Another important requirement is binding identity to context where appropriate, such as ensuring that tokens cannot be easily used from completely different environments without detection. While you may avoid protocol specifics, the requirement can still state that identity assertions must be cryptographically protected and that validation must occur at the boundaries where they are trusted. This protects against spoofing and reduces the chance that a single validation mistake becomes a systemic failure. Cryptography requirements that support identity must be tied to authorization requirements, because proving who someone is matters only if the system then enforces what they are allowed to do. When these requirements are aligned, cryptography becomes a foundation for trustworthy access control.
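To show what "authenticity, integrity, and freshness" mean for a token in concrete terms, here is a sketch using an HMAC-signed token with an expiry claim. The format and field names are ours, not a real standard; a production system would use a vetted token format and library rather than this hand-rolled example.

```python
import base64
import hashlib
import hmac
import json

# Illustrative token: base64(claims) + "." + HMAC signature.
# Validation checks the signature (authenticity + integrity) and the
# expiry claim (freshness).

def issue_token(secret: bytes, subject: str, lifetime_s: int, now: float) -> str:
    claims = {"sub": subject, "exp": now + lifetime_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def validate_token(secret: bytes, token: str, now: float) -> dict:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):   # constant-time signature check
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if now > claims["exp"]:                      # freshness: expired tokens fail
        raise ValueError("token expired")
    return claims
```

Note that validation happens at the boundary where the token is trusted, and a forged or stale token fails closed with an error rather than being silently accepted.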
Another area where cryptography requirements must avoid fragility is in the use of hashing and signatures for integrity, especially for artifacts like software packages, configuration files, and logs. Integrity controls help detect tampering, which is a supply chain risk and an operational risk, because attackers and accidents can alter code and configuration. Requirements might state that critical software artifacts must be integrity-verified before deployment, and that configuration changes must be validated and logged. They might also state that logs must be protected from tampering so audit trails remain reliable. A fragile design might store logs on the same system that can be compromised and then assume the logs will be trustworthy. A more durable requirement is to ensure logs are protected with integrity mechanisms and stored in a way that reduces the chance that an attacker can rewrite history. Integrity requirements also apply to data exchanged between services, where signatures or message authentication codes can provide assurance that messages have not been modified. For beginners, it helps to see integrity as the answer to the question, how do we know this is the same data we intended to send or store? Without integrity, confidentiality alone can still leave the system vulnerable to manipulation. Durable cryptographic designs make integrity a first-class requirement for high-impact flows.
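One simple way to make log tampering detectable is a hash chain, where each entry's digest covers the previous entry's digest, so rewriting any past entry breaks every later link. This is an illustrative sketch; a real deployment would also ship entries to separate, append-only storage so an attacker on the source system cannot rebuild the chain.

```python
import hashlib

GENESIS = "0" * 64  # fixed starting value for the chain

def append_entry(chain: list, message: str) -> None:
    """Append a log entry whose digest covers the previous entry's digest."""
    prev = chain[-1][1] if chain else GENESIS
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    chain.append((message, digest))

def verify_chain(chain: list) -> bool:
    """Recompute every link; any rewritten entry breaks the chain."""
    prev = GENESIS
    for message, digest in chain:
        if hashlib.sha256((prev + message).encode()).hexdigest() != digest:
            return False   # an entry was altered after the fact
        prev = digest
    return True
```

The same covering-digest idea is what signatures and message authentication codes provide for data exchanged between services.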
Cryptography requirements must also address random number generation and entropy, because many security mechanisms depend on unpredictable values for keys, tokens, and nonces. Weak randomness is a hidden fragility because systems may appear to function normally while producing predictable secrets that attackers can guess. Requirements should specify that security-critical keys and tokens must be generated using appropriate sources of randomness and that predictable or repeatable generation is unacceptable. While beginners may not know the details of how randomness is provided, they can understand that unpredictable values are essential for secrets to remain secret. Requirements can also specify that secrets must not be derived from low-entropy sources like timestamps or user identifiers, which can be guessed. Another important requirement is that randomness must remain robust across environments, because differences between systems can cause unexpected weakness if one environment does not provide sufficient entropy. This is especially relevant in virtualized or containerized systems, where startup behavior can influence randomness availability. The architectural point is that randomness is not an optional detail; it is a foundational dependency. When you make randomness explicit in requirements, you reduce the risk of subtle, hard-to-detect weaknesses that compromise many controls at once.
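In Python terms, the requirement "secrets must come from an appropriate source of randomness" points at the standard `secrets` module rather than ad hoc derivation. A minimal sketch:

```python
import secrets

def make_session_token() -> str:
    """Generate a session token from a cryptographically strong source."""
    return secrets.token_urlsafe(32)   # 32 random bytes, ~256 bits of unpredictability

# Anti-example of what the requirement should forbid: deriving a "secret"
# from a low-entropy, guessable input, e.g.
#   bad_token = hashlib.sha256(str(user_id).encode()).hexdigest()
# This looks random but is fully predictable to anyone who knows the user ID.
```

The contrast in the comment is the whole point: both values look like random strings in a log, which is exactly why weak randomness is a hidden fragility.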
A key way to avoid fragile cryptographic designs is to define requirements that prevent cryptography from becoming a single point of failure for availability. It is common to focus on confidentiality and forget that cryptography can break systems if key access is lost, if certificate validation fails unexpectedly, or if expiration behavior is not planned. Requirements should therefore define safe failure behavior, such as what happens when a certificate expires, when a key is revoked, or when a validation service is unavailable. The safe choice is not always to fail open or fail closed in a simplistic way; it depends on the action. For sensitive actions, failing closed by denying access may be appropriate, while for some availability-critical functions, you may need a controlled degraded mode that preserves safety. Requirements should also define renewal and rotation planning so cryptographic material does not expire unexpectedly, causing an outage. A fragile design is one that depends on manual renewal performed by a single person with no monitoring until something breaks. A durable design includes monitoring for upcoming expirations, defined rotation workflows, and redundant access to key services. For beginners, the lesson is that cryptography is part of operations, not just part of security, and a secure system must remain functional. When availability planning is included, cryptography strengthens the system rather than becoming a hidden operational landmine.
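Expiration monitoring itself is simple to express; the hard part is requiring that it exist. Here is a hypothetical sketch that flags certificates inside a renewal warning window so renewal happens before an outage rather than after. The threshold and field names are illustrative.

```python
from datetime import datetime, timedelta, timezone

WARN_BEFORE = timedelta(days=30)   # illustrative renewal warning window

def expiring_soon(certs: list, now: datetime) -> list:
    """Return names of certificates inside the renewal warning window."""
    return [c["name"] for c in certs
            if c["not_after"] - now <= WARN_BEFORE]
```

Feeding this check into an alerting pipeline is what turns "a single person renews certificates manually" into a monitored, durable workflow.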
Interoperability and lifecycle requirements are also important because systems evolve, integrate, and migrate between on-premises, cloud, and hybrid environments. Cryptographic requirements should therefore avoid hardcoding assumptions that only work in one environment, such as assuming a single trusted network or a single centralized key store that cannot be reached in hybrid conditions. Requirements should define how trust is established between components in different environments and how cryptographic materials are managed across those boundaries. They should also define how legacy systems are handled, because older systems may not support modern protections, and the architectural response may require compensating controls like segmentation and strict gateways rather than weakening requirements everywhere. Another important lifecycle aspect is algorithm agility, which means the system should be able to change cryptographic algorithms and parameters when needed without requiring a total redesign. Beginners sometimes assume algorithms never change, but in reality cryptographic best practices evolve, and systems that cannot adapt become fragile over time. A requirement for agility might specify that cryptographic settings are configurable and that updates can be applied in a controlled way. This is not about planning for exotic future threats; it is about ensuring the design does not lock itself into a brittle posture. Durable cryptographic requirements anticipate change.
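Algorithm agility can be sketched as a configuration lookup rather than a hardcoded call: if the algorithm name is data, migrating from one approved algorithm to another is a configuration change plus a controlled rollout, not a redesign. The registry contents below are illustrative.

```python
import hashlib

# Approved algorithms are a lookup table, not hardcoded call sites.
HASH_REGISTRY = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
}

def fingerprint(data: bytes, algorithm: str = "sha256") -> str:
    """Hash data with a configurable, approved algorithm."""
    try:
        hasher = HASH_REGISTRY[algorithm]
    except KeyError:
        raise ValueError(f"algorithm not approved: {algorithm}")
    return hasher(data).hexdigest()
```

Rejecting unlisted algorithms also gives you a single place to retire a weakened algorithm across the whole system.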
Monitoring and audit requirements for cryptography are often overlooked, but they are crucial for both detection and assurance. Requirements should specify that key access, key usage, and critical cryptographic configuration changes must be logged and monitored, because misuse of keys can indicate compromise. They should also require visibility into certificate status, expiration timelines, and validation failures, because these events can signal both attacks and operational drift. Another requirement is to ensure that cryptographic errors do not leak sensitive information through logs or error messages, which can happen if systems log too much detail about secrets. For beginners, it helps to connect this to the earlier discussion of monitoring: cryptographic controls are part of the trust model, and trust models require evidence. If you cannot tell whether encryption is being used correctly, or whether keys are being accessed unusually, you cannot be confident in your protections. Monitoring requirements also support compliance and incident response, because they help answer whether data could have been exposed and whether key material may have been compromised. Durable designs treat cryptographic state as something that should be observable and managed, not as a set-it-and-forget-it layer.
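The "do not leak secrets into logs" requirement can be enforced with a scrubbing step between error handling and the log sink, so key material and token values never reach audit trails. The patterns below are examples only; a real system would match its own secret formats.

```python
import re

# Illustrative redaction patterns for secret-bearing fields.
SECRET_PATTERNS = [
    re.compile(r"(key=)\S+"),
    re.compile(r"(token=)\S+"),
]

def redact(message: str) -> str:
    """Scrub secret values from a log message before it is written."""
    for pattern in SECRET_PATTERNS:
        message = pattern.sub(r"\1[REDACTED]", message)
    return message
```

The log keeps the evidence that matters (which field, which operation failed) while the secret itself never becomes observable.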
Another common fragility is using cryptography in ways that create false confidence while leaving core architectural risks unaddressed. A classic example is encrypting data but leaving authorization weak, so attackers can access the data through legitimate application paths. Another example is using signed tokens but failing to enforce authorization decisions consistently, so tokens become tickets to excessive privileges. Another example is relying on encryption in transit but leaving management interfaces exposed, allowing attackers to change configuration and disable protections. Requirements should explicitly state that cryptography supports but does not replace access control, segmentation, and monitoring. They should also specify that sensitive design decisions like who can access keys, who can change encryption settings, and where validation occurs are part of the security architecture, not implementation details. This keeps teams from treating cryptography as an excuse to skip other controls. For beginners, the key is to see cryptography as one layer in a layered defense, where each layer covers weaknesses that other layers cannot. When requirements emphasize layering, the design avoids brittle dependency on any single control.
Finally, good cryptography requirements include clarity about ownership and responsibility, because cryptographic controls fail when nobody owns their lifecycle. Requirements should define who is responsible for key management, certificate management, policy changes, and monitoring of cryptographic health. They should define how emergency actions are handled, such as key revocation during incidents, and how those actions are audited and communicated. They should also define testing expectations, meaning cryptographic controls should be validated in environments that reflect production, because differences in configuration can create unexpected behavior. For beginners, it is important to understand that cryptography is not a one-time setup; it is an ongoing discipline. A durable architecture makes that discipline manageable by defining processes and by designing systems that can rotate, renew, and recover without chaos. When responsibility is clear and validation is planned, cryptography becomes a stable foundation rather than a fragile puzzle. This is how you avoid designs that collapse under the first real operational challenge.
As you define infrastructure and system cryptography requirements, the main goal is to make cryptography an enabler of trustworthy behavior rather than a brittle dependency. You start by specifying the security properties needed—confidentiality, integrity, and authenticity—and you require those properties where data crosses boundaries or where sensitive assets are stored. You separate encryption at rest from authorization so confidentiality is not mistaken for access control, and you define key management requirements that limit blast radius through separation, controlled access, rotation, and resilient storage. You connect cryptography to identity trust by requiring validated identity assertions at boundaries and by designing token behavior to resist replay and misuse. You include integrity requirements for artifacts and logs so tampering is detectable, and you make randomness quality explicit so secrets remain unpredictable. You also define safe failure and lifecycle expectations so cryptography does not become a hidden availability risk, and you require monitoring so cryptographic health and misuse are observable. Most importantly, you emphasize layering so cryptography supports, rather than replaces, other architectural controls. When requirements are written this way, they avoid fragile designs because they do not depend on a single assumption staying true forever; they build a durable security posture that can evolve safely as systems change.