Episode 64 — Choose Cryptographic Implementations for Data In-Transit, In-Use, and At-Rest
In this episode, we’re going to take the big idea of cryptography and connect it to three very practical moments where data can be exposed: while it is moving across networks, while it is being actively processed, and while it is sitting in storage. Beginners often hear encryption described as if it is a single switch you flip, but real systems need different cryptographic implementations depending on the data’s state. Data in transit faces risks like eavesdropping and tampering as it travels between a client and a service or between internal services. Data at rest faces risks like stolen disks, unauthorized database access, or backups being copied. Data in use is the trickiest, because the system must decrypt or otherwise reveal data in order to compute on it, which creates opportunities for exposure inside memory, inside application logic, or at endpoints. Choosing cryptographic implementations across these states is an architecture skill because it requires aligning protection with where the threats really exist and where the system can realistically apply controls. The goal is to make you comfortable explaining what protection makes sense in each state, why it matters, and how the choices interact without getting stuck on vendor names or configuration steps.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book covers the exam itself and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A strong way to begin is to define the three states clearly, because a lot of confusion comes from mixing them together. Data in transit means the bits are traveling over some communication path, such as from a browser to a web service or from one service component to another. Data at rest means the bits are stored somewhere durable, such as on a disk, in a database file, in object storage, or in a backup archive. Data in use means the system is actively working with the data, so it may be in memory, in CPU registers, on a screen, or in application variables during processing. Each state has a different “attack surface,” meaning a different set of ways an attacker might try to get to the data. In transit, the attacker may be on the network path. At rest, the attacker may have stolen media or gained access to storage systems. In use, the attacker may be on the endpoint or inside the execution environment, aiming to capture data right after decryption. Your cryptographic implementation choices should therefore be made with an understanding that protecting one state does not automatically protect the others. The right mindset is to treat each state as its own problem, then design how those solutions work together.
For data in transit, the main security goals are confidentiality and integrity while moving between two parties that need to communicate. Confidentiality keeps eavesdroppers from reading the content, and integrity helps ensure the data was not altered on the way. A very common beginner misunderstanding is thinking that encryption alone is enough, but encryption without integrity can still allow certain kinds of manipulation, and integrity without authentication can still allow impersonation. In practical terms, transit protection usually includes encryption plus integrity checks plus a way for each side to authenticate the other, at least to some degree. This is why secure communication protocols are designed as packages that combine these properties rather than asking you to bolt them together manually. From an architecture perspective, your decision points include where secure channels must exist, which communications are allowed to be unsecured, and what trust you place in internal networks. Modern threat models treat internal networks as hostile once any component is compromised, so many architectures assume that internal service-to-service traffic needs the same protections as external traffic, even if performance and operational overhead become constraints. Choosing the implementation means deciding which connections require strong protection, how endpoints validate identities, and how failure conditions will be handled without breaking the system.
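The "package" idea described above, where encryption, integrity, and endpoint authentication come bundled, is exactly what a TLS protocol stack provides. As a minimal sketch, here is how a client-side TLS context can be configured with Python's standard library; the hostname in the commented usage is a placeholder.

```python
import ssl

# A minimal sketch of client-side TLS configuration. TLS bundles
# confidentiality (encryption), integrity (authenticated records),
# and endpoint authentication (certificate verification) into one
# protocol rather than asking you to bolt them together manually.
context = ssl.create_default_context()

# Refuse connections unless the server presents a certificate that
# chains to a trusted root -- this is the authentication piece.
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True  # the certificate must match the hostname

# Reject protocol versions with known weaknesses.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# The context would then wrap a socket, for example:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...  # all reads and writes are now encrypted and authenticated
```

Note that `create_default_context` already enables certificate and hostname checking; the explicit assignments above simply make the three bundled properties visible.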
Even when the idea of protected transit is clear, you still have to deal with boundaries and intermediaries, because not all traffic goes directly from client to application. Many systems include proxies, gateways, load balancers, and inspection layers that may terminate and re-establish secure connections. That design choice is not automatically good or bad, but it changes where plaintext exists and which components are trusted. If a secure connection is terminated at a proxy, that proxy becomes a place where data is visible, which increases the need to protect the proxy and its key material. If the secure connection remains end-to-end to the application, you reduce exposure in intermediaries but may lose certain middle-layer controls that depend on seeing traffic. As an architect, you evaluate these tradeoffs based on the sensitivity of the data, the need for visibility, and the trustworthiness of the intermediate layers. You also consider how identities are represented in transit, because the application needs to know who the user is, and that identity might be conveyed via tokens, certificates, or session identifiers that must be protected from replay and theft. The key lesson is that transit cryptography is partly about securing the pipe and partly about ensuring the right endpoints can confidently trust what they receive. If you cannot explain where encryption starts and ends and where keys are held, you do not fully understand the protection you are claiming.
Data at rest shifts the focus from network attackers to storage exposure, and that can include both physical and logical threats. Physical threats include lost laptops, stolen drives, or decommissioned hardware that still contains sensitive data. Logical threats include unauthorized database access, snapshots being copied, backups being downloaded, or a storage admin abusing privileges. Encryption at rest aims to reduce the harm from exposure of stored data by making the stored bits unreadable without keys. However, the protection depends heavily on key management, because if the keys are stored right next to the data or are easily accessible to attackers who compromise the host, the benefit shrinks. At-rest cryptography can operate at different layers, such as full-disk encryption, file-level encryption, database-level encryption, or application-level field encryption. Each layer protects against different threats and has different operational implications. Full-disk encryption protects against stolen media but not against a running system where an attacker has access. Database-level encryption can protect certain files but may still decrypt data for any process with database access. Application-level encryption can provide stronger separation because the application controls decryption, but it increases complexity and can limit searching and analytics. Choosing the right implementation is therefore about matching the threats you care about to the layer that can most effectively reduce risk without destroying functionality.
A practical way to reason about at-rest choices is to ask who you are trying to protect against and what you want to keep separate. If you want to protect against someone stealing a disk, full-disk encryption is often effective because the attacker lacks the keys when the device is powered off. If you want to protect against a storage administrator or a cloud storage breach, you may need encryption where keys are not accessible to the storage layer, which pushes you toward higher-layer encryption controlled by the application or a dedicated key service. If you want to protect specific sensitive fields, like national identifiers, you might encrypt those fields separately so exposure of the database does not reveal them. But every layer adds operational complexity, and complexity can create new failure modes, like losing keys and being unable to recover data. At-rest cryptography decisions also influence backups and recovery, because encrypted backups must still be decryptable during restoration, which means keys must be available when you need them most, even during disasters. For beginners, the key takeaway is that at-rest encryption is not a single feature; it is a design choice about where decryption is allowed to occur and which roles are trusted to have access. That is why key separation and access governance are part of the cryptographic implementation decision, not separate tasks.
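To make the field-level option above concrete, here is an illustrative sketch of application-level field encryption, where the application holds the key and the storage layer only ever sees an opaque blob. The cipher here (SHA-256 in counter mode plus an HMAC tag) is a teaching toy built from the standard library; a production system should use a vetted authenticated cipher such as AES-GCM from a maintained cryptography library.

```python
import hashlib, hmac, os

# Illustration only: a toy authenticated cipher for application-level
# field encryption. Real systems should use a vetted AEAD (e.g. AES-GCM).

def _subkeys(master: bytes):
    # Derive separate encryption and integrity keys from one master key.
    return (hashlib.sha256(master + b"enc").digest(),
            hashlib.sha256(master + b"mac").digest())

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_field(master: bytes, plaintext: bytes) -> bytes:
    enc_key, mac_key = _subkeys(master)
    nonce = os.urandom(16)  # fresh nonce per field, never reused
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # integrity check
    return nonce + ct + tag  # the storage layer only ever sees this blob

def decrypt_field(master: bytes, blob: bytes) -> bytes:
    enc_key, mac_key = _subkeys(master)
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("field was tampered with or key is wrong")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

The design point is the trust boundary, not the cipher: because only the application can derive the keys, a breach of the database or object store exposes ciphertext, not the sensitive field.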
Data in use is where many learners first realize that cryptography has limits, because data must often be in plaintext at the moment it is processed. If a user views a record, the record is decrypted to be displayed. If an application computes on data, it must load it into memory in a usable form, which creates opportunities for exposure through memory scraping malware, debugging interfaces, or compromised application code. This does not mean protection is impossible; it means the approach must include reducing exposure and hardening the execution environment rather than relying solely on encryption. Cryptographic implementations for data in use can include techniques like encrypting data at rest and in transit while carefully limiting where decryption happens, minimizing the time data stays in memory, and restricting which components can request decryption. It can also include isolating sensitive processing into trusted zones, using strong identity controls to limit access, and ensuring logging and monitoring can detect abnormal access patterns. Some specialized cryptographic methods aim to compute on protected data, but even without going deep into those methods, the architectural idea is that protecting data in use often means controlling the boundary where decryption occurs and limiting who can reach that boundary. When you choose implementations for in-use protection, you are often choosing a combination of cryptography and system design patterns that reduce the number of places where secrets exist.
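One of the in-use controls mentioned above, minimizing the time data stays in memory, can be sketched as a small helper that wipes a plaintext buffer the moment processing finishes. This is a best-effort pattern, not a guarantee: Python may have made copies along the way, and garbage collection and OS paging are outside the program's control, so this reduces exposure rather than eliminating it.

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_plaintext(data: bytes):
    # Hold decrypted data in a mutable buffer so it can be
    # overwritten in place as soon as the caller is done with it.
    buf = bytearray(data)
    try:
        yield buf
    finally:
        for i in range(len(buf)):  # best-effort zeroization
            buf[i] = 0

# Usage sketch (decrypt_somehow and process are placeholders):
# with ephemeral_plaintext(decrypt_somehow(blob)) as secret:
#     process(secret)
# After the block exits, the buffer contains only zeros.
```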
A beginner-friendly example is thinking about where passwords live during authentication. A password should not be stored in plaintext at rest, and it should be protected in transit, but during login the system must compare the user’s input to a stored representation. The safest approach avoids ever decrypting stored passwords by using one-way hashing, which changes the problem from confidentiality to verification. That example shows an important principle: sometimes the best cryptographic implementation is not encryption at all, but a design that avoids needing plaintext. Similarly, with payment data or personal identifiers, you might avoid copying sensitive fields into logs, avoid sending them to unnecessary services, and avoid displaying them widely, which reduces in-use exposure. In-use protection often depends on minimizing the footprint of sensitive data, which is a design choice supported by cryptography. You can also separate duties so that one component handles decryption while another handles business logic, limiting the blast radius of compromise. The lesson is that cryptography works best when paired with data minimization and carefully defined trust boundaries.
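The password example above, verification without decryption, can be sketched with the standard library. The parameters here (PBKDF2 with SHA-256 and 600,000 iterations) are one reasonable illustration; dedicated password hashes such as argon2 or bcrypt are also common choices.

```python
import hashlib, hmac, os

ITERATIONS = 600_000  # deliberately slow, to resist brute-force guessing

def hash_password(password: str):
    # Store a salted, slow, one-way hash -- never the password itself.
    salt = os.urandom(16)  # unique per user, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Recompute the hash of the candidate password and compare; the
    # stored value is never "decrypted" because it cannot be.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking where a mismatch occurs.
    return hmac.compare_digest(candidate, stored)
```

Notice that there is no decrypt function at all, which is precisely the design principle: the system never needs the plaintext back.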
Another part of choosing implementations is understanding how these states interact across a full end-to-end flow. Consider a client sending a request containing sensitive data to an application. The data might be protected in transit to the gateway, possibly reprotected to internal services, then stored encrypted at rest, and later retrieved and decrypted in use for processing. If your design has gaps, such as unprotected internal hops or logs that capture plaintext, attackers can target those gaps rather than breaking strong encryption. This is why architects draw data flows and identify where data is in each state, even if they do it mentally rather than on paper. When you map the flow, you can decide where encryption begins, where it ends, and where it needs to persist. You also discover hidden states, like data being at rest in caches or temporary files, or data being in transit within message queues. Those hidden states are common sources of accidental exposure because teams forget they exist. Choosing cryptographic implementations is therefore tied to understanding the real places data lives, not just the official design.
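The data-flow mapping described above can be as simple as a table of hops annotated with the data's state and whether protection is applied. The hop names and flags below are invented for illustration; the point is that once the flow is written down, the unprotected segments, often the forgotten caches and temp files, become visible.

```python
# Hypothetical end-to-end flow for one sensitive request:
# (hop, data state at that hop, protection applied?)
flow = [
    ("client -> gateway",        "in transit", True),   # TLS
    ("gateway -> order service", "in transit", True),   # mutual TLS
    ("order service memory",     "in use",     True),   # decryption confined
    ("order service -> cache",   "at rest",    False),  # forgotten hot path!
    ("primary database",         "at rest",    True),   # field encryption
    ("nightly backup archive",   "at rest",    True),   # encrypted backups
]

# Hidden states like caches and temporary files usually show up as the gaps.
gaps = [hop for hop, state, protected in flow if not protected]
```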
Key handling is the glue across all three states, because the strongest encryption fails if keys are mishandled. For transit, keys and certificates are used to establish secure channels and authenticate endpoints. For at rest, keys are used to encrypt and decrypt stored data, and access to those keys should be restricted and monitored. For in use, keys may be used to decrypt data on demand, which means key access controls must be strong enough to prevent misuse by compromised components. In many architectures, a key service is used to centralize key protection and auditing, but the high-level principle is what matters: keys should be treated as high-value assets with limited exposure. You also need to think about key rotation and revocation, because long-lived keys increase risk, but rotation requires coordination so data remains accessible. If a key is rotated without planning, data may become unreadable or services may fail to establish connections. So part of choosing implementations is ensuring the system’s operational processes can support key lifecycle activities without constant emergencies. Beginners often underestimate how hard the key lifecycle is, but architects treat it as a core requirement.
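One common pattern that makes rotation survivable is key versioning: every token or ciphertext carries the ID of the key that produced it, so old keys can be kept for verification only while a new key takes over signing. Here is a minimal sketch of that bookkeeping using HMAC-signed tokens; all names and key IDs are illustrative.

```python
import hashlib, hmac, os

class Keyring:
    # Versioned keys: the newest key signs, older keys only verify.
    def __init__(self):
        self.keys = {}        # key ID -> key material
        self.active = None    # ID of the key used for new tokens

    def rotate(self, key_id: str) -> None:
        # Old keys stay available so existing tokens keep working;
        # only the newest key is used to create new ones.
        self.keys[key_id] = os.urandom(32)
        self.active = key_id

    def sign(self, message: bytes):
        tag = hmac.new(self.keys[self.active], message, hashlib.sha256).digest()
        return self.active, tag  # the key ID travels with the token

    def verify(self, key_id: str, message: bytes, tag: bytes) -> bool:
        key = self.keys.get(key_id)
        if key is None:
            return False  # key was revoked or never existed
        return hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
```

The same idea applies to encrypted data: a key ID stored with each ciphertext lets you rotate keys without re-encrypting everything at once, and lets revocation break only what was protected under the revoked key.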
Availability and recovery also affect cryptographic implementation choices, because encryption can create dependency on key availability. If a system cannot reach its keys during a failover event, it may be unable to start or may be unable to decrypt critical configuration data. That could turn a recoverable outage into a prolonged disaster. Architects therefore evaluate how keys will be available during business continuity and disaster recovery, or B C D R, events and how the system will behave if key access is delayed. Similarly, transit protections can fail during certificate issues, and certificate issues can cascade quickly across services. A strong design includes monitoring for certificate expiration, controlled renewal processes, and reasonable fallback behaviors that do not silently drop security. At rest, encrypted backups must be restorable when needed, which means key access must be planned for the worst day, not the best day. This does not require complex explanations, but it requires a consistent mindset: if cryptography is a dependency, then resilience planning must include that dependency. Choosing the implementation is therefore partly about choosing where you can tolerate failure and where you need redundancy.
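The certificate-expiration monitoring mentioned above can be sketched in a few lines. Python's `ssl.cert_time_to_seconds` parses the "notAfter" timestamp format found in certificate metadata; the renewal window and sample dates below are illustrative.

```python
import ssl

RENEWAL_WINDOW_DAYS = 30  # illustrative threshold for raising an alert

def days_remaining(not_after: str, now_epoch: float) -> float:
    # not_after uses the certificate timestamp format,
    # e.g. "Jan 15 00:00:00 2030 GMT".
    return (ssl.cert_time_to_seconds(not_after) - now_epoch) / 86400

def needs_renewal(not_after: str, now_epoch: float) -> bool:
    # Flag certificates inside the renewal window before they expire
    # and take services down with them.
    return days_remaining(not_after, now_epoch) <= RENEWAL_WINDOW_DAYS
```

A real monitor would pull the "notAfter" value from each live endpoint or certificate inventory; the core logic, however, is just this comparison run on a schedule.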
Finally, you want to be careful about common misconceptions that lead to overconfidence. One misconception is that encrypting a database means data is safe even if an attacker compromises the application, but compromised applications often have legitimate access to decrypt data, so the attacker can use the application as a decryption oracle. Another misconception is that encrypting traffic means you can trust the sender, but encryption without proper authentication does not stop impersonation. A third misconception is that data in use is “safe” because it is inside the server, but malware and insider threats can operate there too. The architectural response to these misconceptions is layered protection: protect transit to stop network interception, protect at rest to reduce storage exposure, and design in-use handling to minimize plaintext exposure and constrain decryption to trusted components. When these layers are aligned, attackers have fewer easy options and must take harder, riskier steps to get value. The point is not to promise perfect secrecy, but to make theft and tampering less likely and less impactful.
Choosing cryptographic implementations for data in transit, in use, and at rest is really about matching protections to the data’s states and the system’s trust boundaries. Transit protections secure the movement and the endpoint identities so messages cannot be casually read or altered on the path. At-rest protections secure stored bits so exposure of media or storage systems does not automatically expose the data, assuming keys are properly separated and protected. In-use protections acknowledge that data must be processed in a usable form, so the design focuses on limiting exposure, reducing where plaintext exists, and controlling decryption as a privileged act. When you bring these together, you get a system where cryptography is not a single feature but a coordinated set of choices tied to how data flows and where attackers can realistically strike. An ISSAP learner should come away able to explain why different states require different implementations and how the choices depend on threats, capabilities, and operational resilience. If you can describe where data is protected, where it is not, and what compensating design decisions reduce the remaining risk, you are thinking like an architect rather than treating encryption as a magic shield.