Episode 37 — Separate IT and Operational Technology Requirements Without Breaking Safety Goals

When students first hear that organizations have both traditional business computing and industrial or facility systems, it is easy to assume they are basically the same thing with different hardware. In reality, Information Technology (I T) and Operational Technology (O T) often have very different purposes, failure consequences, and constraints, and those differences directly shape what security architecture requirements should look like. I T systems usually focus on information processing, communication, and business workflows, while O T systems focus on controlling physical processes like power, water, manufacturing, building automation, and safety systems. When security architects ignore this distinction, they sometimes apply I T expectations to O T environments in ways that disrupt reliability and safety, or they apply O T exceptions broadly and end up weakening security across the enterprise. The goal is not to treat one side as more important, but to separate requirements carefully so each environment is protected in a way that matches its mission and risk. In this episode, you will learn how to define requirements that respect O T safety goals while still enforcing strong security boundaries, and how to avoid the common beginner mistake of assuming a single set of controls fits every system.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed information on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

A strong starting point is to understand that the word safety has a specific meaning in O T environments that can be broader and more immediate than what many people mean when they say security. Safety in many O T contexts means preventing physical harm to people, preventing damage to equipment, and preventing uncontrolled physical behavior, which can include hazards like overheating, pressure buildup, chemical release, or machinery movement. In I T, security goals often emphasize confidentiality, integrity, and availability of data and services, and safety is usually a secondary concern unless systems are life-critical. In O T, availability and integrity are often inseparable from safety because a loss of control or a wrong control signal can create unsafe physical outcomes. This difference matters because some common I T security actions, like aggressive patch cycles or forced reboots, can create unacceptable safety risk if applied without careful coordination. At the same time, O T systems still need protection against cyber threats, because malicious changes to control logic and sensor data can also create safety hazards. The separation you are learning is therefore not about creating a “secure side” and a “special side,” but about defining requirements that preserve safety as a primary property while still reducing cyber risk. When you hold safety goals in view from the beginning, you can design controls that support, rather than conflict with, the physical mission.

Another key difference is how “availability” is experienced and measured in each environment, because this affects what security requirements are reasonable. In many I T systems, a short outage might be disruptive, but the system can often be restarted, users can wait, and business can resume with some loss of productivity. In many O T systems, a short outage can mean loss of control visibility, loss of automatic safety interlocks, or loss of process stability, which can cause physical damage or harm even if the outage is brief. This changes how you write requirements for maintenance windows, updates, and incident response actions, because in O T the cost of downtime may be much higher and the ability to tolerate downtime may be much lower. It also changes how you think about denial of service threats, because disrupting control communications can create safety consequences beyond “the system is slow.” Requirements might need to state that critical control functions must continue operating during security events, or that safe fallback modes must exist when connectivity is disrupted. In I T, you might accept a system being taken offline to contain an incident, but in O T you often need a containment strategy that isolates risk while preserving essential control. The architectural separation begins with acknowledging that availability requirements are not interchangeable across these environments.

The lifecycles of systems also differ, which shapes what “normal” security maintenance looks like in I T versus O T. Many I T systems are updated frequently, with software versions changing monthly or even weekly, and teams expect a continuous improvement cadence. Many O T systems are designed for long operational lifetimes, sometimes decades, with strict certification, vendor validation, and stability expectations that make frequent changes risky. That does not mean O T should remain unpatched forever, but it does mean that patching and upgrading must be planned in a way that respects operational stability and safety testing. A beginner misunderstanding is thinking that if you cannot patch quickly, you cannot be secure, which leads to frustration or to reckless change. A more mature approach is to recognize that security in long-lived O T systems often relies more heavily on containment, monitoring, and strict boundary controls to reduce exposure, while patching is handled on a slower, more controlled schedule. Requirements should therefore separate update expectations, stating that I T components may follow a faster patch cadence while O T components require safety-impact evaluation and controlled deployment. This also means you need clear documentation of what is O T, what is I T, and where hybrid systems exist, because lifecycle assumptions depend on accurate classification. By aligning maintenance requirements to lifecycle reality, you reduce both security risk and safety risk.

Network architecture is one of the most visible places where the separation between I T and O T requirements becomes essential, because connectivity is both necessary and dangerous. In I T, broad connectivity is often expected, with users and services communicating across many segments, and security focuses on identity, encryption, and monitoring across those paths. In O T, broad connectivity can be unacceptable if it increases attack surface or creates paths for lateral movement into control networks, especially when devices cannot be easily hardened or updated. Requirements often need to define strict segmentation between enterprise networks and control networks, with carefully controlled conduits for data that must cross, such as telemetry reporting or remote support. The concept to internalize is that connectivity should be justified, not assumed, and that control traffic should be limited to what is required for safe operation. At the same time, O T does need some forms of connectivity, such as for monitoring, maintenance, or integration with business systems, so requirements should define how that connectivity is mediated and monitored. A poorly defined boundary leads to either unsafe isolation, where operators lose visibility, or unsafe openness, where attackers gain access. Separating requirements means being explicit about where the boundary is, what crosses it, and what protections must exist at that crossing.
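If it helps to make the conduit idea concrete, here is a minimal sketch in Python of a default-deny allowlist for traffic crossing the I T and O T boundary. The zone and service names are hypothetical, and real deployments enforce this in firewalls or unidirectional gateways rather than application code; the point is only to illustrate that connectivity is justified, not assumed.

```python
# Minimal sketch with hypothetical zone and service names: a default-deny
# allowlist of conduits permitted to cross the I T / O T boundary.
# Each entry is (source zone, destination zone, service).
ALLOWED_CONDUITS = {
    ("ot_historian", "it_reporting", "telemetry"),        # process data out
    ("it_jump_host", "ot_engineering", "remote_support"), # mediated support
}

def is_flow_permitted(src_zone: str, dst_zone: str, service: str) -> bool:
    """Return True only if the flow matches an explicitly justified conduit."""
    return (src_zone, dst_zone, service) in ALLOWED_CONDUITS

# Anything not listed is denied, including "convenient" direct paths.
print(is_flow_permitted("ot_historian", "it_reporting", "telemetry"))  # True
print(is_flow_permitted("it_workstation", "ot_plc", "ssh"))            # False
```

Notice that the structure forces every crossing to be named and reviewable, which is exactly the property the requirements should demand of the real boundary controls.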

Identity and authentication requirements also need careful separation, because O T environments may not support the same identity mechanisms that are common in I T, and yet identity trust remains fundamental. In I T, it is common to require strong authentication for users, centralized identity management, and consistent authorization policies across applications. In O T, some devices and protocols may have limited support for modern authentication, or they may rely on shared credentials, which is a serious risk if left unmanaged. The architectural response is not to accept weak identity indefinitely, but to define requirements that improve identity trust at the boundaries and in the management plane, even if some field devices remain constrained. For example, requirements might state that remote access into O T environments must use strong authentication and must be tied to individual identities, with clear role separation and auditing. They might also state that privileged actions on controllers and engineering workstations require explicit authorization and are logged for accountability. A beginner mistake is thinking identity is only a user login screen, when in reality identity includes service identities, administrative access paths, and the trust relationships between management systems and control components. Separating requirements means you may demand very strong identity controls for the points where humans and external systems interact with O T, while recognizing that some device-level constraints may require compensating controls like segmentation and strict management access. The goal is to raise the trust level where you can, and contain risk where you cannot.

Logging and monitoring requirements must also be separated thoughtfully, because what you monitor and how you respond can affect safety. In I T environments, aggressive monitoring and automated blocking actions may be acceptable, because false positives are usually a nuisance rather than a hazard. In O T environments, a false positive that triggers an automated shutdown of communications or a controller process could create unsafe conditions, so response actions must be designed with safety in mind. Requirements should define what telemetry is needed to detect unauthorized access, abnormal commands, configuration changes, and unusual communication patterns, but they should also define how response is staged and verified. For example, it may be safer to alert and escalate to an operator for confirmation before isolating a critical controller, while still allowing automated containment for less critical pathways like remote access gateways. Monitoring requirements should also account for the reality that some O T devices produce limited logs, so architecture may need additional monitoring at network boundaries and management stations. Beginners often assume monitoring is simply “turn on logs,” but in practice you must ensure logs are meaningful, time-aligned, protected from tampering, and accessible during incidents. Separating requirements means acknowledging these constraints while still demanding enough visibility to investigate and contain threats. The goal is to design monitoring that supports detection and response without creating operational instability.

Change management and configuration control are especially sensitive in O T environments, and separating requirements here is crucial to avoid breaking safety goals. In I T, changes can often be rolled out quickly, and if something goes wrong, systems can frequently be rolled back or redeployed with limited physical consequence. In O T, a configuration change might alter control logic, timing, or safety interlocks, and rollback might not be simple if the physical process is mid-operation. Requirements should therefore state that changes to O T control logic, safety functions, and critical configurations must be reviewed with both security and safety considerations, and that testing must reflect operational conditions as closely as possible. This does not mean security is optional; it means security changes must be validated in a way that respects process safety. Another important separation is emergency change handling, because incidents can pressure teams to make rapid changes that bypass normal review, and in O T that can be dangerous. Requirements might specify that emergency actions must have predefined safe procedures, such as controlled isolation steps and clearly authorized decision roles. A beginner misunderstanding is thinking that slowing down change is always bad, when in safety-critical contexts, controlled change is a protection against both accidents and attacks. By separating requirements, you create a disciplined pathway for secure change that does not compromise safe operation.

Threat modeling also looks different across I T and O T, not because attackers are fundamentally different, but because the impacts and constraints are different, and that changes mitigation priorities. In I T, the primary feared outcome might be data theft, fraud, or service disruption, which drives controls around identity, data protection, and resilience. In O T, the feared outcome can include unsafe physical behavior, damaged equipment, or environmental harm, which drives controls around integrity of sensor data, integrity of control commands, and continuity of safe control. A threat vector in O T might involve tampering with setpoints, spoofing sensor readings, or altering control logic, and these can be subtle because the system may continue running while drifting into unsafe conditions. Requirements might therefore emphasize integrity monitoring, strict access control to engineering tools, and separation between monitoring networks and control networks. They might also emphasize the ability to restore known-good configurations, because recovery in O T often requires confidence that control logic is trustworthy. Beginners sometimes assume the same vulnerability list applies everywhere, but the same weakness can have very different consequences depending on whether it touches a spreadsheet or a physical actuator. Separating requirements means you prioritize mitigations that protect what the environment values most, and in O T that often means the integrity and availability of control. When threat modeling reflects those priorities, architecture decisions become more aligned with real risk.
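One way to support the known-good recovery requirement is integrity fingerprinting of control configurations. The following Python sketch uses hypothetical configuration values; real O T integrity monitoring depends on vendor tooling and signed configurations, but the hash-comparison pattern is the same.

```python
# Minimal sketch with hypothetical configuration values: detect drift from
# a known-good control configuration by comparing cryptographic hashes.
import hashlib

def config_fingerprint(config_bytes: bytes) -> str:
    """Produce a stable fingerprint of a configuration snapshot."""
    return hashlib.sha256(config_bytes).hexdigest()

# Captured and stored at commissioning or after an approved change.
KNOWN_GOOD = config_fingerprint(b"setpoint_temp=180;interlock=on")

def is_trusted(current_config: bytes) -> bool:
    """Return True only if the running configuration matches known-good."""
    return config_fingerprint(current_config) == KNOWN_GOOD

print(is_trusted(b"setpoint_temp=180;interlock=on"))   # True
print(is_trusted(b"setpoint_temp=250;interlock=off"))  # False: drifted
```

A mismatch here does not say what changed, only that the running logic can no longer be assumed trustworthy, which is exactly the signal recovery procedures need.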

Remote access is a particularly common bridge between I T and O T, and it is an area where requirements must be carefully separated and tightly defined. I T remote access often supports user productivity and can be managed with standard identity controls, but O T remote access can provide a path directly into environments where devices are hard to patch and where the impact of misuse is severe. Requirements should define who is allowed remote access to O T systems, under what conditions, and how that access is monitored and controlled. They should also define that remote sessions must be tied to individual identities, that privileged actions are logged, and that access is limited to the minimum set of systems needed for the task. Another requirement area is session control, such as limiting time windows and requiring explicit approvals for certain types of access, because O T environments often benefit from higher friction for high-risk actions. Beginners sometimes think remote access is either fully allowed or fully forbidden, but in practice the goal is controlled remote access that supports operational needs without creating an uncontrolled tunnel. The architecture should also anticipate that compromised credentials are a common attack path, so it should not rely on “internal” location as a trust signal. Separating requirements ensures that remote access to O T is treated as a high-risk capability with additional safeguards rather than as just another convenience feature.

Incident response requirements must also be separated because the safest containment action in I T may not be the safest action in O T. In I T, isolating a system quickly and aggressively can be an effective way to stop data exfiltration or lateral movement, and the operational impact may be acceptable. In O T, sudden isolation of a control network could disrupt visibility and control, potentially making the situation worse and increasing physical risk. Requirements should define that O T incident response must prioritize maintaining safe operation while containing cyber risk, which often means staged containment and close coordination with operators. This can include requirements for maintaining a safe mode of operation if communications are disrupted, and requirements for clear decision authority when safety and security tradeoffs are present. Another key requirement is evidence collection, because investigations need logs and configuration states, but evidence collection should not disrupt critical control. Beginners may assume incident response is the same everywhere, but in safety-critical systems, response is a joint safety and security activity. Separating requirements helps prevent a scenario where a well-intended security action triggers an unsafe operational condition. The architecture should make safe response possible by providing visibility and control points that allow targeted action rather than all-or-nothing shutdowns.

It is also important to separate requirements around asset management and visibility, because O T environments often have different discovery and inventory realities than I T. In I T, scanning and inventory tools can often identify devices and software quickly, and assets change frequently. In O T, aggressive scanning can disrupt fragile devices, and assets may be stable but poorly documented, especially in older environments where equipment was installed long before modern asset management practices. Requirements should therefore define how O T assets are identified, tracked, and classified without introducing operational risk, and how inventory information is kept current enough to support security decisions. Visibility requirements might include maintaining an accurate list of control devices, engineering workstations, and critical network paths, along with ownership and maintenance responsibilities. Without this, both security and safety suffer because teams cannot confidently know what is present or what is affected during incidents. Beginners sometimes assume that discovery is always safe and always complete, but O T often requires more cautious approaches. By separating requirements, you avoid applying I T scanning assumptions to O T networks while still demanding the visibility necessary to manage risk. A well-defined inventory process becomes a safety support because it reduces uncertainty during troubleshooting and response.

Governance and responsibility boundaries also need to be separated, because I T and O T are often managed by different teams with different priorities and different risk tolerances. If responsibilities are unclear, security gaps appear in the seams, such as who owns patch decisions, who approves remote access, and who is accountable for monitoring and response. Requirements should define roles clearly, including who can authorize changes to control logic, who can approve emergency actions, and who maintains the boundary controls between I T and O T networks. They should also define how conflicts are resolved when security and operational goals collide, ideally through predefined decision pathways rather than improvisation during crisis. This is not just organizational housekeeping, because unclear governance leads to inconsistent enforcement, and inconsistency is a common attack opportunity. Beginners sometimes think governance is separate from architecture, but governance determines whether architecture requirements are actually implemented and maintained. When roles and accountability are clear, teams can coordinate upgrades, incident response, and monitoring in a way that supports both safety and security. Separating requirements here is about ensuring that the human system aligns with the technical system, because both must work together for protection to hold.

As you define separated requirements, it helps to remember that separation does not mean isolation, because I T and O T often need to exchange data for business operations, maintenance planning, and oversight. The goal is to create a controlled relationship where data and access are mediated through well-defined boundaries that respect O T safety needs and I T security expectations. Requirements should therefore describe the acceptable kinds of data exchange, the acceptable directions of flow, and the protections required at the interface, such as strict authentication, authorization, and monitoring. They should also define what is not acceptable, such as direct unmanaged access from general business networks into control systems. For beginners, a helpful mindset is to treat the I T–O T interface as a high-value boundary that deserves extra design attention, because it is where different assumptions meet. If you design that boundary well, both environments can support each other without sharing risk excessively. If you design it poorly, a compromise in one environment can propagate to the other, and in O T that propagation can have physical consequences. Separation is therefore a way of preserving safe operation while still enabling necessary integration, not a way of pretending the environments do not interact.

By the end of this episode, the core message should feel practical and grounded: separating I T and O T requirements is a disciplined way to protect both information and physical processes without causing unintended harm. You start by recognizing that safety and operational continuity are often primary goals in O T, and that those goals shape how you define availability, maintenance, monitoring, and incident response requirements. You define clear boundaries and controlled connectivity, because uncontrolled pathways are where cross-environment risk becomes real. You strengthen identity and access at the management and remote access points, because those are often the most feasible and impactful places to raise trust. You define monitoring and response in a way that detects threats without triggering unsafe automated actions, and you define change control that respects safety testing and stability. You clarify governance so responsibilities do not fall into the seams, and you maintain asset visibility through methods that do not disrupt fragile environments. Most importantly, you treat integration as a controlled interface rather than a casual connection, so data can flow without risk flowing freely. When you can write requirements this way, you are doing real security architecture: matching controls to mission, protecting people and systems together, and ensuring that security improvements do not accidentally break the safety goals they are meant to support.
