The Patching Disconnect
Every month, the second Tuesday brings a predictable ritual for IT security teams: download the Microsoft patches, test in staging, deploy to production, mark the tickets closed. The cycle is imperfect but functional. It represents decades of collective learning about how to keep enterprise systems reasonably current.
In an OT environment, applying that model without modification can be genuinely dangerous. A patch that reboots a PLC mid-cycle can cause an uncontrolled process state. A firmware update that changes a controller's communication behavior can break a safety interlock. A Windows update applied to an HMI that the vendor has not qualified can destabilize a system that was tuned and tested at a specific software revision.
This is not an argument against patching OT systems. It is an argument for building a fundamentally different model — one designed around the realities of industrial operations rather than borrowed wholesale from IT practices.
Why OT Patching Cannot Mirror IT Patching
The differences are not excuses. They are engineering constraints that must be respected:
Vendor certification requirements. In many industries, applying an unauthorized modification to a control system can void the vendor certification on which the safety case for that system rests. Before patching a Triconex SIS, a Yokogawa CENTUM DCS, or a Siemens S7-1500 PLC, the asset owner must confirm that the patch has been qualified by the vendor for that specific hardware revision and firmware combination. Unilateral patching can produce a system that is technically more secure from a CVE perspective but no longer holds the certification its safety case and regulatory approvals depend on.
No separation between the test environment and production. A core assumption of good IT patching practice is that you test in staging before deploying to production. Most OT environments have no equivalent test system. The SCADA server is the only SCADA server. The PLC controlling Unit 3 is the only PLC controlling Unit 3. Testing often means deploying to production and observing whether anything breaks.
Real-time and safety constraints. Many OT systems have hard real-time requirements where communications must complete within defined time windows. A patch that introduces latency — even a few milliseconds — can cause process anomalies. On safety systems, the stakes are higher: a patch-induced timing change in a Safety Instrumented System could affect its ability to execute an emergency shutdown within specification.
Maintenance windows measured in months, not hours. Continuous processes — refineries, power plants, chemical reactors — may run for months or years without a planned shutdown. The next opportunity to patch a critical controller without impacting production may not arrive for six months or more. By the time the window opens, the vulnerability that was urgent in November has been sitting unpatched — and unmitigated — well into the following year.
Extended lifecycles. A PLC installed in 2006 may be running firmware that the vendor stopped updating in 2012. No patches exist. No new ones will be released. This is not negligence on the asset owner's part — it is the reality of 20-year industrial asset lifecycles meeting software vendor support timelines.
The Actual Cost of Not Having a Program
The instinct to delay patching indefinitely due to operational complexity is understandable. But the cost of an unmanaged vulnerability exposure is real and calculable:
- CISA's ICS-CERT advisories disclosed over 2,000 OT-specific vulnerabilities in a recent 12-month period. Many affect widely deployed hardware in energy, manufacturing, and water infrastructure.
- Exploitation of known vulnerabilities — ones with public CVEs and available patches — was a primary initial access vector in multiple major OT incidents.
- Insurers are increasingly requiring documented patch management programs as a condition of OT coverage. Lack of a program is becoming an underwriting risk factor, not just a security gap.
- Regulatory frameworks including NERC CIP (for electric utilities), AWIA (for water utilities), and IEC 62443 (broadly) all include requirements for managing software vulnerabilities. "We don't have a patching program" is not an acceptable audit response.
Building a Program That Actually Works
An effective OT patch management program accepts the constraints of industrial environments and designs around them.
Step 1: Ground Truth Asset Inventory
You cannot patch assets you do not know about, and you cannot prioritize patches without knowing what software and firmware version each device is running. The starting point is a comprehensive, current asset inventory that includes:
- Every networked device: PLCs, RTUs, DCS controllers, HMIs, engineering workstations, SCADA servers, historians, firewalls, switches
- The exact firmware or software version currently deployed on each device
- The vendor and the mechanism for receiving that vendor's security advisories
- The vendor's current support status for that hardware/firmware combination
Passive OT network monitoring tools can discover and maintain much of this inventory automatically. Manual verification is still needed for air-gapped systems and serial-connected devices.
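A minimal sketch of what one inventory record might capture, assuming a simple Python dataclass model; the field names and example devices are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class OTAsset:
    """One inventory record; fields mirror the list above."""
    asset_id: str            # site-unique identifier
    device_type: str         # e.g. "PLC", "HMI", "historian"
    vendor: str
    model: str
    firmware_version: str    # exact deployed revision
    network_zone: str        # segmentation zone the device lives in
    advisory_feed: str       # where this vendor's advisories arrive
    vendor_supported: bool   # is this hardware/firmware combo still supported?
    serial_only: bool = False  # serial-connected devices need manual checks

inventory = [
    OTAsset("U3-PLC-01", "PLC", "Siemens", "SIMATIC S7-1500", "V2.9.2",
            "cell-zone-3", "Siemens ProductCERT", True),
    OTAsset("HIST-01", "historian", "AVEVA", "PI Server", "2018 SP3",
            "dmz", "vendor portal", False),
]

# Unsupported combinations will never receive patches; flag them now for
# compensating controls and replacement planning (see later steps).
unsupported = [a.asset_id for a in inventory if not a.vendor_supported]
```

The `vendor_supported` flag feeds directly into the "no patch will ever exist" branch discussed later: those assets can be surfaced with a one-line filter instead of a manual audit.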
Step 2: Vulnerability-to-Asset Correlation
With the inventory in place, establish a systematic process to correlate vulnerability disclosures against your environment. Sources to monitor include:
- CISA ICS-CERT advisories: The primary source for OT-specific disclosures, published at cisa.gov/ics-advisories
- Vendor security portals: Siemens ProductCERT, Rockwell PSIRT, Schneider Electric security portal, ABB cybersecurity advisories — subscribe to every vendor represented in your environment
- NVD/CVE database: For vulnerabilities that affect commercial software components running on OT systems (Windows, third-party applications)
The correlation work requires human judgment. OT vendor advisories often cover large version ranges imprecisely, and matching an advisory to your specific hardware revision and firmware build requires review rather than automated matching alone.
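The correlation pass can be sketched as deliberately loose matching that always routes hits to a human. The advisory and inventory shapes below are illustrative assumptions, not any vendor's actual feed format:

```python
def correlate(advisory, inventory):
    """Return candidate matches for an advisory. Matching is loose on
    purpose (vendor plus product substring) so that every hit is then
    confirmed by a person against the exact hardware revision and
    firmware build -- never auto-confirmed."""
    hits = []
    for asset in inventory:
        if asset["vendor"].lower() != advisory["vendor"].lower():
            continue
        if advisory["product"].lower() not in asset["model"].lower():
            continue
        hits.append({"asset_id": asset["asset_id"],
                     "deployed_firmware": asset["firmware"],
                     "needs_review": True})  # human judgment required
    return hits

# Hypothetical advisory and inventory records for illustration.
advisory = {"vendor": "Siemens", "product": "S7-1500",
            "affected": "all versions prior to V3.0"}
inventory = [
    {"asset_id": "U3-PLC-01", "vendor": "Siemens",
     "model": "SIMATIC S7-1500", "firmware": "V2.9.2"},
    {"asset_id": "HIST-01", "vendor": "AVEVA",
     "model": "PI Server", "firmware": "2018 SP3"},
]
```

Erring toward false positives is the right trade-off here: a spurious review ticket costs minutes, while a missed match leaves a vulnerable controller invisible to the rest of the program.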
Step 3: Risk-Based Prioritization
Not every vulnerability requires the same urgency. For OT environments, prioritization must account for factors that are irrelevant in IT vulnerability management:
- Network exposure: Is the vulnerable device reachable from the IT network? From the internet? Or buried behind multiple layers of firewalls and accessible only from a dedicated engineering workstation? Exposure radically changes actual exploitability.
- Exploit availability: Does a working public exploit exist? Is it being actively used in OT-targeted campaigns? ICS-CERT advisories and OT threat intelligence feeds often indicate exploitation activity.
- Safety and operational impact: Could successful exploitation affect a safety system, cause an uncontrolled process state, or result in production shutdown? A CVSS 7.0 on an internet-facing historian is more urgent than a CVSS 9.8 on a PLC accessible only from a single isolated engineering workstation.
- Compensating control effectiveness: Are there existing network controls, application whitelisting, or monitoring rules that reduce the likelihood of successful exploitation of this vulnerability, or improve the odds of detecting an attempt?
Build a scoring framework that combines these factors into a single priority score. The output is not a list of vulnerabilities ranked by CVSS — it is a list ranked by actual risk to your specific operational environment.
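One way such a framework might look, assuming a simple weighted blend; the weights, category names, and point allocations below are illustrative placeholders that a real program would calibrate in its own risk workshops:

```python
# Illustrative weights -- tune these to your environment.
EXPOSURE = {"internet": 1.0, "it_network": 0.7, "ot_only": 0.3, "isolated": 0.1}
IMPACT = {"safety": 1.0, "process": 0.7, "monitoring": 0.4, "none": 0.1}

def priority_score(cvss, exposure, impact, exploit_public, controls_in_place):
    """Blend CVSS with environment-specific factors into a rough 0-100
    priority. A high CVSS on an isolated device can score below a
    moderate CVSS on an exposed one -- which is the point."""
    score = (cvss / 10.0) * 30             # base severity: up to 30 points
    score += EXPOSURE[exposure] * 35       # reachability: up to 35 points
    score += IMPACT[impact] * 25           # safety/operational impact: up to 25
    score += 10 if exploit_public else 0   # known exploitation activity
    if controls_in_place:
        score *= 0.6                       # effective compensating controls
    return round(score, 1)

# The comparison from the bullet list above:
# CVSS 9.8 on a PLC reachable only from one isolated workstation, with controls...
plc = priority_score(9.8, "isolated", "safety", False, True)
# ...versus CVSS 7.0 on an internet-facing historian with a public exploit.
historian = priority_score(7.0, "internet", "process", True, False)
```

With these particular weights the historian outranks the PLC despite its lower CVSS, which matches the intuition the bullet list argues for; any real weighting should be stress-tested against cases where your team already agrees on the right answer.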
Step 4: Define the Remediation Path
For each vulnerability, the remediation path falls into one of three categories:
Patch when feasible. The vendor has released a patch, it has been qualified for your hardware and firmware revision, and there is a maintenance window within an acceptable time horizon. Plan the patching activity: obtain vendor guidance, prepare rollback procedures, schedule with operations, test after application, and document completion.
Apply compensating controls and patch at next opportunity. Patching is not immediately possible — because of operational constraints, lack of a maintenance window, or pending vendor qualification. Compensating controls go into place now: tighter firewall rules to limit network exposure, IDS signatures tuned to detect exploitation of this specific vulnerability, enhanced monitoring on the affected zone. These controls reduce risk while waiting for the patch window to open.
Compensating controls permanently. No patch will ever exist — because the product is end-of-life, the vulnerability is a design characteristic rather than a fixable bug, or the system cannot tolerate any modifications. The compensating control set becomes the permanent defense for this device, and planning begins for device replacement as the long-term resolution.
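The three categories reduce to a small decision function. This sketch compresses the inputs to yes/no flags for clarity; real decisions weigh more factors than three booleans:

```python
def remediation_path(patch_exists, vendor_qualified, window_within_horizon):
    """Map a vulnerability onto one of the three remediation categories.
    Inputs are simplified booleans; this encodes only the branching."""
    if not patch_exists:
        # End-of-life product or unfixable design issue:
        # controls become the permanent defense, replacement the exit.
        return "permanent compensating controls"
    if vendor_qualified and window_within_horizon:
        return "patch when feasible"
    # A patch exists but cannot be applied yet: compensate now, patch later.
    return "compensate now, patch at next opportunity"
```

Encoding the branching this explicitly has a side benefit: every vulnerability in the tracking system carries a machine-checkable remediation category, so nothing silently sits in an undefined state.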
Step 5: Vendor Coordination as a Process
Vendor coordination is not an occasional activity — it is a systematic program requirement. For each vendor relationship:
- Designate a technical contact on your team responsible for that vendor's products
- Establish a formal channel for receiving security advisories — vendor portals, mailing lists, or RSS feeds
- Develop a standard process for requesting patch qualification information when a new advisory affects your environment
- Negotiate contractual provisions requiring vendors to notify you of security vulnerabilities in supported products within a defined timeframe
Vendor coordination is especially critical for patch qualification. The question is not just "does a patch exist?" but "is this patch qualified for my specific hardware model, installed firmware version, and operating configuration?" The answer requires vendor engagement and often takes longer than the urgency of the vulnerability warrants. Build this lead time into your SLAs.
Step 6: Track and Measure
A patch management program without metrics is invisible to leadership and cannot demonstrate improvement over time. Track:
- Time to assess: From vulnerability disclosure to determination of whether your environment is affected — target 48 hours for critical/high, five business days for medium
- Time to remediate or mitigate: From confirmation of impact to patch application or compensating control implementation — measured per priority tier
- Compensating control coverage: Of all vulnerabilities where patching is not currently feasible, what percentage have documented and implemented compensating controls?
- Overdue items: How many vulnerabilities have exceeded their target SLA without resolution? This is the metric that drives escalation and resource allocation decisions.
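The overdue-items metric in particular is easy to compute automatically. A minimal sketch, assuming a simple per-tier SLA table in calendar days (an approximation of the business-day targets above) and hypothetical tracking records:

```python
from datetime import date

# Illustrative SLA targets in calendar days, per priority tier.
SLA_DAYS = {"critical": 2, "high": 2, "medium": 5}

def overdue_items(items, today):
    """Unresolved items past their SLA -- the metric that drives
    escalation and resource-allocation decisions."""
    out = []
    for it in items:
        age_days = (today - it["disclosed"]).days
        if it["resolved"] is None and age_days > SLA_DAYS[it["priority"]]:
            out.append(it["id"])
    return out

# Hypothetical tracking records.
items = [
    {"id": "ICSA-25-001", "priority": "critical",
     "disclosed": date(2025, 3, 1), "resolved": None},
    {"id": "ICSA-25-002", "priority": "medium",
     "disclosed": date(2025, 3, 10), "resolved": date(2025, 3, 12)},
]
# overdue_items(items, date(2025, 3, 10)) -> ["ICSA-25-001"]
```

Running this against the tracking system on a schedule, rather than compiling the number by hand before each quarterly report, keeps the escalation metric continuously honest.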
Report these metrics to leadership quarterly. Use them to justify investment in additional maintenance windows, lab environments for patch testing, or vendor contract modifications.
When Patching Is Not the Answer
Some OT environments include equipment that simply cannot be patched — ever. End-of-life PLCs and HMIs running discontinued operating systems, field devices with immutable firmware, third-party systems with no update mechanism. For these assets, the security strategy is:
Maximum isolation. Place the device in its own network zone with the tightest firewall rules that maintain operational function. Reduce the protocol surface to exactly the communications required for process operation. Block everything else.
Continuous monitoring. Deploy monitoring with tuned alerting for the zone containing unpatchable assets. Any deviation from the established communication baseline should trigger investigation.
Replacement roadmap. Document the unpatchable assets and develop a formal replacement roadmap with funding requirements, timeline, and risk justification. This transforms an open-ended liability into a managed capital project.
Insurance disclosure. Ensure that your cyber insurance program is aware of material unpatchable vulnerabilities. Non-disclosure can affect coverage at claim time.
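The monitoring posture for an unpatchable-asset zone reduces to an allowlist comparison: define the baseline flows, flag everything else. A minimal sketch in which the addresses, ports, and flows are entirely hypothetical:

```python
# Established communication baseline for the restricted zone, as
# (source IP, destination IP, destination port) tuples. All values
# here are hypothetical examples.
BASELINE = {
    ("10.3.1.5", "10.3.1.20", 502),   # HMI -> legacy PLC, Modbus/TCP
    ("10.3.1.8", "10.3.1.20", 502),   # engineering workstation -> PLC
}

def flag_deviations(observed_flows):
    """Anything outside the baseline is grounds for investigation,
    not automatic blocking -- in OT, alert first, then verify."""
    return [flow for flow in observed_flows if flow not in BASELINE]

observed = [
    ("10.3.1.5", "10.3.1.20", 502),     # expected traffic
    ("10.3.1.99", "10.3.1.20", 44818),  # unknown host, unexpected port
]
```

Because unpatchable devices should have an unusually small and stable protocol surface, the baseline stays short and alert fatigue stays low, exactly the property that makes this approach workable where it would drown an IT network in noise.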
Changing the Culture
The deepest challenge in OT patch management is cultural. Operations teams whose performance is measured on uptime and production output have legitimate concerns about changes that could disrupt either. Security teams that approach OT patching with IT urgency will create conflict that slows everything down.
The bridge is a shared risk framework: translating vulnerability exposure into production risk and regulatory exposure that operations leadership can understand and weigh against the risk of a patching event. When the conversation moves from "security compliance" to "unpatched CVE-2024-XXXX provides an attacker who reaches this network segment the ability to modify PLC logic without authentication and without any logged evidence," operations leadership begins to engage differently.
Beacon Security helps industrial organizations build OT patch management programs that balance security requirements against operational realities. We provide vulnerability correlation, risk-based prioritization frameworks, and compensating control design for environments where traditional patching is not viable. Contact us to build a program suited to your environment.

