Introduction
Deploying an OT security monitoring platform is one of the highest-value security investments available to industrial organizations. Without monitoring, security events occur invisibly: an attacker who gains access to an OT network, conducts reconnaissance for weeks, and then modifies PLC logic leaves no observable evidence behind. With a properly deployed passive monitoring platform, every one of those activities generates observable data — and potentially, actionable alerts.
The gap between "deployed" and "properly deployed" is substantial. An OT monitoring platform that is capturing traffic from the wrong locations, has protocol decoders configured for the wrong industrial protocols, has no tuned alert rules, and is not integrated into any investigation workflow provides minimal actual security value despite representing significant capital investment.
This guide provides a detailed deployment methodology — from platform selection through sensor placement, configuration, baseline establishment, alert tuning, SIEM integration, and multi-site scaling. It is designed for OT security engineers and IT/OT convergence teams who are responsible for deploying and operating OT monitoring infrastructure.
Platform Selection Criteria
Core Capability Requirements
Before evaluating specific platforms, define the capability requirements for your environment:
Protocol coverage: Which industrial protocols are present in your environment? A platform without native decoders for your specific protocols provides limited value. Common protocols to verify:
- Modbus TCP / Modbus RTU over TCP (nearly universal in OT environments)
- EtherNet/IP / CIP (Rockwell environments; common in discrete manufacturing)
- PROFINET (Siemens environments; common in European manufacturing)
- DNP3 (utilities — electric, water, oil and gas)
- IEC 104 / IEC 101 (electric utilities, substation automation)
- IEC 61850 / GOOSE / MMS (substation automation, protective relay communication)
- OPC DA / OPC UA (data server communications, historian integration)
- BACnet/IP (building management, HVAC integrated with OT)
- PROFIBUS DP (legacy Siemens; requires gateway for IP visibility)
- Modbus RTU (legacy serial; requires gateway or serial tap for visibility)
Vendors claim broad protocol support — verify the depth of support, not just the list. A "Modbus" decoder that only identifies the protocol without function-code-level inspection provides limited detection capability.
Asset discovery and inventory: The platform should passively build and maintain an asset inventory from observed network traffic, identifying device type, vendor, model, and firmware version where protocol fingerprinting supports it.
Behavioral analytics: Static signature-based detection alone is insufficient for OT. The platform must support behavioral baselines — learning the normal communication patterns of the environment and alerting when those patterns change.
Threat intelligence and signatures: Does the platform include OT-specific threat intelligence and signatures for known malware families (TRITON, INDUSTROYER, PIPEDREAM-related indicators) and known attack techniques?
ATT&CK for ICS alignment: Are detections mapped to the MITRE ATT&CK for ICS framework? This enables coverage gap analysis and supports structured reporting.
SIEM integration: Does the platform support log forwarding to your SIEM platform (Splunk, Microsoft Sentinel, IBM QRadar, Chronicle) via CEF, Syslog, or native connector?
API and integration ecosystem: Does the platform provide an API for integration with vulnerability management, asset management, and incident response tooling?
Scalability: How many devices and how many sites can the platform support? How is data from multiple sites centralized?
Platform Comparison Overview
| Platform | Strength | Best For |
|---|---|---|
| Dragos | Deep threat intelligence, ATT&CK coverage, strong detection engineering | Critical infrastructure operators with high threat profile |
| Claroty | Broad protocol support, strong asset inventory, enterprise integration | Large multi-site manufacturers, enterprise OT programs |
| Nozomi Networks | OT + IoT combined coverage, strong scalability | Multi-site deployments, environments with mixed OT/IIoT |
| Microsoft Defender for IoT | Native Sentinel integration, Azure connectivity | Organizations already in Microsoft security stack |
| Tenable OT Security | Combined monitoring + vulnerability management | Teams prioritizing vulnerability visibility alongside threat detection |
Do not select a platform based on marketing literature alone. Request a proof-of-concept deployment in your environment — ideally a representative site — and evaluate the platform against your actual protocol mix, device population, and alert quality.
Architecture: Centralized vs. Distributed Management
For single-site deployments, a single platform instance typically suffices. For multi-site organizations, consider:
Centralized architecture: All sensors forward data to a central platform instance. Analysts work from a single console. Configuration is centrally managed. This simplifies operations but creates WAN bandwidth requirements for forwarding raw traffic or decoded event data from remote sites.
Distributed architecture: Each site has its own platform instance. A central aggregation layer collects normalized events (not raw traffic) from all sites for cross-site correlation. This reduces WAN bandwidth but increases the complexity of managing multiple platform instances.
Most enterprise OT platforms support hierarchical architectures where site-level instances handle local data processing and forward events to a centralized management console. This is the recommended approach for organizations with more than three or four sites.
Sensor Placement Strategy
The most consequential deployment decision after platform selection is sensor placement: where on the network to collect the traffic that the platform will analyze.
TAP vs. SPAN: The Fundamental Choice
Network TAP (Test Access Point): A hardware device inserted in-line on a network link that passively copies all traffic from that link to an output port connected to the monitoring sensor. TAPs are passive — they have no IP address, generate no traffic, and cannot be remotely exploited. They provide 100% packet capture with no dropped frames. TAPs are the preferred sensor collection method for OT environments.
Types of TAPs:
- Passive fiber TAP: A fiber-based TAP that uses optical splitting to copy light from the fiber to the monitoring output. Completely passive — no power required, no electronics.
- Active copper TAP: For copper (RJ45) links, an active TAP regenerates the signal to provide the copy output. Requires power and introduces a potential failure point: loss of TAP power can briefly disrupt the tapped link unless the TAP fails to wire.
- Aggregation TAP: Combines traffic from multiple links into a single monitoring output. Useful for monitoring multiple links with a single sensor interface.
SPAN port (Switched Port Analyzer): A switch configuration that copies traffic from one or more switch ports or VLANs to a designated monitoring port. SPAN ports use existing switch infrastructure without additional hardware but have limitations:
- SPAN ports may drop packets under high switch load — the monitoring copy is lower priority than forwarded traffic
- SPAN ports can introduce additional load on the switch CPU
- Misconfigurations (incorrect VLAN selection, asymmetric SPAN) can result in incomplete traffic capture
- Some managed switches have limitations on what traffic can be SPANned (e.g., intra-VLAN traffic only on certain platforms)
Recommendation: Use TAPs at the most critical monitoring points (IT/OT DMZ uplinks, zone boundary uplinks). Use SPAN ports for supplementary collection within zones where TAP installation is impractical.
Coverage Requirements: What Must Be Monitored
Prioritize sensor placement based on security value:
Priority 1 — IT/OT Boundary: Network links between the IT network and the OT DMZ, and between the DMZ and the OT network. Traffic at this boundary has the highest security significance — all IT-to-OT communications, and any lateral movement crossing the IT/OT divide, are visible here.
Priority 2 — Zone Boundary Uplinks: Network links at the boundaries between OT security zones. Traffic between the supervisory zone (SCADA, historians, HMIs) and the control zone (PLCs, DCS) should be monitored. Traffic entering or leaving the safety zone should be monitored.
Priority 3 — Engineering Workstation Network Segment: Engineering workstations are high-value targets and initiate programming sessions to PLCs. Monitoring traffic from the engineering workstation segment provides visibility into any unauthorized programming activity.
Priority 4 — Control Zone Backbones: Core switch uplinks in the primary control zone provide visibility into the majority of control network traffic.
Priority 5 — Remote Site Connections: WAN links connecting remote sites (unmanned substations, remote RTU sites, offshore platforms) to the central SCADA network.
Sensor Sizing
OT monitoring sensors must have sufficient processing capacity for the traffic volumes at their collection point. Underpowered sensors drop packets and provide incomplete traffic capture, which degrades detection coverage.
Calculate expected traffic volumes at each monitoring point:
- Polling-based protocols (Modbus, DNP3) generate regular, predictable traffic volumes
- Process historian data collection from many PLCs can generate substantial traffic
- During engineering sessions (vendor maintenance, configuration changes), traffic spikes significantly
Sensor vendors publish throughput specifications; verify that the specified sensor hardware exceeds the peak traffic volume at the deployment point. Deploy with 50% headroom above typical peak volumes.
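As a rough sizing aid, the calculation above can be sketched in a few lines. The device counts, poll rates, frame sizes, and spike figure below are illustrative assumptions; substitute measured values from your own environment.

```python
# Sketch: estimate required sensor throughput at a monitoring point.
# All inputs are hypothetical examples, not vendor figures.

def required_sensor_throughput_mbps(
    polled_devices: int,
    polls_per_second: float,
    avg_frame_bytes: int,
    engineering_spike_mbps: float,
    headroom: float = 0.5,  # 50% headroom above typical peak, per the guidance above
) -> float:
    """Return the minimum sensor throughput (Mbps) for a monitoring point."""
    # One request plus one response per poll cycle, converted from bytes/s to Mbit/s.
    polling_mbps = polled_devices * polls_per_second * avg_frame_bytes * 2 * 8 / 1_000_000
    peak_mbps = polling_mbps + engineering_spike_mbps
    return peak_mbps * (1 + headroom)

# Example: 120 Modbus devices polled once per second with ~260-byte frames,
# plus an assumed 40 Mbps spike during engineering sessions.
print(round(required_sensor_throughput_mbps(120, 1.0, 260, 40.0), 1))  # → 60.7
```

Note that the engineering-session spike, not steady-state polling, typically dominates the peak figure, which is why sizing against average traffic alone undersizes the sensor.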
Protocol Decoder Configuration
Protocol decoders translate raw network packets into structured industrial protocol data that the platform can analyze and alert on. Configuration steps vary by platform but follow common principles.
Protocol Auto-Detection vs. Manual Configuration
Most platforms support protocol auto-detection: the platform observes traffic on known protocol ports and attempts to identify the protocol. Auto-detection is a starting point, not a final configuration. OT environments often include:
- Protocols running on non-standard ports (Modbus on a port other than TCP/502)
- Multiple protocols sharing the same port (some vendor protocols layered on common ports)
- Vendor-proprietary protocols with no standard port designation
After auto-detection, review the protocol configuration and manually configure any protocols that were not automatically detected or were identified on non-standard ports.
Configuring OT-Specific Decoder Parameters
For each detected protocol, configure decoder parameters that enable the deepest inspection and most context-rich detection:
Modbus TCP: Configure the list of expected unit identifiers (device slave addresses) for each monitored segment. Alert generation for Modbus traffic to unknown unit identifiers helps detect reconnaissance and unauthorized device access.
EtherNet/IP CIP: Configure the list of expected Identity Objects (device descriptions, IP addresses) for the monitored segment. Deviation from the known device list triggers new device detection. Configure permitted service code ranges for each device role (SCADA server, engineering workstation, operator HMI) to enable detection of T0855 (Unauthorized Command Message).
DNP3: Configure expected station addresses for master and outstation devices. Alert on unsolicited responses from unexpected stations and on operations from unexpected master addresses.
OPC UA: Configure expected server endpoint URLs and certificate subjects. Alert on connections to unexpected OPC UA endpoints and on certificate validation failures (potential MITM).
IEC 104 / IEC 101: Configure ASDU address ranges and permitted ASDU types for your substation network. Alert on control commands (e.g., C_SC_NA_1 single command, C_DC_NA_1 double command) from unexpected sources.
Vendor Protocol Considerations
Many OT vendors use proprietary protocols or proprietary extensions to standard protocols:
- Siemens S7 (ISO-TSAP over TCP/102): Siemens-specific protocol; requires S7 decoder support. Provides visibility into TIA Portal connections and S7 read/write operations.
- Rockwell PCCC: Legacy Rockwell command set from the serial DF1 era, carried encapsulated over EtherNet/IP, still present in older installations. Requires PCCC decoder support.
- Honeywell FTE (Fault Tolerant Ethernet): Proprietary Honeywell redundant network architecture used for Experion controller communication.
- Foxboro MESH: Schneider-Foxboro proprietary control network protocol.
Verify platform support for vendor-specific protocols present in your environment before deployment. Gaps in vendor protocol coverage mean significant blind spots in monitoring coverage.
Baseline Learning Period Management
Behavioral detection in OT monitoring platforms requires an established baseline: the platform learns what "normal" looks like before it can alert on anomalies. Poorly managed baseline periods produce either poor detection coverage (too short, normal behaviors not yet observed) or excessive false positives (normal operational variations not captured in the baseline).
Baseline Period Planning
Minimum duration: 4 weeks. A shorter baseline will not capture the full range of normal operational variations including shift changes, batch cycle variations, and periodic vendor maintenance activities.
Recommended duration: 8-12 weeks for environments with complex operational patterns, seasonal processes, or infrequent maintenance cycles that must be included in the baseline to avoid false positive alerts.
Operational coverage requirements: The baseline must capture every distinct operational state that occurs in the environment:
- Normal production at varying rates
- Planned maintenance activities (engineering workstation sessions, vendor access)
- Process startups and shutdowns if these occur on a regular cycle
- Shift handover communication patterns
- Automated batch sequences if applicable
Coordinate with operations: Notify the operations team when the baseline learning period begins. Instruct them to perform all normal operational activities — including scheduled maintenance — during the baseline period. Abnormal activities (emergency repairs, non-standard configurations) should be noted in a baseline event log so they are not incorporated as "normal" patterns.
Baseline Quality Review
At the end of the baseline period, review the baseline before enabling behavioral alerting:
- Communication pair coverage: Review the complete list of communication pairs observed. Are all expected pairs present? Are there pairs that should not be present (unrecognized IP addresses communicating with PLCs)?
- Protocol distribution: Does the observed protocol distribution match engineering expectations? If Modbus traffic is expected on a segment but was not observed, the sensor may have missed traffic or the baseline period was too short.
- Traffic volume baseline: Review typical hourly and daily traffic volumes. The baseline should reflect the range from minimum (e.g., early morning off-shift) to maximum (peak production). If only one operational state was captured, the volume baseline is incomplete.
- Outlier communications: Identify any unexpected communication pairs in the baseline — in particular, unknown devices communicating with PLCs — and investigate each before accepting its traffic as normal.
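The communication pair review above is, at its core, a set comparison between what engineering expects and what the sensors observed. A minimal sketch, using hypothetical pair data (real platforms export observed pairs via their API or CSV export):

```python
# Sketch: compare baseline-observed communication pairs against the
# engineering-expected pair list. All addresses are illustrative.

expected_pairs = {
    ("10.10.1.5", "10.10.2.10", "modbus"),   # SCADA server -> PLC
    ("10.10.1.6", "10.10.2.10", "modbus"),   # backup SCADA -> PLC
}
observed_pairs = {
    ("10.10.1.5", "10.10.2.10", "modbus"),
    ("10.10.9.44", "10.10.2.10", "modbus"),  # unrecognized source talking to a PLC
}

# Expected but never seen: possible sensor coverage gap or too-short baseline.
missing = expected_pairs - observed_pairs
# Seen but not expected: investigate before accepting into the baseline.
unexpected = observed_pairs - expected_pairs

print(sorted(missing))     # → [('10.10.1.6', '10.10.2.10', 'modbus')]
print(sorted(unexpected))  # → [('10.10.9.44', '10.10.2.10', 'modbus')]
```

Both result sets require action: the `missing` set questions the sensor deployment, while the `unexpected` set questions the environment itself.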
Incremental Baseline Updates
After initial deployment, baseline updates will be needed as the environment legitimately changes:
- New devices installed and communicating
- New data flows added as part of integration projects
- Engineering workstation sessions for major logic changes
Most platforms support manual baseline updates that add specific new communication patterns to the baseline without restarting the full learning period. Use this feature for legitimate planned changes; do not use it to suppress investigation of unexpected changes.
Alert Tuning Methodology
Newly deployed OT monitoring platforms generate high volumes of alerts until tuning is complete. Untuned alert volumes overwhelm analysts, lead to alert fatigue, and cause true positives to be missed among false positive noise.
The Tuning Workflow
Phase 1 — Alert triage and categorization (weeks 1-4 post-baseline): Review every alert generated in the first four weeks. For each alert, classify:
- True positive: actual anomalous behavior requiring investigation
- False positive — tunable: legitimate behavior that generated a false alert, addressable by adjusting the alert rule
- False positive — acceptance required: alert on a legitimate but unusual pattern that cannot be easily distinguished from malicious behavior by rule tuning alone; accept the baseline behavior
Phase 2 — Rule adjustment: For tunable false positives, adjust the alert rule to reduce false positive rate while maintaining true positive sensitivity:
- Increase specificity: add conditions that filter out the false positive trigger pattern
- Adjust thresholds: change volume or frequency thresholds if the false positive is driven by legitimate activity volume
- Add exception lists: add known legitimate communication pairs to exception lists for rules that alert on "new communication"
Phase 3 — Alert priority calibration: Reassign alert priorities based on observed behavior:
- Escalate to high priority: alerts that have consistently been true positives in the tuning period
- Reduce to medium: alerts that are occasionally true positives but require investigation to confirm
- Reduce to low: alerts that have been consistently false positives after tuning, but that should remain active for periodic review
Phase 4 — Ongoing tuning: Alert tuning is continuous. New equipment, operational changes, and software updates regularly introduce new legitimate communication patterns that generate false positives. Establish a monthly tuning review in the SOC operations calendar.
False Positive Reduction Techniques
Communication pair whitelisting: Most platforms support explicit whitelisting of communication pairs (source, destination, protocol) that are known legitimate. Use whitelisting surgically — whitelist specific pairs, not broad subnets.
Time-based exceptions: If a specific alert type consistently fires during authorized maintenance windows, create a time-based exception that suppresses the alert during those windows while maintaining alerting at all other times.
Context enrichment: Enrich alerts with asset context (device type, zone, criticality) to enable priority-based filtering. An alert on a non-critical peripheral device may be lower priority than the same alert on a PLC directly controlling a critical process.
Protocol-specific threshold adjustment: Some alerts are triggered by volume thresholds (e.g., "abnormal number of Modbus requests per second"). Adjust thresholds to match the actual traffic volumes observed for normal operation.
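Two of the techniques above — surgical pair whitelisting and time-based exceptions — can be combined in a single suppression check. The pairs, window times, and alert shape below are hypothetical; real platforms implement this in their rule engine, but the logic is the same.

```python
# Sketch: suppress alerts for whitelisted pairs and for engineering-session
# alerts inside an authorized maintenance window. All values are examples.
from datetime import datetime, time

WHITELISTED_PAIRS = {("10.10.3.20", "10.10.2.11", "s7")}  # known engineering session
MAINTENANCE_WINDOW = (time(2, 0), time(4, 0))  # 02:00-04:00 authorized window

def should_suppress(alert: dict, now: datetime) -> bool:
    # Specific pairs only — never whitelist broad subnets.
    pair = (alert["src"], alert["dst"], alert["protocol"])
    if pair in WHITELISTED_PAIRS:
        return True
    # Time-based exception applies only to this alert type, only in-window.
    if alert["type"] == "engineering_session":
        start, end = MAINTENANCE_WINDOW
        if start <= now.time() <= end:
            return True
    return False

alert = {"src": "10.10.5.9", "dst": "10.10.2.11", "protocol": "s7",
         "type": "engineering_session"}
print(should_suppress(alert, datetime(2024, 6, 1, 3, 15)))  # → True (inside window)
print(should_suppress(alert, datetime(2024, 6, 1, 14, 0)))  # → False (outside window)
```

Note the asymmetry: the whitelist suppresses at all times but only for exact pairs, while the time window suppresses any source but only during the authorized interval.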
SIEM Integration Architecture
Event Forwarding Configuration
Forward OT monitoring platform events to the SIEM in normalized format. Most platforms support:
- CEF (Common Event Format) via Syslog
- JSON via Syslog or HTTP
- Native SIEM connectors (Splunk TA, Microsoft Sentinel data connector, QRadar DSM)
For normalized event forwarding:
- Verify that the forwarded events include sufficient context: device name, zone, protocol, source and destination, and the ATT&CK technique reference where applicable
- Test forwarding volume: at peak alert generation rates, confirm the SIEM ingest pipeline can handle the event rate without dropping events
- Configure separate SIEM index/data stream for OT events: enables OT-specific retention policies, access controls, and search optimization
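To make the forwarding checklist concrete, here is what a normalized OT event looks like in CEF and how it could be framed for UDP syslog. The vendor and product names, extension fields, and collector address are all hypothetical; platforms emit CEF natively, but building the string by hand is useful for testing the SIEM ingest pipeline end to end.

```python
# Sketch: format an OT event as CEF and frame it for syslog forwarding.
# Event content and collector address are illustrative examples.
import socket

def to_cef(event: dict) -> str:
    # CEF header: CEF:Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity|Extension
    ext = " ".join(f"{k}={v}" for k, v in event["extensions"].items())
    return (f"CEF:0|{event['vendor']}|{event['product']}|1.0|"
            f"{event['sig_id']}|{event['name']}|{event['severity']}|{ext}")

def forward(msg: str, collector: tuple[str, int]) -> None:
    """Send one CEF message over UDP syslog (<134> = facility local0, severity info)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(f"<134>{msg}".encode(), collector)

event = {
    "vendor": "ExampleOT", "product": "Sensor", "sig_id": "1004",
    "name": "New Modbus communication pair", "severity": 6,
    "extensions": {"src": "10.10.9.44", "dst": "10.10.2.10",
                   "cs1": "ControlZoneA", "cs1Label": "zone"},
}
msg = to_cef(event)
print(msg)
# forward(msg, ("192.0.2.10", 514))  # uncomment with your collector's address
```

The `cs1`/`cs1Label` custom-string pair carries the zone context that the checklist above requires — without it, the SIEM cannot apply zone-aware correlation or filtering.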
Cross-Domain Correlation Rules
The value of SIEM integration is cross-domain correlation — connecting OT events with IT events to identify attack patterns that span both environments:
High-value correlation rules:
Failed IT login → OT access: Multiple failed authentication attempts on IT systems (Active Directory, VPN) within a short window, followed by a new connection to an OT system from the same source IP. This pattern may indicate credential stuffing moving from IT to OT.
IT malware alert → OT workstation activity: An IT endpoint detection alert on an engineering workstation, followed within 24 hours by an unusual connection from that workstation to a PLC. Indicates potential engineering workstation compromise attempting lateral movement to controllers.
IT lateral movement → OT DMZ probe: Lateral movement indicators detected in IT (Mimikatz, PsExec activity), followed by connection attempts to OT DMZ addresses. Indicates adversary reconnaissance across the IT/OT boundary.
Unusual IT outbound traffic → OT communication change: Large unexpected data transfers to external destinations from IT-connected OT systems (historians, SCADA servers in the DMZ) correlated with communication pattern changes in the OT monitoring platform. Potential data exfiltration combined with cover-up activity.
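The first rule above can be sketched as a simple windowed correlation: count failed IT logins per source, then fire when a new OT connection arrives from the same source within the window. Event shapes and thresholds below are illustrative; in practice this is written as a SIEM rule, not standalone code.

```python
# Sketch: correlate failed IT logins with a subsequent new OT connection
# from the same source. Thresholds and event fields are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN_THRESHOLD = 5
WINDOW = timedelta(minutes=30)

def correlate(events: list[dict]) -> list[str]:
    failures = defaultdict(list)  # source IP -> timestamps of failed IT logins
    findings = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["type"] == "it_auth_failure":
            failures[ev["src"]].append(ev["ts"])
        elif ev["type"] == "new_ot_connection":
            recent = [t for t in failures[ev["src"]] if ev["ts"] - t <= WINDOW]
            if len(recent) >= FAILED_LOGIN_THRESHOLD:
                findings.append(f"{ev['src']}: {len(recent)} failed IT logins "
                                f"then OT connection to {ev['dst']}")
    return findings

# Six failed logins over six minutes, then an OT connection 20 minutes in:
t0 = datetime(2024, 6, 1, 10, 0)
events = ([{"type": "it_auth_failure", "src": "10.1.1.50", "ts": t0 + timedelta(minutes=i)}
           for i in range(6)] +
          [{"type": "new_ot_connection", "src": "10.1.1.50", "dst": "10.10.2.10",
            "ts": t0 + timedelta(minutes=20)}])
print(correlate(events))
```

The other three rules follow the same pattern with different trigger and consequence event types, which is why a consistent event schema across the IT and OT feeds matters more than any individual rule.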
Multi-Site Deployment Considerations
For organizations with OT monitoring at multiple facilities, deployment consistency and centralized visibility require additional planning.
Standardized Deployment Template
Develop a standard deployment template that defines:
- Sensor hardware specification per site traffic volume tier
- TAP placement requirements (required monitoring points per site architecture type)
- Protocol decoder configuration standards for your vendor ecosystem
- Standard alert rule set and priority definitions
- SIEM forwarding configuration
Apply the template consistently across all sites. Inconsistent deployments make cross-site comparison difficult and create monitoring coverage gaps at sites that deviated from the standard.
Bandwidth Planning for Centralized Architectures
In centralized architectures where site data is forwarded to a central console, plan bandwidth requirements:
- Most platforms forward decoded event data rather than raw packet captures to the central console — this significantly reduces bandwidth requirements compared to raw pcap forwarding
- Estimate event rates per site and multiply by the normalized event size for your platform
- Add 50% headroom for peak event rates (maintenance windows, incidents)
- Confirm that site WAN connections have sufficient capacity without impacting OT operational traffic
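The per-site estimate above is straightforward arithmetic. Event rate and normalized event size below are illustrative assumptions; measure both for your platform and sites.

```python
# Sketch: minimum WAN capacity for event forwarding from one site,
# with 50% headroom as recommended above. Inputs are example values.

def forwarding_bandwidth_kbps(events_per_second: float,
                              avg_event_bytes: int,
                              headroom: float = 0.5) -> float:
    """Minimum WAN capacity (kbps) for event forwarding from one site."""
    return events_per_second * avg_event_bytes * 8 / 1000 * (1 + headroom)

# Example: 50 normalized events/s averaging 1,500 bytes each.
print(forwarding_bandwidth_kbps(50, 1500))  # → 900.0
```

A figure like 900 kbps is trivial on most site links but can matter on the cellular and satellite connections discussed below, which is where the QoS and rate-limit controls become necessary.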
For sites on low-bandwidth connections (cellular, satellite), ensure event forwarding is prioritized but does not crowd out OT control traffic. Use QoS marking on event forwarding traffic or configure bandwidth limits on the forwarding channel.
Consistent Asset Context Across Sites
Multi-site asset inventories require consistent naming and zone taxonomy across sites. Establish standards for:
- Asset naming conventions that encode site, area, and device type
- Zone naming that is consistent across sites (same terminology for equivalent security zones)
- Criticality ratings that apply consistent criteria across sites
Consistent taxonomy enables cross-site search and analysis in the SIEM and the centralized monitoring console.
Conclusion
Deploying an OT monitoring platform is a multi-month process that produces durable, compounding security value when done correctly. The investment in proper sensor placement, thorough protocol configuration, careful baseline management, and systematic alert tuning determines whether the platform provides genuine detection capability or generates noise that analysts learn to ignore.
The monitoring program also delivers secondary benefits: the passive asset discovery builds the asset inventory needed for vulnerability management; the communication baselines inform firewall rule design; and the historical traffic data provides the forensic foundation for incident investigation. These benefits compound over time as the program matures.
Build the program with the disciplines described here from the start. Retrofitting discipline into a poorly deployed program is harder than deploying it correctly the first time.
Beacon Security provides OT monitoring platform deployment services, including sensor placement design, platform configuration, baseline management, alert tuning programs, and SIEM integration for industrial environments. Contact us to plan and execute your OT monitoring deployment.

