Introduction
The world of Operational Technology (OT) demands certainty. The systems that run our factories, power grids, and water treatment plants simply cannot tolerate the unpredictable nature of standard IT gear. The core issue? Determinism. Control systems must operate with predictable, fixed timing. Introducing security often brings the “best-effort” liability of the IT world – and that’s a risk OT can’t afford.
Ethernet’s “Best-Effort” Problem: The Secret Behind the Standard (IEEE 802.3)
For decades, Ethernet (IEEE 802.3) has served as the bedrock of our digital world, connecting everything from corporate servers to home routers. Yet, beneath its reliable veneer lies a fundamental truth: standard Ethernet is a “best-effort” technology, which stems directly from its original media access control method, CSMA/CD (Carrier Sense Multiple Access with Collision Detection) [https://www.cesarkallas.net/arquivos/livros/informatica/network/Ethernet%20Definite%20Guide.pdf]. This protocol is the reason your data, while likely to arrive, is never truly guaranteed.
As one analysis observes, the Ethernet MAC protocol does not provide a guaranteed data delivery service: “Like most other LAN systems, Ethernet does not provide strict guarantees for the reception of all data. Instead, the Ethernet MAC protocol makes a ‘best effort’ to deliver the frame without errors.” [https://www.cesarkallas.net/arquivos/livros/informatica/network/Ethernet%20Definite%20Guide.pdf] This is the central compromise of the standard: speed and simplicity over absolute certainty.
The Functional Proof: Dropped Frames and Variable Timing
The “best-effort” label is functionally proven by two critical characteristics of the IEEE 802.3 MAC layer:
- No Guarantee of Delivery: When two devices attempt to transmit simultaneously, a collision occurs. CSMA/CD manages this by having both devices halt transmission and wait a random amount of time before trying again. The crucial detail? If a frame exceeds its programmed retry limit (typically 16 attempts), the MAC layer silently drops it. No notification is sent to the source. The burden of noticing this loss falls entirely on higher-layer protocols, such as TCP, which must then retransmit the data.
- No Guaranteed Latency: Because a device must “sense” the cable and defer to any ongoing traffic or manage collisions, there is no maximum, predictable time a frame will take to reach its destination. The result is a system with variable and unpredictable latency under load. This is a common understanding, with literature noting: “It is an unreliable, best-effort, and connectionless packet delivery protocol.” [https://www.redbooks.ibm.com/redbooks/pdfs/sg245626.pdf]
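To make the backoff-and-drop behavior concrete, here is a minimal Python sketch of CSMA/CD's truncated binary exponential backoff and its 16-attempt retry limit. The function names and the fixed collision-probability model are illustrative only, not part of any real MAC implementation:

```python
import random

MAX_ATTEMPTS = 16     # after 16 failed attempts the frame is silently dropped
SLOT_TIME_US = 51.2   # slot time for 10 Mbit/s Ethernet, in microseconds

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff: wait a random number of
    slot times in 0 .. 2^k - 1, where k = min(attempt, 10)."""
    return random.randrange(2 ** min(attempt, 10))

def try_send(collision_probability: float) -> bool:
    """Return True if the frame is eventually sent, False if the MAC
    layer gives up and drops it without notifying the sender."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if random.random() > collision_probability:
            return True                               # no collision: sent
        wait_us = backoff_slots(attempt) * SLOT_TIME_US
        # (a real NIC would idle for `wait_us` microseconds here)
    return False                                      # retry limit: silent drop

random.seed(1)
delivered = sum(try_send(0.9) for _ in range(10_000))
print(f"{delivered}/10000 frames delivered under heavy contention")
```

Note that the sender receives no error signal on the `False` path; only a higher-layer protocol such as TCP would notice the loss and retransmit.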
This lack of determinism is why, for many years, mission-critical systems in industrial and automotive fields have not used standard Ethernet. Their control loops could not tolerate the uncertainty of a “best-effort” service in which there is “no guarantee when any packet will arrive at its destination, and they may well be out of sequence” [https://doc.lagout.org/network//Practical%20TCP%20IP%20and%20Ethernet%20Networking.pdf].
Why Industrial Protocols Abandoned Ethernet’s “Best-Effort”
The world of Operational Technology (OT) – factories, power grids, and automated machinery – cannot tolerate unpredictable latency or dropped packets. In these environments, a delay of milliseconds can lead to catastrophic failures, damaged products, or safety hazards [https://en.wikipedia.org/wiki/Industrial_Ethernet].
To overcome the fundamental non-determinism of the standard, several Ethernet-based industrial protocols were specifically engineered to enforce strict timing and guaranteed delivery. They achieve determinism and low latency by bypassing, modifying, or strictly scheduling Ethernet’s core functions. We will look at four common protocols: EtherNet/IP, PROFINET, EtherCAT, and Modbus TCP/IP.
Here is how each of these key protocols achieves its guarantees.
EtherNet/IP (Industrial Protocol)
EtherNet/IP leverages standard, unmodified Ethernet and the TCP/IP stack, but achieves determinism through application-layer traffic management and prioritized Quality of Service (QoS).
- Messaging: EtherNet/IP uses Implicit Messaging (I/O data) for control traffic, which is scheduled on a fixed Requested Packet Interval (RPI). The RPI is a timer that ensures data is sent predictably, whether the data has changed or not [https://www.qfautomation.com/uploads/product-manual/Application%20Note%20-%20Common%20Terms%20in%20EtherNetIPTM.pdf]. In contrast, Standard Ethernet sends data only when triggered, leading to unpredictable bursts; EtherNet/IP sends at a guaranteed interval [https://library.automationdirect.com/ethernetip-implicit-vs-explicit-messaging/].
- Prioritization: EtherNet/IP relies on IEEE 802.1Q QoS tagging to mark I/O data frames as high priority [https://www.keyence.com/ss/products/controls/network/fieldnetwork/e_ip.jsp]. Switches process and forward these frames ahead of non-control (explicit) traffic [https://www.tccomm.com/Literature/Literature/Ethernet-Network-White-Papers/QoS-Prioritization]. Standard Ethernet treats all data frames equally, making control data vulnerable to delays caused by large, low-priority data transfers.
- Synchronization: While it can use standard network time (NTP), its determinism is primarily achieved by the strict, application-enforced RPI timer on each device [https://www.qfautomation.com/uploads/product-manual/Application%20Note%20-%20Common%20Terms%20in%20EtherNetIPTM.pdf]. For demanding motion applications, EtherNet/IP utilizes CIP Sync based on IEEE 1588 PTP for sub-microsecond synchronization [https://www.analog.com/en/resources/technical-articles/time-synchronization-and-real-time-performance-in-ethernet-solutions.html].
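As an illustration of the RPI concept, the following Python sketch transmits an I/O image on a fixed interval whether or not the data changed. The function names (`run_rpi_cycle`, `produce`) are hypothetical and not part of any EtherNet/IP stack:

```python
import time

def run_rpi_cycle(produce, rpi_ms: float, cycles: int) -> list:
    """Transmit the current I/O image every `rpi_ms` milliseconds,
    whether or not the data changed, and return the send timestamps."""
    interval = rpi_ms / 1000.0
    next_deadline = time.monotonic()
    stamps = []
    for _ in range(cycles):
        produce()                               # transmit current I/O image
        stamps.append(time.monotonic())
        next_deadline += interval               # fixed schedule avoids drift
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)                   # wait out the rest of the RPI
    return stamps

sent = []
stamps = run_rpi_cycle(lambda: sent.append(len(sent)), rpi_ms=10, cycles=5)
```

The key design point is that the deadline advances by a fixed step rather than being recomputed from "now", so transmissions stay on the original schedule even if one cycle runs slightly late.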
PROFINET (Process Field Network)
PROFINET uses a two-tiered approach to achieve determinism: it uses standard Ethernet for non-critical data, while employing specialized mechanisms to guarantee real-time performance.
- Real-Time (RT): The standard PROFINET RT class handles up to 90% of industrial applications by using prioritized Ethernet frames (802.1Q) and bypassing the slow TCP/IP layers for cyclic I/O data [https://www.industry.siemens.com/topics/global/en/industrial-communication/profinet/real-time-communication/Pages/real-time-communication.aspx]. By bypassing layers 3 and 4, RT messaging avoids the operating system overhead and latency associated with IP addressing and TCP handshakes [https://profinet.com/technology/performance/real-time-performance].
- Isochronous Real-Time (IRT): For motion control, IRT uses a time-slotted access method. IRT traffic is given exclusive reserved time windows within the cycle, ensuring that critical data is never delayed by non-critical data [https://en.wikipedia.org/wiki/PROFINET]. IRT traffic is protected from collision and contention by reserving bandwidth before any best-effort or other standard traffic is allowed to transmit [https://profinet.com/technology/performance/isochronous-real-time].
- Topology: Cut-through switching is often utilized in PROFINET networks, allowing a switch to begin forwarding a frame as soon as the destination address is read, further reducing propagation delay (latency) [https://www.hilscher.com/fileadmin/user_upload/products/downloads/PROFINET_Stack_Manual_en_V310.pdf]. Standard Ethernet generally uses Store-and-Forward switching, which waits to receive the entire frame before processing and forwarding it, adding unnecessary latency.
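The priority mechanism PROFINET RT relies on is an ordinary IEEE 802.1Q tag. The following Python sketch packs one by hand; the PCP value of 6 and EtherType 0x8892 reflect common PROFINET RT usage, and the MAC addresses and payload are placeholders:

```python
import struct

def vlan_tag(pcp: int, vid: int, dei: int = 0) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag: TPID 0x8100 followed by the
    TCI word -- 3-bit priority (PCP), 1-bit DEI, 12-bit VLAN ID."""
    assert 0 <= pcp <= 7 and 0 <= vid <= 0xFFF
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

def tagged_frame(dst: bytes, src: bytes, ethertype: int,
                 payload: bytes, pcp: int, vid: int) -> bytes:
    """Place the 802.1Q tag between the source MAC and the EtherType,
    which is where a priority-aware switch looks for it."""
    return dst + src + vlan_tag(pcp, vid) + struct.pack("!H", ethertype) + payload

# Cyclic I/O data tagged with PCP 6 under EtherType 0x8892
# (placeholder MAC addresses and an all-zero 40-byte payload):
frame = tagged_frame(b"\x01\x0e\xcf\x00\x00\x00", b"\x02\x00\x00\x00\x00\x01",
                     0x8892, b"\x00" * 40, pcp=6, vid=0)
```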
EtherCAT (Ethernet for Control Automation Technology)
EtherCAT takes the most radical approach, completely redefining how data is handled at the hardware level to eliminate the delays introduced by network components (switches).
- Processing on the Fly: Instead of switches and separate packets, EtherCAT uses a single master-generated frame that passes through all slave devices in a loop or line. The slave devices process the frame on the fly, with only a few nanoseconds of delay [https://en.wikipedia.org/wiki/EtherCAT]. This eliminates the need for multiple frames, buffers, and all switching decisions. The entire network cycle is completed with the delay of one frame’s travel time [https://www.beckhoff.com/en-us/products/i-o/ethercat-terminal/ethercat-technology/].
- Topology: Uses a physical ring or line topology that enables the single-frame processing model and is inherently simpler than a complex switched Ethernet tree. Standard Ethernet relies on switches and routers, where each node-to-node hop introduces processing delay and the potential for queuing/collision issues.
- Synchronization: The protocol incorporates a highly accurate, distributed clock mechanism where the master ensures all slave clocks are synchronized to a sub-microsecond level, guaranteeing simultaneous action across all devices [https://en.wikipedia.org/wiki/EtherCAT]. Standard Ethernet has no integrated clock synchronization, and data arrival is governed by unpredictable network conditions.
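A toy model of the single-frame cycle may help: in the Python sketch below, one frame passes through each "slave" in line order, and each slave reads and rewrites only its own slice of the process-data image. The slave behavior (incrementing bytes) is purely illustrative:

```python
def ethercat_cycle(frame: bytearray, slaves) -> bytearray:
    """Model one EtherCAT cycle: a single master-generated frame travels
    through every slave in line order, and each slave rewrites only its
    own slice of the process-data image as the frame passes through."""
    for offset, size, slave in slaves:
        chunk = bytes(frame[offset:offset + size])
        frame[offset:offset + size] = slave(chunk)   # process on the fly
    return frame            # frame returns to the master with fresh data

def make_slave(delta: int):
    """Illustrative slave: echoes its slice with every byte incremented."""
    return lambda data: bytes((b + delta) & 0xFF for b in data)

image = bytearray(6)        # 2 bytes of process data per slave
slaves = [(0, 2, make_slave(1)), (2, 2, make_slave(2)), (4, 2, make_slave(3))]
result = ethercat_cycle(image, slaves)   # b'\x01\x01\x02\x02\x03\x03'
```

Contrast this with switched Ethernet, where each exchange would be a separate frame subject to queuing at every hop; here all slaves are serviced in one frame traversal.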
Modbus TCP/IP (The Simplest Approach to Ethernet)
Modbus TCP/IP is one of the most widely used industrial protocols, but it is unique on this list because it does not employ special, deterministic mechanisms to overcome the limitations of best-effort Ethernet. Instead, its determinism is entirely dependent on network design.
- Protocol Stack: Modbus TCP/IP is simply the legacy Modbus protocol framed directly within the standard TCP/IP stack. It relies on TCP’s guaranteed delivery (retransmission on loss) for reliability [https://www.modbustools.com/modbus.html]. It fully embraces the TCP/IP stack, making it easy to route and firewall, but inheriting the variable latency caused by queuing, buffering, and TCP’s retransmission timeouts [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
- Messaging Model: Operates as a simple client-server (master-slave) polling model. This synchronous nature provides a semblance of predictability [https://en.wikipedia.org/wiki/Modbus]. Unlike PROFINET or EtherCAT, which use scheduled data, Modbus timing is entirely dependent on the polling cycle and the latency of the underlying best-effort network [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
- Achieving Performance: To achieve low latency, Modbus TCP/IP requires network isolation (VLANs, dedicated switches) and over-provisioning of bandwidth. Performance is a function of network management, not protocol design. Because it does not modify the Ethernet standard, its functional performance is limited by the contention, queuing, and buffering behavior that the other protocols are designed to bypass.
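Because Modbus TCP/IP is just the Modbus PDU carried in a standard TCP stream, a request can be assembled with nothing more than byte packing. The sketch below builds a "Read Holding Registers" (function code 0x03) request with its MBAP header; the transaction and unit IDs are arbitrary example values:

```python
import struct

def modbus_read_holding(tid: int, unit: int, addr: int, count: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function code 0x03)
    request: a 7-byte MBAP header (transaction id, protocol id 0, length,
    unit id) followed by the 5-byte PDU (function, start address, count)."""
    pdu = struct.pack("!BHH", 0x03, addr, count)
    mbap = struct.pack("!HHHB", tid, 0x0000, len(pdu) + 1, unit)
    return mbap + pdu

req = modbus_read_holding(tid=1, unit=1, addr=0, count=10)
# 12 bytes on the wire; when the reply arrives is left entirely to TCP/IP.
```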
In summary, while standard Ethernet is content to make a “best effort” and leave reliability to the applications, these industrial protocols had to engineer deterministic behavior into the network layer to meet the strict timing demands of real-world control systems. Modbus TCP/IP, as the exception, simply layers a reliable application (TCP) over the best-effort foundation, requiring manual network engineering to achieve a practical level of determinism.
Impact of High Latency
In the industrial environment, high latency and jitter can have a devastating impact. Let’s look at a few implications of latency:
- Degraded Control Cycle Time: High latency directly slows down the control loop. If a Programmable Logic Controller (PLC) waits too long for sensor data, the resulting control command is delayed, preventing the system from reacting fast enough to changes in the physical process. This is the fundamental concern for real-time control [https://www.motioncontroltips.com/what-are-latency-and-jitter-why-are-they-important-to-industrial-networks/].
- System Instability: In closed-loop control applications (e.g., temperature control, velocity control), a long, uncompensated time delay can cause the control algorithm to overcompensate or oscillate, leading to instability, decreased precision, and product quality issues. These artifacts, such as delay and jitter, degrade the real performance of control systems [https://www.researchgate.net/publication/224647008_Jitter_Evaluation_of_Real-Time_Control_Systems].
- Safety Hazards: For critical, event-driven processes (e.g., an emergency stop), high latency means a delay in executing the safety command, which can lead to equipment damage or injury. The lack of guaranteed delivery and timing in standard Ethernet is precisely why mission-critical and safety systems often avoid it, or use specialized deterministic protocols [https://en.wikipedia.org/wiki/Industrial_Ethernet].
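The instability effect can be demonstrated with a toy simulation: a proportional controller driving a simple integrator plant, where the measurement reaching the controller is `delay_steps` cycles old. The plant model and gain are invented purely for illustration:

```python
def simulate(delay_steps: int, gain: float = 1.2, steps: int = 60):
    """Proportional control of a discrete integrator plant where the
    sensor reading reaching the controller is `delay_steps` cycles old,
    modelling network latency in the sensor-to-PLC path."""
    setpoint, x = 1.0, 0.0
    history = [0.0] * (delay_steps + 1)   # queue of delayed measurements
    trace = []
    for _ in range(steps):
        measured = history[0]             # stale sensor value
        u = gain * (setpoint - measured)  # proportional command
        x += 0.5 * u                      # plant integrates the command
        history = history[1:] + [x]
        trace.append(x)
    return trace

# With fresh feedback the loop settles toward the setpoint without
# overshoot; with three extra cycles of delay it overshoots badly and
# oscillates, because the controller keeps acting on stale data.
fresh, delayed = simulate(0), simulate(3)
```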
Why Jitter is an Issue
Jitter is the variation in latency – the inconsistency in the delay between the arrival of successive data packets. It is often more detrimental than a high, but consistent, latency value [https://www.motioncontroltips.com/what-are-latency-and-jitter-why-are-they-important-to-industrial-networks/]. Jitter measures the variability in the time it takes for data packets to arrive [https://en.wikipedia.org/wiki/Jitter, https://medium.com/@digital_samba/jitter-explained-causes-effects-and-mitigation-in-networking-cfa200342a5d], and in industrial automation, it is defined as the deviation between the time a network device is required to send a message and the time it actually sends it [https://www.dosupply.com/tech/2023/01/09/what-you-need-to-know-about-jitter-in-industrial-automation/]. Jitter has several negative impacts:
- Loss of Synchronization: For coordinated systems like motion control (e.g., in robotics or CNC machines), devices must operate with precise timing relative to each other. High jitter causes messages to arrive at irregular intervals, breaking the synchronization and leading to jerky, imprecise, or failed movement [https://www.motioncontroltips.com/what-are-latency-and-jitter-why-are-they-important-to-industrial-networks/]. This is because jitter causes the system to behave in a non-periodic manner, leading to degraded performance compared to the expected response [https://www.researchgate.net/publication/224647008_Jitter_Evaluation_of_Real-Time_Control_Systems, https://portal.research.lu.se/files/6071481/8866183.pdf]. For time-sensitive industrial applications, performance is adversely impacted by jitter [http://www.diva-portal.org/smash/record.jsf?pid=diva2:1558986].
- Data Integrity and Ordering: Excessive jitter causes data packets to arrive at their destination out of order or in bursts. The receiving device must use a large jitter buffer to re-sequence them. If the buffer is too small, packets are lost; if it’s too large, it adds even more latency [https://www.dnsstuff.com/jitter-packet-loss-and-latency-in-network-performance]. This delay variability means packets can arrive at the destination in the wrong order or in “groups” that cause delays in processing [https://www.motioncontroltips.com/what-are-latency-and-jitter-why-are-they-important-to-industrial-networks/, https://www.dosupply.com/tech/2023/01/09/what-you-need-to-know-about-jitter-in-industrial-automation/], potentially leading to incomprehensible communications [https://www.dosupply.com/tech/2023/01/09/what-you-need-to-know-about-jitter-in-industrial-automation/].
- Timeouts and Communication Failure: Industrial controllers are often configured with strict timeout periods. If the variation in message arrival time (jitter) exceeds these limits, the connection is declared lost, halting the process and potentially triggering an emergency shutdown. Network jitter can lead to timeouts in industrial automation, where an application polls for a predetermined length of time before the connection is declared lost [https://www.dosupply.com/tech/2023/01/09/what-you-need-to-know-about-jitter-in-industrial-automation/]. An overly busy network or NIC, which generates high jitter, can cause CIP connection timeouts or skipped replies in Industrial Ethernet protocols like EtherNet/IP [https://literature.rockwellautomation.com/idc/groups/literature/documents/at/enet-at003_-en-p.pdf].
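A simple way to quantify jitter is to look at the spread of packet inter-arrival times. The Python sketch below compares a steady 10 ms cycle against the same cycle disturbed by queuing delays; the timestamps are fabricated for illustration:

```python
from statistics import mean, pstdev

def interarrival_jitter(arrivals_ms):
    """Return (mean gap, jitter), where jitter is the population standard
    deviation of the packet inter-arrival gaps."""
    gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
    return mean(gaps), pstdev(gaps)

steady  = [0, 10, 20, 30, 40, 50]   # a clean 10 ms cycle
jittery = [0, 10, 24, 30, 43, 50]   # same average rate, disturbed by queuing

print(interarrival_jitter(steady))   # mean gap 10 ms, zero jitter
print(interarrival_jitter(jittery))  # same mean gap, jitter > 3 ms
```

The two traces have identical mean latency, which is exactly why average-latency figures alone can hide a jitter problem that breaks synchronization.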
Latency and Determinism: Why Security Is Not So Simple
We know that best-effort IT firewalls don’t belong in OT, and academic research is now providing the hard numbers, proving that even purpose-built Deep Packet Inspection (DPI) is a double-edged sword when paired with protocols like Modbus TCP/IP. The security gains from inspecting every packet come at the cost of the guaranteed timing that industrial systems demand.
The University of Alberta Study: Filtering Kills Determinism
The study from the University of Alberta [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet], which focused on the challenges of achieving IEC 62443 security requirements in time-sensitive industrial networks, delivered a clear warning. The research, which used open source Linux firewalls in Modbus TCP/IP industrial control networks, revealed a severe performance trade-off:
- Conclusion on Latency and Jitter: The results show that the latency and jitter introduced by multilayered firewalls make it challenging to achieve real-time communications in some industrial applications when strict IEC 62443 security standards are followed [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
- The Cause: Deep packet inspection, which is necessary for advanced security, increases message processing. This increased processing leads to additional transmission latency, jitter, and packet loss [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
- The Direct Impact: The core argument of the research is fully supported: “when Modbus TCP/IP firewall filtering rules are configured, latency and jitter are greatly affected.” [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet]
The Cheminod Study: Quantifying the Commercial Firewall Penalty
The work by M. Cheminod, L. Durante, A. Valenzano, and C. Zunino on the “Performance Impact of Commercial Firewalls on Networked Control Systems” took the issue one step further by testing real-world industrial products.
The central aim of the research was to evaluate the impact and behavior of commercial firewalls in networked control systems, specifically seeking to determine the acceptable safe operating margins of delays that could be tolerated in IACS networks due to the presence of an industrial firewall with deep packet inspection of the Modbus TCP/IP industrial protocol [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
While the study does not state a single numeric value for tolerable latency, the quantitative finding on the performance penalty is stark: experimental evaluations show that industrial firewalls performing deep packet inspection of Modbus introduce twice the latency of firewalls with only basic filtering capabilities [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
Test Methodology: COTS vs. Open Source
The two studies relied on distinct methodologies to isolate the firewall’s performance impact:
The University of Alberta study used open source Linux firewalls to experimentally evaluate network packet latency, jitter, and packet loss in Modbus TCP/IP industrial networks. In contrast, the Cheminod et al. (2016) study tested the performance of three commercial off-the-shelf (COTS) industrial firewalls, focusing on their packet inspection capabilities for the Modbus/TCP protocol [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet] and noting that two of the firewalls provided ad hoc support for analyzing and filtering industrial protocol communications. Both methodologies confirm that, regardless of whether the firewall is open source or commercial, deep packet inspection is the source of the performance degradation.
Conclusions
The evidence is clear and twofold: standard IT network architectures fail in OT environments because they prioritize flexibility over determinism, and the very security measures (like DPI) that defend the enterprise actively destabilize critical industrial processes.
For cybersecurity professionals securing control systems, this means the solution is not simply “more firewalls.” It requires a shift toward contextualized security:
- Prioritize Availability: In OT, the security policy must first and foremost enforce the CIA triad in the order of Availability, Integrity, Confidentiality. Any security measure that sacrifices predictable timing is unacceptable.
- Protocol-Aware Defense: The future of OT security is the ruggedized Next-Generation Firewall (NGFW). These systems are essential because they provide security policies based on the industrial function code (e.g., blocking a specific Modbus write command) rather than blindly inspecting the entire packet flow. This surgical approach minimizes the latency penalty while providing granular protection.
- Validate Performance: As the academic studies confirm, deploying any DPI solution into a time-sensitive network without prior performance validation is reckless. Cybersecurity teams must partner with OT engineers to establish acceptable latency and jitter tolerances before rolling out any new security control.
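One simple way to start such validation is a round-trip measurement harness: time a representative transaction (e.g., one Modbus poll routed through the candidate firewall) many times and report mean latency, jitter, and loss. This Python sketch uses a stand-in `transact` callable; it is an illustration of the approach, not a certified test procedure:

```python
import time
from statistics import mean, pstdev

def measure(transact, samples: int = 100) -> dict:
    """Time a request/response transaction `samples` times and report
    mean round-trip latency (ms), jitter (population std dev, ms),
    and the fraction of transactions that timed out."""
    rtts, lost = [], 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            transact()
            rtts.append((time.perf_counter() - start) * 1000.0)
        except TimeoutError:
            lost += 1
    return {"mean_ms": mean(rtts) if rtts else float("inf"),
            "jitter_ms": pstdev(rtts) if rtts else float("inf"),
            "loss": lost / samples}

# Stand-in transaction: replace with one real poll through the firewall
# under test before drawing any conclusions.
stats = measure(lambda: time.sleep(0.002), samples=20)
```

Running this before and after the security control is inserted, and comparing the results against the tolerances agreed with OT engineers, gives a concrete go/no-go criterion.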
The convergence of IT and OT is unavoidable, but it demands an admission that the rules of the IT network – from its best-effort transport to its security architecture – are fundamentally incompatible with the physics and safety requirements of the industrial world. Security must speak the language of the machine, or it risks silencing the machine entirely.
The next article in this series will examine why conventional IT Next-Generation Firewalls are often ineffective in OT environments and highlight how leading vendors are developing purpose-built firewalls designed specifically for industrial systems.
About the Author
Dmitry Sevostiyanov is an experienced cybersecurity professional with a strong background in protecting organizations from digital threats. Dmitry currently supports critical aerospace and defense systems at a leading global aerospace manufacturer.
Dmitry can be reached online at https://www.linkedin.com/in/dmitry-sevostiyanov/ and at his personal blog https://thehardenedlayer.com/
