
The Availability Imperative

Introduction

The world of Operational Technology (OT) demands certainty. The systems that run our factories, power grids, and water treatment plants simply cannot tolerate the unpredictable nature of standard IT gear. The core issue? Determinism. Control systems must operate with predictable, fixed timing. Introducing security often brings the “best-effort” liability of the IT world – and that’s a risk OT can’t afford.

Ethernet’s “Best-Effort” Problem: The Secret Behind the Standard (IEEE 802.3)

For decades, Ethernet (IEEE 802.3) has served as the bedrock of our digital world, connecting everything from corporate servers to home routers. Yet, beneath its reliable veneer lies a fundamental truth: standard Ethernet is a “best-effort” technology, which stems directly from its original media access control method, CSMA/CD (Carrier Sense Multiple Access with Collision Detection) [https://www.cesarkallas.net/arquivos/livros/informatica/network/Ethernet%20Definite%20Guide.pdf]. This protocol is the reason your data, while likely to arrive, is never truly guaranteed.

As one analysis observes, the Ethernet MAC protocol does not provide a guaranteed data delivery service: “Like most other LAN systems, Ethernet does not provide strict guarantees for the reception of all data. Instead, the Ethernet MAC protocol makes a ‘best effort’ to deliver the frame without errors.” [https://www.cesarkallas.net/arquivos/livros/informatica/network/Ethernet%20Definite%20Guide.pdf] This is the central compromise of the standard: speed and simplicity over absolute certainty.

The Functional Proof: Dropped Frames and Variable Timing

The “best-effort” label is functionally proven by two critical characteristics of the IEEE 802.3 MAC layer:

  • No Guarantee of Delivery: When two devices attempt to transmit simultaneously, a collision occurs. CSMA/CD manages this by having both devices halt transmission and wait a random amount of time before trying again. The crucial detail? If a frame exceeds its programmed retry limit (typically 16 attempts), the MAC layer silently drops it. No notification is sent to the source. The burden of noticing this loss falls entirely on higher-layer protocols, such as TCP, which must then retransmit the data.
  • No Guaranteed Latency: Because a device must “sense” the cable and defer to any ongoing traffic or manage collisions, there is no maximum, predictable time a frame will take to reach its destination. The result is a system with variable and unpredictable latency under load. This is a common understanding, with literature noting: “It is an unreliable, best-effort, and connectionless packet delivery protocol.” [https://www.redbooks.ibm.com/redbooks/pdfs/sg245626.pdf]
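
The truncated binary exponential backoff behind these two properties can be sketched as follows. This is a toy model, not a MAC implementation; the 16-attempt limit and the 10 Mbit/s slot time follow the IEEE 802.3 description above:

```python
import random

MAX_ATTEMPTS = 16      # typical retry limit; the MAC then silently drops the frame
SLOT_TIME_US = 51.2    # slot time for 10 Mbit/s Ethernet, in microseconds

def backoff_delay(attempt):
    """After the n-th collision, wait a random number of slot times
    chosen uniformly from [0, 2^min(n, 10) - 1]."""
    k = min(attempt, 10)                       # exponent is capped at 10
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

def transmit(frame, collides):
    """Try to send a frame. Returns total backoff delay in microseconds,
    or None if the retry limit is exceeded -- the sender is never told."""
    total_delay = 0.0
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if not collides():                     # channel clear: frame goes out
            return total_delay
        total_delay += backoff_delay(attempt)  # collision: back off, retry
    return None                                # frame silently dropped
```

Because each delay is drawn at random, two identical transmissions can see very different total latencies, which is exactly the "no guaranteed latency" property, and a busy segment can push a frame past the retry limit with no error ever reported to the sender.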

This lack of determinism is why, for many years, mission-critical systems in industrial and automotive fields have not used standard Ethernet. Their control loops could not tolerate a “best-effort” service in which there is “no guarantee when any packet will arrive at its destination, and they may well be out of sequence.” [https://doc.lagout.org/network//Practical%20TCP%20IP%20and%20Ethernet%20Networking.pdf]

Why Industrial Protocols Abandoned Ethernet’s “Best-Effort”

The world of Operational Technology (OT) – factories, power grids, and automated machinery – cannot tolerate the unpredictable latency and dropped packets of best-effort networking. In these environments, a delay of milliseconds can lead to catastrophic failures, damaged products, or safety hazards [https://en.wikipedia.org/wiki/Industrial_Ethernet].

To overcome the fundamental non-determinism of the standard, several Ethernet-based industrial protocols were specifically engineered to enforce strict timing and guaranteed delivery. They achieve determinism and low latency by bypassing, modifying, or strictly scheduling Ethernet’s core functions. We will look at four common protocols: EtherNet/IP, PROFINET, EtherCAT, and Modbus TCP/IP.

Here is a contrast of how those key protocols achieve their guarantees.

EtherNet/IP (Industrial Protocol)

EtherNet/IP leverages standard, unmodified Ethernet and the TCP/IP stack, but achieves determinism through application-layer traffic management and prioritized Quality of Service (QoS).
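
As an illustration of the QoS side, an application can ask the network to prioritize its traffic by marking packets with a DSCP value. The sketch below uses the generic "Expedited Forwarding" code point (46); real EtherNet/IP products use DSCP values defined by ODVA, which this sketch does not claim to reproduce:

```python
import socket

# DSCP "Expedited Forwarding" (46); the DSCP occupies the upper six bits
# of the IP TOS byte, hence the two-bit shift.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2   # 184

# EtherNet/IP implicit (I/O) messaging runs over UDP; marking the socket
# lets QoS-aware switches queue this traffic ahead of best-effort flows.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
```

The marking is only a request: it helps only if every switch on the path is configured to honor it, which is why EtherNet/IP determinism ultimately depends on a managed network infrastructure.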

PROFINET (Process Field Network)

PROFINET uses a two-tiered approach to achieve determinism: it uses standard Ethernet for non-critical data, while employing specialized mechanisms to guarantee real-time performance.

EtherCAT (Ethernet for Control Automation Technology)

EtherCAT takes the most radical approach, completely redefining how data is handled at the hardware level to eliminate the delays introduced by network components (switches).

  • Processing on the Fly: Instead of switches and separate packets, EtherCAT uses a single master-generated frame that passes through all slave devices in a loop or line. The slave devices process the frame on the fly, with only a few nanoseconds of delay [https://en.wikipedia.org/wiki/EtherCAT]. This eliminates the need for multiple frames, buffers, and all switching decisions. The entire network cycle is completed with the delay of one frame’s travel time [https://www.beckhoff.com/en-us/products/i-o/ethercat-terminal/ethercat-technology/].
  • Topology: Uses a physical ring or line topology that enables the single-frame processing model and is inherently simpler than a complex switched Ethernet tree. Standard Ethernet relies on switches and routers, where each node-to-node hop introduces processing delay and the potential for queuing/collision issues.
  • Synchronization: The protocol incorporates a highly accurate, distributed clock mechanism where the master ensures all slave clocks are synchronized to a sub-microsecond level, guaranteeing simultaneous action across all devices [https://en.wikipedia.org/wiki/EtherCAT]. Standard Ethernet has no integrated clock synchronization, and data arrival is governed by unpredictable network conditions.
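
A toy calculation shows why the single-frame model scales so well. The per-slave and wire delays below are illustrative placeholders, not real EtherCAT hardware figures:

```python
# Illustrative placeholders -- not real EtherCAT hardware figures.
SLAVE_DELAY_NS = 500   # per-slave "on the fly" forwarding delay
WIRE_DELAY_NS = 50     # propagation delay per cable segment

def cycle_time_ns(num_slaves, frame_time_ns):
    """Approximate one cycle of the single-frame model: the frame's own
    transmission time plus accumulated slave and wire delay along the line."""
    return frame_time_ns + num_slaves * (SLAVE_DELAY_NS + WIRE_DELAY_NS)

# A 100-byte frame at 100 Mbit/s takes 8 us to transmit; even with 64 slaves
# in the line, the whole cycle completes in tens of microseconds.
print(cycle_time_ns(64, 8_000))  # 43200 ns
```

Cycle time grows linearly with the number of slaves, with no queuing or switching decisions anywhere on the path – the structural opposite of a switched best-effort network.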

Modbus TCP/IP (The Simplest Approach to Ethernet)

Modbus TCP/IP is one of the most widely used industrial protocols, but it is unique on this list because it does not employ special, deterministic mechanisms to overcome the limitations of best-effort Ethernet. Instead, its determinism is entirely dependent on network design.

  • Protocol Stack: Modbus TCP/IP is simply the legacy Modbus protocol framed directly within the standard TCP/IP stack. It relies on TCP’s guaranteed delivery (retransmission on loss) for reliability [https://www.modbustools.com/modbus.html]. It fully embraces the TCP/IP stack, making it easy to route and firewall, but inheriting the variable latency caused by queuing, buffering, and TCP’s retransmission timeouts [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
  • Messaging Model: Operates as a simple client-server (master-slave) polling model. This synchronous nature provides a semblance of predictability [https://en.wikipedia.org/wiki/Modbus]. Unlike PROFINET or EtherCAT, which use scheduled data, Modbus timing is entirely dependent on the polling cycle and the latency of the underlying best-effort network [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
  • Achieving Performance: To achieve low latency, Modbus TCP/IP requires network isolation (VLANs, dedicated switches) and over-provisioning of bandwidth. Performance is a function of network management, not protocol design. Because it does not modify the Ethernet standard, its performance is bounded by the queuing, buffering, and (on shared media) CSMA/CD contention issues that the other protocols are designed to bypass.
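
The framing described above is simple enough to show directly. This sketch builds a “Read Holding Registers” (function 0x03) request by prepending the MBAP header – transaction ID, protocol ID (always 0), remaining length, unit ID – to the classic Modbus PDU:

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus TCP 'Read Holding Registers' (0x03) request frame."""
    # PDU: function code, starting address, register count (big-endian)
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    # MBAP: transaction id, protocol id 0, length of unit id + PDU, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Read 3 registers starting at address 0x006B from unit 0x11.
frame = read_holding_registers_request(1, 0x11, 0x006B, 3)
print(frame.hex())  # 0001000000061103006b0003
```

Note that nothing in the frame carries timing or priority information: once this request is handed to TCP, its delivery time is whatever the underlying network provides.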

In summary, while standard Ethernet is content to make a “best effort” and leave reliability to the applications, these industrial protocols had to engineer deterministic behavior into the network layer to meet the strict timing demands of real-world control systems. Modbus TCP/IP, as the exception, simply layers a reliable application (TCP) over the best-effort foundation, requiring manual network engineering to achieve a practical level of determinism.

Impact of High Latency

In the industrial environment, high latency and jitter can have a devastating impact. Let’s look at a few implications of latency:

  • Degraded Control Cycle Time: High latency directly slows down the control loop. Suppose a Programmable Logic Controller (PLC) waits too long for sensor data, the resulting control command will be delayed, preventing the system from reacting fast enough to changes in the physical process. This is the fundamental concern for real-time control [https://www.motioncontroltips.com/what-are-latency-and-jitter-why-are-they-important-to-industrial-networks/].
  • System Instability: In closed-loop control applications (e.g., temperature control, velocity control), a long, uncompensated time delay can cause the control algorithm to overcompensate or oscillate, leading to instability, decreased precision, and product quality issues. These artifacts, such as delay and jitter, degrade the real performance of control systems [https://www.researchgate.net/publication/224647008_Jitter_Evaluation_of_Real-Time_Control_Systems].
  • Safety Hazards: For critical, event-driven processes (e.g., an emergency stop), high latency means a delay in executing the safety command, which can lead to equipment damage or injury. The lack of guaranteed delivery and timing in standard Ethernet is precisely why mission-critical and safety systems often avoid it, or use specialized deterministic protocols [https://en.wikipedia.org/wiki/Industrial_Ethernet].
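
A toy simulation makes the instability mechanism concrete. Here a simple integrator plant is stabilized by proportional feedback, but the command reaches the plant only after a FIFO of delayed samples standing in for network latency. The gains and delays are illustrative, not drawn from the cited studies:

```python
def simulate(gain, delay_steps, steps=200):
    """Integrator plant x[k+1] = x[k] + u with proportional feedback that
    reaches the plant only after `delay_steps` samples of network delay.
    Returns the peak |x| seen over the run."""
    x = 1.0
    u_queue = [0.0] * delay_steps     # FIFO modelling the network delay
    peak = abs(x)
    for _ in range(steps):
        u_queue.append(-gain * x)     # controller acts on the fresh state...
        u = u_queue.pop(0)            # ...but the plant receives a stale command
        x = x + u
        peak = max(peak, abs(x))
    return peak

print(simulate(0.5, 0))   # 1.0 -- no delay: the loop settles cleanly
print(simulate(0.5, 5))   # >> 1 -- same gain, 5-sample delay: growing oscillation
```

The controller itself is unchanged between the two runs; only the delay differs. The delayed loop overcompensates on stale data and oscillates with growing amplitude, which is precisely the instability described above.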

Why Jitter is an Issue

Jitter is the variation in latency – the inconsistency in the delay between the arrival of successive data packets. It is often more detrimental than a high, but consistent, latency value [https://www.motioncontroltips.com/what-are-latency-and-jitter-why-are-they-important-to-industrial-networks/]. Jitter measures the variability in the time it takes for data packets to arrive [https://en.wikipedia.org/wiki/Jitter] [https://medium.com/@digital_samba/jitter-explained-causes-effects-and-mitigation-in-networking-cfa200342a5d], and in industrial automation, it’s defined as the deviation between the time a network device is required to send a message and the time it actually sends it [https://www.dosupply.com/tech/2023/01/09/what-you-need-to-know-about-jitter-in-industrial-automation/]. This variability undermines the fixed-cycle timing that control loops depend on.
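
These definitions are easy to turn into numbers. The sketch below summarizes a set of latency samples with mean latency, peak-to-peak jitter, and the smoothed interarrival-jitter estimator from RFC 3550; the sample values are made up for illustration:

```python
def jitter_stats(latencies_ms):
    """Summarise one-way latency samples: mean latency, peak-to-peak jitter,
    and the RFC 3550 smoothed interarrival-jitter estimate."""
    mean = sum(latencies_ms) / len(latencies_ms)
    peak_to_peak = max(latencies_ms) - min(latencies_ms)
    j = 0.0
    for prev, cur in zip(latencies_ms, latencies_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0   # RFC 3550 running estimator
    return mean, peak_to_peak, j

# A link with constant 10 ms latency has zero jitter...
print(jitter_stats([10.0] * 8))        # (10.0, 0.0, 0.0)
# ...while the same mean latency with variation does not.
print(jitter_stats([8.0, 12.0] * 4))   # mean 10.0, peak-to-peak 4.0, jitter > 0
```

Both sample sets have the same mean latency, yet only the first would be acceptable in a control loop – which is why jitter must be measured separately from latency.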

Latency and Determinism: Why Security Is Not So Simple

We know that best-effort firewalls shouldn’t be in OT, but now academic research is providing the hard numbers, proving that even purpose-built Deep Packet Inspection (DPI) is a double-edged sword when paired with protocols like Modbus TCP/IP. The security gains from inspecting every packet come at the cost of the guaranteed timing that industrial systems demand.

The University of Alberta Study: Filtering Kills Determinism

The study from the University of Alberta [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet], which focused on the challenges of achieving IEC 62443 security requirements in time-sensitive industrial networks, delivered a clear warning. The research, which used open-source Linux firewalls in Modbus TCP/IP industrial control networks, revealed a severe performance trade-off:

  • Conclusion on Latency and Jitter: The results show that the latency and jitter introduced by multilayered firewalls make it challenging to achieve real-time communications in some industrial applications when strict IEC 62443 security standards are followed [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
  • The Cause: Deep packet inspection, which is necessary for advanced security, increases message processing. This increased processing leads to additional transmission latency, jitter, and packet loss [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].
  • The Direct Impact: The core argument of the research is fully supported: “when Modbus TCP/IP firewall filtering rules are configured, latency and jitter are greatly affected.” [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet]

The Cheminod Study: Quantifying the Commercial Firewall Penalty

The work by M. Cheminod, L. Durante, A. Valenzano, and C. Zunino on the “Performance Impact of Commercial Firewalls on Networked Control Systems” took the issue one step further by testing real-world industrial products.

The central aim of the research was to evaluate the impact and behavior of commercial firewalls in networked control systems – specifically, to determine the safe operating margins of delay that IACS networks can tolerate when an industrial firewall performs deep packet inspection of the Modbus TCP/IP industrial protocol [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].

While the study’s explicit numeric threshold for tolerable latency is not provided in the summary, the quantitative finding on the performance penalty is stark: experimental evaluations show that industrial firewalls inspecting Modbus introduce twice the latency of firewalls with only basic filtering capabilities [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet].

Test Methodology: COTS vs. Open Source

The two studies relied on distinct methodologies to isolate the firewall’s performance impact:

The University of Alberta study used open-source Linux firewalls to experimentally evaluate network packet latency, jitter, and packet loss in Modbus TCP/IP industrial networks. In contrast, the Cheminod et al. (2016) study tested the performance of three commercial off-the-shelf (COTS) industrial firewalls, focusing on their packet-inspection capabilities for the Modbus/TCP protocol [https://www.researchgate.net/publication/338257007_Real-Time_Performance_Analysis_of_Modbus-TCP_on_Industrial_Ethernet] and noting that two of the firewalls provided ad hoc support for analyzing and filtering industrial protocol communications. Both methodologies confirm that, regardless of whether the firewall is open source or commercial, deep packet inspection is the source of the performance degradation.

Conclusions

The evidence is clear and twofold: standard IT network architectures fail in OT environments because they prioritize flexibility over determinism, and the very security measures (like DPI) that defend the enterprise actively destabilize critical industrial processes.

For cybersecurity professionals securing control systems, this means the solution is not simply “more firewalls.” It requires a shift toward contextualized security:

  1. Prioritize Availability: In OT, the security policy must first and foremost enforce the CIA triad in the order of Availability, Integrity, Confidentiality. Any security measure that sacrifices predictable timing is unacceptable.
  2. Protocol-Aware Defense: The future of OT security is the ruggedized Next-Generation Firewall (NGFW). These systems are essential because they provide security policies based on the industrial function code (e.g., blocking a specific Modbus write command) rather than blindly inspecting the entire packet flow. This surgical approach minimizes the latency penalty while providing granular protection.
  3. Validate Performance: As the academic studies confirm, deploying any DPI solution into a time-sensitive network without prior performance validation is reckless. Cybersecurity teams must partner with OT engineers to establish acceptable latency and jitter tolerances before rolling out any new security control.
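
A minimal harness for that kind of validation might time repeated polls and report mean latency and jitter before and after a rule change. This is a sketch only; `fn` stands in for a real poll against your device:

```python
import statistics
import time

def measure(fn, samples=50):
    """Time repeated calls to `fn` (one control-network poll each) and
    report mean latency and jitter (standard deviation), in milliseconds."""
    latencies = []
    for _ in range(samples):
        t0 = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(latencies), statistics.stdev(latencies)

# Run once with the DPI rule disabled and once enabled, then compare both
# numbers against the tolerances agreed with the OT engineers.
mean_ms, jitter_ms = measure(lambda: None)   # stand-in for a real poll
```

The point is not the tooling but the discipline: establish a baseline, change one control at a time, and re-measure against the agreed tolerances before the control goes live.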

The convergence of IT and OT is unavoidable, but it demands an admission that the rules of the IT network – from its best-effort transport to its security architecture – are fundamentally incompatible with the physics and safety requirements of the industrial world. Security must speak the language of the machine, or it risks silencing the machine entirely.

The next article in this series will examine why conventional IT Next-Generation Firewalls are often ineffective in OT environments and highlight how leading vendors are developing purpose-built firewalls designed specifically for industrial systems.

About the Author

Dmitry Sevostiyanov is an experienced cybersecurity professional with a strong background in protecting organizations from digital threats. Dmitry currently supports critical aerospace and defense systems at a leading global airspace manufacturer.

Dmitry can be reached online at https://www.linkedin.com/in/dmitry-sevostiyanov/ and at his personal blog https://thehardenedlayer.com/
