Delay and jitter are naturally tied to each other, but they are not the same. Delay is a significant metric in networking that is made up of four key components: processing delay, queueing delay, transmission delay, and propagation delay. It impacts the user experience and can change based on several factors. Jitter is derived from delay; specifically, it measures delay inconsistency, the difference between the delays of two packets. It often results in packet loss and accompanies network congestion. While delay and jitter share commonalities and a link, they are not equal.
What Is Delay?
Delay is an important metric in networking that measures the amount of time it takes for a bit of data to move from one endpoint to another. Delay in networking is typically on the scale of fractions of seconds, and can change based on many factors including the location of the endpoints, the size of the packet, and the amount of traffic.
How Does Delay Differ From Latency?
Latency and delay are intrinsically linked and sometimes used interchangeably. However, they are not always the same. Delay is the time it takes for data to travel from one endpoint to another. Latency, however, can mean one of two things.
Latency is sometimes considered the time a packet takes to travel from one endpoint to another, the same as the one-way delay.
More often, latency signifies the round-trip time. Round-trip time encompasses the time it takes for a packet to be sent plus the time it takes for it to return. This does not include the time it takes to process the packet at the destination.
Network monitoring tools can determine the precise round-trip time on a given network. Round-trip time can be calculated from the source, since the source tracks the time the packet was sent and computes the difference when the acknowledgement returns. One-way delay between two endpoints, however, is difficult to determine, because the sending endpoint has no information about the time of arrival at the receiving endpoint.
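As a rough illustration of this source-side measurement, here is a minimal sketch in Python (our example, not from the original post). It assumes a UDP echo service at a placeholder address: the sender records the send time and takes the difference when the reply comes back.

```python
import socket
import time

def measure_rtt(host: str, port: int, timeout: float = 2.0) -> float:
    """Send one UDP probe to an echo service and return the round-trip time.

    The source can compute RTT on its own because it records the send
    time and takes the difference when the reply comes back.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sent_at = time.monotonic()          # timestamp taken at the source
        sock.sendto(b"probe", (host, port))
        sock.recvfrom(1024)                 # wait for the echoed reply
        return time.monotonic() - sent_at   # RTT = receive time - send time
    finally:
        sock.close()

# Example (assumes an echo service listens at this placeholder address):
# rtt = measure_rtt("echo.example.com", 7)
# print(f"round-trip time: {rtt * 1000:.1f} ms")
```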
What Are The Contributors To Delay?
Delay can be understood as the collection of four key delay components: processing delay, queueing delay, transmission delay, and propagation delay.
- Processing Delay is the time the system takes to analyze a packet header and determine where the packet must be sent. This depends heavily on the number of entries in the routing table, the data structures the system uses, and the hardware implementation.
- Queueing Delay is the time between a packet being queued and it being sent. This varies depending on the amount of traffic, the type of traffic, and which queueing algorithms the router implements. Different algorithms may adjust delays to favor certain traffic, or apply the same delay to all traffic.
- Transmission Delay is the time needed to push a packet’s data bits onto the wire. This changes based on the size of the packet and the bandwidth of the link. It does not depend on the length of the wire, because it covers only pushing the bits onto the wire, not their travel down the wire to the receiving endpoint.
- Propagation Delay is the time associated with the first bit of the packet traveling from the sending endpoint to the receiving endpoint. This is often referred to as a delay by distance, and as such is influenced by the distance the bit must travel and the propagation speed.
These components combine to make up the total delay in a network. Round-trip time is the sum of these delays along the path to the receiving endpoint and back to the sending endpoint.
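To make the arithmetic concrete, here is a minimal sketch (our illustration, with made-up numbers) that sums the four components for a single hop: transmission delay is packet size divided by bandwidth, and propagation delay is distance divided by propagation speed.

```python
def total_nodal_delay(packet_bits: float, bandwidth_bps: float,
                      distance_m: float, prop_speed_mps: float,
                      processing_s: float, queueing_s: float) -> float:
    """Sum the four delay components for one hop, in seconds."""
    transmission = packet_bits / bandwidth_bps   # time to push bits onto the wire
    propagation = distance_m / prop_speed_mps    # time for a bit to cross the wire
    return processing_s + queueing_s + transmission + propagation

# Illustrative numbers: a 1500-byte packet on a 100 Mbit/s link over 1000 km
# of fiber (signal speed ~2e8 m/s), with 20 us processing and 1 ms queueing.
delay = total_nodal_delay(
    packet_bits=1500 * 8,
    bandwidth_bps=100e6,
    distance_m=1_000_000,
    prop_speed_mps=2e8,
    processing_s=20e-6,
    queueing_s=1e-3,
)
print(f"total one-hop delay: {delay * 1000:.3f} ms")  # ~6.1 ms
```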
What Is The Impact Of Delay?
Delay mainly influences the user experience. In audio-only calls, 150 ms of delay is noticeable and affects the user. In video-only calls, 400 ms is discernible. When audio and video are combined, the streams should stay synced and delay should remain below 150 ms to avoid impacting the user. Generally speaking, it is important to keep delay as low as possible; the ITU recommends maintaining network delay below 100 ms regardless.
What Is Jitter?
Packets transmitted continuously on the network will have differing delays, even if they follow the same path. This is inherent in a packet-switched network for two key reasons. First, packets are routed individually. Second, network devices receive packets in a queue, so constant delay pacing cannot be guaranteed.
This delay inconsistency between each packet is known as jitter. It can be a considerable issue for real-time communications, including IP telephony, video conferencing, and virtual desktop infrastructure. Jitter can be caused by many factors on the network, and every network has delay-time variation.
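One standard way to calculate jitter, defined for RTP in RFC 3550, is a running estimate that moves 1/16 of the way toward the latest delay difference between consecutive packets. Below is a minimal sketch of that estimator; the timestamp values are made up for illustration.

```python
def update_jitter(jitter: float,
                  prev_send: float, prev_recv: float,
                  send: float, recv: float) -> float:
    """One step of the RFC 3550 interarrival jitter estimator.

    D is the difference in transit time between two consecutive packets;
    the estimate moves 1/16 of the way toward |D| on each packet.
    """
    d = (recv - prev_recv) - (send - prev_send)
    return jitter + (abs(d) - jitter) / 16.0

# Toy trace: (send_time, receive_time) pairs in seconds.
packets = [(0.00, 0.050), (0.02, 0.072), (0.04, 0.089), (0.06, 0.115)]
jitter = 0.0
for (ps, pr), (s, r) in zip(packets, packets[1:]):
    jitter = update_jitter(jitter, ps, pr, s, r)
print(f"jitter estimate: {jitter * 1000:.2f} ms")
```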
What Effects Does Jitter Have?
- Packet Loss - When packets do not arrive consistently, the receiving endpoint has to compensate and attempt to correct for it. In some cases, it cannot make the proper corrections, and packets are lost. For the end user, this can take many forms. For example, if a user is watching a video and it becomes pixelated, this is an indication of potential jitter.
- Network Congestion - Congestion arises when network devices cannot send as much traffic as they receive, so their packet buffers fill up and they start dropping packets. If there is no disturbance on the network, every packet arrives at the endpoint. However, as buffers along the path fill, packets arrive later and later, resulting in jitter. This is known as incipient congestion. By monitoring jitter, it is possible to observe incipient congestion; likewise, if incipient congestion is occurring, the jitter changes rapidly.
Congestion occurs when network devices begin to drop packets and the endpoint does not receive them. Endpoints may then request that the missing packets be retransmitted, which adds further traffic and can result in congestion collapse.
With congestion, it’s important to note that the receiving endpoint does not directly cause it, and it does not drop the packets. Consider a highway connecting house A, which sends cars, to house B, which receives them. Congestion is not caused by B having too few parking spaces; it is caused by A continuing to send cars onto the highway toward B.
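To make the monitoring idea concrete, here is a hypothetical sketch (not from the original post) that flags possible incipient congestion when the running jitter estimate rises sharply. The window size and rise factor are arbitrary placeholders, not standardized values.

```python
def detect_incipient_congestion(jitter_samples, rise_factor=2.0, window=8):
    """Flag possible incipient congestion when recent jitter rises sharply.

    Compares the mean of the newest `window` jitter samples against the
    mean of the previous window; a large jump suggests queues are filling.
    The rise_factor threshold is an arbitrary placeholder, not a standard.
    """
    if len(jitter_samples) < 2 * window:
        return False
    recent = sum(jitter_samples[-window:]) / window
    earlier = sum(jitter_samples[-2 * window:-window]) / window
    return earlier > 0 and recent / earlier >= rise_factor

# Jitter estimates (ms) climbing over time, e.g. from the RFC 3550 estimator:
samples = [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 1.0, 1.1,
           1.8, 2.4, 3.0, 3.9, 4.6, 5.2, 6.1, 7.0]
print(detect_incipient_congestion(samples))  # True: jitter roughly quadrupled
```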
How Do I Compensate For Jitter?
To compensate for jitter, a jitter buffer is used at the receiving endpoint of the connection. The jitter buffer collects and stores incoming packets, then determines when to release them at consistent intervals. There are two kinds, and a minimal sketch of the idea follows the list below.
- Static Jitter Buffer - Static jitter buffers are implemented within the hardware of the system and are typically configured by the manufacturer.
- Dynamic Jitter Buffer - Dynamic jitter buffers are implemented within the software of the system and are configured by the network administrator. They can adjust to fit network changes.
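Here is the promised sketch of a jitter buffer’s core behavior, a deliberate simplification rather than a production design: packets are reordered by sequence number and released on a fixed playout schedule, trading a small amount of added delay for smooth pacing.

```python
import heapq

class JitterBuffer:
    """Toy jitter buffer: reorder packets and release them at a fixed pace.

    Arriving packets (possibly out of order, with uneven gaps) are held in
    a heap keyed by sequence number; pop_due() hands out at most one packet
    per `interval` seconds, restoring a steady playout rhythm.
    """
    def __init__(self, interval: float, initial_hold: float):
        self.interval = interval          # desired spacing between playouts
        self.initial_hold = initial_hold  # prefill time to absorb jitter
        self.next_playout = None          # time the next packet is due
        self.heap = []                    # (sequence_number, payload)

    def push(self, now: float, seq: int, payload: bytes) -> None:
        heapq.heappush(self.heap, (seq, payload))
        if self.next_playout is None:
            # First packet: hold briefly so the buffer can absorb jitter.
            self.next_playout = now + self.initial_hold

    def pop_due(self, now: float):
        """Return the next packet if its playout time has arrived, else None."""
        if self.heap and self.next_playout is not None and now >= self.next_playout:
            self.next_playout += self.interval  # keep the pace constant
            return heapq.heappop(self.heap)[1]
        return None

buf = JitterBuffer(interval=0.020, initial_hold=0.060)  # 20 ms pace, 60 ms prefill
buf.push(now=0.000, seq=2, payload=b"frame-2")          # arrives out of order
buf.push(now=0.005, seq=1, payload=b"frame-1")
print(buf.pop_due(now=0.060))  # b"frame-1": released in order, on schedule
print(buf.pop_due(now=0.065))  # None: next packet is not due until 0.080
```

In this sketch, a static buffer would fix the hold time in hardware, while a dynamic one would adjust `interval` and `initial_hold` as measured network conditions change.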
Playout Delay
Playout delay is the time between when a packet arrives and when it is played out for rendering. Because the jitter buffer stores incoming packets and waits to distribute them at even intervals, it increases this gap. The playout delay is thus introduced by the jitter buffer, as it is responsible for dictating when incoming packets are distributed.
Conclusion
Delay and jitter are innately linked, but they are not the same. Delay is the time it takes for data to move from one endpoint on the network to another; it is a complex measurement affected by multiple factors. Jitter, on the other hand, is the difference in delay between two packets, and it too can be caused by several factors on the network. Though jitter and delay share similarities, jitter is derived from delay but is not equivalent to it.
This post is based on a Stack Overflow answer by Balázs Kreith, one of our engineers at CALLSTATS I/O. In it, he details the differences between delay and jitter and how to calculate jitter. Check out the answer on Stack Overflow.