ATM Jitter Measurements

These results come from a series of jitter and latency measurements on the ATM loopback connection from DTU/Lyngby to Aalborg University. Jitter is a measure of variation in transmission time, here considered as the width (standard deviation) of the frequency distribution histogram for transmission times.
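
For reference, the jitter value reported below for each experiment can be computed as the sample standard deviation of the recorded transmission times. The following C sketch illustrates the calculation; the function name and the choice of units are illustrative, not taken from the measurement program:

#include <math.h>

/* Jitter, taken as the standard deviation of n recorded
   transmission times t[0..n-1], given in microseconds.   */
double jitter_usec(const double *t, int n)
{
    double sum = 0.0, sq = 0.0, mean;
    int i;

    for (i = 0; i < n; i++)
        sum += t[i];
    mean = sum / n;                      /* average transmission time */
    for (i = 0; i < n; i++)
        sq += (t[i] - mean) * (t[i] - mean);
    return sqrt(sq / n);                 /* width of the distribution */
}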

Jitter Measurements

Experimental Setup

The jitter experiments use a loopback connection operating at 1 Mbit/s, communicating via AAL-5 over an ATM PVC set up to give a 1 Mbit/s CBR service. The workstations used in these experiments are Digital AlphaStations running the Digital Unix operating system version 4.0. Each sequence of measurements followed the scheme:


repeat 1000 times:
{
    transmit MTU; receive MTU from loopback;
    calculate transmission time; update jitter results;
    wait for wait_time (e.g. 100 ms);
}


An MTU (9182 bytes) is transmitted from a workstation at DTU to a switch operating in loopback mode. When the MTU arrives back at the workstation at DTU, the transmission time is calculated and the jitter results are updated. The host then waits for a certain period of time (for example 100 ms) and repeats the procedure. Each experiment is based on 1000 measurements.
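
As an illustration of the scheme, the sketch below shows one way the measurement loop could be written in C. It is not the original test program: the descriptor pvc_fd is assumed to have been opened and bound to the 1 Mbit/s CBR AAL-5 PVC (setup omitted), and the constants and helper names are assumptions. Timing is done with gettimeofday(), as in the actual measurements; jitter_usec() is the function from the sketch above.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>

#define MTU_SIZE   9182      /* bytes per MTU, as in the experiments          */
#define N_SAMPLES  1000      /* measurements per experiment                   */
#define WAIT_USEC  100000    /* wait_time between transmissions (here 100 ms) */

double jitter_usec(const double *t, int n);   /* from the sketch above */

int pvc_fd = -1;             /* assumed: opened and bound to the CBR PVC */

int main(void)
{
    static char buf[MTU_SIZE];
    static double t_usec[N_SAMPLES];
    struct timeval t0, t1;
    int i;

    memset(buf, 0, sizeof buf);
    for (i = 0; i < N_SAMPLES; i++) {
        gettimeofday(&t0, NULL);
        if (write(pvc_fd, buf, sizeof buf) != MTU_SIZE)   /* send MTU            */
            exit(1);
        if (read(pvc_fd, buf, sizeof buf) <= 0)           /* MTU returns from loop */
            exit(1);
        gettimeofday(&t1, NULL);
        t_usec[i] = (t1.tv_sec - t0.tv_sec) * 1e6
                  + (t1.tv_usec - t0.tv_usec);
        usleep(WAIT_USEC);   /* wait before the next transmission */
    }
    printf("jitter = %.1f usec over %d MTUs\n",
           jitter_usec(t_usec, N_SAMPLES), N_SAMPLES);
    return 0;
}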

Measurements on WAN ATM

In the first jitter experiments, the loopback was set up in the switch in Aalborg, so that all data sent to Aalborg were returned to DTU. The workstations in Aalborg were not involved.

Jitter values are plotted in the frequency distribution figures below. The wait_time interval is varied from 0 ms to 100 ms.


Figure: Jitter Distribution with Wide Area ATM (0 ms) (Postscript file)

Figure: Jitter Distribution with Wide Area ATM (1 ms) (Postscript file)

Figure: Jitter Distribution with Wide Area ATM (100 ms) (Postscript file)


The results of these first experiments revealed two phenomena:

  1. Jitter (as measured by the width of the histogram) increases if a wait of at least 1 ms is inserted between consecutive transmissions.
  2. Nearly all MTUs arrive within ±500 µs of the average transmission time. However, a very few MTUs are received up to 20 ms later than the average time, which for many applications would not be acceptable. (In the histograms above, these severely delayed MTUs are not plotted in the time slot to which they actually belong, but are counted in the rightmost position; a sketch of this binning rule follows below.)
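
The clamping mentioned in point 2 can be expressed as a simple binning rule, sketched below in C. The bin width, histogram range and names are hypothetical values chosen for illustration, not the ones used to produce the plots:

#define N_BINS     100
#define BIN_USEC   10.0        /* assumed bin width: 10 µs per time slot     */
#define T_MIN_USEC 0.0         /* assumed left edge of the plotted range     */

static long histogram[N_BINS];

/* Add one transmission time to the histogram; severely delayed MTUs
   that fall beyond the plotted range are counted in the rightmost bin. */
void histogram_add(double t_usec)
{
    int bin = (int)((t_usec - T_MIN_USEC) / BIN_USEC);
    if (bin < 0)
        bin = 0;
    if (bin >= N_BINS)
        bin = N_BINS - 1;      /* clamp outliers to the rightmost position */
    histogram[bin]++;
}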

Experiments with Local Loopback

To investigate the source of these phenomena, several further experiments were carried out. To isolate the effect of the WAN ATM network, a local area loopback configuration was set up: data from the workstation at DTU were sent to the local ATM switch and from there straight back to the workstation. As in the WAN case, data were transmitted using AAL-5 over an ATM PVC set up for 1 Mbit/s CBR. An example of the jitter results for this configuration, using a wait time of 1 ms between consecutive transmissions, is as follows:


Figure: Jitter Distribution with Local Area ATM (Postscript file)


In these measurements, the latency is reduced by roughly 5000 µs, corresponding to the round-trip time from DTU to Aalborg and back, but the jitter remains very close to that observed with the WAN loopback configuration, and a small number of severely delayed MTUs are still observed.

Experiments with an Unloaded System

All the previous measurements were taken with the host systems running in the normal way, with whatever activities were in progress at the time. To see the effect of concurrent activities on the jitter, a series of measurements was made with the test program running as the sole activity on the computer. All OS daemons and other user processes were killed prior to starting each series of measurements. An example of the jitter results for this setup, over a LAN loopback configuration with a 1 ms wait between consecutive transmissions, is as follows:


Figure: Jitter Distribution with Minimum Load (Postscript file)


Removing all concurrent activities reduces the jitter (measured by the standard deviation of the delay distribution) slightly. More importantly, however, it strongly reduces the number of severely delayed MTUs. The cause of these severe delays is not yet entirely clear, but the available evidence suggests that the most significant contribution is related to Digital UNIX's scheduling of processes:

  1. Allowing the operating system to "relax" between sending one MTU and the next results in an increase in jitter.
  2. Removing competing user processes results in a decrease in jitter.
  3. A detailed examination of calls to the time measurement function, gettimeofday(), showed that taking a simple timestamp would sometimes take up to 20 ms (a probe for this is sketched below).
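
The gettimeofday() observation in point 3 can be reproduced with a small probe that takes timestamps back to back and reports any unusually large gap between consecutive calls. The sketch below shows one way to write such a probe; the number of calls and the reporting threshold are assumptions:

#include <stdio.h>
#include <sys/time.h>

#define N_CALLS    100000
#define THRESH_US  1000.0    /* report gaps longer than 1 ms (assumed threshold) */

int main(void)
{
    struct timeval prev, now;
    double gap;
    int i;

    gettimeofday(&prev, NULL);
    for (i = 0; i < N_CALLS; i++) {
        gettimeofday(&now, NULL);
        gap = (now.tv_sec - prev.tv_sec) * 1e6 + (now.tv_usec - prev.tv_usec);
        if (gap > THRESH_US)
            printf("call %d: %.0f usec between consecutive timestamps\n", i, gap);
        prev = now;
    }
    return 0;
}
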
Further evidence related to these observations comes from the measurements of latency for varying packet sizes.

Thomas Dibbern
Updated 13 November 1997 by Robin Sharp