December 14, 2000, 11:37 AM — Latency is the enemy of interactive audio and video. Packets that take too long to
get to their destination are as good as lost, and that lowers the quality of the audio
or video connection. Fortunately, you can tune your network to remove or reduce latency.
You should aim to modify your transmission infrastructure so that during an audio or
video call there is never more than 400 milliseconds (ms) end-to-end latency, and less
than 100 ms jitter (variability in the delay). If delay or jitter reaches or exceeds
these thresholds, your callers will see artifacts on their screens where data is
missing, or hear echo and pops in their audio.
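To make the jitter figure concrete, here is a minimal sketch of how a receiver might estimate jitter from packet arrival timestamps. It assumes the sender paces packets at a fixed 20 ms interval (typical for voice) and uses the smoothed estimator described in RFC 3550; both are illustrative assumptions, not something the column specifies.

```python
SEND_INTERVAL_MS = 20.0  # assumed sender pacing: one media packet every 20 ms

def jitter_ms(arrival_times_ms):
    """Smoothed jitter estimate (ms) from a list of arrival timestamps."""
    jitter = 0.0
    for prev, cur in zip(arrival_times_ms, arrival_times_ms[1:]):
        # How far the actual gap deviates from the expected pacing interval.
        deviation = abs((cur - prev) - SEND_INTERVAL_MS)
        # Exponential smoothing with gain 1/16, as in RFC 3550.
        jitter += (deviation - jitter) / 16.0
    return jitter

# Perfectly paced arrivals yield zero jitter; drifting gaps push it up.
print(jitter_ms([0, 20, 40, 60]))
print(jitter_ms([0, 20, 38, 63, 81, 106]))
```

A receiver would compare this running estimate against the 100 ms budget above and, if it climbs too high, grow its playout buffer (trading latency for smoothness).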
Reducing network congestion is the key to managing latency on the LAN. One way to do
this is by increasing bandwidth, but that's not the only option. A number of Quality of
Service (QoS) schemes (such as packet prioritization using IP Precedence and DiffServ)
can help ensure that audio and video packets are handled with the highest priority
available. A single QoS mechanism implemented end to end should help lower latency on
your network.
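As an illustration of packet prioritization, the sketch below marks a UDP socket's outgoing traffic with the DiffServ Expedited Forwarding code point (DSCP 46), the class typically used for latency-sensitive media. It assumes a Unix-like host and, crucially, a network whose switches and routers are actually configured to honor the marking; the destination address is a placeholder.

```python
import socket

EF_DSCP = 46               # Expedited Forwarding: low-loss, low-latency class
TOS_VALUE = EF_DSCP << 2   # DSCP occupies the top six bits of the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS to stamp every datagram from this socket with the EF code point.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
# ... sock.sendto(media_payload, (dest_ip, dest_port)) ...
```

Marking at the endpoint is only half the job: each hop must map the code point to a priority queue, which is why the column stresses implementing one QoS mechanism end to end.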
For now, we'll focus on more traditional ways of efficiently using bandwidth to
address congestion. Most interactive video communications use 300 to 400 Kbps per
stream, while voice-only sessions use 30 to 40 Kbps. Since in a conversation the
streams are bidirectional, you would double those numbers to find the aggregate demand
on network resources for each conversation. Actually, you'd need to double those
numbers and add a bit more, thanks to the overhead imposed by packet headers. For
example, a bidirectional 384 Kbps videoconference consumes approximately 425 Kbps in
each direction.
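The arithmetic above can be sketched as follows. The ~10.7 percent header-overhead figure is an assumption chosen to match the column's example (roughly 40 bytes of RTP/UDP/IP headers on a typical media packet), not a number the column states:

```python
HEADER_OVERHEAD = 0.107  # assumed: ~40-byte headers on ~375-byte payloads

def per_direction_kbps(stream_kbps):
    """One direction of a call: media rate plus packet-header overhead."""
    return stream_kbps * (1 + HEADER_OVERHEAD)

def aggregate_kbps(stream_kbps):
    """Total demand for one conversation: double the per-direction figure."""
    return 2 * per_direction_kbps(stream_kbps)

print(round(per_direction_kbps(384)))  # matches the ~425 Kbps example
print(round(aggregate_kbps(384)))      # total LAN load for one video call
```

Running the same arithmetic on a 32 Kbps voice stream gives roughly 70 Kbps per conversation, which is why voice scales so much further than video on the same links.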
To reduce congestion, make sure that you're using switched (not shared) 10- or
100-megabit Ethernet throughout the LAN. Your multimedia applications won't use all
this bandwidth constantly, but switched Ethernet gives each port dedicated,
collision-free bandwidth and doesn't cost significantly more than shared Ethernet.
Make sure you employ nonblocking switches whose output-buffered backplanes have a
capacity of at least 1 Gbps. That should help you avoid a kind of congestion known as
head-of-line blocking, in which a packet stuck at the front of an input queue, waiting
for a busy output port, holds up the packets behind it even when their own output
ports are idle.
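A common rule of thumb (an assumption here, not a figure from the column) is that a switch is nonblocking when its backplane can carry the sum of all port line rates in both directions at once. The sketch below applies that rule to a hypothetical 24-port 100 Mbps switch:

```python
def min_nonblocking_gbps(ports, line_rate_mbps, full_duplex=True):
    """Backplane capacity needed so every port can run at line rate at once."""
    total_mbps = ports * line_rate_mbps * (2 if full_duplex else 1)
    return total_mbps / 1000.0

# Hypothetical 24-port 100 Mbps full-duplex switch.
print(min_nonblocking_gbps(24, 100))  # 4.8 (Gbps)
```

By this measure, the 1 Gbps floor above is a minimum, not a target; a fully loaded wiring-closet switch may need several times that to stay truly nonblocking.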
The most common way to lower latency on IP LANs where data, audio, and video
converge is to overprovision: you should provide enough bandwidth to meet the
highest possible load at all times. At nonpeak times, the high capacity is wasted.
Generally, this means buying more or faster switches; this approach is thus expensive,
though frequently still cost-effective.