September 10, 2012, 10:18 AM — "Latency is clearly the biggest factor in network constraints of page loading on the Web," says Guy Podjarny, chief product architect at Akamai. This is clear in real-user measurements and synthetic measurements alike, he adds, especially when compared to the effect of changes in download and upload speeds. "Unless you start with an especially slow connection, even doubling the speed will not make much difference. But with growth in latency, load times increase linearly."
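A back-of-the-envelope model makes the asymmetry concrete. The formula and the numbers below are illustrative assumptions, not Akamai's methodology: treat page load time as round trips times round-trip latency, plus bytes transferred divided by bandwidth.

```python
# Simplified page-load model (illustrative only):
#   load_time = round_trips * RTT + total_bytes / bandwidth

def load_time(round_trips, rtt_s, total_bytes, bandwidth_bps):
    """Estimate load time in seconds for a hypothetical page."""
    return round_trips * rtt_s + total_bytes / (bandwidth_bps / 8)

# Hypothetical page: 40 round trips, 2 MB of assets, 10 Mbps link, 50 ms RTT.
base      = load_time(40, 0.050, 2_000_000, 10_000_000)  # baseline
fast_pipe = load_time(40, 0.050, 2_000_000, 20_000_000)  # double the bandwidth
slow_rtt  = load_time(40, 0.100, 2_000_000, 10_000_000)  # double the latency

print(f"baseline:     {base:.1f}s")       # 3.6s
print(f"2x bandwidth: {fast_pipe:.1f}s")  # 2.8s -- only a modest gain
print(f"2x latency:   {slow_rtt:.1f}s")   # 5.6s -- grows linearly with RTT
```

In this toy model, doubling bandwidth only shaves off the transfer half of the total, while doubling latency scales every round trip, which is the behavior Podjarny describes.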
Latency Is a Big Deal for Users
Latency can often be hidden from users through multitasking techniques, which let them continue with their work while transmission and computation take place in the background. The difference that latency-sensitive software design makes can be dramatic, Podjarny says: start times four times as fast, load times twice as fast, and better resilience thanks to fewer intermittent failures.
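One common way to apply that idea is to fire off a slow network request early and let it run concurrently with foreground work. The sketch below is a hypothetical illustration (the `slow_fetch` helper and its 200 ms delay are stand-ins, not anything from the article):

```python
# Minimal sketch of latency hiding: start a slow "fetch" in a background
# thread, keep doing foreground work, then collect the (already finished)
# result. Total wall time is ~max of the two, not their sum.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_fetch(url):
    """Stand-in for a network request with ~200 ms of latency."""
    time.sleep(0.2)
    return f"payload from {url}"

with ThreadPoolExecutor(max_workers=1) as pool:
    start = time.monotonic()
    future = pool.submit(slow_fetch, "https://example.com/data")  # kick off early
    time.sleep(0.2)            # simulate 200 ms of foreground work meanwhile
    result = future.result()   # request already done; returns immediately
    elapsed = time.monotonic() - start

print(result)
print(f"total: {elapsed:.2f}s")  # roughly 0.2s rather than 0.4s
```

Because the request and the foreground work overlap, the user perceives only the longer of the two delays, which is the effect Podjarny attributes to latency-sensitive design.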
Major companies see significant usage and sales benefits from shaving even fractions of a second off latency. For example, Microsoft's Bing search engine found that a two-second slowdown in page load performance decreased revenue per user by 4.3%, Podjarny notes. (His personal blog lists 16 more Web performance optimization statistics that demonstrate the importance of reducing latency.)
Developers also need to think about the law of unintended consequences of feature creep and address the possibility that new features may in fact subtly push users away. For example, when Google let users increase the number of search results per page from 10 to 30, the average page load time rose from 400 ms to 900 ms. As a result, the number of searches initiated per user dropped by 25%, even though these users had voluntarily chosen to see the more voluminous results.