Seen at USENIX USITS: 'Librarian' identifies Web bottlenecks

ITworld.com –

San Francisco, March 28 -- A novel approach for measuring the amount of time people spend waiting for Webpages to load was presented at the USENIX USITS '01 Conference today by Ram Rajamony, a researcher at IBM's Austin Research Labs.

While a number of companies are working on their own methods of measuring Web server response time, Rajamony's approach, called Client Perceived Response Time (CPRT), is unique in that it does not require any special upgrades to the server or client. Rather, it uses a combination of JavaScript and a server application called Librarian, which collects waiting-time data from clients.

Furthermore, the technique tracks the waiting times of actual clients rather than of sample clients distributed across the Internet, so Website managers know exactly what their users are experiencing instead of having to extrapolate from a sample.

"This can justify buying or not buying faster hardware or content distribution services like Akamai or Digital Island. It could also lead to a new class of service-level agreements. For example, you could mandate that 90 percent of all users see the page within X seconds," Rajamony said.

Most Internet users have a boredom threshold of about four seconds. If a page takes longer than that to load, they are likely to search for an alternative Website to obtain their information. As people become used to new networking technologies, that boredom threshold will likely drop to only two seconds or less.

There are three components to CPRT: the server, the network, and the client. The server's response time can be improved with faster hardware, whereas the client's performance is limited by whatever hardware the user happens to have. The network is perhaps the least controllable element, although companies can improve CPRT by, for example, deploying caching servers.

Rajamony's software cannot pinpoint exactly where a bottleneck lies, but it is possible to analyze the data and determine whether slow response time is affecting everyone (a server problem), a particular network segment (a local network problem), or an isolated client (a client problem).

For example, if everyone in the .sg domain in Singapore is getting slow response times, a Website manager might consider adding a caching server there. Because the Librarian application is independent of the Web server, a single Librarian can monitor the performance of all the caching servers.
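The article does not describe how Librarian aggregates its data, but the kind of per-domain analysis described above could be illustrated with a small routine like the one below; the record fields, the slowDomains name, and the four-second threshold are assumptions for the example, not details from the paper.

```javascript
// Illustrative sketch only -- Librarian's internals are not described in the article.
// Each report is assumed to look like { domain: "sg", cprtMillis: 5200 }.

function slowDomains(reports, thresholdMillis) {
    var totals = {};   // domain -> { sum, count }
    for (var i = 0; i < reports.length; i++) {
        var r = reports[i];
        if (!totals[r.domain]) {
            totals[r.domain] = { sum: 0, count: 0 };
        }
        totals[r.domain].sum += r.cprtMillis;
        totals[r.domain].count += 1;
    }

    var flagged = [];
    for (var domain in totals) {
        var avg = totals[domain].sum / totals[domain].count;
        if (avg > thresholdMillis) {
            flagged.push({ domain: domain, averageMillis: avg });
        }
    }
    return flagged;
}

// Example: flag any client domain whose average wait exceeds four seconds.
// slowDomains(collectedReports, 4000) might return [{ domain: "sg", averageMillis: 5200 }],
// suggesting a caching server closer to those users could help.
```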

Rajamony's approach is to embed a small piece of JavaScript in the pages the server sends out and have it measure how long the complete page takes to load. Because the script travels with the page, the approach requires no software changes to the client, server, or network.
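The article does not reproduce Rajamony's instrumentation, but a minimal sketch of this kind of embedded measurement might look like the following. The librarian.example.com host, the /report path, and the parameter names are illustrative assumptions, not details from the paper.

```javascript
// Minimal sketch only -- not Rajamony's actual code.
// Placed near the top of the served HTML so it starts timing as soon as
// the browser begins parsing the page.

var cprtStart = new Date().getTime();   // when this page begins loading

window.onload = function () {
    // Elapsed time until the complete page, including images, has loaded.
    var cprtMillis = new Date().getTime() - cprtStart;

    // Report the measurement to a hypothetical Librarian collector by
    // requesting a tiny image with the timing data in the query string.
    var beacon = new Image();
    beacon.src = "http://librarian.example.com/report" +
                 "?page=" + encodeURIComponent(window.location.href) +
                 "&cprt=" + cprtMillis;
};
```

A script embedded only in the landing page cannot see the time spent fetching the HTML itself; to capture the full client-perceived wait, an implementation would presumably also record a start timestamp when the user clicks a link on the previous page (for example, in a cookie) and subtract it at onload.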

Other approaches, developed by companies such as NetMechanic, Exodus, and eValid, involve pinging the site. But Rajamony said that method does not let analysts determine how the results correspond to what customers actually experience. Two possible exceptions are Candle's eBA and Tivoli's QOS, both of which run on the user's client. However, Candle's approach uses Java, which is not as widely available in users' browsers as JavaScript.

Rajamony said his research with the software indicated that the best CPRT came from the .mil and .edu domains, while users in the Far East experienced much longer waits.
