When a user requests a Web page, the browser sends numerous requests for different types of information -- such as Java applets, multimedia and database access -- resulting in as many as 50 connection requests to Web servers.
The server opens a socket, allocates memory and processes for that user, opens a session, acknowledges the client's HTTP request, fetches the relevant data from cache or disk, sends the data back over typically slow access connections and finally closes the session.
The user's next mouse click initiates the process again.
If a user's session includes static and dynamic content and an e-commerce transaction, each type of request requires a new connection, and the Web server or servers must dedicate resources to each. This slows the server's response time, diverting resources from its primary task -- serving Web content.
HTTP 1.1 lets a browser send multiple requests across a persistent connection to a server, eliminating some of this overhead for a single client. In practice, however, content providers effectively turn off this feature: if connections are left open for every client session, a site soon runs out of server resources. Even when the connection is kept open, the idle timeout must be limited to 5 to 15 seconds to avoid tying up servers with idle connections.
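As an illustration, Python's standard library can demonstrate HTTP 1.1 request reuse over one persistent connection. This is a local toy server for demonstration only, not the article's production scenario:

```python
# Sketch: HTTP/1.1 persistent connections let one TCP connection carry
# several requests. A local test server (illustrative setup) answers two
# GETs issued over a single http.client connection.
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"        # enables keep-alive
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):        # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, two requests -- no setup/teardown in between.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/a")
first = conn.getresponse().read()
conn.request("GET", "/b")                # reuses the same socket
second = conn.getresponse().read()
conn.close()
server.shutdown()
```

Both responses arrive over the same socket; in a real deployment that socket would be closed after the keep-alive timeout described above.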
One new approach to Internet connection management is using TCP multiplexing to break the client/server connection dependency. TCP multiplexing systems keep client-side connections open with longer timeouts. By eliminating most of the "hello-goodbye" setup and tear down overhead so that transactions can flow freely over the WAN via managed server connections, these systems dramatically improve the efficiency of high-traffic Web sites and Internet services.
WAN latency is a significant cause of congestion on the Internet. Dynamic transactions and content updates require access to origin sites across the WAN, causing delays that add up to several seconds, or even minutes, for large transfers.
Consider a Web page with 50 objects, each requiring three packets to open a connection and four packets to close. Assuming 200 msec of latency per round trip and four concurrent browser connections, that's 16.8 seconds of TCP overhead to load one page, vs. just 1.4 seconds across a persistent, managed connection using the TCP multiplexing method.
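The article's figures can be reproduced under one plausible set of assumptions -- each setup/teardown packet charged a full round trip of latency, objects fetched in waves of four concurrent connections:

```python
# One plausible reading of the article's arithmetic (assumptions labeled):
# each of the 7 setup/teardown packets is charged one 200-msec round trip,
# and the 50 objects are fetched in waves of 4 concurrent connections.
PACKETS_PER_CONNECTION = 3 + 4      # 3 packets to open + 4 to close
RTT = 0.2                           # seconds of latency per round trip
OBJECTS = 50
CONCURRENT = 4

overhead_per_connection = PACKETS_PER_CONNECTION * RTT     # 1.4 s
waves = OBJECTS // CONCURRENT                              # 12 sequential waves
total_overhead = waves * overhead_per_connection           # 16.8 s

# A persistent, managed connection pays the setup/teardown cost once.
persistent_overhead = overhead_per_connection              # 1.4 s
```

Under these assumptions the per-page overhead is 16.8 seconds for unmanaged connections versus 1.4 seconds for one persistent connection, matching the figures above.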
TCP multiplexing aggregates and manages Internet connections to not only reduce server loads, but also ensure rapid content delivery.
A TCP multiplexing system improves the efficiency of the Web server farm or Internet service by acting as a thin connection (or channel) proxy to servers, caches and content delivery networks. The system receives TCP/IP requests, consolidates them and applies logic to the opening and closing of server connections. It can direct and funnel client requests into high-speed server sessions that avoid the constant interruption of setup and teardown.
The TCP multiplexing engine monitors each incoming and outgoing packet to manage sessions and determine when a connection can be used for another client. The server no longer has to expend its processing power setting up and tearing down sessions for each user request.
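The core idea can be sketched in a few lines. The classes below are hypothetical, not any vendor's product: many short-lived client requests are funneled over a small pool of persistent server connections, so the server performs only a handful of setups:

```python
# Toy sketch of TCP multiplexing (hypothetical classes for illustration):
# client requests borrow persistent server connections from a pool, so
# the server side sees few connection setups and teardowns.
from collections import deque

class ServerConnection:
    setups = 0                           # handshakes the server performs
    def __init__(self):
        ServerConnection.setups += 1

class Multiplexer:
    def __init__(self, pool_size):
        self.pool = deque(ServerConnection() for _ in range(pool_size))
    def handle(self, request):
        conn = self.pool.popleft()       # borrow a persistent connection
        response = f"served {request}"   # proxy request/response here
        self.pool.append(conn)           # return it for the next client
        return response

mux = Multiplexer(pool_size=2)
responses = [mux.handle(f"req-{i}") for i in range(10)]
# Ten client requests, but the server only ever set up two connections.
```

A real engine also inspects packets to decide when a server connection is safely reusable, which this sketch omits.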
It can also manage packets to protect against traffic spikes and attacks, provide access control and direct content delivery. Additionally, since client sessions are managed by the TCP multiplexing engine, the network overcomes the TCP slow-start overhead that is inherent to TCP/IP networking. The client is no longer required to wait while the server tries to gauge the quality of the client connection.
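The slow-start penalty mentioned above can be sketched with a simplified model. The assumption that the congestion window starts at one segment and doubles each round trip is an idealization of standard TCP behavior:

```python
# Simplified slow-start model (assumption: congestion window starts at
# one segment and doubles every round trip until all segments are sent).
def slow_start_round_trips(segments):
    cwnd, sent, rtts = 1, 0, 0
    while sent < segments:
        sent += cwnd    # send a full window this round trip
        cwnd *= 2       # window doubles each round trip
        rtts += 1
    return rtts

# A fresh connection needs several round trips before reaching full speed;
# a long-lived multiplexed connection has already grown its window.
fresh = slow_start_round_trips(32)
```

Each new per-request connection repeats this ramp-up, which is the wait the multiplexing engine spares the client.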
The results are reduced server load, resilience to traffic spikes, and faster, more consistent content delivery.
TCP/IP was not designed to keep pace with today's content, transactions and infrastructure. TCP multiplexing offers a solution that Web sites and Internet services can deploy without changing their existing TCP/IP network topologies.
This story, "TCP/IP multiplexing boosts sites" was originally published by NetworkWorld.