In theory, under the very best conditions, data would travel across the Internet at the speed of light. In reality, as we all know, it doesn’t, for a variety of reasons: we don’t live in a vacuum, bandwidth constraints create bottlenecks, and communication protocols slow things down. However, new research suggests that much of what’s keeping us from surfing at the speed of light is latency caused by the physical infrastructure of the Internet, and that there’s a surprisingly cheap and realistic solution to the problem.
Researchers from the University of Illinois at Urbana-Champaign and Duke University recently looked at the main causes of Internet latency and what it would take to achieve speed-of-light performance in a paper titled “Towards a Speed of Light Internet.” Reducing latency on the Internet, the authors posit, could have many positive benefits, such as improved user experience, expanded use of thin clients, and better geolocation. “We want to push to the limits of that endeavor; speed-of-light is the only *fundamental* limit,” one of the paper’s authors, Ankit Singla, who will soon be joining the faculty at ETH Zürich, told me via email. “Our work is an examination of why this is worth doing, and what it might take.”
Infrastructure latency is the main culprit
To get a sense of just how much slower than the speed of light the Internet currently is, Singla and his colleagues measured the time it took to fetch the index page HTML of 28,000 top web sites from clients at 186 locations around the world in December 2014 (SSL sites were not included in this study). Using the time it would take light to make the round trip between the client and the web server as a baseline, they found that the median fetch time was about 35 times as long as it would take light to travel the same distance, while the fetch time at the 80th percentile was more than 100 times as long.
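That baseline is simple to reproduce: the round-trip c-latency is just twice the client-server distance divided by the speed of light in vacuum, and the inflation factor is the measured fetch time divided by that. A minimal sketch in Python (the 5,000 km distance and 1.2-second fetch time are illustrative numbers, not figures from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def c_latency_rtt(distance_m: float) -> float:
    """Round-trip time if data covered the client-server distance at c, in seconds."""
    return 2 * distance_m / C

# Illustrative example: a client 5,000 km from a server, with a 1.2 s page fetch.
rtt = c_latency_rtt(5_000_000)   # ~0.0334 s, i.e. ~33.4 ms
inflation = 1.2 / rtt            # ~36x, close to the paper's median of ~35x
print(round(rtt * 1000, 1), round(inflation, 1))  # 33.4 36.0
```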
To find out where the slowdowns were coming from, the researchers also broke down the fetch time into its component steps: the median DNS lookup took 7.4 times as long as it would take light to travel the same distance, TCP handshakes 3.4x, request-responses 6.6x, and TCP data transfers 10.2x. While it might seem that the overhead of these protocols causes the bulk of the delay, much of it actually comes from the latency of the underlying infrastructure, which compounds multiplicatively because it affects every step of the request. When the researchers adjusted for the median ping time between clients and servers (itself 3.2 times as long as it would take light to make the round trip), the true protocol overheads dropped to 2.3x for the DNS lookup, 1.1x for the TCP handshake, 1.0x for the request-response, and 3.2x for the TCP transfer.
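The adjustment in that last step is essentially a division: each step's raw inflation is divided by the 3.2x infrastructure (ping) inflation, leaving just the protocol's own overhead. A quick sketch using the figures above (the request-response step is left out here because its measurement also includes server processing time, so a simple division doesn't apply to it):

```python
# Per-step latency inflation over the round-trip c-latency, from the measurements above.
raw_inflation = {"dns_lookup": 7.4, "tcp_handshake": 3.4, "tcp_transfer": 10.2}
ping_inflation = 3.2  # median RTT is 3.2x the round-trip time at the speed of light

# Dividing out the infrastructure's share leaves each protocol step's own overhead.
protocol_overhead = {step: round(x / ping_inflation, 1)
                     for step, x in raw_inflation.items()}
print(protocol_overhead)
# {'dns_lookup': 2.3, 'tcp_handshake': 1.1, 'tcp_transfer': 3.2}
```

These reproduce the adjusted figures reported by the researchers.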
In other words, if the underlying infrastructure latency could be removed, without making any improvements to protocol overhead, the speed of the Internet could be brought down from what is often more than two orders of magnitude slower than the speed of light to just one order of magnitude slower, or less. As the authors wrote in the paper, “inflation at the lower layers plays a big role in Internet latency inflation.”
A cheap and easy speed-of-light Internet
The second part of the paper proposes what turns out to be a relatively cheap and potentially doable solution for bringing Internet speeds close to the speed of light for the vast majority of us: a network connecting major population centers via microwave links. Why microwaves? Because microwave networks have already proven to be extremely fast and (somewhat) reliable. For example, microwaves are used to transfer data at nearly the speed of light between financial markets in Chicago and New York City for high-frequency trading, where minimal latency is critical, with 95% reliability. Also, other potential solutions, such as hollow fiber and line-of-sight optics, aren’t yet mature enough (or cheap enough) for consideration.
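The physics behind the microwave choice is simple: radio waves travel through air at essentially the vacuum speed of light, while light in silica fiber travels at only about two-thirds of c. A back-of-the-envelope comparison (the ~1,150 km Chicago-New York great-circle distance is my approximation, and real fiber routes are longer than the great-circle path, so actual fiber latency is higher still):

```python
C = 299_792_458.0      # speed of light in vacuum, m/s
FIBER = C / 1.5        # light in silica fiber travels at roughly 2/3 c

dist_m = 1_150_000     # Chicago-New York great-circle distance, roughly 1,150 km

vacuum_rtt_ms = 2 * dist_m / C * 1000      # ~7.7 ms: the hard physical lower bound
fiber_rtt_ms = 2 * dist_m / FIBER * 1000   # ~11.5 ms, before any route inflation
print(round(vacuum_rtt_ms, 1), round(fiber_rtt_ms, 1))  # 7.7 11.5
```

Microwave relays approach the vacuum bound, which is why high-frequency traders use them despite their limited bandwidth.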
The drawback with microwave is low bandwidth. To get around that, their solution would reserve the intercity microwave network for web and data traffic where minimal latency matters. Traffic for which latency isn’t as critical, like video (currently 78% of web traffic), could continue to use existing infrastructure, so congestion wouldn’t be an issue. Traditional fiber would be used to bring data to users up to 100 km away from the microwave endpoints; even at that distance, the latency introduced by fiber would be minimal.
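The 100 km figure is easy to sanity-check: light in fiber travels at roughly two-thirds of c, so 100 km adds only about half a millisecond each way, negligible next to typical page-fetch times. A rough sketch, assuming a straight fiber run:

```python
C = 299_792_458.0     # speed of light in vacuum, m/s
FIBER = C / 1.5       # light in silica fiber travels at roughly 2/3 c

tail_m = 100_000      # 100 km fiber tail from a microwave endpoint to the user
one_way_ms = tail_m / FIBER * 1000
print(round(one_way_ms, 2))  # 0.5 -- about half a millisecond each way
```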
The authors estimate that the cost of creating a network that would bring near speed-of-light Internet performance to 85% of the U.S. population using microwave repeaters on existing towers would be a mere $253 million in set-up costs plus $96 million a year in operating expenses. That’s a relatively small investment compared to the billions of dollars currently being spent to lay new fiber optic cables across the Arctic Ocean.
Of course, there are potential issues with such an implementation. Getting approval from the FCC to use existing towers for microwave links is not a given, for one. Also, some applications are both latency-sensitive and bandwidth-intensive, so this solution may not serve them at scale. And setting up microwave networks across oceans to expand beyond the U.S. wouldn’t be simple, either.
All in all, though, Singla and his colleagues feel that their proposed solution is not unrealistic.
“We think this setup with two parallel networks — the current fiber backbone which provides huge bandwidth, but higher latency; and a microwave-based network that provides nearly speed-of-light latency, but much lower bandwidth — is very interesting,” he said, “and a plausible way of getting a lot of the benefits of low-latency networking at very little cost.”
They also feel that, whatever the ultimate solution, a speed-of-light Internet isn’t just a pipe dream, but something that we will have someday. “I think this will eventually happen,” Singla told me. “The challenge for us is to make it happen *soon*, for example, getting really close to speed-of-light latencies within a decade, at least within certain geographies.”
Editor’s note: An earlier version of this article incorrectly stated that request-response times were calculated minus the associated server processing time when, in fact, server processing times were included in those calculations. Also, speeds that the earlier version described as “x times the speed of light” were updated to the correct “x times as long as it would take light to travel the same distance.”