InfiniBand was introduced in 2000 as a way to tie memory and processors of multiple servers together so tightly that communications among them would be as if they were on the same printed circuit board. To do this, InfiniBand is architecturally sacrilegious, combining the bottom four layers of the OSI (Open Systems Interconnection) networking stack -- the physical, data link, network and transport layers -- into a single architecture.
"InfiniBand's goal was to improve communication between applications," says Bill Lee, co-chair of the InfiniBand Trade Association Marketing Working Group, subtly deriding Ethernet's "store-and-forward" approach.
Unlike Gigabit Ethernet's hierarchical topology, InfiniBand is a flat fabric, topologically speaking: each node has a direct connection to every other. InfiniBand's special sauce is RDMA (Remote Direct Memory Access), which lets the network card read and write memory on a remote server directly, eliminating the need for that server's processor to shuttle the data itself.
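The zero-copy idea behind RDMA can be illustrated, loosely, in ordinary Python. This is only an analogy, not InfiniBand code: a `memoryview` plays the role of a registered memory region that a peer writes into directly, while an explicit `bytes()` copy stands in for the store-and-forward style where every hop duplicates the payload.

```python
# Analogy only: RDMA lets the NIC move bytes straight into another host's
# registered memory, with no CPU copy loop. A memoryview gives a local,
# single-process flavor of the same "write in place, no copy" behavior.

buf = bytearray(b"database page: " + bytes(16))  # stands in for a registered memory region

# Store-and-forward style: each hop makes a full copy of the payload.
copied = bytes(buf)          # extra CPU work and memory traffic

# RDMA style: hand out a view; writes land in the original buffer directly.
view = memoryview(buf)
view[15:19] = b"ABCD"        # the "remote write" updates buf in place

assert bytes(buf[15:19]) == b"ABCD"              # original buffer changed
assert copied[15:19] == b"\x00\x00\x00\x00"      # the copy stayed stale
```

The copy going stale is the point: in the copy-based model the processor must move data and keep versions in sync, while in the RDMA model there is one buffer and the hardware writes it directly.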
InfiniBand quickly gained favor in HPC systems, and as mentioned above, the technology is now creeping into the enterprise. Oracle, for instance, uses InfiniBand as a performance edge for its Exadata and Exalogic data analysis appliances. Microsoft added direct support for RDMA to its newly released Windows Server 2012.
One enterprise user of InfiniBand is the Department of Veterans Affairs. The U.S. federal agency's informatics operation runs on about 200 servers, which communicate via InfiniBand. "We do a lot of data transfer," says Augie Turano, a solutions architect at the VA. Databases are moved around quite a bit among the servers so they can be analyzed by different applications. "Being able to move the data at InfiniBand speeds from server to server has been a big boost for us," Turano says.
The Ethernet Alliance's D'Ambrosia is undaunted by InfiniBand's performance perks, however. He figures Ethernet will catch up. "We like competition from other technologies, because it makes us realize we can always keep improving," he says.
While Ethernet was first used to connect small numbers of computers, successive versions of the specification were tailored for larger jobs, such as serving as the backplane for entire data centers, a role in which it quickly became dominant. In the same spirit, technologies such as iWARP and RoCE (RDMA over Converged Ethernet) have emerged to let Gigabit Ethernet compete directly with InfiniBand by cutting latency and processor usage.
"Ethernet evolves. That's what it does," D'Ambrosia says. Watch out InfiniBand! A formable competitor lurks in the data center!