Mellanox extends InfiniBand across the campus

Mellanox's new MetroX switch can be used to connect different InfiniBand networks

Mellanox Technologies has introduced a new set of switches, the MetroX TX6000 series, that can stretch an InfiniBand connection across different buildings.

The switch will allow an organization to tie together two or more InfiniBand networks housed in different buildings on the same campus, up to 10 kilometers, or 6.2 miles, apart.

InfiniBand is typically used as a high-speed interconnect to join groups of servers, or servers to storage networks within a single data center. This new technology, however, will allow organizations to connect separate InfiniBand networks across short distances.

The MetroX TX6100, available in early 2013, will have a throughput of up to 10Gbps (gigabits per second), and the MetroX TX6200, available in the second half of next year, will run at 40Gbps. Eventually, the company will introduce new products that can convey InfiniBand traffic up to distances of 100 kilometers, or 62 miles.

Mellanox announced the switches at the SC12 supercomputing conference, being held this week in Salt Lake City.

Typically, to join data center networks located near one another, an organization might use a Synchronous Optical Networking (SONET) link. The advantage of using MetroX to connect data centers is that it extends InfiniBand's use of remote direct memory access (RDMA). Other companies sell equipment for running InfiniBand across a WAN (wide area network), though without the RDMA capability.

"We are keeping RDMA all the through the entire line," said Brian Sparks, Mellanox senior director of marketing communications.

RDMA is advantageous because it allows an end device to send and receive data without requiring work from the system processor. With RDMA, the network card itself manages the data transfer directly from system memory, freeing up the system processor for other duties.
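As a rough illustration of what that offload looks like from software, the sketch below uses the open libibverbs API to register a buffer with an InfiniBand adapter so the card can move data in and out of it directly. It is not Mellanox-specific code; queue-pair setup and the actual RDMA read/write operations are omitted, and error handling is minimal.

/* Minimal libibverbs sketch: register a buffer with the InfiniBand
 * adapter (HCA) so the card can DMA into and out of it without
 * involving the CPU. Connection setup and the RDMA operations
 * themselves are left out for brevity.
 * Build (assuming libibverbs is installed): gcc rdma_reg.c -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "No InfiniBand devices found\n");
        return 1;
    }

    /* Open the first adapter and create a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;

    /* Register a 4 KB buffer; the HCA pins it and returns local and
     * remote keys that peers use to target it with RDMA operations. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = pd ? ibv_reg_mr(pd, buf, len,
                                        IBV_ACCESS_LOCAL_WRITE |
                                        IBV_ACCESS_REMOTE_READ |
                                        IBV_ACCESS_REMOTE_WRITE) : NULL;
    if (!mr) {
        fprintf(stderr, "Memory registration failed\n");
        return 1;
    }

    printf("Registered %zu bytes: lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A remote peer given this buffer's address and rkey can now read
     * or write the memory directly through its own HCA, with no work
     * by this host's CPU. */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}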

MetroX is based on Mellanox's SwitchX integrated circuit design. The devices will have six internal ports and six external ports. "There is no gateway functionality here. This is just extending InfiniBand," said Todd Wilde, Mellanox director of technical computing, during an SC12 presentation about new Mellanox technologies.

Use of InfiniBand appears to be growing, at least in the high-performance computing (HPC) market. In the most recent Top500 compilation of the world's fastest supercomputers, InfiniBand was used in 226 of the 500 systems, up from 209 systems six months ago.

Mellanox did not disclose how much the switches would cost.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com
