November 12, 2013, 1:03 PM — Intel and Fujitsu last week showed off a new server prototype using Intel's silicon photonics technology to power an Optical PCI Express (OPCIe) design, a fiber-optic interconnect that allows storage and networking to be moved away from the CPU motherboard. The demo took place at the annual Fujitsu Forum in Munich, Germany.
Fujitsu is a leading server maker in its native Japan but has struggled to gain traction in the U.S. It has worked with Intel for some time on this OPCIe server interconnect, and this was the technology's first public showing. Intel, for its part, has been working on optical interconnects for years, beginning with the Light Peak technology that became Thunderbolt, used by Apple.
In the demonstration, OPCIe-equipped servers connected to an external expansion box containing additional compute and storage nodes. Normally these nodes would have to sit extremely close to the server, because copper connections degrade and pick up interference as the wire gets longer. Over fiber, signals travel at near the speed of light with very little loss, so there is no problem with putting the nodes many feet apart.
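A quick back-of-the-envelope sketch shows why the extra cable length barely matters. Assuming the common approximation that signals in optical fiber propagate at roughly two-thirds the vacuum speed of light, a few meters of cable adds only nanoseconds of one-way delay:

```python
# Rough one-way propagation delay over optical fiber.
# Assumption: signal speed ~ 2/3 of c, a standard approximation for glass fiber.

C_VACUUM = 299_792_458          # speed of light in vacuum, m/s
V_FIBER = C_VACUUM * 2 / 3      # approximate signal speed in fiber, m/s

def propagation_delay_ns(distance_m: float) -> float:
    """One-way propagation delay over fiber, in nanoseconds."""
    return distance_m / V_FIBER * 1e9

for meters in (0.3, 3.0, 30.0):
    print(f"{meters:5.1f} m -> {propagation_delay_ns(meters):6.1f} ns")
```

Three meters of fiber works out to about 15 ns of propagation delay, which is negligible next to the microseconds a storage access already takes.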
This frees up room inside rack servers, where space is extremely tight; 1U servers aren't called "pizza boxes" for nothing. Four-socket 1U and 2U servers are common, but after you fit four CPUs, memory for each CPU, and a huge fanless heat sink, there isn't room for much else. With the OPCIe interconnect, you can move all of the storage outside the chassis.
Beyond distance limits, copper wire has other disadvantages. It can suffer interference in a tightly packed server box; signal amplifiers solve that problem but add power costs. And the cables are heavy, weighing up to 20 pounds, while a single OPCIe cable carries 10 times the bandwidth and weighs just one pound.
As part of its demo, Fujitsu took two of its Primergy RX200 servers and added an Intel Silicon Photonics module to each, along with an Intel-designed FPGA to bridge PCIe onto the optical link. The servers connected to an expansion box holding several solid-state disks (SSDs) and Xeon Phi co-processors, with a matching Silicon Photonics module and FPGA on the expansion-box end of the link.
The demo showed the ability to connect separate boxes of compute or storage nodes so that they appear to the CPU to be on the main motherboard, when in fact they are fully virtualized. The SSDs and Xeon Phis appeared to the RX200 servers as if they were local devices, and data traveling a few meters down the cable added only negligible latency.
Fujitsu's approach showed three key benefits: greater storage capacity, since the server chassis is no longer the limit; room for Xeon Phi cards and their massive compute power, which would be impossible with hard drives taking up that space; and a server that ran much cooler.
The only question left is when this will hit the market. Fujitsu has no concrete plans, nor do any U.S.-based server makers.