All chips with 16 cores or more must be 'network on a chip': MIT researcher

Data buses reach too far, use too much power, can't distribute data without adding mini mesh networks

Microprocessor designers at MIT are working on ways to make PC microprocessors more powerful using an approach completely different from the ones that have been doubling the power of processors every 18 months for years.

The approach – referred to as Internet on a chip or network on a chip – has been under development for years, but hasn't gone mainstream because simpler methods could deliver power boosts more efficiently.

It is getting to the point where that will no longer be possible, according to researchers at MIT.

Processor designers have hit plateaus with both traditional methods of increasing processor power: widening the data bus so the chip can process larger chunks of data on each cycle, and shortening the cycles so it can process more chunks of data in the same amount of time.
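A rough back-of-the-envelope sketch of those two levers, using hypothetical bus widths and clock rates rather than figures from any specific processor, might look like this:

```python
# Illustrative only: peak data throughput as a function of the two traditional
# levers described above -- bus width and clock rate. Numbers are hypothetical,
# not taken from any specific processor.

def peak_throughput_bytes_per_sec(bus_width_bits, clock_hz):
    """Bytes moved per second if one bus-width chunk moves every cycle."""
    return (bus_width_bits / 8) * clock_hz

# Widening the bus: 32-bit vs. 64-bit at the same 2 GHz clock.
print(peak_throughput_bytes_per_sec(32, 2e9))   # 8.0e9  bytes/s
print(peak_throughput_bytes_per_sec(64, 2e9))   # 1.6e10 bytes/s

# Shortening the cycle: the same 64-bit bus at 3 GHz instead of 2 GHz.
print(peak_throughput_bytes_per_sec(64, 3e9))   # 2.4e10 bytes/s
```

Either lever raises peak throughput, which is why both were pushed for so long before they plateaued.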

They've even plateaued a bit on the alternative method – adding more processors to each chip, in the form of multiple cores sharing the real estate, memory and other resources built onto the chip.

Multicores finish demanding tasks by breaking them up into sections and dividing the sections among the available cores.
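That divide-and-distribute pattern is easy to sketch in software. The example below uses Python's multiprocessing pool purely as an illustration of splitting a task into sections for separate cores; it has nothing to do with the MIT designs themselves.

```python
# Minimal sketch of splitting one demanding task into sections and handing
# the sections to separate cores. Illustrative only.
from multiprocessing import Pool

def process_section(section):
    # Stand-in for the real per-section work (here: summing squares).
    return sum(x * x for x in section)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_cores = 4
    chunk = len(data) // n_cores
    sections = [data[i * chunk:(i + 1) * chunk] for i in range(n_cores)]

    with Pool(n_cores) as pool:
        partial_results = pool.map(process_section, sections)

    print(sum(partial_results))
```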

They don't scale as efficiently as they could, however, because the data buses they use to communicate are also becoming overloaded.

MIT researcher Li-Shiuan Peh wants to change that by making multicore chips work more like the server clusters that provide the massed power underneath most major resource-intensive applications on the Internet.

The data bus on each chip, which allows the cores to exchange data, scales pretty well on chips with as many as eight cores, Peh said. Ten-core chips may use a second bus to keep performance high, but adding extra buses for each cluster of cores would quickly become impractical, long before a chip could support hundreds of cores – a scale Peh said is not as far away as most of us would think.

The solution is to distribute the mechanism for data-transport in the same way multicores distribute the ability to process data.

Each core would get a tiny data connection analogous to the Ethernet plug that goes into the back of each server in a cluster, and would divide its data into packets that can be transmitted and verified more effectively than the continuous data streams used by PC data buses. To keep track of the packets and transmit and receive them correctly, each core would also have a tiny router.
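A hypothetical sketch of that packetizing step is below. The field names (src, dst, seq, payload) and chunk size are assumptions for illustration only, not the actual packet format used in the MIT design.

```python
# Hypothetical sketch of packetizing a core's data stream for an on-chip
# network. Field names and sizes are assumptions, not the MIT format.
from dataclasses import dataclass

@dataclass
class Packet:
    src: int        # id of the core that produced the data
    dst: int        # id of the core that should receive it
    seq: int        # sequence number so the receiver can reassemble and verify
    payload: bytes  # a small chunk of the original data stream

def packetize(data: bytes, src: int, dst: int, chunk_size: int = 16):
    """Split a byte stream into fixed-size packets a per-core router can forward."""
    return [Packet(src, dst, i, data[i * chunk_size:(i + 1) * chunk_size])
            for i in range((len(data) + chunk_size - 1) // chunk_size)]

packets = packetize(b"some data produced by core 3", src=3, dst=7)
print(len(packets), packets[0])
```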

Networking each core would "lay a grid over all the cores, so there are many possible paths between nodes," said Peh. "Latency is much lower, with the disparity increasing as you scale up the core counts," Peh told EETimes. "Bandwidth is also much much higher because there are many possible paths to spread traffic across."

The network-on-a-chip design would save power because each core would send data only to the four cores nearest it, which would pass the data on to other cores as needed.
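Hop-by-hop forwarding on a grid of cores can be sketched with textbook XY (dimension-order) routing, in which a packet travels along one axis of the mesh and then the other, one nearest-neighbor hop at a time. The article does not say which routing algorithm the MIT design uses; this is just a standard example of the idea.

```python
# Sketch of hop-by-hop forwarding on a 2D mesh of cores, where each router
# only ever hands a packet to one of its four nearest neighbors. Uses
# textbook XY (dimension-order) routing as an example.

def next_hop(current, dest):
    """Return the neighboring router a packet should be forwarded to."""
    (cx, cy), (dx, dy) = current, dest
    if cx != dx:                       # first travel along X...
        return (cx + (1 if dx > cx else -1), cy)
    if cy != dy:                       # ...then along Y
        return (cx, cy + (1 if dy > cy else -1))
    return current                     # already at the destination core

def route(src, dest):
    """Full path a packet takes, one nearest-neighbor hop at a time."""
    path, here = [src], src
    while here != dest:
        here = next_hop(here, dest)
        path.append(here)
    return path

print(route((0, 0), (3, 2)))  # [(0,0), (1,0), (2,0), (3,0), (3,1), (3,2)]
```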

Data buses have to connect directly to each core – a long reach that requires a long wire and a lot of power to drive data through efficiently.

Many researchers are working on network-on-a-chip designs, but none has made it work efficiently yet.

In June, Peh will present a paper at the Design Automation Conference summarizing 10 years of research on networked multicores.

Among the highlights will be Peh's calculations showing that all chipmakers will have to move to ring- or mesh-networked interconnect designs for chips with 16 cores or more.

Peh and colleagues will also demonstrate a packet-switched Internet-on-a-chip design that uses 38 percent less energy than it would using a standard data bus.

The chips, which are starting to be known as mini-internet chips, use two techniques impossible with data buses – low-swing signaling and "virtual bypassing."

Virtual bypassing reduces the amount of time each router on the chip holds a packet: the router that was the packet's last stop sends a message ahead, so the next router down the line can change its settings in advance and doesn't have to hold and examine the packet before sending it on.
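A toy latency model makes the payoff concrete. The per-hop cycle counts below are made up for illustration and are not taken from the paper; the point is only that pre-configured routers let a packet cut through instead of being buffered and inspected at every hop.

```python
# Toy model of the latency idea behind virtual bypassing: if the previous
# router sends a lookahead message so the next router is already configured,
# the packet passes straight through instead of being buffered and examined.
# Per-hop cycle counts are made up for illustration only.

HOLD_AND_EXAMINE_CYCLES = 3   # buffer, inspect header, set up switch, forward
BYPASS_CYCLES = 1             # switch already set up by the lookahead message

def path_latency(num_hops, bypassed_hops):
    """Total cycles for a packet crossing num_hops routers, some of them bypassed."""
    normal = num_hops - bypassed_hops
    return normal * HOLD_AND_EXAMINE_CYCLES + bypassed_hops * BYPASS_CYCLES

print(path_latency(num_hops=8, bypassed_hops=0))  # 24 cycles, no bypassing
print(path_latency(num_hops=8, bypassed_hops=6))  # 12 cycles, most hops bypassed
```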

Low-swing signaling reduces the change in voltage needed to transmit each data packet a core sends.
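The reason that saves energy: dynamic energy per wire transition scales roughly with the square of the voltage swing, so even a modest reduction in swing cuts signaling energy sharply. The voltages below are hypothetical, and the 38 percent figure above covers the whole design rather than this effect alone.

```python
# Back-of-the-envelope illustration of why low-swing signaling saves energy:
# dynamic energy per wire transition scales roughly with C * V_swing**2, so
# halving the swing cuts that energy to about a quarter. Voltages are
# hypothetical.

def relative_signaling_energy(v_swing, v_full):
    """Energy of a reduced-swing transition relative to a full-swing one."""
    return (v_swing / v_full) ** 2

print(relative_signaling_energy(0.5, 1.0))   # 0.25 -> ~75% less per transition
print(relative_signaling_energy(0.3, 1.0))   # 0.09 -> ~91% less per transition
```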

