More efficient hardware could be bad for your software

MIT researchers develop a new programming language to allow developers to deal with the unreliability of smaller, more efficient processors

By Phil Johnson


Smaller, more efficient microprocessors can be more unreliable, which isn't good for software

Image credit: Adapteva

As technology continues to advance at lightning speed, microprocessors keep getting smaller and more efficient. This is good news because it means computers and mobile devices get faster and use less energy. However, it's also bad news because these newer chips have higher soft error rates (i.e., occasional read/write or computation errors), which can cause the software running on them to fail.

Good news on this front, though, comes from researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), who’ve developed a software-based approach to managing this efficiency/reliability tradeoff. Reasoning that some software can tolerate a small number of errors and still work just fine (video rendering, for example), they’ve developed a new programming language and analysis framework that let programmers account for unreliable hardware and specify an acceptable level of errors in their software.

They call this new language Rely. It supports a number of standard programming constructs, such as integer, logical, and floating-point expressions, as well as arrays, conditionals, while loops, and function calls. Developers can write programs that allocate variables to unreliable memory, mark potentially unreliable computations with a simple notation (a dot after the operator, e.g., x = y +. z), and specify reliability thresholds for functions. Given this code and a specification of the hardware's unreliability (i.e., the probabilities that reads/writes and arithmetic/logical operations complete correctly), Rely can then perform a reliability analysis and indicate the likelihood that a program will run correctly.
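To make that concrete, here is a rough, hypothetical sketch of what a small Rely function might look like, pieced together from the constructs described above (the annotation syntax is approximated from the researchers' published examples and may not match the actual language exactly):

    // The specification <0.99*R(x, y)> asks Rely to verify that the result is
    // correct with at least 0.99 times the joint reliability of inputs x and y.
    int<0.99*R(x, y)> approx_sum(int x, int y) {
        int t in urel;   // allocate t in unreliable (approximate) memory
        t = x +. y;      // the dot marks an addition that may run unreliably
        return t;
    }

Given a hardware specification stating, for example, that an unreliable addition is correct 99.99% of the time and an unreliable memory read 99.999% of the time, Rely's analysis would then check whether approx_sum actually meets the declared 0.99*R(x, y) threshold.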

Using this framework, programmers can decide just how much error they’re willing to tolerate in their program. It may be that a certain level of software unreliability is worth the gain in performance from running on more efficient, unreliable hardware. However, if the benefit isn't worth the cost, then the developer can choose to sacrifice performance for increased software reliability.

This software is still in the research phase and isn’t something you can get ahold of yet, as more work needs to be done. But it’s a significant first step in allowing developers to better take advantage of leaps in hardware performance - and freeing hardware manufacturers to keep creating smaller/better/faster chips without fear of impacting software. As the researchers wrote:

"By enabling developers to better understand the probabilities with which this hardware enables approximate computations to produce correct results, these guarantees can help developers safely exploit the significant that unreliable hardware platforms offer."

[h/t MIT News]

Read more of Phil Johnson's #Tech blog and follow the latest IT news at ITworld. Follow Phil on Twitter at @itwphiljohnson. For the latest IT news, analysis and how-tos, follow ITworld on Twitter and Facebook.
