Intel is pushing a series of studies it said can help corporations save as much as $2 billion per year (collectively) just by turning up the heat in their data centers.
The heat generated by spinning disks and number-crunching processors costs IT departments $27 billion per year in air conditioning because, common data-center wisdom says, hot computers will run badly, crash and cause irreparable damage to some data-center manager's career.
By 2014 data centers could be using 1.5 percent of all the electricity produced worldwide, according to estimates released by Intel as part of an energy-saving product rollout.
Turning up the heat just five degrees Celsius can save 20 percent of that cost.
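The dollar figures above are Intel's claims, but the arithmetic behind them is simple enough to sketch. A minimal back-of-the-envelope calculation, assuming the $27 billion industry-wide cooling bill and the 20-percent savings figure cited above are accurate:

```python
# Back-of-the-envelope estimate of industry-wide cooling savings
# from raising data-center temperature by 5 degrees Celsius.
INDUSTRY_COOLING_COST = 27e9   # annual air-conditioning spend, USD (Intel's figure)
SAVINGS_FRACTION = 0.20        # claimed savings from a 5-degree-C increase

savings = INDUSTRY_COOLING_COST * SAVINGS_FRACTION
print(f"Estimated industry-wide savings: ${savings / 1e9:.1f}B per year")
```

Note that this industry-wide figure is larger than the $2 billion Intel cites for corporations collectively; the two numbers come from different claims in Intel's materials and aren't directly reconcilable from what's given here.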
Intel is now pushing a series of design guidelines, along with more precise temperature-monitoring and maintenance products, for data centers designed to run hotter than the typical 69 degrees Fahrenheit.
High-ambient-temperature (HTA) data centers are spec'd to run comfortably at 81 degrees Fahrenheit. Facebook retooled its Santa Clara, Calif. data center to run hot, according to Intel, as has Yahoo's Computing Coop data center.
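For reference, the jump from a typical 69 degrees Fahrenheit to the 81-degree HTA spec is a bigger step than the 5-degree-Celsius increase mentioned earlier. A quick sketch of the conversion (temperature *differences* convert by the 5/9 factor alone, with no 32-degree offset):

```python
def f_delta_to_c(delta_f):
    """Convert a Fahrenheit temperature difference to Celsius."""
    return delta_f * 5 / 9

# 81 F - 69 F = 12 F difference, roughly 6.7 C
print(round(f_delta_to_c(81 - 69), 1))
```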
Intel is also urging data-center managers to get rid of UPSes and rely instead on batteries built (in the future) into the servers themselves.
Using Intel's Node Manager, servers can switch to internal batteries when the power goes out, monitor electricity usage while the power is on and aggregate operational data for whole data centers to help manage temperature more closely than most data centers do now.
Intel claims Node Manager alone can help data centers save 30 percent of their total power bill.
Is it true?
Data centers are notoriously inefficient users of electricity.
Until about five years ago one standard way to decide where to put a new server in a data center was to walk around until you found the spot that was coolest and plunk it down there.
Typical data centers were so inefficient that studies during the early 2000s often found ducting work that routed cold air from air conditioners directly into hot-air venting ducts, rather than into the servers where it would do some good.
A little attention paid to design, air flow and the advisability of shutting off machines that aren't used at night turned out to save huge amounts of electricity.
But getting rid of all the UPSes, running the data center at 81 degrees Fahrenheit and hoping all the non-Intel servers and components in the glass house are as happy to run that hot as Intel wants?
That's putting an awful lot of faith in a processor manufacturer's ability to control the heat, the degradation of non-Intel components and the heat-exhaustion tolerance of data-center geeks who may have gone into that line of work for the quiet and easy access to air conditioning in the first place.
So, sure, go ahead and run your data center hot. Great idea. It will save a lot in electricity this year.
And the new servers, storage and networking gear you buy to replace all the gear that's fried, locked up or caught fire will undoubtedly be much more energy efficient than the old stuff.
It shouldn't take more than five or 10 years for a difference like that to make up the difference in capital cost of replacing half the hardware in the data center.
It's all in a good cause, right?