Some years ago, we were in a joint venture with the dearly departed Digital Equipment Corporation. The purpose was to build a regional "computer conferencing" system, called, yes, New England Commons, that would then link to the other regional computer conferencing systems: notably Metanet in Washington, D.C.; Unison, a similar system in Denver; and, in the Bay Area, you guessed it, The Well. Someday, the business plan read, all of these regional systems would meld into one national system allowing people all over the country to talk to one another online. Name of the parent company: Internetwork Communications. We called it Internet, and the year was 1985.
Why were there regional systems in those days? Because the cost of connection--via dial-up--was prohibitive way back then...as in $25/hour. That's right, $25 per hour, or, if you worked out a package deal, $22.50. (Now go complain about your monthly cable-modem or DSL fee.)
For those unfamiliar with the term, a computer conferencing system was the ancestor of bulletin boards, discussion forums, and even that much-tossed-about term du jour, wikis.
Point of all this is that in order to install the hardware for this enterprise, we had to build a room outfitted with its own power system, air conditioning, a raised floor, and what I recall as a zillion other things we had to quickly become expert in. Transporting "the computer" (a VAX 11/780, plus racks and racks for the modems) to our second-floor office in Waltham, Mass., required hiring a crane. And as soon as it was operational, our power bill went through the third-floor roof.
I've been sensitive to the power needs of computing ever since, and I go to some effort--i.e., crawling around the floor of my home office turning off power strips--to reduce the drain on electricity when my machine is off.
All of which leads to a good post today that goes a bit deeper into what server farms and the like require in major operations, like hospitals. CareGroup's CIO, "geekdoctor" John Halamka, a low-carbon-footprint kinda guy, sheds some light, so to speak, on the tradeoffs that he and his folks think about when adding MIPS. They've even hired a full-time power engineer:
Power consumption and heat is increasing to the point that data centers cannot sustain the number of servers that the real estate can accommodate. The solution is to deploy servers much more strategically. We’ve started a new “Kill-a-watt” program and are now balancing our efforts between supply and demand. We are more conservative about adding dedicated servers for every new application, challenging vendor requirements when dedicated servers are requested, examining the efficiency of power supplies, and performing energy efficiency checks on the mechanical/electrical systems supporting the data center.
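To see why challenging every request for a dedicated server is worth the argument, here's a back-of-envelope sketch in Python. The wattage, PUE (the multiplier for cooling and other overhead), and electricity rate below are my own illustrative assumptions, not CareGroup's figures:

    # Rough annual energy cost for one always-on server.
    # All numbers are illustrative assumptions, not CareGroup data.

    SERVER_WATTS = 400       # assumed average draw of a dedicated server
    PUE = 2.0                # assumed overhead: cooling etc. doubles the draw
    DOLLARS_PER_KWH = 0.15   # assumed commercial electricity rate
    HOURS_PER_YEAR = 24 * 365

    kwh_per_year = SERVER_WATTS * PUE * HOURS_PER_YEAR / 1000
    annual_cost = kwh_per_year * DOLLARS_PER_KWH

    print(f"{kwh_per_year:,.0f} kWh/year -> ${annual_cost:,.2f}/year per server")
    # Output: 7,008 kWh/year -> $1,051.20/year per server

Under those assumptions, every dedicated box a vendor insists on is roughly a thousand dollars a year in electricity alone, before you've paid for the floor space, the UPS capacity, or the air conditioning headroom it eats up.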