The efficiency of cloud computing is a giant myth. Utilization is still terrible for anyone who isn't a Google or a Facebook.
------------------------------------------------------
The sorry state of server utilization and the impending post-hypervisor era
by Alex Benik, Battery Ventures
NOV. 30, 2013 - 10:30 AM PST
gigaom.com
But after I reached out to contacts in data-center operations at various companies, I learned that I was wrong. One conversation really stuck with me. I’ll paraphrase:
Me: Do you track server and CPU utilization?
Wall Street IT Guru: Yes.
Me: So it’s a metric you report on with other infrastructure KPIs?
Wall Street IT Guru: No way, we don’t put it in reports. If people knew how low it really is, we’d all get fired.
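For concreteness, here is a minimal sketch of what "tracking server and CPU utilization" means in practice, assuming a Linux host: sample the aggregate counters in /proc/stat twice and compute the share of CPU time not spent idle. The function names are mine and this is an illustration only, not the tooling any of the firms mentioned here actually use.

```python
# Minimal CPU-utilization sampler for Linux (assumption: /proc/stat is
# available and its first line holds the aggregate "cpu" counters).
import time

def read_cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]   # drop the leading "cpu" label
    times = [int(x) for x in fields]
    idle = times[3] + times[4]              # idle + iowait
    return idle, sum(times)

def cpu_utilization(interval=1.0):
    idle1, total1 = read_cpu_times()
    time.sleep(interval)
    idle2, total2 = read_cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

if __name__ == "__main__":
    print(f"CPU utilization: {cpu_utilization():.1f}%")
```

A real monitoring pipeline would push samples like this into a time-series store and aggregate them across the fleet; the point of the anecdote above is that the aggregate rarely makes it into the reports.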
While exact figures vary greatly depending on the type of hardware in a data center, its specific characteristics and the peak-to-average ratio of the workload, low utilization has been widely observed; the sketch after the list below illustrates how a high peak-to-average ratio drags the average down. A few data points from the past five years:
A McKinsey study in 2008 pegging data-center utilization at roughly 6 percent.
A Gartner report from 2012 putting the industry-wide utilization rate at 12 percent.
An Accenture paper sampling a small number of Amazon EC2 machines finding 7 percent utilization over the course of a week.
The charts and quote below from Google, which show three-month average utilization rates for two 20,000-server clusters. The typical cluster on the left spent most of its time running at between 20 and 40 percent of capacity, and the highest-utilization cluster on the right reached such heights only because it was doing batch work.
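As promised above, here is a toy illustration of the peak-to-average effect. The hourly numbers are made up, not data from any of the studies cited, but the arithmetic is the point: capacity has to be sized for the peak hour, so the rest of the day runs well below it.

```python
# Hypothetical hourly CPU readings for one machine over a day (illustrative only).
hourly_cpu_percent = [5, 4, 6, 8, 12, 30, 70, 85, 60, 25, 10, 6]

average = sum(hourly_cpu_percent) / len(hourly_cpu_percent)
peak = max(hourly_cpu_percent)

print(f"average utilization : {average:.1f}%")        # ~26.8%
print(f"peak utilization    : {peak}%")                # 85%
print(f"peak-to-average     : {peak / average:.1f}x")  # ~3.2x
```

Per-machine peaks rarely line up across a fleet, and capacity is typically provisioned with headroom above the peak, which is how fleet-wide averages can end up in the single digits even when individual servers occasionally look busy.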