Saturday, June 25, 2011

Infrastructure as a Service: Benchmarking Cloud Computing

Building and deploying a heavy-duty web service from the ground up is a long and costly process. At the IT section of AnandTech, we mostly focus on the fun part of the process: choosing and buying a server.

However, there is much more to it. Designing the software and taking care of cooling, networking, security, availability, patching and performance is a lot of work. Add all these time investments to the CAPEX investments in your server and it is clear that doing everything yourself is a huge financial risk.

These days, almost everybody outsources a part of this process. The most basic form is colocation: you rely on a hosting provider for the internet bandwidth and access, the electricity, and the rack space, and you take control of the rest of the process. A few steps up is unmanaged dedicated hosting: the hosting provider takes care of all the hardware and networking, and you get full administrative access to the server (for example, root access on Linux), which means the client is responsible for the security and maintenance of their own dedicated box.

The next step is to outsource that part too. With managed hosting services you won’t get full control, but the hosting provider takes care of almost everything: you only have to worry about the look and content of your web service. The Service Level Agreement (SLA) guarantees the quality of service that you get.

The problem with managed and unmanaged hosting services is that they are in many cases too restrictive and don't offer enough control. If performance is lacking, for example, the hosting provider often points to the software configuration while the customer feels that the hardware and network might be the problem. It is also quite expensive to enable the web server to scale to handle peak loads, and high availability may come at a premium.

Cloud Hosting

Enter cloud hosting. Many feel that cloud computing is just old wine in new bottles, but cloud hosting is an interesting evolution. A good cloud hosting service starts by building on a clustered hosting solution: instead of relying on one server, we get the high availability and the load balancing capabilities of a complete virtualized cluster.

Virtualization allows the management software to carve up the cluster any way the customers like: choose the number of CPUs, the amount of RAM, and the storage you want, and build your own customized server. If you need more resources for a brief period, the cluster can provide them in a few seconds, and you pay only for the time that you actually use this extra capacity. Best of all, cloud hosting allows you to set up a new server in less than an hour. Cloud hosting, or Infrastructure as a Service (IaaS), is definitely something new: technically it is evolutionary, but from the customer's point of view it offers a kind of flexibility that is revolutionary.
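To make that pay-per-use argument concrete, here is a toy cost comparison between a dedicated server sized for peak load and an elastic setup that pays for extra capacity only during peak hours. All the rates and hour counts below are invented for illustration; they are not real provider prices.

```python
# Toy comparison of fixed vs. elastic hosting cost.
# All rates and durations are hypothetical, for illustration only.

def fixed_cost(hourly_rate, hours=730):
    """Cost of a server sized for peak load, running the whole month."""
    return hourly_rate * hours

def elastic_cost(base_rate, burst_rate, burst_hours, hours=730):
    """Cost of a small base server plus extra capacity that is
    billed only for the hours it is actually used."""
    return base_rate * hours + burst_rate * burst_hours

# Hypothetical numbers: a peak-sized box at $0.40/h, versus a $0.10/h
# base instance that bursts (+$0.30/h) for 60 peak hours a month.
peak_box = fixed_cost(0.40)                # 292.0
cloud    = elastic_cost(0.10, 0.30, 60)    # 91.0

print(f"dedicated: ${peak_box:.2f}, elastic: ${cloud:.2f}")
```

The gap grows the spikier the traffic is: the dedicated box is billed for its peak size around the clock, while the elastic setup pays peak rates only for the burst hours.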

There is a downside to the whole cloud IaaS solution: most of the information about the subject is so vague and fluffy that it is nearly useless. What exactly are you getting when you start up an Amazon Instance or your own cloud at the Terremark Enterprise Cloud?

As always, we don’t care much about the marketing fluff; we're more interested in benchmarking in true AnandTech style. We want to know what kind of performance we get when we buy a certain amount of resources. Renting 5GB of RAM is pretty straightforward: it means that our applications should be able to use up to 5GB of RAM space. But what about 5GHz--what does that mean? Is that 5GHz of nostalgic Pentium goodness, or is it 5GHz of the newest complex, out-of-order, integrated memory controller, billion-transistor CPU monsters? We hope to provide some answers with our investigations.
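The sketch below shows why a bare GHz figure is ambiguous: delivered throughput depends on instructions per cycle (IPC) as much as on clock speed. The IPC values are invented, illustrative numbers, since the whole point is that a provider's "5GHz" does not tell you which kind of core you are getting.

```python
# Why "5GHz" alone is ambiguous: throughput depends on IPC, too.
# The IPC figures below are invented, purely illustrative numbers.

def throughput(ghz, ipc):
    """Rough instructions-per-second estimate: clock rate * IPC."""
    return ghz * ipc * 1e9

# The same 5GHz budget on very different silicon (hypothetical IPC):
old_pentium = throughput(5.0, ipc=0.6)   # long-pipeline, low-IPC design
modern_core = throughput(5.0, ipc=2.4)   # wide, out-of-order design

print(f"ratio: {modern_core / old_pentium:.1f}x")
```

Under these made-up numbers the modern core does four times the work per "GHz", which is exactly the gap a benchmark has to expose before a cloud resource unit means anything.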
