The blogosphere has been very active over the past couple of weeks with discussions about whether cloud computing makes economic sense for customers, a debate that seems to be crowding out the usual technology discussions. It was triggered by some changes in public cloud pricing models, most notably Amazon's EC2 reserved instances: pre-pay for your compute and then get a lower rate per CPU hour.
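To make that pricing model concrete, here's a quick back-of-the-envelope sketch of where pre-paying breaks even against pure pay-as-you-go. All the rates below are made-up placeholders, not actual Amazon prices; the point is the shape of the math.

```python
# Break-even sketch for a pre-pay ("reserved") pricing model versus pure
# on-demand hourly pricing. All dollar figures are illustrative only.

ON_DEMAND_RATE = 0.10      # $/hour, pay-as-you-go (hypothetical)
RESERVED_UPFRONT = 350.00  # one-time pre-payment for a one-year term (hypothetical)
RESERVED_RATE = 0.04       # $/hour after the pre-payment (hypothetical)
HOURS_PER_YEAR = 8760

def annual_cost_on_demand(hours: float) -> float:
    """Total yearly cost if every hour is billed at the on-demand rate."""
    return hours * ON_DEMAND_RATE

def annual_cost_reserved(hours: float) -> float:
    """Total yearly cost with the upfront fee plus the discounted rate."""
    return RESERVED_UPFRONT + hours * RESERVED_RATE

# Break-even: the usage level at which both models cost the same.
break_even_hours = RESERVED_UPFRONT / (ON_DEMAND_RATE - RESERVED_RATE)
utilization = break_even_hours / HOURS_PER_YEAR

print(f"Break-even at {break_even_hours:.0f} hours/year "
      f"({utilization:.0%} utilization)")
```

Run the server more than the break-even utilization and the pre-pay model wins; run it less and on-demand wins. That utilization question is exactly where the enterprise discussion below starts.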
What’s our take?
Well, let's start with what you plan to use the cloud for. Assume you have some degree of predictability in the workloads on your IT infrastructure. Very few enterprise or federal customers can "switch the lights off" on their infrastructure at night; email doesn't stop, and databases are still used to build reports. Also, very few enterprise apps like to scale (in either technology or license model) in fractions of a machine. So enterprises have a natural "commit" level that matches their workloads.
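One way to picture that "commit" level: take a day of demand samples and split them into the baseline you can never switch off and the peaks above it. The numbers below are illustrative, not measured data.

```python
# Sketch of the "natural commit" idea: given a day of hourly load samples
# (in units of machines needed), the baseline you can never switch off is
# the minimum, and everything above it is burst capacity. Illustrative data.

hourly_demand = [4, 4, 4, 4, 5, 6, 8, 10, 12, 12, 11, 10,
                 10, 11, 12, 12, 11, 9, 7, 6, 5, 4, 4, 4]

commit = min(hourly_demand)                   # always-on baseline
burst = [d - commit for d in hourly_demand]   # capacity worth renting on demand

print(f"Commit level: {commit} machines")
print(f"Peak burst:   {max(burst)} machines")
```

The commit portion is what the flat-rate comparison below prices out; the burst portion is where the cloud scenarios later in this post come in.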
So, assuming you need some number of servers available all the time, we used the online cloud pricing tools and modeled the costs for customers. We took a few sample configurations for our servers and for other managed hosting companies that publish prices online, and compared them to public cloud providers. What we found: if you work out the number of CPU hours in a month and configure a comparably specced machine (cores, GHz, memory, and disk capacity), in every case purchasing a managed server was cheaper than the cloud equivalent. The same goes for storage, and even more so for bandwidth, where the cloud model typically charges for actual bytes transferred versus some form of 95th-percentile or average bandwidth billing.
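The comparison above can be sketched in a few lines. Every figure here is a made-up placeholder, not one of the quotes we modeled; plug in real published prices to reproduce the exercise.

```python
# Sketch of the comparison described above: a flat-rate managed server versus
# a cloud instance billed per CPU hour and per byte transferred, for an
# always-on workload. All rates are hypothetical placeholders.

HOURS_PER_MONTH = 24 * 30          # always-on, so every hour is billed

# Managed hosting: one flat monthly price for a comparable spec machine
managed_monthly = 400.00           # $/month (hypothetical)

# Cloud: metered compute plus metered transfer
cloud_hourly_rate = 0.68           # $/CPU-hour for a similar instance (hypothetical)
transfer_gb = 500                  # GB actually transferred in the month (hypothetical)
cloud_transfer_rate = 0.15         # $/GB transferred (hypothetical)

cloud_monthly = (HOURS_PER_MONTH * cloud_hourly_rate
                 + transfer_gb * cloud_transfer_rate)

print(f"Managed server: ${managed_monthly:,.2f}/month")
print(f"Cloud instance: ${cloud_monthly:,.2f}/month")
```

With any rates roughly in these proportions, a machine that runs every hour of the month comes out cheaper on the flat-rate side, which is the pattern we saw across the configurations we priced.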
So where does cloud make sense for enterprise customers?
At Carpathia we are seeing demand for all three of the scenarios presented below and have been very busy the past few months engineering solutions to meet these requirements.
- Burstable capacity to support production environments where some demand event, be it seasonal or more dynamic, requires extra compute/storage for a short duration.
- Labs, development, and test environments, where the ability to use the underlying virtualization software to roll back and play forward configurations, revisions, and types of servers is important for simulating or testing scenarios, and where you can "switch off the lights" when not in use.
- DR. If your recovery time objective is measured in hours, why pay for a copy of production that's always powered up? Why not pay for data synchronization and use the cloud only when you need it?
Let's focus for now on #1. Our solution to this problem is a family of services we call AlwaysOn and InstantOn. AlwaysOn delivers the predictable "commit" portion of a customer's IT infrastructure using traditional IT infrastructure: servers, virtualization, SAN, load balancers, and so on. InstantOn connects AlwaysOn to cloud-based technology, allowing storage and compute to be seamlessly added to a production environment. This gives customers the benefits of a traditional managed environment (availability, security, predictability) plus the ability to tap the cloud to meet bursts of capacity in very granular units.
Most importantly, AlwaysOn and InstantOn are delivered as managed services, so our customers know who to call if they need help. We monitor performance 24×7 and take action proactively.
Expect to hear a lot more about these services in the coming weeks; we have plenty we're looking forward to sharing…