Is Google and Microsoft's minutely pricing better value than AWS's hourly?

If you start a virtual machine (VM) on AWS, your usage is always rounded up to the next full hour. Used it for three hours and 45 minutes? Then you'll have to pay for four hours.

Hourly billing was the de facto standard until Microsoft and Google entered the IaaS market and announced they would offer minutely billing. On these clouds, if you only use a resource for three hours and 45 minutes, then that is all you pay for.

So surely this is good news. In our example above, the 15 minutes we no longer pay for can mount up over time, right? Well, yes, those 15 minutes do represent a saving. But the saving is not as big as you'd think.

Is it worthwhile?

How long will a virtual machine typically run for? Seconds, minutes? No, probably not yet. It's more likely a VM will run for days, weeks or even months.

Let's take a virtual machine charged per hour – is it more likely to be terminated at the start of the hour, in the middle, or at the end? If we looked at millions of VMs, we'd probably find no pattern – a VM is as likely to be terminated at the end of an hour as at the beginning.

So, on average, a VM will terminate halfway through an hour – 30 minutes into its final hour.

This means that, on average, we'll save only half the cost of one hour of usage on the hourly provider if we move to a minute-by-minute provider, regardless of how long we used the VM (assuming the cost of one minute on provider A equals 1/60 of the cost of an hour on provider B).

If the VM has been running for days, months, or even years, our average saving will be half the cost of an hour of the resource: that's about half a cent on AWS. Is that saving really worthwhile?
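
To put a number on it, here is a minimal back-of-the-envelope sketch in Python. The $0.06/hour rate and the three-day runtime are made-up figures, not quotes from any provider, and the per-minute provider is assumed to charge exactly 1/60 of the hourly price:

    import math

    # Hypothetical headline price, not a quote from any provider.
    hourly_rate = 0.06                    # $ per hour (assumed)
    per_minute_rate = hourly_rate / 60    # same headline price, billed per minute

    # A VM that ran for three days and 37 minutes.
    runtime_minutes = 3 * 24 * 60 + 37

    hourly_bill = math.ceil(runtime_minutes / 60) * hourly_rate   # usage rounded up to whole hours
    minutely_bill = runtime_minutes * per_minute_rate             # usage billed to the minute

    print(f"hourly billing:     ${hourly_bill:.4f}")
    print(f"per-minute billing: ${minutely_bill:.4f}")
    print(f"saving:             ${hourly_bill - minutely_bill:.4f}")
    # However long the VM ran, the saving can never exceed one hour's charge,
    # and on average it is about half of one.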

Cloudbursting

However, if cloudbursting becomes a reality then things are different. If a virtual machine can run for seconds or minutes, then smaller time units are important. Let's say our application experiences a sudden rise in demand, causing hundreds of VMs to be created and shut down within 5 minutes.

Now our average saving is 55 minutes – 55/60, or about 92%, of the cost of an instance charged by the hour – for every single VM. If the app constantly spikes on these kinds of timescales, then large savings can be made.
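
As a rough sketch of the arithmetic behind that figure, here is the same comparison for a single burst, again with made-up numbers (300 VMs and the hypothetical $0.06/hour rate from before):

    import math

    # Hypothetical burst: 300 VMs, each alive for 5 minutes, at an assumed $0.06/hour rate.
    vm_count = 300
    runtime_minutes = 5
    hourly_rate = 0.06
    per_minute_rate = hourly_rate / 60

    hourly_bill = vm_count * math.ceil(runtime_minutes / 60) * hourly_rate   # each VM billed for a full hour
    minutely_bill = vm_count * runtime_minutes * per_minute_rate             # each VM billed for 5 minutes

    print(f"hourly:     ${hourly_bill:.2f}")    # $18.00
    print(f"per-minute: ${minutely_bill:.2f}")  # $1.50
    # About 55/60 (roughly 92%) of the hourly bill pays for time the VMs never used,
    # and that gap repeats with every burst.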

Should I be concerned?

For most users, the charge period shouldn't be a major consideration when evaluating cloud providers. It's more important to understand how the whole application will be priced and scaled, including VMs, bandwidth and other supporting services, and use this total as the basis of comparison, rather than any individual element.

The charge period only becomes important when the typical life of a virtual machine is less than the minimum charge unit. Then the cost of paying for time not used by short-lived VMs can rapidly add up.

This cloudbursting capability is still a vision rather than a reality – few applications can provision, configure and interact with a virtual machine on such small timescales. But as the cloud matures this may change, and billing units could eventually shrink to milliseconds or even smaller.

If AWS can charge for virtual machines in thousandths of a dollar, why not in thousandths of a second?

  • Owen Rogers is a Senior Analyst at 451 Research, helping clients understand the digital and cloud economy.