One of the biggest problems businesses face today is finding the balance between availability and cost. It’s likely that this question has come up in your organization when discussing cloud computing: should we sacrifice application availability to save on cloud spending… or vice versa?
In traditional data centers, companies own the servers and host their applications on them. In most cases, these servers run 24 hours a day, so internal users can access applications (like QA and test environments) whenever and wherever they want.
Most of the time, companies don’t pay much attention to per-second costs – after all, the overhead of configuring their hosts that granularly is a cost of its own.
Availability vs. Cost
Things aren’t like they used to be. Enterprises are moving to the cloud, and companies running their production servers there are seeing the advantages across the board. The problems began, however, when companies started moving their non-production servers to the cloud.
Because the cloud uses a pay-per-second model, and because companies carry over legacy on-premises data center practices like leaving non-production servers on 24 hours a day, they end up paying for resources they aren’t using.
There’s a clear disconnect here. If you can take advantage of a more granular pricing model, why wouldn’t you? Anything else is simply wasteful. A few smart companies spotted this cost leak and adopted strategies to reduce their cloud spend on non-production servers. Let’s take a deeper look at a couple of these strategies and their tradeoffs.
Running Servers 24 Hours a Day
When servers are always powered on, availability is prioritized, but unnecessary expenses pile up. Following traditional data center practices, companies leave non-production application instances running 24 hours a day. In reality, users might access these applications for only a few hours, say 3 hours per day. That means enterprises are paying for 21 idle hours, and roughly 87.5% of the cost of those instances could be saved.
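The waste is easy to quantify. Here is a back-of-the-envelope sketch of the arithmetic above; the $0.10/hour rate is an assumed on-demand price, not a quote for any real instance type.

```python
# Back-of-the-envelope waste calculation for an always-on non-prod instance.
HOURLY_RATE = 0.10        # assumed on-demand price, USD/hour (hypothetical)
HOURS_USED_PER_DAY = 3    # actual daily usage from the example above
HOURS_IN_DAY = 24

always_on_daily = HOURLY_RATE * HOURS_IN_DAY      # cost of running 24/7
needed_daily = HOURLY_RATE * HOURS_USED_PER_DAY   # cost of the hours actually used
wasted_fraction = (always_on_daily - needed_daily) / always_on_daily

print(f"Always-on:       ${always_on_daily:.2f}/day")
print(f"Actually needed: ${needed_daily:.2f}/day")
print(f"Wasted: {wasted_fraction:.1%}")  # 87.5%
```

Swap in your own instance price and usage pattern; the ratio, not the rate, is what matters.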
Scheduled or Manual Start & Stop Instances
Another strategy enterprises are using is starting instances at a set time in the morning and stopping them at the end of the day. This is an improvement, but it still isn’t ideal: because most cloud computing resources are now billed per second, a fixed schedule still leaves real savings on the table.
There are a few scenarios when this can be a problem:
The first situation is when users don’t actually use the application for the whole scheduled day: there are meetings, lunch breaks, sick days, and so on. Assuming 3 idle hours per day across roughly 240 working days, that’s 720 hours per year of unnecessary expense per instance. Better than running servers 24 hours a day, but not perfect.
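To see how much waste survives a daily schedule, here is the same arithmetic under assumed numbers: a 12-hour (e.g. 7am–7pm) schedule, 3 idle hours within it, and about 240 working days per year.

```python
# Residual idle time under a daily start/stop schedule (assumed figures).
SCHEDULED_HOURS_PER_DAY = 12   # assumed 7am-7pm schedule (hypothetical)
IDLE_HOURS_PER_DAY = 3         # meetings, lunch, etc., from the example above
WORKDAYS_PER_YEAR = 240        # assumed: ~48 working weeks x 5 days

idle_hours_per_year = IDLE_HOURS_PER_DAY * WORKDAYS_PER_YEAR
wasted_within_schedule = IDLE_HOURS_PER_DAY / SCHEDULED_HOURS_PER_DAY

print(idle_hours_per_year)                      # 720 hours/year per instance
print(f"{wasted_within_schedule:.0%} of scheduled hours idle")  # 25%
```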
Another example of the availability tradeoff is that servers can be unavailable for large portions of the day. In a cloud era where connectivity is available everywhere, people expect agility and availability. By using schedulers, enterprises essentially limit their workforce to certain hours. That can be fine in many cases, but if an environment is needed on short notice (in an emergency, for instance), the manual effort to bring it up may cost more than it saves.
Neither of these approaches perfectly balances cloud spend and application availability while keeping the user experience similar to that of an on-premises data center. Should we sacrifice availability to save on cloud spend? Or should we continue to spend unnecessarily on the cloud for availability? How can we have both?
Finding the Balance
The answer to this question, like most others, depends on your organizational needs. If your application has the potential for on-demand maintenance requirements, you probably need to make availability a top priority. If you run a relatively low-risk deployment, you may want to trade some availability for cost. What it comes down to is your tolerance for risk: how much money are you willing to lose if your environments fail or become unavailable, versus how much you’re willing to lose by running your servers twenty-four hours a day?
There’s no one-size-fits-all answer to this question. Fortunately, cloud platforms like Amazon Web Services make it very simple to create a solution that works for your needs. If your current needs are relatively simple, you may consider running an automated Lambda function to schedule servers to power down on demand. If your infrastructure is more complex, you may want to use a more sophisticated service like the one offered by INVOKE, which spins up servers upon receiving a request rather than statically automating downtime.
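For the simple case, a scheduled Lambda function might look like the sketch below. It stops running EC2 instances carrying a hypothetical Environment=non-prod tag; the tag name and value are assumptions you would replace with your own tagging scheme, and the EC2 client is injectable so the logic can be exercised without AWS credentials.

```python
def stop_nonprod_instances(ec2=None):
    """Stop running EC2 instances tagged Environment=non-prod.

    `ec2` is injectable for testing; by default a real boto3 EC2 client
    is created (which requires AWS credentials and permissions).
    """
    if ec2 is None:
        import boto3  # imported lazily so the module loads without boto3
        ec2 = boto3.client("ec2")

    # Find running instances carrying the (assumed) non-prod tag.
    paginator = ec2.get_paginator("describe_instances")
    instance_ids = []
    for page in paginator.paginate(
        Filters=[
            {"Name": "tag:Environment", "Values": ["non-prod"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                instance_ids.append(instance["InstanceId"])

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids


def lambda_handler(event, context):
    # Entry point, wired to an EventBridge schedule such as
    # cron(0 19 ? * MON-FRI *) to run at 7pm on weekdays.
    return {"stopped": stop_nonprod_instances()}
```

A matching function triggered in the morning would call `start_instances` with the same filter. This is a sketch of the scheduling approach, with the availability tradeoffs discussed above; it does not solve the on-demand case.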
As you weigh your options, remember that this problem was much more difficult to solve twenty years ago, before the rise of cloud technology. While it might seem like a complex decision, the reality is that it’s very similar to the decisions your company already makes in terms of your existing software and infrastructure. The choice comes down to the tradeoffs you’re willing to accept. And with the cost-saving power of cloud computing, finding the balance between availability and spend has never been easier.