So, you’ve designed your basic application and have an idea of the infrastructure needed to run it, or at least of the pieces of the architecture and how they will fit into the design. Architecturally, how you deploy your application with Kubernetes really doesn’t matter. Whether you are deploying on-premises, on a public cloud, or on a hybrid or multi-cloud infrastructure, you will still apply the same basic principles, such as placing your pods behind some kind of load balancer and connecting them to some type of database or storage.
The main differentiators in deciding whether to house your Kubernetes cluster on-premises or on a public cloud will be business based.
What do I mean by business based? Basically, there will be business-related reasons for your choice. Your company may already have the infrastructure for an on-premises deployment, making it the less expensive option, or there may be legal requirements for you to house any data within your control on your own hardware and within your own datacenter. You may also find that your application and traffic are small enough, or large enough, that hosting elsewhere is the less expensive option. It could also make sense for you to host the core of your application yourself but rely on an outside provider to run part of it, or to use an outside provider to auto-scale your deployment when needed so you don’t have to keep extra hardware on hand.
So let’s take a look at some use cases that illustrate these scenarios in more detail.
Now, cost goes without saying: most businesses will go with the most cost-effective solution that meets their requirements. But I also mentioned the need to keep data in-house, so let’s look at that first. Keeping data in-house is usually the result of one of two things: either there are contracts in place stating that data will be kept under the company’s full control, or the company has higher-than-normal security requirements. We’re going to look at a use case where security is the main reason for choosing an on-premises solution.
We will use the storage of personal data for this security-related use case. In this example, our application stores Social Security numbers alongside confidential health records. Under government regulations such as HIPAA, any company that deals with protected health information is required to have physical, network, and process security measures in place, and that is often easier to guarantee within your own datacenter. In our hypothetical application, medical offices connect over a VPN that is routed to the internal datacenter, where they are authenticated before they can log in. Once logged in, they are directed to a pod running the web service application on a private network, which in turn connects to a database cluster that may be containerized, virtualized, or running on bare metal. In this use case, the main advantage of running a Kubernetes cluster would be the seamless deployment of code, along with the possibility of autoscaling.
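To make the ordering of that flow concrete, here is a minimal, entirely hypothetical sketch: a request arriving over the VPN must authenticate before anything touches the private database. The token set, record store, and function names are invented for illustration; a real deployment would use an identity provider and the actual database cluster.

```python
# Hypothetical sketch of the access flow described above: authenticate
# first, and only then query the database on the private network.

VALID_TOKENS = {"office-42-token"}  # stand-in for a real identity provider
PATIENT_DB = {"p001": "confidential health record"}  # stand-in for the DB cluster


def handle_request(token: str, patient_id: str) -> str:
    """Gatekeeping order: reject unauthenticated callers before any DB access."""
    if token not in VALID_TOKENS:        # authentication happens first...
        return "403 Forbidden"
    record = PATIENT_DB.get(patient_id)  # ...only then is the private DB queried
    return record if record else "404 Not Found"


print(handle_request("bad-token", "p001"))        # -> 403 Forbidden
print(handle_request("office-42-token", "p001"))  # -> confidential health record
```

The point of the sketch is simply that the web-service pod is the only component allowed to reach the database, and it does so only after the caller has been authenticated.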
The next use case we’re going to look at is an application for which a public cloud offering was chosen. In this case, we have an application whose user base can fluctuate quickly, such as a fantasy football site. Choosing a public Kubernetes offering allows the site to scale quickly without extra infrastructure sitting on hold to absorb an influx of users and then going unused when the rush is over. Diving deeper, let’s assume it’s the start of the season and everyone is trying to pick their teams: your normal site traffic could double or triple, and you need to be able to handle that growth and then return to ‘normal’ to save money. This is where autoscaling plays a big part, as does the ability to shift load quickly should the underlying infrastructure have an issue, since you don’t want your customers to have a bad experience. So here you might start with a set number of pods that connect to a database to retrieve the available players and then save your team for future use during the ‘season’. From there, the number of pods increases and decreases based on the number of users hitting the site. In this use case, the main advantage of running a Kubernetes cluster would actually be autoscaling and self-healing, with seamless deployments as an added bonus.
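The scale-up and scale-down behavior described above is what the Kubernetes Horizontal Pod Autoscaler does: its core rule is desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue), clamped between a configured minimum and maximum. As a rough illustration (the replica counts, CPU figures, and limits here are hypothetical, not from any real deployment), a minimal sketch of that rule:

```python
import math


def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 2,
                     max_replicas: int = 20) -> int:
    """Sketch of the HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to [min_replicas, max_replicas]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))


# Season kickoff: average CPU jumps to 180% against a 60% target,
# so 4 pods scale out to 12.
print(desired_replicas(4, 180, 60))   # -> 12

# Rush over: CPU falls to 20%, so the deployment scales back down
# (the clamp keeps at least min_replicas running for availability).
print(desired_replicas(12, 20, 60))   # -> 4
```

Keeping a sensible minimum replica count is what preserves the ‘bad experience’ safety margin: even at quiet times a few pods stay up, so a sudden spike or a node failure never drops the site to zero capacity.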