If you’re working in technology, or hope to someday, you have most likely heard the word Kubernetes or seen the abbreviation k8s, even if only in passing. To say it’s one of the hottest technologies out there would hardly be an overstatement. So what is Kubernetes?

According to kubernetes.io, the definition of Kubernetes is as follows:

“Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers.”

As for the nickname, k8s comes from the leading K in Kubernetes, the eight letters between that K and the final s, and then the s itself. The name is derived from the Greek word for pilot or helmsman, which reflects its purpose: it pilots the cluster, deploying your applications and scaling them on the fly, among its other features.

Google created Kubernetes and released it as Open Source in 2014; it grew out of the Borg project that Google used to manage its own large-scale container infrastructure. With the popularity of Docker, it made sense for Google to release Kubernetes to help users build their applications as microservices and to encourage more users to adopt Google Compute Engine and Google App Engine. Kubernetes, however, has a much wider reach: it now runs on multiple clouds and is home to one of the largest Open Source communities on GitHub.

So now that you know the origin of both the name and the project, we can take a look at why it is so popular.

First, it’s portable: you can run it in a private cloud, a public cloud, a hybrid cloud, or even across multiple clouds at once.

Second, it’s extensible: it is modular and pluggable, you can write hooks for it, and its pieces are composable.

And lastly, it is self-healing if you want it to be: you can add auto-placement, auto-restart, auto-replication, and auto-scaling to your infrastructure to minimize downtime.
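To make that concrete, here is a minimal sketch of what "self-healing" looks like in practice, using the official Python client (`pip install kubernetes`) against an existing cluster. The application name, labels, and image are placeholders, not anything prescribed by Kubernetes itself.

```python
# A minimal sketch, assuming the official Python client and a working
# kubeconfig; "my-app" and the nginx image are placeholder values.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # auto-replication: Kubernetes keeps 3 pods running
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.25",  # placeholder image
                    # auto-restart: if this liveness probe fails,
                    # Kubernetes restarts the container on its own
                    liveness_probe=client.V1Probe(
                        http_get=client.V1HTTPGetAction(path="/", port=80),
                        initial_delay_seconds=5,
                        period_seconds=10,
                    ),
                )
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```

Once this Deployment exists, Kubernetes handles placement of the pods across nodes and replaces any pod that dies, with no manual intervention.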

As mentioned, one of the reasons for open-sourcing Kubernetes was the popularity of Docker, which takes advantage of Operating System virtualization, unlike virtual machines, which virtualize at the hardware level. By virtualizing at the Operating System level, containers provide resource isolation and environmental consistency across development, testing, and production.

So how does Kubernetes help with these containerized environments? For one thing, it can auto-scale the number of your pods to meet demand. Rolling updates are another big feature: your users experience no downtime while you roll out your latest build or, if something goes wrong, roll back to the previous one. Additionally, Kubernetes maintains responsiveness and availability by replicating pods so that a given number is always available, and by balancing load across those pods, which sit behind a proxy known as kube-proxy.
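The rough illustration below, again using the official Python client, shows how those two features are typically driven in code: changing the pod template triggers a rolling update, and a HorizontalPodAutoscaler handles scaling. The names and image tags are hypothetical, carried over from the sketch above.

```python
# A rough illustration with the official Python client; "my-app" and the
# image tags are hypothetical values from the earlier Deployment sketch.
from kubernetes import client, config

config.load_kube_config()

apps = client.AppsV1Api()

# Rolling update: patching the pod template (here, the container image)
# makes the Deployment replace pods gradually, so users see no downtime.
apps.patch_namespaced_deployment(
    name="my-app",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx:1.26"}  # hypothetical new build
    ]}}}},
)

# Auto-scaling: a HorizontalPodAutoscaler grows or shrinks the pod count
# between 3 and 10 replicas based on average CPU utilization.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-app"
        ),
        min_replicas=3,
        max_replicas=10,
        target_cpu_utilization_percentage=80,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

If a rollout does go wrong, the usual escape hatch is `kubectl rollout undo deployment/my-app`, which returns the Deployment to its previous revision.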

Now those are good reasons why, if you’re using containers, you would want Kubernetes to orchestrate them, but there are cases where Kubernetes isn’t the answer. If you need a traditional Platform as a Service (PaaS) to run your application, Kubernetes isn’t it, though there are PaaS offerings such as OpenShift that run on top of it. Ultimately, Kubernetes gets you from start to finish while letting you use it the way you need.
