We are currently in the midst of the biggest KubeCon to date, KubeCon San Diego 2019. With 12,000 expected attendees, KubeCon is the conference for Kubernetes, the open-source container-orchestration platform; the one to rule them all. In honor of this great event, I hand-picked the top three mistakes you can make when moving to Kubernetes. These are based on our own experience as a dev-facing company on a K8s journey, as well as the experience of our customers. So without further ado, let me share these hard-earned lessons, so you can avoid the mistakes like the plague!
Kubernetes deployments almost feel like magic the first time you get them working. You use a (hopefully) short YAML file to specify the application you want to run, and Kubernetes just makes it so. Make a change to the file, apply it, and it will update in near-realtime.
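As a minimal sketch of what such a YAML file looks like (the name and image below are hypothetical placeholders, not part of any real setup), a basic Deployment might be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # hypothetical application name
spec:
  replicas: 2                # run two copies of the pod
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: example.com/hello-app:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Save it as `deployment.yaml`, run `kubectl apply -f deployment.yaml`, and Kubernetes brings the desired state to life; edit the file, apply it again, and the cluster converges on the change.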
But as powerful as kubectl is, and as instructive as it can be to explore Kubernetes with it, you should not come to rely on it too much. Of course, you’ll come back to it (or its amazing cousin, k9s) when you need to troubleshoot issues in Kubernetes, but do not use it for managing your cluster.
Kubernetes was made for the Configuration as Code paradigm, and all those YAML files belong in a Git repo. You should commit any and all of your desired changes to a repo and have an automated pipeline deploy those changes to production. Your options include GitOps tools such as Argo CD or Flux, or a CI pipeline that applies the manifests on every merge.
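As one hedged sketch of such a pipeline (assuming a hypothetical GitHub Actions workflow, a `k8s/` manifests directory, and a runner already authenticated to the cluster), every merge to the main branch could simply re-apply the manifests stored in the repo:

```yaml
# .github/workflows/deploy.yaml -- hypothetical pipeline sketch
name: deploy-manifests
on:
  push:
    branches: [main]
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumes kubectl is installed and configured for the target cluster
      - name: Apply manifests from the repo
        run: kubectl apply -f k8s/
```

The point is not the specific tool: whatever runs the pipeline, Git remains the single source of truth and the cluster only ever reflects what is committed.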
Let’s assume you’ve got all those workloads up and running with all the goodness of Kubernetes and Configuration as Code. But now that you are orchestrating containers, not virtual machines, how do you make sure they get the CPU and RAM they need? Through resource allocation!
If you don’t define resource requests, Kubernetes will pack all of your Pods (workloads, in Kubernetes-speak) into a handful of nodes. They won’t get the resources they need, and the cluster won’t scale itself up as needed.
Resource requests let the scheduler know how many resources you expect your application to consume. When assigning pods to nodes, Kubernetes budgets them so that all of their requirements are met by the node’s resources.
If you don’t define resource limits, a single pod may consume all the CPU or memory available on the node, causing its neighbors to be starved of CPU or hit Out of Memory errors.
Resource limits let the container runtime know how many resources you allow your application to consume. With a CPU limit, your application will be able to get that much CPU time, but no more. Unfortunately (for the application), if it hits the memory limit, it will be OOMKilled by the container runtime.
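Putting the two together, requests and limits are declared per container in the pod spec. The values below are purely illustrative, not recommendations, and the names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo              # hypothetical pod name
spec:
  containers:
    - name: app
      image: example.com/app:1.0.0 # hypothetical image
      resources:
        requests:
          cpu: "250m"              # scheduler budgets a quarter of a core
          memory: "256Mi"          # scheduler budgets 256 MiB of RAM
        limits:
          cpu: "500m"              # throttled beyond half a core
          memory: "512Mi"          # OOMKilled beyond 512 MiB
```

The requests drive scheduling decisions; the limits are enforced at runtime by the container runtime on the node.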
So, go ahead and define requests and limits for each of your containers (read more about it here). If you aren’t sure, just take a guess, and keep in mind that the safe side is higher. And whether you are certain or not, make sure to monitor actual resource usage by your pods and containers using your cloud provider’s tooling or APM tools such as AppDynamics and Datadog.
Immutable infrastructure and clean upgrades. Easy scalability. Highly available, self-healing services. Kubernetes can provide you with lots of value, right out of the box.
Unfortunately, that value may not be a priority for the devs who are working on the product. They have other concerns: writing and shipping features, running their code locally, debugging it, and getting fast feedback from tests.
For many of those tasks, Kubernetes pulls the rug out from under the developer. Running development environments locally becomes much harder, pushing many of those dev and test workloads to the cloud. The code-level visibility developers rely on is often poor in those environments, and direct access to the application and its filesystem is virtually impossible.
To lead a successful adoption of a new platform such as Kubernetes, you should obviously start out by getting everyone to see the value in it. But you must not forget that developers require the right tools to understand what their code is doing in development, in staging, and, if you truly believe in code ownership, in production environments. Many of our customers tell us that’s exactly why they use Rookout.