5 Tips to Improve Kubernetes Understandability
June 10, 2021
When you talk to an enterprise development team these days, there is a good chance they are in the midst of either migrating applications to the cloud or building a Cloud Native greenfield application. While there are many approaches to running those applications in the cloud, Kubernetes often comes to the forefront as the platform of choice. It provides a powerful container orchestration platform, with plenty of room for growth as your application evolves.
Of course, with all the power and amazing new abilities that Kubernetes brings, it can often be difficult to navigate for the inexperienced user (and sometimes even the experienced user). Most would agree that managing, deploying, and troubleshooting applications running in Kubernetes has a learning curve. The rest of this article is dedicated to some tips on improving the ease of use and understandability of Kubernetes when getting started.
1. Use Helm to Manage Packages
I’ve talked about Helm from a technical perspective in a prior article, but it’s worth discussing how it can add value to not only your process but also to how your customers consume your software. By packaging up your Kubernetes applications using Helm charts, it allows your customers to quickly and easily consume those applications or services, without having deep insight into the inner workings of those services. Simply execute a few commands from the CLI and your application is up and running.
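Those "few commands" typically look something like the following. This is a hedged sketch: the repository name, URL, chart name, and version are hypothetical placeholders, and the commands assume a working Helm 3 installation pointed at a running cluster.

```shell
# Register the vendor's chart repository (name and URL are hypothetical)
helm repo add mycompany https://charts.example.com
helm repo update

# Install the packaged application into the cluster as a named release
helm install my-app mycompany/my-app --version 1.2.3

# Inspect the status of the deployed release
helm status my-app
```

From the customer's perspective, the entire inner structure of the application (Deployments, Services, ConfigMaps, and so on) is hidden behind a single versioned install command.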
This is also invaluable when your customer-facing team goes out to deploy your software with your customers. Applications with long and arduous setup processes place an additional burden on both your customer and the internal team responsible for deploying them. The long-term maintenance costs of managing those deployed applications can add up as technical debt over time, slowing down your development and engineering, as developers are often required to jump in to help when things go wrong. Helm is by no means a magic bullet for all of these issues, but it does reduce complexity and improve the understandability of the deployment process for your applications in Kubernetes.
2. Use a Remote Debugger for Debugging
Application debugging is tough enough as it is without adding the complexity of Kubernetes as your runtime environment. When you discover a bug in your production or pre-production environments, sometimes you just want to jump right in and start debugging your code right where it's running. It's not always a simple task to connect a debugger, and sometimes it is not even possible due to security and compliance constraints. In those cases, it's back to the typical approach of adding logging code and running through the build, deploy, and release cycle until you find the data that helps you pinpoint the defect.
Remote debuggers are the perfect solution to be able to better understand your code right where it’s running, without all the overhead of adding logging or creating defect reproduction environments. Remote debuggers typically run alongside your application as an agent or an SDK and allow you to have a traditional “local debugger” experience, but with a remotely deployed application. These can be an invaluable part of a modern development workflow and cut defect resolution times dramatically.
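As a simple illustration of the underlying idea (not a specific remote debugger product), here is how a plain remote-debugging session might be wired up for a JVM application in Kubernetes, using the standard JDWP agent and `kubectl port-forward`. The pod name is a hypothetical example, and this assumes your security policy permits exposing a debug port.

```shell
# In the container's start command, enable the standard JDWP debug agent
# so the JVM listens for a debugger on port 5005 without suspending startup
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -jar app.jar

# From your workstation, forward the pod's debug port to localhost
# (pod name "my-app-6d4cf56db9-abcde" is hypothetical)
kubectl port-forward pod/my-app-6d4cf56db9-abcde 5005:5005

# Then attach your IDE's "remote JVM debug" configuration to localhost:5005
```

Agent- or SDK-based remote debuggers build on the same principle but remove much of this manual wiring, which is what makes them attractive for day-to-day workflows.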
3. Manage Kubernetes Resources with Namespaces
Using Kubernetes namespaces can help to logically partition deployed services, so that your cluster is more easily understood and usable by multiple teams or users. When a cluster is provisioned, Kubernetes creates a namespace named default (along with system namespaces such as kube-system), and it's up to the administrator to create any additional ones. These namespaces provide a nice way to attach different authorization strategies or policies to logical partitions of the cluster. This means that multiple services, teams, or projects can use the same cluster for different purposes or with different levels of security, without the need to create separate clusters for each use case. Teams could even utilize one cluster to simulate multiple deployment environments such as dev, staging, or UAT. This partitioning of environments ensures that naming conventions of deployed components can be maintained without conflicts across each environment.
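The environment-per-namespace pattern described above can be sketched with standard kubectl commands. The manifest file name `app.yaml` is a hypothetical placeholder, and this assumes you have cluster-admin rights to create namespaces.

```shell
# Create one namespace per simulated environment
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace uat

# Deploy the same manifests into each environment; resource names can
# repeat across namespaces without conflict
kubectl apply -f app.yaml --namespace dev
kubectl apply -f app.yaml --namespace staging

# Scope individual commands to one environment
kubectl get pods --namespace dev

# Or set a default namespace on your current context to avoid repeating it
kubectl config set-context --current --namespace=dev
```

Combined with RBAC RoleBindings scoped per namespace, this is how a single cluster can safely serve multiple teams or environments.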
4. Try a Terminal or Browser-Based UI
In working with many customers who have dived headfirst into microservices development on Kubernetes, I have seen many of them adopt interactive terminals or graphical UI tools for managing and interacting with their cluster. This can make navigation of the cluster components (pods, services, deployments) much faster and even reduce the learning curve in finding your way around. They're often quite easy to install and can bring efficiency improvements to day-to-day workflows.
There are many different options out there. One that is free and easy to use is K9s. K9s provides a simple and efficient cluster viewer directly in your command line terminal. Rather than having to rely heavily on kubectl commands, you can simply navigate your cluster using simple keyboard shortcuts. Another open source option (but this time browser-based) for better understanding your cluster is Octant, which allows you to navigate and interact with your cluster directly from your browser. Whether you’re using one of the above or perhaps the cluster management capabilities directly in your cloud provider, having visual tools with shortcuts can help speed up your workflow and reduce complexity.
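Getting started with K9s, for example, takes only a couple of commands. The Homebrew route below is one common install path on macOS; other platforms have their own packages, so check the K9s project documentation for yours. The namespace name is a hypothetical example.

```shell
# Install K9s (macOS via Homebrew; see the K9s docs for other platforms)
brew install k9s

# Launch the terminal UI against your current kubectl context
k9s

# Or start directly in a specific namespace (here, a hypothetical "dev")
k9s -n dev
```

Once inside, single-key shortcuts replace long `kubectl get`/`kubectl describe` invocations, which is where most of the day-to-day speedup comes from.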
5. Consider a Service Mesh
A service mesh is a tool for managing and simplifying high-volume communication between the microservices running inside your Kubernetes cluster. It abstracts away tasks that are required regardless of the type of application being built, such as inter-service communication, service discovery, security, tracing, and monitoring. By using a service mesh, you can separate networking, security, and observability logic from your application-specific business logic, largely eliminating the need to maintain your own code for these capabilities.
Some of the main players in this space are Istio, Linkerd, and Consul. These solutions typically run on top of your existing infrastructure as a sidecar to your pods. When it comes to the understandability and maintainability of your application, the more of these core application runtime capabilities you can offload from your development team, the better.
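As a concrete example of the sidecar model, Istio can be told to inject its proxy automatically into every pod in a namespace with a single label. This sketch assumes Istio is already installed in the cluster and uses the default namespace for illustration.

```shell
# Label a namespace so Istio automatically injects its sidecar proxy
# into every pod deployed there from now on
kubectl label namespace default istio-injection=enabled

# After redeploying your workloads, each pod runs an additional
# "istio-proxy" container that handles traffic, mTLS, and telemetry
kubectl get pods -n default
```

Because the proxy intercepts all traffic in and out of the pod, features like mutual TLS and request tracing come from mesh configuration rather than application code.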
Diving headfirst into Kubernetes can be a daunting task, but there are technologies out there that can make life easier. Whether you’re just getting started down the path of Kubernetes adoption or are well on your way, it never hurts to consider adopting new technologies or processes that can improve the understandability of your Kubernetes environment.
This article was originally published on The New Stack