In our previous articles, we discussed Kubernetes architecture and analyzed its main functions and features. We then went on to install Kubernetes and deploy a Kubernetes cluster on a bare-metal server.
Our goal is to create a highly robust framework for software deployment. An important aspect is building effective container images that enhance and complement Kubernetes’ features.
This article explores the best ways to create and optimize containers for a Kubernetes cluster.
Best Practices for Creating Container Images
Kubernetes is a highly automated tool, and it is vital to optimize your containers during the image creation process.
A container image is a collection of all the files that make up an executable application. This collection includes the application itself along with all the binaries, libraries, and other dependencies. The files are read-only, which means the content of the image is immutable. Here are some basic concepts to keep in mind when creating your container images:
- Try not to pack too much functionality within a single image. Focus on a specific service.
- Containers are dynamic. You need to fully understand how your Container Runtime software, such as Docker, manages containers to complement Kubernetes.
- Use image layers to your advantage. Properly layering your applications makes them much more flexible in an automated cluster.
- Kubernetes is an automation tool. Your images should reflect that and not depend on manual management or input once deployed.
- Take advantage of additional third-party container tools and features to enhance your cluster’s capabilities further.
- Make sure that the images you use come from trusted and secure sources. Adhering to Kubernetes security best practices at the entry point can prevent potentially disastrous consequences.
1. Container Images Need Strict Focus
Draw up the architecture of your application and try to split it into multiple services. A container should focus on doing one small function and doing it well. This approach makes it much easier to scale apps horizontally and reuse containers.
Do not try to solve every problem inside the container. Take advantage of Kubernetes’ container linking abilities and have containers communicate with each other. Kubernetes introduces another level of abstraction with pods. This additional layer of abstraction enhances Kubernetes’ ability to complement and control containers.
A Kubernetes pod allows for scenarios in which multiple containers run in a single pod. An additional, tightly coupled container can support or enhance the core functionality of the main container or help it adapt to its deployment environment.
For example, you can create a proxy container to help connect to an external database.
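A minimal sketch of this pattern follows, with a hypothetical application image and proxy image (the names `my-app` and `db-proxy` are illustrative, not real images). The proxy runs as a second container in the same pod, so the application can reach the external database through `localhost`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-db-proxy
spec:
  containers:
    # Main application container, focused on a single service
    - name: app
      image: registry.example.com/my-app:1.0    # hypothetical image
      env:
        - name: DB_HOST
          value: "127.0.0.1"    # the proxy listens on localhost within the pod
        - name: DB_PORT
          value: "5432"
    # Tightly coupled sidecar that proxies connections to the external database
    - name: db-proxy
      image: registry.example.com/db-proxy:1.0  # hypothetical proxy image
      ports:
        - containerPort: 5432
```

Because both containers share the pod's network namespace, the application needs no knowledge of where the database actually lives; swapping databases only means swapping the proxy container.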
2. Containers are Short-Lived
Automating container deployment with Kubernetes means that most operations now run without your direct input. Create your container images so that they are interchangeable and do not require constant micromanagement.
Using a container runtime software like Docker makes it easy to run, stop, add, or list containers within a live environment.
Note: A container runtime is software that creates, stores, and executes container images. The most popular container runtime is Docker, but alternatives such as CRI-O, containerd, or frakti support Kubernetes as well.
Containers are immutable, which means you should not modify them but instead restart them from your base image. When using Kubernetes, assume that your containers are short-term entities that are going to be stopped and restarted regularly.
3. Small Parent Images and Efficient Layering
The number of image layers should reflect the complexity of your application. The purpose of layering is to provide a thin level of abstraction above the previous layer, on which a more complex function can be built.

Layers are logical units whose contents are the same type of object or perform a similar task. Images with too many layers become complicated, as well as challenging to deploy and manage.
If every container image in your data center uses the same base image, you need only one copy of that base image on each Kubernetes node where the container runs. Additional layers need to be downloaded only when you pull an image that does a specific job.
Ready-made images considerably shorten container build times as the container runtime calculates the differences in the source code. Likewise, container orchestration tools, such as Kubernetes, benefit significantly from these small and efficient images.
In this simple example, the basis of the image is an empty layer called scratch, on top of which an operating system is added.
To serve connections between servers, we add Apache. Finally, we add our application to the structure. Once created, the container image is immutable, but other images can use it as a base to build on.
Many Linux distributions now offer a base image that includes the minimal components you need to create a container based on their distribution. These base images are configured to help you quickly install additional software within that container.
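The layering described above can be sketched as a short Dockerfile. This is an illustrative example, not a production recipe: the base image, the Apache package, and the `./my-app` directory are assumptions standing in for your own distribution and application.

```dockerfile
# Start from a minimal distribution base image; nodes that already have
# this layer cached do not need to download it again
FROM debian:bookworm-slim

# Each RUN/COPY instruction creates a new layer; keep them few and focused
RUN apt-get update && \
    apt-get install -y --no-install-recommends apache2 && \
    rm -rf /var/lib/apt/lists/*

# Add the application as the topmost layer
COPY ./my-app /var/www/html/    # hypothetical application directory

EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

Combining the package installation and cache cleanup in a single `RUN` instruction keeps the downloaded package lists out of the final layer, which keeps the image smaller.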
4. Help Kubernetes Monitor and Manage Containers
Avoid including software packages used to investigate or troubleshoot the container within the container itself. Configure your infrastructure to take advantage of Kubernetes and allow it to manage the availability and health of your applications.
If you instructed Kubernetes to keep five container instances running and one fails, Kubernetes creates another container to replace the failed process. For Kubernetes to understand how to monitor and interpret the health of your applications, consider defining liveness and readiness probes.
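The "keep five instances running" instruction is expressed declaratively in a Deployment. A minimal sketch, assuming a hypothetical `my-app` image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5          # Kubernetes replaces any instance that fails
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.0   # hypothetical image
```

You declare the desired state; the Deployment controller continuously reconciles the actual number of running pods against it, with no manual intervention.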
- A liveness probe monitors the health of a container. If the process fails the check, Kubernetes kills the container and creates a new instance to take its place.
- A readiness probe determines whether a pod is ready to accept traffic. If the probe fails, Kubernetes stops routing traffic to the pod until the probe passes again; it does not restart the pod.
When working with a large configuration file that takes a long time to load, set up an initial delay before the first liveness probe. That gives the container the necessary time to load the configuration file, stopping a premature probe failure from sending it into a restart loop and sparing the cluster unnecessary load.
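A minimal sketch of both probes, including the initial delay. The image name and the `/healthz` and `/ready` endpoints are assumptions; substitute whatever health endpoints your application exposes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:1.0   # hypothetical image
      ports:
        - containerPort: 8080
      # Liveness: restart the container if this endpoint stops responding
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 30   # give the app time to load its configuration
        periodSeconds: 10
      # Readiness: only route traffic once the app reports it is ready
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
```

The `initialDelaySeconds` value on the liveness probe is the time delay discussed above: it must be longer than the application's worst-case startup time.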
The documentation for configuring these probes is readily available on the official Kubernetes website.
To learn more about maintaining Kubernetes applications health and availability, check out our article on Kubernetes monitoring best practices.
5. Use Multi-Stage Builds
By adding too many layers, you quickly bloat the image with build tools and intermediate files that the running application does not need. Multi-stage builds are a significant improvement that allows you to create much slimmer container images. This feature is available in Docker starting with version 17.05.

Multiple FROM statements can now be used in a single Dockerfile to create separate build stages, each referencing a different base image. Only the artifacts explicitly copied into the final stage end up in the deployed image; the layers from earlier stages are discarded, which reduces the size of the final container image significantly.

With multi-stage builds, you can keep build, test, and runtime environments in a single Dockerfile instead of maintaining separate ones, and independent stages can be built in parallel.
Multi-stage builds can create container images that are half the size of previous instances. The reduction in size solves one of the primary concerns when optimizing containers for Kubernetes.
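A minimal sketch of a multi-stage build, assuming a hypothetical Go application (any compiled language works the same way). The full toolchain lives only in the first stage; the final image contains nothing but the binary:

```dockerfile
# --- Build stage: full toolchain, discarded from the final image ---
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/my-app .    # hypothetical application

# --- Runtime stage: only the compiled binary is carried over ---
FROM scratch
COPY --from=build /out/my-app /my-app
ENTRYPOINT ["/my-app"]
```

The `COPY --from=build` instruction is what makes the size reduction possible: everything in the `build` stage that is not explicitly copied is left behind.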
6. Run Container Images from Trusted Parties
Untrusted container images downloaded from public sources may contain malicious code. Combining untrusted images with automation tools like Kubernetes makes it difficult to limit the extent of the damage.

Do not run random, untested container images on your system; instead, use trusted repositories of images and containers. Regularly analyze your existing container images for security flaws, and do not disable the security features of the host operating system.
Note: An image registry is a service that stores images for public or private access. Software developers use them to efficiently create new and composite applications. Images in curated registries typically go through validation, verification, and refinement, which raises their quality and security.
The advice outlined in this article helps you create fully optimized container images. Use the advanced features Kubernetes offers, and automate the management and deployment of these containers.
Another way you can improve cluster performance is by using Kubernetes DaemonSets to deploy Pods that perform maintenance tasks and support services to every node. Learn more about Kubernetes best practices when building efficient clusters.
You should see a significant improvement in your development and production operations.