
world of modern development. The following section will explain the critical role of Kubernetes in orchestration and how it fits into the contemporary development landscape. Let’s explore how Kubernetes can orchestrate containerized applications.

      Understanding Kubernetes’ Role in Orchestration

      Building on our prior knowledge, we understand that container deployment is straightforward. What Kubernetes brings to the table, as detailed earlier, is large-scale container orchestration – particularly beneficial in complex microservice and multi-cloud environments.

      Kubernetes, often regarded as the operating system of the cloud, has grown beyond its origins as Google’s internal project into a cornerstone of container orchestration. It is a portable, extensible, open-source platform for automating the deployment, scaling, and management of containerized applications, and it is production-ready: it powers some of the largest applications in the world. Google, Spotify, The New York Times, and many other companies use Kubernetes at scale.

      With the increasing complexity of microservices, Kubernetes’ vibrant community, which includes contributors from leading companies such as Google and Red Hat, continually enhances its capabilities and simplifies its management. Its active development mirrors the rapid evolution characteristic of open-source projects. Expect Kubernetes to come up in conversations not only among IT professionals but also among people from less technical backgrounds.

      Comparing Docker Compose and Kubernetes

      Docker is a container platform; Kubernetes is a platform for orchestrating containers. It is crucial to recognize that the two cater to distinct purposes. An alternative to Kubernetes, even if an incomplete one, is Docker Compose. It offers a simpler way to run multi-container Docker applications and finds its niche in local development environments; some fearless individuals even deploy it in production. To compare them, Docker Compose is like a small forklift that moves containers, while Kubernetes can be envisioned as a cutting-edge logistics center comparable to the top-tier facilities in Amazon’s warehouses: it provides advanced automation and unparalleled container management at scale.

      Docker Compose for Multi-Container Applications

      With Docker Compose, you can define and run multiple containers. It uses a simple YAML file structure to configure the services. A service definition contains the configuration that is applied to each container. You can create and start all the services from your configuration with a single command.

      Let’s enhance our auth-app application. Assume it requires in-memory storage to keep user data; we will use Redis for that. We also need a broker to send messages to a queue; RabbitMQ is the traditional choice. Let’s create a “compose.yml” file with the following content:

      version: "3"

      services:
        auth-app:
          image: <username>/auth-app:latest
          ports:
            - "8080:8080"
          environment:
            RUST_LOG: info
            REDIS_HOST: redis
            REDIS_PORT: 6379
            RABBITMQ_HOST: rabbitmq
            RABBITMQ_PORT: 5672
        redis:
          image: redis:latest
          volumes:
            - redis:/data
          ports:
            - 6379
        rabbitmq:
          image: rabbitmq:latest
          volumes:
            - rabbitmq:/var/lib/rabbitmq
          environment:
            RABBITMQ_DEFAULT_USER: guest
            RABBITMQ_DEFAULT_PASS: guest
          ports:
            - 5672

      volumes:
        redis:
        rabbitmq:
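
      Before starting the stack, you can optionally ask Compose to validate and print the resolved configuration; this catches indentation mistakes and misspelled keys early:

      docker-compose config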

      To create and start all the containers from this configuration, use a single command:

      docker-compose up

      Often it’s practical to run the containers in the background:

      docker-compose up -d

      And follow the logs in the same terminal session:

      docker-compose logs -f
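
      If you only need the logs of a single service, name it explicitly (auth-app here is the service defined in our compose.yml); docker-compose ps is also handy for checking which services are currently running:

      docker-compose logs -f auth-app

      docker-compose ps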

      To stop and remove all the containers created by Compose, use the following command:

      docker-compose down

      Transitioning from Docker Compose to Kubernetes Orchestration

      Migrating from Docker Compose to Kubernetes can offer several benefits and enhance the capabilities of your containerized applications. There are several reasons why Kubernetes can be a suitable option for this transition; a minimal Kubernetes equivalent of our auth-app service is sketched after the list below:

      – Docker Compose is constrained to a single host: every container it starts runs on the machine where Compose is invoked. Kubernetes, by contrast, is a platform that effectively manages containers across multiple hosts.

      – In Docker Compose, the failure of the host running containers results in the failure of all containers on that host. In contrast, Kubernetes employs a primary node to oversee the cluster and multiple worker nodes. If a worker node fails, the cluster can operate with minimal disruption.

      – Kubernetes boasts many features and can be extended with new components and functionality. Docker Compose allows a few additions, but it falls far short of Kubernetes in scope and ecosystem.

      – With robust cloud-native support, Kubernetes facilitates deployment on any cloud provider. This flexibility has contributed to its growing popularity among software developers in recent years.
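
      To make the comparison concrete, here is a minimal sketch of how the auth-app service from our compose.yml could be expressed as Kubernetes manifests. This is an illustrative example rather than a complete migration: the replica count and labels are assumptions, and Redis and RabbitMQ would need manifests of their own.

      # Deployment: runs and supervises the auth-app containers (two replicas assumed).
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: auth-app
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: auth-app
        template:
          metadata:
            labels:
              app: auth-app
          spec:
            containers:
              - name: auth-app
                image: <username>/auth-app:latest
                ports:
                  - containerPort: 8080
                env:
                  - name: REDIS_HOST
                    value: redis
                  - name: RABBITMQ_HOST
                    value: rabbitmq
      ---
      # Service: gives the pods a stable in-cluster name, much like a Compose service name.
      apiVersion: v1
      kind: Service
      metadata:
        name: auth-app
      spec:
        selector:
          app: auth-app
        ports:
          - port: 8080
            targetPort: 8080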

      Conclusion

      This section discusses how software packaging has evolved from traditional methods to modern containerization techniques using Docker and Kubernetes. It explains the benefits and considerations associated with Docker Engine, Docker Desktop, Podman, and Colima. The book will further explore the practical aspects of encapsulating applications into containers, the importance of Docker in current development methods, and the crucial role Kubernetes plays in orchestrating containerized applications at scale.

      Docker and Kubernetes: Understanding Containerization

      Creating a Local Cluster with Minikube

      Minikube is a tool that makes it easy to run Kubernetes locally. It runs a single-node cluster inside a virtual machine (VM) or container on your device and can also emulate a multi-node Kubernetes cluster. Minikube is the most widely used local Kubernetes cluster, a great way to get started with Kubernetes, and an excellent environment for testing Kubernetes applications before deploying them to a production cluster.

      There are comparable alternatives to Minikube, such as the Kubernetes support in Docker Desktop and Kind (Kubernetes in Docker), which also let you run Kubernetes clusters locally. However, Minikube is the most favored and widely used tool, and the most straightforward: it is a single binary that you can quickly download and run on Windows, macOS, or Linux.

      Installing Minikube

      To install Minikube, download the binary from the official website (https://minikube.sigs.k8s.io/docs/start/). For example, if you use macOS with an Intel chip, apply these commands:

      curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64

      sudo install minikube-darwin-amd64 /usr/local/bin/minikube
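
      Once the binary is installed, a quick sanity check might look like the following. These are standard Minikube and kubectl commands; the Docker driver is just one possible choice, and kubectl is assumed to be installed separately (otherwise “minikube kubectl -- get nodes” works as well):

      minikube version

      minikube start --driver=docker

      kubectl get nodes

      minikube status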