brew install minikube
Configuring and Launching Your Minikube Cluster
You can start Minikube in the simplest possible way with the default configuration:
minikube start
While this command generally works, it is recommended to explicitly specify the Minikube driver so that the resulting configuration is predictable and easier to reason about later. For instance, the Container Network Interface (CNI) is set to auto by default, which can lead to unforeseen consequences depending on the driver Minikube selects.
It's worth noting that Minikube often selects the driver based on the underlying operating system configuration. For example, if the Docker service is running, Minikube might default to the Docker driver. Explicitly specifying the driver ensures a more predictable configuration tailored to your specific needs.
minikube start --cpus=4 --memory=8192 --disk-size=50g --driver=docker --addons=ingress --addons=metrics-server
Most options are self-explanatory. The `--driver` option specifies the virtualization driver. By default, Minikube prefers the Docker driver, or a VM on macOS if Docker is not installed. On Linux, the Docker, KVM2, and Podman drivers are favored; however, you can use any of the seven currently available options. The `--addons` option specifies the list of add-ons to enable. You can list the available add-ons with the following command:
minikube addons list
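You can also enable an add-on on a cluster that is already running. For example, to enable the dashboard add-on:
minikube addons enable dashboard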
If you use Docker Desktop, make sure the virtual machine's CPU and memory limits are at least as high as the values you pass to Minikube. Otherwise, you will get an error like this:
Exiting due to MK_USAGE: Docker Desktop has only 7959MB memory, but you specified 8192MB.
Once the cluster has started, use this command to check its status:
minikube status
And get:
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
Interacting with Minikube Cluster
The kubectl command-line tool is the most common way to interact with Kubernetes. It is the first tool any Kubernetes user should learn, and it is the official client for the Kubernetes API. Minikube already bundles it, and we could use that copy; however, the recommended way is to install kubectl from the [official website](https://kubernetes.io/docs/tasks/tools/) and use it separately from Minikube, not least because Minikube's bundled kubectl is not always up to date and can be a few versions behind.
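For example, if you installed Minikube with Homebrew as shown earlier, you can install kubectl the same way:
brew install kubectl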
You can check Minikube’s kubectl version by using the following command:
minikube kubectl -- version
Alternatively, if you have kubectl installed separately, you can invoke it directly:
kubectl version
From now on, we will use the kubectl command-line tool installed separately from Minikube.
You will receive the client version (kubectl itself) and the server version (the Kubernetes cluster). It's okay if the versions differ, as the Kubernetes server has a different release cycle than kubectl. While it's better to aim for identical versions, it's not always necessary.
To get the list of nodes in the cluster, use the following command:
kubectl get nodes
You will get our cluster’s single node:
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 10m v1.24.1
This output means that we have one node that was created 10 minutes ago. The node has the control-plane role, meaning it is the primary node. Usually, control-plane nodes run Kubernetes components (the things that make Kubernetes work) rather than user workloads (the applications that users deploy on Kubernetes). But since Minikube is intended for development, this single node hosts everything.
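If you want to inspect this node in more detail, for example its labels, taints, and resource capacity, you can describe it:
kubectl describe node minikube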
It is also worth noting that this single node exposes the Kubernetes API server. You can find out its URL with the following command:
kubectl cluster-info
You will get the same address that kubectl sends its requests to:
Kubernetes control plane is running at https://127.0.0.1:59813
Finally, let’s use the first add-on we enabled earlier. The metrics server is a cluster-wide aggregator of resource usage data. It collects metrics from Kubernetes, such as CPU and memory usage per node and pod. It is a prerequisite for the autoscaling mechanism we will discuss later in this book. For now, let’s check cluster node resource usage:
kubectl top node
You will receive data showing the utilization of CPU and memory resources by the node. In our case, the usage might appear minimal because nothing has been deployed yet. The specific percentages can vary depending on background processes and Minikube’s overhead.
NAME CPU (cores) CPU% MEMORY (bytes) MEMORY%
minikube 408m 10% 1600Mi 20%
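Once you deploy workloads, you can check per-pod resource usage in the same way, for example across all namespaces:
kubectl top pod -A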
Stopping and Deleting Your Minikube Cluster
To stop Minikube, use the following command:
minikube stop
You can also delete the cluster by using the following command:
minikube delete
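If you have created several Minikube profiles, you can also delete all of them at once:
minikube delete --all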
Recipe: Deploying Your First Application to Kubernetes
In this recipe, we will deploy our first application to the Kubernetes cluster. We will use the same application we containerized in the previous recipe, that is, the same Docker image we built earlier. However, we will deliberately start with the less common imperative approach, issuing commands from the command line, to keep things simple. We will switch to the declarative approach later in this chapter once we have warmed up. For now, let's refresh our fundamental computer science knowledge and recall the differences between these two approaches.
Understanding Imperative vs. Declarative Management Model
The imperative paradigm is a term mainly, but not exclusively, associated with programming. In this style, the engineer tells the computer step by step how to do a task. The imperative approach is also used to operate programs or issue direct commands to configure infrastructure. For example, starting a Docker container from the terminal with command-line commands is an example of the imperative approach.
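For instance, a single command that starts a web server container is purely imperative (the nginx image here is just an illustration):
docker run -d -p 8080:80 nginx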
In the declarative paradigm, the engineer tells the computer what to do, not how. The goal is to describe the desired state of the system. The declarative approach is mostly used to configure infrastructure, especially cloud infrastructure. The `compose.yml` file also describes and runs a containerized application in a declarative way. Usually, the declarative approach involves a manifest file, a text file describing the system's desired final state.
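As an illustration, a minimal compose.yml that describes the same web server declaratively might look like this (the service name and image are placeholders):
services:
  web:
    image: nginx
    ports:
      - "8080:80"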
Even though the declarative approach is essential for infrastructure, and for Kubernetes in particular, in some rare situations, such as debugging and real-time troubleshooting, the imperative method is still useful, so let's start with it.
Pushing Your Container Image to a Registry
Before we start, we need to push the image to the registry. We will use the Docker Hub registry. You can create a free account on the [official website](https://hub.docker.com/). Once you've created an account and generated an access token, you can log in to the registry with the following command:
docker login
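After logging in, pushing an image typically means tagging it with your Docker Hub username and pushing that tag; the image and account names below are placeholders:
docker tag my-app <your-username>/my-app:1.0
docker push <your-username>/my-app:1.0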