A slightly more complicated situation arises with images. When a container is created, the image is downloaded if it is not already present locally. Since one image can be shared by several containers, it is not deleted when the container itself is deleted. You have to delete it manually with docker rmi name_image, and if it is still in use, only a warning is issued. The price of saving disk space is that Docker cannot simply determine whether an image is still needed. Since version 1.13 it can, with the docker image prune -a command, analyze which images are not used by any container and delete them. You need to be more careful here if Docker cannot obtain the image again, although such a situation is rather unusual. One such situation is a built image whose Dockerfile, describing the process of its creation, has been lost; otherwise the image can simply be rebuilt from the Dockerfile with the docker build command. The correct thing is to act immediately and restore the Dockerfile from the image by looking at the commands that created its layers with docker history name_image. The second situation is an image created from a running container with the docker commit command rather than from a Dockerfile, an approach that was once actively popularized but is now just as actively deprecated.
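
      A minimal sketch of this cleanup and recovery workflow; name_image and name_container are placeholders for your own image and container names:

      # remove an image that is no longer needed (a warning is shown if a container still uses it)
      docker rmi name_image

      # since Docker 1.13: remove all images not referenced by any container
      docker image prune -a

      # inspect the commands that produced the layers of an image
      # (useful for reconstructing a lost Dockerfile)
      docker history name_image

      # rebuild an image from a Dockerfile in the current directory
      docker build -t name_image .

      # create an image from a running container (discouraged in favor of a Dockerfile)
      docker commit name_container name_image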

      Since an image consists of layers that are shared between different images, these layers can remain behind after various failures. Since they cannot be used on their own, it is safe to delete them with the docker image prune command.
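
      A quick sketch of the difference between the two prune modes (no extra options assumed):

      # remove only dangling layers (untagged and not referenced by any image)
      docker image prune

      # remove all images not used by at least one container
      docker image prune -a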

      To preserve the results of a container's work, you can mount a folder of the host machine into a folder of the container. We can specify the host folder explicitly, for example docker run -v /path_host:/path_container name_image, or let Docker generate it with docker run -v /path_container name_image. To remove generated folders (volumes) that are no longer used by containers, run the docker volume prune command. There is also a garbage collector for unused networks.
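
      A sketch of these operations; /path_host, /path_container and name_image are placeholders:

      # bind-mount a host folder into the container
      docker run -v /path_host:/path_container name_image

      # let Docker create an anonymous volume for the container path
      docker run -v /path_container name_image

      # list volumes and remove the ones no longer used by any container
      docker volume ls
      docker volume prune

      # the same kind of garbage collection for networks
      docker network prune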

      There is also a single garbage collector, docker system prune, which in effect combines the specialized ones with logically compatible parameters. It is often scheduled in cron. You can also look at the space occupied by all containers, all images and all volumes with the docker system df command, or without grouping with docker system df -v.
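
      A sketch of the combined commands; the cron schedule and binary path below are illustrative, not prescribed by the book:

      # disk usage summary for images, containers and volumes
      docker system df
      docker system df -v

      # combined garbage collection (add --volumes to also remove unused volumes)
      docker system prune -f

      # an example crontab entry: clean up every night at 03:00
      0 3 * * * /usr/bin/docker system prune -f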

      Many of the garbage collection issues described here are handled by Docker Compose. In addition, it greatly simplifies life, unless you run a container just once for an experiment. So the docker-compose up command starts the containers, and docker-compose down -v removes them together with all the dependencies between them. All container launch parameters, as well as the relationships between containers, are described in docker-compose.yml. Thanks to this, when the launch parameters of the containers change, you do not need to worry about deleting the old ones and creating new ones, and you do not need to spell out all the container parameters: just run up again, and it will either re-create the containers or update their configuration.
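
      A minimal sketch of such a docker-compose.yml; the service name, image and port mapping are illustrative:

      version: "3"
      services:
        web:
          image: nginx:alpine
          ports:
            - "8080:80"

      # create or update the containers described in the file
      docker-compose up -d

      # remove the containers, networks and generated volumes
      docker-compose down -v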

      To prevent the system from becoming cluttered, Docker has a built-in configurable limit on the number of containers and images that reminds you to clean the system by running the garbage collector.

      Saving time on container creation

      We already became acquainted, in the previous topic about images, with their layers and with caching. Let's look at them from the point of view of container creation time. Why is this so important? After all, by analogy with virtualization, the system administrator starts the creation of the container, and by the time he hands it over to the programmer it will certainly be assembled. It is important to note that a lot has changed since then: the principles of and requirements for the ecosystem and its use have changed. For example, earlier a developer, having developed and tested his code at his workplace, sent it to the QA manager to test it for compliance with business requirements, and when its turn came, the tester ran this code at his workplace and checked it. Now the infrastructure is handled by DevOps, which establishes a continuous process for delivering the features developed by programmers, and containers are created automatically with each push to the production branch for automated testing. At the same time, so that some tests do not affect the others, a separate container is created for each test, and the tests often run in parallel in order to show the result to the developer instantly, while he still remembers what he did and has not switched his attention to another task.

      For standard programs: no need to install, no need to maintain

      We often use a huge number of ready-made solutions. When choosing a solution, we face a dilemma: on the one hand, it is more universal and better proven than anything we could afford to build ourselves; on the other hand, it is complex enough that we have to figure out how to properly install and configure it, install all the dependencies, resolve conflicts, and set it up for initial use. Now installation and configuration have become much simpler and more standardized, and low-level problems are largely absent. But before we continue, let's digress and look at how the process from obtaining a program to starting to use it has evolved:

      * In the days when all programs were written in assembler, programs were distributed by mail; users installed and tested them themselves, because testing inside the companies was not provided. In case of problems, the user informed the developer company about them and, after they were fixed, received the corrected version on a disk by mail. The process was very long, and the user did the testing himself.

      * During the era of distribution on disks, companies already wrote their software products in higher-level languages and tested them on different OS versions. Hereinafter, we will consider free software. The program already contained a Makefile, which itself compiled and installed the program.

      * Since the advent of the Internet, software has been installed en masse using package managers: when they are run, the software is downloaded and installed from the remote repository of the OS. The package manager tries to monitor and maintain the compatibility of programs. Further study and use of the program, how to start it, how to configure it, how to understand that it works, falls on the user or the system administrator.

      * With the advent of Docker Hub and the web, applications are downloaded and run as containers. A container usually does not need to be configured for initial operation.

      For containers and images in general, the server can adjust the amount of free and occupied space. By default, 10G is allocated for containers and images, and at least dm.min_free_space = 5% of space should remain free; it is better to put these settings into the config file, which may need to be created as /etc/docker/daemon.json:

      {

        "storage-opts": [

          "dm.basesize=50G",

          "dm.min_free_space=5%"

        ]

      }
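
      A sketch of applying the change, assuming the devicemapper storage driver and a systemd-based host (neither is stated in the book):

      # restart the daemon so that it re-reads /etc/docker/daemon.json
      sudo systemctl restart docker

      # with the devicemapper storage driver, the base device size is shown here
      docker info | grep -i "base device size"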

      You can limit the resources consumed by a container in its launch settings (a combined example follows the list):

      * -m 256m – maximum amount of RAM the container may consume (here 256 MB);

      * -c 512 – CPU usage priority weight (1024 by default);

      * --cpuset-cpus="0,1" – numbers of the processor cores allowed to the container.
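
      Putting the three options together; the image name and limit values are illustrative:

      # 256 MB of RAM, half the default CPU weight, pinned to cores 0 and 1
      docker run -m 256m -c 512 --cpuset-cpus="0,1" name_image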

      Product transfer and distribution

      To transfer a project, for example to a customer, and to distribute it between developers and servers, you can use installation scripts, archives, images and containers. Each of these ways of distributing a project has its own characteristics, disadvantages and advantages. Let's talk about them and compare them.
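
      As one illustration of the image-based way, an image can be carried as an archive file with docker save and docker load; the file and image names below are placeholders:

      # pack an image into a tar archive on the source machine
      docker save -o project_image.tar name_image

      # restore it on the target machine
      docker load -i project_image.tar

      # alternatively, snapshot a running container's filesystem as an archive
      docker export name_container > project_fs.tar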

      lines, but the main thing is that it has a special follow mode, enabled by the -f switch, which outputs the number of lines we need and then dynamically updates the output as new ones arrive, for example docker logs -f --tail 10 name_container.
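
      A sketch of the related options; the container name and line count are placeholders:

      # last 100 lines, then keep following new log lines
      docker logs -f --tail 100 name_container

      # the same with a timestamp added to every line
      docker logs -f -t --tail 100 name_container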

      When there are too many applications to monitor their work separately by hand, it makes sense to centralize the application logs. For centralization, numerous programs can be used that collect logs from different services and send them to a central repository, for example Fluentd. It is convenient to use ElasticSearch to store the logs, simply by writing them into the search engine. It is highly desirable that the logs be in a structured format, JSON. This will allow you to sort them, select the ones you need, identify trends using built-in aggregate functions, and perform analysis and forecasting, rather than just searching by text. For analysis, the Kibana web interface included in the Elastic stack can be used.
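
      As one possible wiring, assuming a Fluentd daemon is already listening on localhost:24224 (an assumption, not stated in the book), a container's logs can be routed to it through Docker's fluentd logging driver; the tag value is illustrative:

      # send this container's stdout/stderr to Fluentd instead of the local json-file driver
      docker run --log-driver=fluentd \
                 --log-opt fluentd-address=localhost:24224 \
                 --log-opt tag="docker.name_container" \
                 name_image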

      Logging is important not only for long-running applications. So for test containers,