IT Cloud. Eugeny Shtoltc


essh@kubernetes-master:~/mongo-rs$ docker run --name redis -p 6379 -d redis

f3916da35b6ba5cd393c21d5305002b78c32b089a6cc01e3e2425930c9310cba

essh@kubernetes-master:~/mongo-rs$ docker ps | grep redis

f3916da35b6b redis "docker-entrypoint.s…" 8 seconds ago Up 6 seconds 0.0.0.0:32769->6379/tcp redis

essh@kubernetes-master:~/mongo-rs$ docker port reids

Error: No such container: reids

essh@kubernetes-master:~/mongo-rs$ docker port redis

6379/tcp -> 0.0.0.0:32769

essh@kubernetes-master:~/mongo-rs$ docker port redis 6379

0.0.0.0:32769
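Because only the container port was passed to -p, Docker mapped it to a free host port (32769) on its own. As a hedged illustration (the container name redis-fixed is just a placeholder, not part of the transcript above), a fixed host port can be published by specifying both sides of the mapping:

essh@kubernetes-master:~/mongo-rs$ docker run --name redis-fixed -p 6379:6379 -d redis    # host port 6379 -> container port 6379
essh@kubernetes-master:~/mongo-rs$ docker port redis-fixed    # expected: 6379/tcp -> 0.0.0.0:6379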

The first, naive way to build is to copy all the files and then install the dependencies. As a result, when any file changes, all the packages get reinstalled:

COPY ./ /src/app

WORKDIR /src/app

RUN npm install

Let's use layer caching and split the copying of the static files from the installation of the dependencies:

      COPY ./package.json /src/app/package.json

WORKDIR /src/app

RUN npm install

COPY . /src/app
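Put together, this caching pattern gives a Dockerfile roughly like the sketch below; the base image tag, exposed port and start command are assumptions for illustration, not taken from the text:

# dependencies first: this layer is rebuilt only when package.json changes
FROM node:12-alpine
WORKDIR /src/app
COPY ./package.json /src/app/package.json
RUN npm install
# application code last: changes here do not invalidate the npm install layer
COPY . /src/app
EXPOSE 3000
CMD ["npm", "start"]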

Using the node:7-onbuild base image template:

      $ cat Dockerfile

FROM node:7-onbuild

EXPOSE 3000

$ docker build .

In this case, files that do not need to be included in the image, such as the Dockerfile itself, .git, node_modules, system files and files with keys, should be added to .dockerignore.
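A minimal .dockerignore along these lines might look like the sketch below; the exact entries depend on the project, and the key file patterns are assumptions:

# build context entries that should never reach the image
.git
node_modules
Dockerfile
.dockerignore
*.pem
*.key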

-v /config

docker cp config.conf name_container:/config/
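In context (a sketch with placeholder names: my_image and name_container are not defined above), these two lines correspond to declaring a volume at run time and then copying a configuration file into it:

# start the container with an anonymous volume mounted at /config
docker run --name name_container -v /config -d my_image
# copy the configuration file into the running container's volume
docker cp config.conf name_container:/config/
# verify: the file should now be visible inside the container
docker exec name_container ls /config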

      Real-time statistics of used resources:

essh@kubernetes-master:~/mongo-rs$ docker ps -q | docker stats

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS

      c8222b91737e mongo-rs_slave_1 19.83% 44.12MiB / 15.55GiB 0.28% 54.5kB / 78.8kB 12.7MB / 5.42MB 31

      aa12810d16f5 mongo-rs_backup_1 0.81% 44.64MiB / 15.55GiB 0.28% 12.7kB / 0B 24.6kB / 4.83MB 26

      7537c906a7ef mongo-rs_master_1 20.09% 47.67MiB / 15.55GiB 0.30% 140kB / 70.7kB 19.2MB / 7.5MB 57

      f3916da35b6b redis 0.15% 3.043MiB / 15.55GiB 0.02% 13.2kB / 0B 2.97MB / 0B 4

      f97e0697db61 node_api 0.00% 65.52MiB / 15.55GiB 0.41% 862kB / 8.23kB 137MB / 24.6kB 20

      8c0d1adc9b9c portainer 0.00% 8.859MiB / 15.55GiB 0.06% 102kB / 3.87MB 57.8MB / 122MB 20

      6018b7e3d9cd node_payin 0.00% 9.297MiB / 15.55GiB 0.06% 222kB / 3.04kB 82.4MB / 24.6kB 11

^C
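To take a single snapshot instead of a continuously refreshing view, docker stats accepts the --no-stream flag:

essh@kubernetes-master:~/mongo-rs$ docker stats --no-stream    # print one sample for all running containers and exit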

      When creating images, you need to consider:

* when a large layer changes it is recreated in full, so it is often better to split it, for example, creating one layer with 'npm i' and copying the code in a second one;

* if a file in the image is large and it gets modified in the container, the file is copied in full from the read-only image layer into the writable layer; therefore containers are expected to be lightweight, and the content is usually placed in dedicated storage.

Code-as-a-service: the 12 factors (12factor.net):

* Codebase – one service – one repository;

* Dependencies – all dependent services are specified in the config;

      * Config – configs are available through the environment;

* BackEnd – data is exchanged with other services over the network via an API;

* Processes – one service – one process, which makes it possible, in the event of a crash, to detect the failure unambiguously (the container itself exits) and restart it;

* Independence from the environment and no influence on it;

* CI/CD – source control (git) – build (Jenkins, GitLab) – release (Docker, Jenkins) – deploy (Helm, Kubernetes). Keeping the service lightweight is important, but there are programs not designed to run in containers, such as databases. Because of their specifics, certain requirements are imposed on how they are launched, and the benefit is limited: due to their large volumes of data they are slow to scale, a rolling update is hardly possible, and a restart must be performed on the same nodes as their data for the sake of access performance.

      * Config – service relationships are defined in the configuration, for example, docker-compose.yml;

* Port binding – services communicate through ports, and the port can be selected automatically: for example, if EXPOSE PORT is specified in the Dockerfile, then when the container is started with the -P flag it will automatically be mapped to a free host port;

* Env – environment settings are passed through environment variables rather than through configs, which allows them to be set in the service configuration, for example, docker-compose.yml (see the sketch after this list);

* Logs – logs are streamed over the network, for example to ELK, or printed to standard output, which Docker already streams.
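As an illustration of several of these points (Env, Port binding, Config as service relationships, Logs to standard output), a minimal docker-compose.yml might look like the sketch below; the service names and variable names are assumptions rather than something defined earlier in the text:

version: "3"
services:
  node_api:
    build: .                      # assumed: built from a Dockerfile like the one sketched earlier
    environment:                  # Env: settings arrive via environment variables
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    ports:
      - "3000"                    # Port binding: only the container port, the host port is picked automatically
    depends_on:                   # Config: the relationship to the redis service is declared here
      - redis
  redis:
    image: redis                  # Logs of both services go to stdout, which Docker streams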

      Dockerd internals:

essh@kubernetes-master:~/mongo-rs$ ps aux | grep dockerd

root 6345 1.1 0.7 3257968 123640 ? Ssl Jul05 76:11 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

essh 16650 0.0 0.0 21536 1036 pts/6 S+ 23:37 0:00 grep --color=auto dockerd

essh@kubernetes-master:~/mongo-rs$ pgrep dockerd

6345

essh@kubernetes-master:~/mongo-rs$ pstree -c -p -A $(pgrep dockerd)

dockerd(6345)-+-docker-proxy(720)-+-{docker-proxy}(721)

              |                   |-{docker-proxy}(722)

              |                   |-{docker-proxy}(723)

              |                   |-{docker-proxy}(724)

              |                   |-{docker-proxy}(725)

              |                   |-{docker-proxy}(726)

              |                   |-{docker-proxy}(727)

              |                   `-{docker-proxy}(728)

              |-docker-proxy(7794)-+-{docker-proxy}(7808)
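Each docker-proxy child here is Docker's userland proxy for a published port mapping. As a quick cross-check (a sketch; the output depends on the host), the host ports these processes listen on can be listed with ss:

essh@kubernetes-master:~/mongo-rs$ sudo ss -tlnp | grep docker-proxy    # TCP listening sockets held by docker-proxy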

Dockerfile:

* clean the package manager caches (apt-get, pip and others): this cache is not needed in production, it only takes up space and loads the network; nowadays this is often less relevant, since multi-stage builds exist, but more on that below;

* group commands that belong to the same entity, for example, fetch the APT cache, install the programs and delete the cache: in one instruction the layer contains only the programs, while with the instructions spread out it contains the programs plus the cache, because if the cache is not deleted within the same instruction it is saved in the layer regardless of subsequent actions (see the sketch after this list);

* separate instructions by how often they change: for example, if the installation of software and the copying of the code are not split, then whenever something in the code changes, instead of reusing the ready-made layer with the programs they will be reinstalled, which entails a significant image preparation time that is critical for developers:

ADD ./app/package.json /app

RUN npm install

ADD ./app /app
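Combining the grouping advice above with this split, a single RUN instruction that installs packages and removes the APT cache within the same layer might look like the sketch below; the base image and the curl package are assumptions chosen only for illustration:

FROM ubuntu:20.04
# install and clean the APT cache in one instruction,
# so the cache is never stored in the resulting layer
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*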

      Docker alternatives

* Rocket or rkt – containers for the CoreOS operating environment from Red Hat, designed specifically for running containers.

      **