IT Cloud. Eugeny Shtoltc


      esschtolts@cloudshell:~/bitrix (essch)$ kubectl config set-context $(kubectl config current-context) --namespace=development

      Context "gke_essch_europe-north1-a_bitrix" modified.

      Now let's create the Deployment and Service in the development namespace (it is now the default, so --namespace=development can be omitted) and delete them from the default namespace (which is no longer the default for our context, so --namespace=default has to be specified explicitly):

      esschtolts@cloudshell:~ (essch)$ cd bitrix/
      esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f deploymnet.yaml -f loadbalancer.yaml
      deployment.apps "nginxlamp" created
      service "frontend" created

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl delete -f deploymnet.yaml -f loadbalancer.yaml --namespace=default
      deployment.apps "nginxlamp" deleted
      service "frontend" deleted

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods
      NAME READY STATUS RESTARTS AGE
      nginxlamp-b5dcb7546-8sl2f 1/1 Running 0 1m

      Now let's look at the external IP address and open the page:

      esschtolts @ cloudshell:~/bitrix (essch)$ curl $(kubectl get -f loadbalancer.yaml -o json | jq -r .status.loadBalancer.ingress[0].ip) 2>/dev/null | grep '<h2>'
      <h2>Welcome to <a href="https://github.com/mattrayner/docker-lamp" target="_blank">Docker-LAMP aka mattrayner/lamp</a></h2>
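
      The same address can also be read straight from the Service status without jq; a quick sketch, assuming the Service from loadbalancer.yaml is named frontend, as in the output above:

      kubectl get service frontend
      kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}'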

      Customization

      Now we need to adapt the stock solution to our needs, namely add our configs and our application. For simplicity, we will modify the default .htaccess file at the root of our application, so that deploying the application comes down to placing it in the /app folder. The first approach that suggests itself is to create a POD and then copy our application from the host into the container (I took Bitrix):
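
      A rough sketch of such a copy step might look like this (the app=lamp label and the local ./bitrix directory are illustrative assumptions):

      # create the POD, wait for its container to become ready, then copy the application into it
      kubectl create -f deploymnet.yaml
      POD=$(kubectl get pods -l app=lamp -o jsonpath='{.items[0].metadata.name}')
      kubectl wait --for=condition=Ready pod/$POD --timeout=120s
      kubectl cp ./bitrix "$POD":/app/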

      While this solution works, it has a number of significant drawbacks. First, we have to wait from the outside, constantly polling the POD to find out when its container is up, copy the application into it only once the container has started, and handle the case when the copy breaks; meanwhile external services may rely on the POD's status, even though the POD is not really ready until the script has finished. Second, we end up with an external script that logically belongs to the POD but has to be launched manually from outside, stored somewhere, and documented somewhere. And finally, there can be many of these PODs. At first glance, the logical solution is to put this code into the Dockerfile:

      esschtolts@cloudshell:~/bitrix (essch)$ cat Dockerfile
      FROM mattrayner/lamp:latest-1604-php5
      MAINTAINER ESSch <[email protected]>
      RUN cd /app/ && (\
      wget https://www.1c-bitrix.ru/download/small_business_encode.tar.gz \
      && tar -xf small_business_encode.tar.gz \
      && sed -i '5i php_value short_open_tag 1' .htaccess \
      && chmod -R 0777 . \
      && sed -i 's/#php_value display_errors 1/php_value display_errors 1/' .htaccess \
      && sed -i '5i php_value opcache.revalidate_freq 0' .htaccess \
      && sed -i 's/#php_flag default_charset UTF-8/php_flag default_charset UTF-8/' .htaccess \
      ) && cd ..;
      EXPOSE 80 3306
      CMD ["/run.sh"]

      esschtolts@cloudshell:~/bitrix (essch)$ docker build -t essch/app:0.12 . | grep Successfully
      Successfully built f76e656dac53
      Successfully tagged essch/app:0.12

      esschtolts@cloudshell:~/bitrix (essch)$ docker image push essch/app | grep digest
      0.12: digest: sha256:75c92396afacefdd5a3fb2024634a4c06e584e2a1674a866fa72f8430b19ff69 size: 11309

      esschtolts@cloudshell:~/bitrix (essch)$ cat deploymnet.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginxlamp
        namespace: development
      spec:
        selector:
          matchLabels:
            app: lamp
        replicas: 1
        template:
          metadata:
            labels:
              app: lamp
          spec:
            containers:
            - name: lamp
              image: essch/app:0.12
              ports:
              - containerPort: 80

      esschtolts@cloudshell:~/bitrix (essch)$ IMAGE=essch/app:0.12 kubectl create -f deploymnet.yaml

      deployment.apps "nginxlamp" created

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods -l app=lamp
      NAME READY STATUS RESTARTS AGE
      nginxlamp-55f8cd8dbc-mk9nk 1/1 Running 0 5m

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl exec nginxlamp-55f8cd8dbc-mk9nk -- ls /app/
      index.php

      This happens because the developer of the image, quite correctly and as stated in its documentation, expected the image to be used with a host directory mounted into it, and the /app folder is cleared by the script launched at startup. With this approach we also face the problem of constant image rebuilds on every update: we cannot set the image tag from a variable in the config, since the manifest is not processed by our shell but applied as-is on the cluster nodes, and we cannot simply update the folder either, because whenever the container is recreated the changes are reverted to the image's original state.
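
      One common way around the image-tag problem (a sketch, not the author's method) is to substitute the tag before the manifest reaches kubectl, or to change the image of an existing Deployment with kubectl itself; here essch/app:0.13 is a hypothetical next tag:

      # substitute the tag locally, then pipe the result to kubectl
      sed "s|image: essch/app:.*|image: essch/app:0.13|" deploymnet.yaml | kubectl apply -f -

      # or switch the image of the already created Deployment directly
      kubectl set image deployment/nginxlamp lamp=essch/app:0.13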

      The correct solution is to mount a volume and make the preparation part of the POD lifecycle: a container that starts before the main one and performs the preparatory operations on the environment, typically downloading the application from a repository, building it, running tests, creating users or setting permissions. For each operation it is correct to launch a separate init container in which that operation is the main process; they are executed sequentially, as a chain that is broken if one of the operations fails (returns a non-zero exit code). Such containers are described separately in the POD, in the initContainers section, and are launched in the order they are listed. In our case we created an unnamed volume and used an init container to deliver the installation files into it. After all init containers (there may be several) complete successfully, the main container starts; it mounts the same volume, which already contains the installation files, so we only need to open the browser and complete the installation:

      esschtolts@cloudshell:~/bitrix (essch)$ cat deploymnet.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginxlamp
        namespace: development
      spec:
        selector:
          matchLabels:
            app: lamp
        replicas: 1
        template:
          metadata:
            labels:
              app: lamp
          spec:
            initContainers:
            - name:
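
      In outline, the initContainers-plus-volume pattern described above can be sketched as follows (the init container name, the /shared mount path, the copy command, and the use of essch/app:0.12 as the source of the installation files are illustrative assumptions, not the book's exact manifest):

          spec:
            initContainers:
            - name: init-app                 # illustrative name
              image: essch/app:0.12
              command: ["sh", "-c", "cp -r /app/. /shared/"]  # copy installation files into the shared volume
              volumeMounts:
              - name: app
                mountPath: /shared
            containers:
            - name: lamp
              image: mattrayner/lamp:latest-1604-php5
              ports:
              - containerPort: 80
              volumeMounts:
              - name: app
                mountPath: /app              # the main container sees the prepared files in /app
            volumes:
            - name: app
              emptyDir: {}                   # unnamed scratch volume shared between the containers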