Title | IT Cloud
---|---
Author | Eugeny Shtoltc
Genre | Foreign computer literature
Series |
Publisher | Foreign computer literature
Year | 2021
ISBN |
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
Now it's time to set up image creation for each new version of the product: go to GCP -> Cloud Build -> Triggers -> Create trigger -> Google Cloud source code repository -> NodeJS. Set the trigger type to "tag" so that an image is not created on every ordinary commit. I will change the image name to gcr.io/node-cluster-243923/nodejs:$SHORT_SHA and the timeout to 60 seconds. Now I'll commit and add a tag:
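In the trigger above the build is configured straight from the Dockerfile, with the image name and timeout set in the trigger itself. The same build can be expressed as a cloudbuild.yaml checked into the repository; what follows is only a hedged sketch of such a file (the step layout is an assumption, not taken from the book):

```yaml
# Hypothetical cloudbuild.yaml equivalent of the Dockerfile trigger settings:
# build the image, tag it with the short commit SHA, and push it.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/node-cluster-243923/nodejs:$SHORT_SHA', '.']
images:
  - 'gcr.io/node-cluster-243923/nodejs:$SHORT_SHA'
timeout: '60s'
```

With this file in place, the trigger's build configuration would point at cloudbuild.yaml instead of the Dockerfile.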
essh@kubernetes-master:~/node-cluster/app/nodejs$ cp ../../Dockerfile .
essh@kubernetes-master:~/node-cluster/app/nodejs$ git add Dockerfile
essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'add Dockerfile'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git remote -v
origin https://source.developers.google.com/p/node-cluster-243923/r/nodejs (fetch)
origin https://source.developers.google.com/p/node-cluster-243923/r/nodejs (push)
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 380 bytes | 380.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
46dd957..b86c01d master -> master
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a v0.0.1 -m 'test to run'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin v0.0.1
Counting objects: 1, done.
Writing objects: 100% (1/1), 161 bytes | 161.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] v0.0.1 -> v0.0.1
Now, if we press the start trigger button, we will see the image in the Container Registry with our tag:
essh@kubernetes-master:~/node-cluster/app/nodejs$ gcloud container images list
NAME
gcr.io/node-cluster-243923/nodejs
gcr.io/node-cluster-243923/nodejs_cluster
Only listing images in gcr.io/node-cluster-243923. Use --repository to list images in other repositories.
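To see which tags the trigger has attached to a particular image, gcloud can also list them per repository; this is only a sketch of the command, with output omitted since it depends on the project and credentials:

```shell
# List tags and digests for the image built by the trigger;
# requires gcloud to be authenticated against node-cluster-243923.
gcloud container images list-tags gcr.io/node-cluster-243923/nodejs
```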
Now if we just add the changes and the tag, the image will be created automatically:
essh@kubernetes-master:~/node-cluster/app/nodejs$ sed -i 's/HOSTNAME\}/HOSTNAME\}\n/' server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git add server.js
essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'fix'
[master 230d67e] fix
1 file changed, 2 insertions(+), 1 deletion(-)
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin master
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 304 bytes | 304.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
b86c01d..230d67e master -> master
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a v0.0.2 -m 'fix'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin v0.0.2
Counting objects: 1, done.
Writing objects: 100% (1/1), 158 bytes | 158.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] v0.0.2 -> v0.0.2
essh@kubernetes-master:~/node-cluster/app/nodejs$ sleep 60
essh@kubernetes-master:~/node-cluster/app/nodejs$ gcloud builds list
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
2b024d7e-87a9-4d2a-980b-4e7c108c5fad 2019-06-22T17:13:14+00:00 28S [email protected] gcr.io/node-cluster-243923/nodejs:v0.0.2 SUCCESS
6b4ae6ff-2f4a-481b-9f4e-219fafb5d572 2019-06-22T16:57:11+00:00 29S [email protected] gcr.io/node-cluster-243923/nodejs:v0.0.1 SUCCESS
e50df082-31a4-463b-abb2-d0f72fbf62cb 2019-06-22T16:56:48+00:00 29S [email protected] gcr.io/node-cluster-243923/nodejs:v0.0.1 SUCCESS
essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a latest -m 'fix'
essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin latest
Counting objects: 1, done.
Writing objects: 100% (1/1), 156 bytes | 156.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To https://source.developers.google.com/p/node-cluster-243923/r/nodejs
* [new tag] latest -> latest
essh@kubernetes-master:~/node-cluster/app/nodejs$ cd ../..
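Note that latest here is an ordinary annotated git tag, so on the next release it has to be moved by force rather than re-created. A minimal local sketch of that workflow, using a throwaway bare repository in place of the Cloud Source repository:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare remote.git              # stand-in for the Cloud Source repo
git init -q work && cd work
git remote add origin ../remote.git
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m 'release v0.0.2'
git -c user.name=t -c user.email=t@t tag -a latest -m 'first release'
git push -q origin latest
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m 'release v0.0.3'
git -c user.name=t -c user.email=t@t tag -f -a latest -m 'move latest'   # -f re-points the existing tag
git push -q -f origin latest               # force-push the moved tag
```

After this, latest in both the working copy and the remote points at the newest commit, which is what keeps the Cloud Build trigger producing a fresh :latest image.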
Creating multiple environments with Terraform clusters
When we try to create several clusters from the same configuration, we run into identifiers that must be unique, so we isolate the clusters from each other by creating and placing them in separate projects. To create a project manually, go to GCP -> Products -> IAM and administration -> Resource management, create a NodeJS-prod project, switch to it, and wait for its activation. Let's look at the state of the current project:
essh@kubernetes-master:~/node-cluster$ cat main.tf
provider "google" {
credentials = file("./kubernetes_key.json")
project = "node-cluster-243923"
region = "europe-west2"
}
module "kubernetes" {
source = "./Kubernetes"
}
data "google_client_config" "default" {}
module "Nginx" {
source = "./nodejs"
image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
endpoint = module.kubernetes.endpoint
access_token = data.google_client_config.default.access_token
cluster_ca_certificate = module.kubernetes.cluster_ca_certificate
}
essh@kubernetes-master:~/node-cluster$
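To point this same configuration at another project (for example the NodeJS-prod project created above), the hard-coded project id can be lifted into a variable. A hedged sketch, where the variable name is an assumption and the prod project id is left as a placeholder:

```hcl
# variables.tf (sketch): parameterize the project id so one configuration
# can serve several isolated projects.
variable "project_id" {
  type    = string
  default = "node-cluster-243923"
}

# In main.tf the provider would then read:
# provider "google" {
#   credentials = file("./kubernetes_key.json")
#   project     = var.project_id
#   region      = "europe-west2"
# }

# Applied per environment, e.g.:
#   terraform apply -var 'project_id=<id of the NodeJS-prod project>'
```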