IT Cloud


It is worth warning the reader against rashly abandoning a relational database: although ElasticSearch contains a NoSQL database, it is intended solely for search and does not provide full-fledged tools for normalization and recovery.

ElasticSearch does not ship with a console client – all interaction is carried out via HTTP calls (GET, PUT and DELETE). Here is an example using the curl command from the Linux BASH shell (ElasticSearch is assumed to be listening on its default address, localhost:9200):

# Create a record (the index and type – the ElasticSearch analogues of database and table – are created automatically)
curl -XPUT 'localhost:9200/mydb/mytable/1' -H 'Content-Type: application/json' -d '{
....
}'

# Get the value by id
curl -XGET 'localhost:9200/mydb/mytable/1'

# Simple search
curl -XGET 'localhost:9200/mydb/_search' -H 'Content-Type: application/json' -d '{
  "query": {
    "match": {
      "name": "my"
    }
  }
}'

# Delete the database (index)
curl -XDELETE 'localhost:9200/mydb'

Cloud systems as a source of continuous scaling: Google Cloud and Amazon AWS

In addition to hosting and renting a server (in particular, a virtual VPS), you can use cloud solutions (SaaS, Software as a Service), that is, run our WEB application(s) entirely through a control panel on top of a ready-made infrastructure. This approach has both pros and cons, which depend on the customer's business. While from the technical side the server is merely remote, we can still connect to it and get an administration panel as a bonus; for the developer, however, the differences are more significant. We will divide projects into three groups according to the place of deployment: on hosting, in your own data center or on a VPS, and in the cloud. Companies using hosting, because of the significant restrictions imposed on development – the inability to install their own software, and the instability and limited size of the provided capacity – mainly specialize in custom (streaming) development of sites and stores; since the requirements for developer qualifications and infrastructure knowledge are small, the market is ready to pay for their labor at a minimum. The second group includes companies that implement completed projects, but whose developers are kept away from the infrastructure by system administrators, build engineers, DevOps and other infrastructure specialists. Companies choosing cloud solutions generally justify overpaying for ready-made infrastructure and capacity by its extensibility (relevant for startups, when load growth is not predictable). For such projects they generally hire highly qualified specialists of a wide profile to implement non-standard solutions, where the infrastructure is just a tool and there are simply no dedicated infrastructure specialists. The developers are entrusted with designing the project as a whole, not a program in isolation from the infrastructure. These are mainly foreign companies that are ready to pay well for the labor of valuable employees.

For deployment, we will use Kubernetes to counter vendor lock-in, when the project infrastructure is tied to the API of a specific cloud provider and does not allow moving to other clouds or to our own without significant changes in the application itself. Kubernetes is supported by Amazon AWS, Google Cloud and Microsoft Azure, and a single-instance on-premises installation is possible with Minikube.
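For local experiments, a single-node cluster can be brought up without any cloud account at all; a minimal sketch, assuming Minikube and kubectl are already installed:

# a minimal local sketch, assuming Minikube and kubectl are installed
minikube start        # start a single-node Kubernetes cluster in a local VM
kubectl get nodes     # verify that the node is Ready
minikube stop         # stop the VM when finished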

We will use Google Cloud: as of 2018 it provides free use of limited resources for one year ($300), and there are quotas that can be viewed in the IAM and Administration -> Quotas menu. It is important to note that cloud providers do not sell fixed tariffs but bill for the use of certain capacities: if the site is visited little, we pay little; if a lot of data has to be processed, we pay a lot. For this reason, when a company's computing needs are predictable (not a startup), it may be advisable to use its own capacity for the constant load, which can be economically feasible, without risking being limited in computing power.

So we go to cloud.google.com, register, bind a debit card with a minimum balance and go to the console at console.cloud.google.com, where you can take a tutorial on the interface for general familiarization. In the menu, click the Billing item: I have $300 of untouched demo money and 356 days left (funds are not debited in real time).

If you look at the cloud as a basis for the Back-End of mobile development (MBaaS, Mobile Backend as a Service), then it is provided by different providers: Google Firebase, AWS Mobile, Azure Mobile.

Google App Engine

Cluster creation via WEB interface

Let's first check the restrictions (quotas) in Menu -> Products -> IAM and administration -> Quotas: if you are on a test account, the Static IP addresses quota will be 1, so the balancer will not be able to be created and you will have to delete the cluster. Let's create a cluster in Menu – Resources – Kubernetes Engine with three replicas of the micro machine and the latest version of Kubernetes. In the lower left corner, in the Marketplace item, create 2 NGINX instances. After creating the cluster, click on the Services tab and go to the IP address.
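The same quotas can also be inspected from the command line; a sketch, assuming the Cloud SDK is authorized for your project (here essch, the project used later in this chapter):

# a sketch, assuming gcloud is authorized for the project (here essch)
gcloud compute project-info describe --project essch | grep -B1 -A1 ADDRESSES
gcloud compute regions describe europe-north1 | grep -B1 -A1 ADDRESSES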

Marketplace: Networking, Free, Kubernetes Applications: NGINX. Let's create a custom standard-cluster-NGINX cluster, choosing a minimum of CPU and RAM, 2 nodes instead of 3 and the latest version of Kubernetes (I chose 1.11.3, and my code will be compatible with at least 1.10). In Menu – Resources – Kubernetes Engine, in the Clusters tab, click the Connect button. Cluster management on the command line is carried out with the kubectl command; you can read about it in the documentation: https://kubernetes.io/docs/reference/kubectl/overview/ and see a command list at https://gist.github.com/ipedrazas/95391ffd88190bea94ca188d3d2c1cbe
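A few of the most frequently used kubectl commands, as a sketch of what to try right after connecting:

# basic commands to try right after connecting to the cluster
kubectl cluster-info                  # addresses of the master and cluster services
kubectl get nodes                     # list of nodes and their status
kubectl get pods --all-namespaces     # all pods, including system ones
kubectl describe node <node-name>     # detailed information about a node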

Creating a virtual machine:

You can create a separate project for it, but you can only use it on a paid account:

NAME_PROJECT=bitrix-12345;
NAME_CLUSTER=bitrix;
gcloud projects create $NAME_PROJECT --name $NAME_PROJECT;
gcloud config set project $NAME_PROJECT;
gcloud projects list;

A few subtleties: the --zone key is required and is put at the end, the disk should not be smaller than 10Gb, and the machine types can be taken from https://cloud.google.com/compute/docs/machine-types. If we have only one replica, then by default a minimal configuration for testing is created:

gcloud container clusters create $NAME_CLUSTER --zone europe-north1-a

You can see it in the admin panel by expanding the drop-down list in the header and opening the All projects tab.

gcloud projects delete $NAME_PROJECT;

If there are more replicas, a standard configuration is created, whose parameters we will edit:

$ gcloud container clusters create mycluster \
--machine-type=custom-1-1024 --disk-size=10GB --image-type ubuntu \
--scopes compute-rw,gke-default \
--cluster-version=1.11 --enable-autoupgrade \
--num-nodes=1 --enable-autoscaling --min-nodes=1 --max-nodes=2 \
--zone europe-north1-a

The --enable-autorepair key starts monitoring of node availability: if a node crashes, it will be recreated. The key requires a Kubernetes version of at least 1.11, and at the time of this writing the default version is 1.10, so you need to set the version with a key, for example --cluster-version=1.11.4-gke.12. But you can also fix only the major version, --cluster-version=1.11, and enable automatic version updates with --enable-autoupgrade. We will also enable automatic scaling of the number of nodes when there are not enough resources: --num-nodes=1 --min-nodes=1 --max-nodes=2 --enable-autoscaling.
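If the cluster has already been created without these flags, they can be switched on later for its node pool; a sketch, assuming the pool has the default name default-pool:

# a sketch, assuming the node pool has the default name default-pool
gcloud container node-pools update default-pool \
    --cluster mycluster --zone europe-north1-a \
    --enable-autorepair --enable-autoupgrade
gcloud container node-pools describe default-pool \
    --cluster mycluster --zone europe-north1-a | grep -A 2 management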

Now let's talk about virtual cores and RAM. By default, the n1-standard-1 machine is raised, which has one virtual core and 3.75Gb of RAM, in triplicate, which together gives three virtual cores and 11.25Gb of RAM. It is important that the cluster has at least two virtual processor cores in total, otherwise, formally, according to the limits for the Kubernetes system containers, there will not be enough for full operation (some containers, for example system ones, may not start). I will take two nodes with one core each, so the total number of cores will be two. The situation is similar with RAM: 1Gb (1024Mb) of RAM per node was enough for me to raise a container with NGINX, but not to raise a container with LAMP (Apache MySQL PHP) – the system service kube-dns-548976df6c-mlljx, which is responsible for DNS in the pod, failed to start. Although it is not vitally important and will not be useful to us, next time something more important may fail to rise instead of it. It is important to note that my cluster with 1Gb per node did come up normally and everything was fine; the total volume of 2Gb simply turned out to be a borderline value. I set 1280Mb (1.25Gb) per node, taking into account that the RAM increment is 256Mb (0.25Gb), my volume must be a multiple of it, and it must be at least 1Gb per core. As a result, the cluster has 2 cores and 2.5Gb of RAM instead of 3 cores and 11.25Gb, which is a significant optimization of resources and of the price on a paid account.

Now we need to connect to the cluster. We already have the key on the server in ${HOME}/.kube/config, and now we just need to log in:

$ gcloud container clusters get-credentials b --zone europe-north1-a --project essch
$ kubectl port-forward nginxlamp-74c8b5b7f-d2rsg 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
$ google-chrome http://localhost:8080 # this won't work in Google Cloud Shell
$ kubectl expose deployment nginxlamp --type="LoadBalancer" --port=8080

To use kubectl locally, you need to install gcloud and use it to install kubectl with the gcloud components install kubectl command, but let's not complicate the first steps for now.
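A sketch of that local setup, assuming the Google Cloud SDK has already been installed and authorized (the cluster name and zone here are the ones used in this chapter):

# a sketch of the local setup, assuming the Google Cloud SDK is installed and authorized
gcloud components install kubectl
gcloud container clusters get-credentials mycluster --zone europe-north1-a --project essch
kubectl get nodes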

 

In the Services section of the admin panel, the POD will be available not only through the front-end balancer service, but also through the internal balancer of the Deployment. Although the service created with kubectl expose survives re-creation, a declarative config is more maintainable and explicit.

It is also possible to adjust the number of nodes automatically depending on the load, for example on the number of containers with declared resource requirements, using the keys --enable-autoscaling --min-nodes=1 --max-nodes=2.
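The autoscaler reacts to pods that cannot be scheduled because of their declared resource requests, so for it to be useful the containers should state their requirements; a sketch of declaring them for the existing deployment (the values are illustrative):

# a sketch; the request and limit values are illustrative
kubectl set resources deployment nginxlamp --requests=cpu=100m,memory=128Mi --limits=cpu=200m,memory=256Mi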

Simple cluster in GCP

There are two ways to create a cluster: through the Google Cloud Platform graphical interface or through its API with the gcloud command. Let's see how this can be done through the UI. Next to the menu, click on the drop-down list and create a separate project. In the Kubernetes Engine section, choose to create a cluster. Let's give it a name, 2 CPU, the europe-north1 zone (the data center in Finland is the closest to St. Petersburg) and the latest version of Kubernetes. After creating the cluster, click Connect and select Cloud Shell. To create it through the API, click the button in the upper right corner to display the console panel and enter in it:

gcloud container clusters create mycluster --zone europe-north1-a

After a while (it took me two and a half minutes), 3 virtual machines will be raised, the operating system installed on them and the disk mounted. Let's check:

esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster
NAME       LOCATION         MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
mycluster  europe-north1-a  35.228.37.100  n1-standard-1  1.10.9-gke.5  3          RUNNING
esschtolts@cloudshell:~ (essch)$ gcloud compute instances list
NAME                                      MACHINE_TYPE   EXTERNAL_IP     STATUS
gke-mycluster-default-pool-43710ef9-0168  n1-standard-1  35.228.73.217   RUNNING
gke-mycluster-default-pool-43710ef9-39ck  n1-standard-1  35.228.75.47    RUNNING
gke-mycluster-default-pool-43710ef9-g76k  n1-standard-1  35.228.117.209  RUNNING

Let's connect to the cluster:

esschtolts@cloudshell:~ (essch)$ gcloud projects list
PROJECT_ID          NAME              PROJECT_NUMBER
agile-aleph-203917  My First Project  546748042692
essch               app               283762935665
esschtolts@cloudshell:~ (essch)$ gcloud container clusters get-credentials mycluster \
--zone europe-north1-a \
--project essch
Fetching cluster endpoint and auth data.
kubeconfig entry generated for mycluster.

We don't have any pods yet:

esschtolts@cloudshell:~ (essch)$ kubectl get pods
No resources found.

Let's create a deployment of NGINX with three replicas:

esschtolts@cloudshell:~ (essch)$ kubectl run nginx --image=nginx --replicas=3
deployment.apps "nginx" created

Let's check its composition:

esschtolts@cloudshell:~ (essch)$ kubectl get deployments --selector=run=nginx
NAME   DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
nginx  3        3        3           3          14s
esschtolts@cloudshell:~ (essch)$ kubectl get pods --selector=run=nginx
NAME                    READY  STATUS   RESTARTS  AGE
nginx-65899c769f-9whdx  1/1    Running  0         43s
nginx-65899c769f-szwtd  1/1    Running  0         43s
nginx-65899c769f-zs6g5  1/1    Running  0         43s

Let's see how the three replicas are distributed across the nodes (in this case, two of them ended up on the same node):

esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-9whdx | grep Node:
Node: gke-mycluster-default-pool-43710ef9-g76k/10.166.0.5
esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-szwtd | grep Node:
Node: gke-mycluster-default-pool-43710ef9-39ck/10.166.0.4
esschtolts@cloudshell:~ (essch)$ kubectl describe pod nginx-65899c769f-zs6g5 | grep Node:
Node: gke-mycluster-default-pool-43710ef9-g76k/10.166.0.5

Now let's install the load balancer:

esschtolts@cloudshell:~ (essch)$ kubectl expose deployment nginx --type="LoadBalancer" --port=80
service "nginx" exposed

Let's check that it was created:

esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=nginx
NAME   TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
nginx  LoadBalancer  10.27.245.187  <pending>    80:31621/TCP  11s
esschtolts@cloudshell:~ (essch)$ sleep 60;
esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=nginx
NAME   TYPE          CLUSTER-IP     EXTERNAL-IP     PORT(S)       AGE
nginx  LoadBalancer  10.27.245.187  35.228.212.163  80:31621/TCP  1m

Let's check its work:

esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 2>/dev/null | grep h1
<h1>Welcome to nginx!</h1>

In order not to copy the full names every time, let's save them in variables (more about the JSONPath format in the Go documentation: https://golang.org/pkg/text/template/#pkg-overview):

esschtolts@cloudshell:~ (essch)$ pod1=$(kubectl get pods -o jsonpath={.items[0].metadata.name});
esschtolts@cloudshell:~ (essch)$ pod2=$(kubectl get pods -o jsonpath={.items[1].metadata.name});
esschtolts@cloudshell:~ (essch)$ pod3=$(kubectl get pods -o jsonpath={.items[2].metadata.name});
esschtolts@cloudshell:~ (essch)$ echo $pod1 $pod2 $pod3
nginx-65899c769f-9whdx nginx-65899c769f-szwtd nginx-65899c769f-zs6g5

Let's change the pages in each POD by copying a unique page to each replica, and check the balancing by looking at the distribution of requests across the PODs:

esschtolts@cloudshell:~ (essch)$ echo 1 > test.html;
esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod1}:/usr/share/nginx/html/index.html
esschtolts@cloudshell:~ (essch)$ echo 2 > test.html;
esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod2}:/usr/share/nginx/html/index.html
esschtolts@cloudshell:~ (essch)$ echo 3 > test.html;
esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod3}:/usr/share/nginx/html/index.html
esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80
3
2
1
esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80
3
1
1

Let's check the failover of the cluster by deleting one POD:

esschtolts@cloudshell:~ (essch)$ kubectl delete pod ${pod1} && kubectl get pods && sleep 10 && kubectl get pods
pod "nginx-65899c769f-9whdx" deleted
NAME                    READY  STATUS             RESTARTS  AGE
nginx-65899c769f-42rd5  0/1    ContainerCreating  0         1s
nginx-65899c769f-9whdx  0/1    Terminating        0         54m
nginx-65899c769f-szwtd  1/1    Running            0         54m
nginx-65899c769f-zs6g5  1/1    Running            0         54m
NAME                    READY  STATUS   RESTARTS  AGE
nginx-65899c769f-42rd5  1/1    Running  0         12s
nginx-65899c769f-szwtd  1/1    Running  0         55m
nginx-65899c769f-zs6g5  1/1    Running  0         55m

As we can see, immediately after the POD became unavailable (the process of deleting it began), its replacement started to be created, and soon the cluster fully restored its structure. After finishing our experiments, let's delete the virtual machines together with the cluster:

esschtolts@cloudshell:~ (essch)$ gcloud container clusters delete mycluster --zone europe-north1-a;
The following clusters will be deleted.
- [mycluster] in [europe-north1-a]
Do you want to continue (Y/n)? Y
Deleting cluster mycluster … done.
Deleted [https://container.googleapis.com/v1/projects/essch/zones/europe-north1-a/clusters/mycluster].
esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

To summarize: we created a cluster and a load balancer with just two commands, run and expose, and now we can go to the balancer's IP address and see the NGINX welcome page in the browser. The cluster also recovers on its own: to test this we emulated a failure of a pod by deleting it, and it was created again.

Cluster Reproducibility

Let's take a look at the situation from the previous chapter, in which we created a cluster, deleted a replica, and it recovered. The point is that we do not manage the cluster with commands directly; instead, with commands we create a description of the required configuration of the cluster and place it in the distributed storage, after which the state of the nodes is maintained in accordance with that description. We can also get and edit these descriptions, or write them ourselves and then upload them to the distributed storage. This allows us to save the state on disk in the form of YAML files and restore it back, as is often done when moving from a production server to a test one. In addition, we get the opportunity to customize the state more flexibly, since we are no longer limited to the options of the commands.

esschtolts@cloudshell:~ (essch)$ kubectl get deployment/nginx --output=yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-12-16T10:23:26Z
  generation: 1
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "1612985"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
  uid: 9fb3ad6a-011c-11e9-bfaa-42010aa60088
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:26Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:28Z
    message: ReplicaSet "nginx-64f497f8fd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Much of this is superfluous for us, so I will delete the unnecessary parts: when creating the deployment we specified only the name and the image, the rest was filled in with default values:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
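Such a trimmed description can be saved to a file and fed back to the cluster; a sketch, assuming it was saved as nginx-deployment.yaml (the file name is arbitrary):

# a sketch; the file name nginx-deployment.yaml is arbitrary
kubectl apply -f nginx-deployment.yaml    # create or update the deployment from the description
kubectl get deployment nginx -o yaml      # verify the state recorded in the distributed storage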

You can also create an instance template running a container, and a managed group of its clones:

gcloud services enable compute.googleapis.com --project=${PROJECT}
gcloud beta compute instance-templates create-with-container ${TEMPLATE} \
    --machine-type=custom-1-4096 \
    --image-family=cos-stable \
    --image-project=cos-cloud \
    --container-image=gcr.io/kuar-demo/kuard-amd64:1 \
    --container-restart-policy=always \
    --preemptible \
    --region=${REGION} \
    --project=${PROJECT}
gcloud compute instance-groups managed create ${TEMPLATE} \
    --base-instance-name=${TEMPLATE} \
    --template=${TEMPLATE} \
    --size=${CLONES} \
    --region=${REGION} \
    --project=${PROJECT}
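To check the result, a sketch of listing the clones created by the managed group (using the same variables as above):

# a sketch, using the same variables as above
gcloud compute instance-groups managed list-instances ${TEMPLATE} \
    --region=${REGION} --project=${PROJECT}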

High service availability

To ensure high availability, you need to redirect traffic to a spare instance in the event of an application crash. It is also often important that the load is evenly distributed, since a single instance of the application cannot handle all the traffic. To do this, a cluster is created; this time let's take a more complex image in order to examine more nuances:

 

esschtolts@cloudshell:~/bitrix (essch)$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: mattrayner/lamp:latest-1604-php5
        ports:
        - containerPort: 80
esschtolts@cloudshell:~/bitrix (essch)$ cat loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - name: front
    port: 80
    targetPort: 80
  selector:
    app: lamp
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods
NAME                        READY  STATUS   RESTARTS  AGE
nginxlamp-7fb6fdd47b-jttl8  2/2    Running  0         3m
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get svc
NAME        TYPE          CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
frontend    LoadBalancer  10.55.242.137  35.228.73.217  80:32701/TCP,8080:32568/TCP  4m
kubernetes  ClusterIP     10.55.240.1    <none>         443/TCP                      48m

Now we could create identical copies of our clusters, for example for Production and Develop, but balancing will not work as expected: the balancer finds PODs by label, and PODs in both the production and the developer clusters match this label. Nothing prevents placing the clusters in different projects either; for many tasks this is a big plus, but not in the case of clusters for developers and production. Namespaces are used to delimit the scope. We already use them implicitly: when we list PODs without specifying a namespace, we get the default namespace, and PODs from the system namespaces are not shown:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace
NAME         STATUS  AGE
default      Active  5h
kube-public  Active  5h
kube-system  Active  5h
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=kube-system
NAME                                              READY  STATUS   RESTARTS  AGE
event-exporter-v0.2.3-85644fcdf-tdt7h             2/2    Running  0         5h
fluentd-gcp-scaler-697b966945-bkqrm               1/1    Running  0         5h
fluentd-gcp-v3.1.0-xgtw9                          2/2    Running  0         5h
heapster-v1.6.0-beta.1-5649d6ddc6-p549d           3/3    Running  0         5h
kube-dns-548976df6c-8lvp6                         4/4    Running  0         5h
kube-dns-548976df6c-mcctq                         4/4    Running  0         5h
kube-dns-autoscaler-67c97c87fb-zzl9w              1/1    Running  0         5h
kube-proxy-gke-bitrix-default-pool-38fa77e9-0wdx  1/1    Running  0         5h
kube-proxy-gke-bitrix-default-pool-38fa77e9-wvrf  1/1    Running  0         5h
l7-default-backend-5bc54cfb57-6qk4l               1/1    Running  0         5h
metrics-server-v0.2.1-fd596d746-g452c             2/2    Running  0         5h
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=default
NAME                       READY  STATUS   RESTARTS  AGE
nginxlamp-b5dcb7546-g8j5r  1/1    Running  0         4h

Let's create a scope:

esschtolts@cloudshell:~/bitrix (essch)$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
esschtolts@cloudshell:~ (essch)$ kubectl create -f namespace.yaml
namespace "development" created
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace --show-labels
NAME         STATUS  AGE  LABELS
default      Active  5h   <none>
development  Active  16m  name=development
kube-public  Active  5h   <none>
kube-system  Active  5h   <none>
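The same namespace could also have been created imperatively, without a file:

# an equivalent imperative command
kubectl create namespace development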

The essence of working with scopes is that for specific clusters we set a namespace and can execute commands specifying it, and they will then apply only to it. At the same time, apart from the keys of commands such as kubectl get pods, the namespace does not appear in the configuration files of controllers (Deployment, DaemonSet and others) and services (LoadBalancer, NodePort and others), which allows them to be transferred seamlessly between namespaces – this is especially relevant for the development pipeline: developer server, test server and production server. Namespaces are recorded in the cluster context file $HOME/.kube/config, which can be viewed with the kubectl config view command. So, in my cluster context entry the namespace entry does not appear (the default namespace default is used):

- context:
    cluster: gke_essch_europe-north1-a_bitrix
    user: gke_essch_europe-north1-a_bitrix
  name: gke_essch_europe-north1-a_bitrix

You can see something like this:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl config view -o jsonpath='{.contexts[4]}'
{gke_essch_europe-north1-a_bitrix {gke_essch_europe-north1-a_bitrix gke_essch_europe-north1-a_bitrix []}}

Let's create a new context for this user and cluster:

esschtolts@cloudshell:~ (essch)$ kubectl config set-context dev \
> --namespace=development \
> --cluster=gke_essch_europe-north1-a_bitrix \
> --user=gke_essch_europe-north1-a_bitrix
Context "dev" modified.

As a result, the following was added:

- context:
    cluster: gke_essch_europe-north1-a_bitrix
    namespace: development
    user: gke_essch_europe-north1-a_bitrix
  name: dev

Now it remains to switch to it:

esschtolts@cloudshell:~ (essch)$ kubectl config use-context dev
Switched to context "dev".
esschtolts@cloudshell:~ (essch)$ kubectl config current-context
dev
esschtolts@cloudshell:~ (essch)$ kubectl get pods
No resources found.
esschtolts@cloudshell:~ (essch)$ kubectl get pods --namespace=default
NAME                       READY  STATUS   RESTARTS  AGE
nginxlamp-b5dcb7546-krkm2  1/1    Running  0         10h

You could add a namespace to the existing context:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl config set-context $(kubectl config current-context) --namespace=development
Context "gke_essch_europe-north1-a_bitrix" modified.

Now let's create a new deployment in the dev scope (it is now the default, so --namespace=development can be omitted) and remove the one from the default scope (which is no longer the default for our context, so --namespace=default has to be specified):

esschtolts@cloudshell:~ (essch)$ cd bitrix/
esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f deployment.yaml -f loadbalancer.yaml
deployment.apps "nginxlamp" created
service "frontend" created
esschtolts@cloudshell:~/bitrix (essch)$ kubectl delete -f deployment.yaml -f loadbalancer.yaml --namespace=default
deployment.apps "nginxlamp" deleted
service "frontend" deleted
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods
NAME                       READY  STATUS   RESTARTS  AGE
nginxlamp-b5dcb7546-8sl2f  1/1    Running  0         1m

Now let's look at the external IP address and open the page:

esschtolts@cloudshell:~/bitrix (essch)$ curl $(kubectl get -f loadbalancer.yaml -o json | jq -r .status.loadBalancer.ingress[0].ip) 2>/dev/null | grep '<h2>'
<h2>Welcome to <a href="https://github.com/mattrayner/docker-lamp" target="_blank">Docker-Lamp a.k.a mattrayner/lamp</a></h2>

Customization

Now we need to adapt the standard solution to our needs, namely add configs and our own application. For simplicity's sake, we will modify the default .htaccess file at the root of the application, so that adapting it comes down to placing our application in the /app folder. The first thing that suggests itself is to create a POD and then copy our application from the host into the container (I took Bitrix):
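A sketch of that copy-from-outside approach (the pod is picked by the app=lamp label from the deployment above; the local ./bitrix directory is a hypothetical example):

# a sketch; the local ./bitrix directory is a hypothetical example
POD=$(kubectl get pods -l app=lamp -o jsonpath={.items[0].metadata.name})
kubectl cp ./bitrix ${POD}:/app
kubectl exec ${POD} -- chmod -R 0777 /app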

While this solution works, it has a number of significant disadvantages. First, we have to wait from outside, constantly polling the POD, for the moment when it raises the container, and we must not copy the application into it before that; we also have to handle the situation when the POD breaks. Meanwhile, external services may rely on the status of the POD, even though the POD itself is not really ready until the script has been executed. Second, we end up with an external script that logically should not be separated from the POD, yet has to be launched manually from outside, stored somewhere, and documented somewhere. And finally, we can have a lot of these PODs. At first glance, the logical solution is to put the code into the Dockerfile:

esschtolts@cloudshell:~/bitrix (essch)$ cat Dockerfile
FROM mattrayner/lamp:latest-1604-php5
MAINTAINER ESSch <ESSchtolts@yandex.ru>
RUN cd /app/ && (\
wget https://www.1c-bitrix.ru/download/small_business_encode.tar.gz \
&& tar -xf small_business_encode.tar.gz \
&& sed -i '5i php_value short_open_tag 1' .htaccess \
&& chmod -R 0777 . \
&& sed -i 's/# php_value display_errors 1/php_value display_errors 1/' .htaccess \
&& sed -i '5i php_value opcache.revalidate_freq 0' .htaccess \
&& sed -i 's/# php_flag default_charset UTF-8/php_flag default_charset UTF-8/' .htaccess \
) && cd ..;
EXPOSE 80 3306
CMD ["/run.sh"]
esschtolts@cloudshell:~/bitrix (essch)$ docker build -t essch/app:0.12 . | grep Successfully
Successfully built f76e656dac53
Successfully tagged essch/app:0.12
esschtolts@cloudshell:~/bitrix (essch)$ docker image push essch/app | grep digest
0.12: digest: sha256:75c92396afacefdd5a3fb2024634a4c06e584e2a1674a866fa72f8430b19ff69 size: 11309
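A sketch of using the freshly pushed image in the deployment from above instead of the base one (the tag 0.12 is the one we just built; with such a prepared image the copy step is no longer needed):

# a sketch: the deployment from above, now using the image we built and pushed
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: essch/app:0.12   # the image built from the Dockerfile above
        ports:
        - containerPort: 80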