
"Flags": 0,

"Value": "dmFsdWUx",

"CreateIndex": 277,

"ModifyIndex": 277

}

]
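The Value field is returned base64-encoded; it can be decoded on the command line (dmFsdWUx is the base64 form of value1):

echo dmFsdWUx | base64 -d # value1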

essh@kubernetes-master:~/consul$ firefox $(docker inspect dev-consul | jq -r '.[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/ui

Besides determining the location of the containers, it is necessary to provide authorization; key-value stores are used for this.

dockerd -H fd:// --cluster-store=consul://192.168.1.6:8500 --cluster-advertise=eth0:2376

* --cluster-store – the store from which key data can be obtained;

* --cluster-advertise – the address under which this daemon's data can be saved.

docker network create --driver overlay --subnet 192.168.10.0/24 demo-network

docker network ls
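To verify that the network was created with the requested subnet, it can be inspected, for example with jq, as with Consul above (a quick check, assuming the network name from the command above):

docker network inspect demo-network | jq '.[0].IPAM.Config'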

Simple clustering

In this article, we will not consider how to create a cluster manually, but will use two tools: Docker Swarm and Google Kubernetes, the most popular and most common solutions. Docker Swarm is simpler: it is part of Docker and therefore has the largest audience (subjectively), while Kubernetes provides far more capabilities and more tool integrations (for example, distributed storage for Volumes), support in popular clouds, and scales more easily to large projects (higher-level abstraction, a component-based approach).

Let's consider what a cluster is and what good it will bring us. A cluster is a distributed structure that abstracts independent servers into one logical entity and automates work on:

* moving containers (re-creating them) on other servers in the event of a server crash;

* even distribution of containers across servers for fault tolerance;

* creating a container on a server with suitable free resources;

* re-deploying a container in case of its failure;

* a unified management interface from one point;

* performing operations taking into account the parameters of servers (for example, the size and type of disk) and the characteristics of containers specified by the administrator (for example, containers tied to a single mount point are placed on the same server);

* unification of different servers, for example, those on different OSes, cloud and non-cloud.

We will now move from looking at Docker Swarm to Kubernetes. Both systems are orchestrators, and both work with Docker containers (Kubernetes also supports rkt and containerd), but the interactions between containers are fundamentally different because of an additional Kubernetes abstraction layer: the POD. Both Docker Swarm and Kubernetes manage containers based on IP addresses and distribute them to nodes, inside which everything works through localhost, proxied by a bridge; but unlike Docker Swarm, which exposes physical containers to the user, Kubernetes exposes logical ones: PODs. A logical Kubernetes container (a POD) consists of physical containers that share a network namespace and communicate through their ports, so those ports must not be duplicated.
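To make the POD abstraction concrete, here is a minimal sketch of a POD with two containers sharing one network namespace (the names and images are illustrative, not taken from the text):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx # listens on port 80
  - name: sidecar
    image: busybox # reaches the web container as localhost:80
    command: ["sh", "-c", "sleep 3600"]
EOF

Inside this POD the sidecar can call wget -O- localhost:80, which is exactly why two containers in one POD cannot claim the same port.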

Both orchestration systems use an Overlay Network between host nodes to emulate the presence of the managed units in a single local network space. This network type is logical: it uses ordinary TCP/IP networks as transport and is designed to emulate the presence of cluster nodes in a single network for managing the cluster and exchanging information between its nodes, even when at the TCP/IP level they cannot be directly connected. The point is that when a developer describes a cluster, he can describe a network for only one node; when the cluster is deployed, several instances of it are created, and their number can change dynamically. One network cannot contain several nodes with the same IP address and subnet (for example, 10.0.0.1), and it would be wrong to require the developer to specify IP addresses, since it is not known which addresses are free and how many will be required. The overlay network takes over tracking the real IP addresses of the nodes, which can be allocated randomly from the free ones and change as nodes in the cluster are re-created, and provides the ability to reach them via container IDs / PODs. With this approach, the user refers to specific entities rather than to the dynamics of changing IP addresses.

Interaction is carried out through a balancer, which in Docker Swarm is not logically separated, while in Kubernetes it is created as a separate entity so that a specific implementation can be selected, like other services. Such a balancer must be present in every cluster; within the Kubernetes ecosystem it is called a Service. It can be declared either separately, as a Service, or together with a cluster description, for example, as part of a Deployment. The Service can be reached by its IP address (see its description) or by its name, which is registered as a first-level domain in the built-in DNS server; for example, if the Service name specified in the metadata is my_service, then the cluster can be accessed through it like this: curl my_service. This is a fairly standard solution when the components of a system, along with their IP addresses, change over time (are re-created, added, deleted): traffic is sent through a proxy server whose IP or DNS address for the external network remains constant, while the internal ones can change, leaving the care of keeping them up to date to the proxy server.
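As a sketch of the Service mechanism just described (note that a real Service name must be a DNS label, so an underscore as in my_service is not actually allowed; the selector and ports here are illustrative assumptions):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app # PODs labeled app: my-app receive the traffic
  ports:
  - port: 80 # the stable port of the Service
    targetPort: 8080 # the port of the container
EOF

After that, curl http://my-service works from any POD in the same namespace, whatever IP addresses the backing PODs currently have.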

Both orchestration systems also use the Ingress overlay network to provide access to themselves from the external network through a balancer, which matches the internal network with the external one based on the Linux kernel IP address mapping tables (iptables), separating them and allowing information to be exchanged even if there are identical IP addresses in the internal and external networks. It is precisely to maintain the connection between these potentially conflicting networks at the IP level that the overlay Ingress network is used. Kubernetes also provides the ability to create a logical entity, an Ingress controller, which allows configuring the LoadBalancer or NodePort service depending on the traffic content at a level above HTTP, for example, routing based on address paths (an application router) or terminating TLS/HTTPS traffic, as GCP and AWS do.
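A minimal sketch of such path-based routing with an Ingress resource (the API version is the current one, which may be newer than the book's examples; the service name is the illustrative one from the previous sketch):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - http:
      paths:
      - path: /app # route by address path
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
EOF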

Kubernetes is the result of evolution through internal Google projects: through Borg, then through Omega. Based on the experience gained from those experiments, a fairly scalable architecture has developed. Let's highlight the main types of components:

* POD – a regular POD;

* ReplicaSet, Deployment – scalable PODs;

* DaemonSet – created on each cluster node;

* services (sorted in order of importance): ClusterIP (the default, the basis for the rest), NodePort (redirects ports opened in the cluster, for each POD, to ports from the 30000-32767 range for access to specific PODs from outside), LoadBalancer (a NodePort with the ability to create a public IP address for Internet access in public clouds such as AWS and GCP), HostPort (opens ports on the host machine corresponding to the container, that is, if port 9200 is open in the container, it will also be open on the host machine for forwarding traffic) and HostNetwork (the containers in the POD will live in the host's network space); a NodePort sketch follows this list.
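For example, a NodePort sketch that exposes PODs on a port from the 30000-32767 range (selector and ports are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 80 # port inside the cluster
    targetPort: 8080 # port of the container
    nodePort: 30080 # opened on every node, must be in 30000-32767
EOF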

The master contains at least: kube-apiserver, kube-scheduler and kube-controller-manager. The slave (worker) nodes contain:

* kubelet – checks the health of the node's components, creates and manages containers; it resides on each node, talks to the kube-apiserver and registers the node on which it runs;

* cAdvisor – node monitoring.

Let's say we have hosting and have created three AVS servers. Now we need to install Docker and Docker Machine on each server; how to do this was described above. Docker Machine itself is a virtual machine for Docker containers; we will only use its built-in driver, VirtualBox, so as not to install additional packages. Now, of the operations that must be performed on each server, it remains only to create the Docker machines; the rest of the operations for setting up and creating containers on them can be performed from the master node, and they will be automatically launched on free nodes and redistributed when their number changes. So, let's start the Docker machine on the first node:

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-1
docker-machine env swarm-node-1 // tcp://192.168.99.100:2376
eval $(docker-machine env swarm-node-1)

We launch the second node:

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-2
docker-machine env swarm-node-2
eval $(docker-machine env swarm-node-2)

We launch the third node:

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-3
eval $(docker-machine env swarm-node-3)

Let's connect to the first node, initialize the distributed storage in it and pass it the address of the manager (leader) node:

docker-machine ssh swarm-node-1
docker swarm init --advertise-addr 192.168.99.100:2377
docker node ls // will display the current cluster nodes
docker swarm join-token worker

If the tokens are forgotten, they can be obtained by executing the commands docker swarm join-token manager and docker swarm join-token worker on a node with the distributed storage.

To create a cluster, it is necessary to register (join) all its future nodes with the docker swarm join --token … 192.168.99.100:2377 command; a token is used for authentication, and for the nodes to discover each other they must be on the same subnet. You can view all servers with the docker node ls command.

The docker swarm init command will create a cluster leader, so far alone, but in its response it will give the command needed to connect other nodes to this cluster; the important information in it is the token, for example, docker swarm join --token … 192.168.99.100:2377. Connect to the remaining nodes via SSH using the docker-machine ssh <node_name> command and execute it.
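Done by hand for the second node, this looks as follows (the token itself is elided here, just as in the text above):

docker-machine ssh swarm-node-2
docker swarm join --token <token> 192.168.99.100:2377
exit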

The bridge network, which is a switch, is used for the interaction of containers. But for several replicas to work, a subnet is needed, since all replicas have the same ports, and proxying is done by IP using the distributed storage, regardless of whether they are physically located on the same server or on different ones. It should be noted right away that balancing is carried out by the round-robin rule, that is, in turn to each replica. It is important to create an overlay network in order to create DNS on top of it and be able to refer to replicas by name. Let's create a subnet:

docker network create --driver overlay --subnet 10.10.1.0/24 --opt encrypted services

Now we need to fill the cluster with containers. To do this, we create not a container but a service, which is a template for creating containers on different nodes. The number of replicas to create is specified at creation time with the --replicas flag, and the distribution over nodes is random but as uniform as possible. Besides the replicas, the service has a load balancer; the ports on which it accepts traffic (the entry ports for all replicas) are specified with the -p flag, and Service Discovery (discovering working replicas, determining their IP addresses, restarting them) is performed by the balancer on its own.

docker service create -p 80:80 --name busybox --replicas 2 --network services busybox sleep 3000

Let's check the state of the service with docker service ls, the state and uniformity of the distribution of its container replicas with docker service ps busybox, and its work with wget -O- 10.10.1.2. A service is a higher-level abstraction that includes the container and organizes its updating (and by no means only that); that is, to update the container's parameters, you do not need to delete and re-create it, but simply update the service, and the service will first create a new container with the updated configuration, and only after it starts will it delete the old one.
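For example, a rolling update of the image is a single command; the service will start replicas with the new configuration before removing the old ones (the tag is illustrative):

docker service update --image busybox:1.31 busybox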

Docker Swarm has its own load balancer, Ingress load balancing, which balances the load between replicas on the port declared when creating the service, in our case port 80. The entry point can be any server with this port, but the response will be received from the server to which the request was forwarded by the balancer.
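This can be checked from outside: any node answers on the published port, regardless of where a replica actually runs. A sketch of such a check (the busybox service above only sleeps, so assume a service that really serves HTTP on port 80):

curl http://$(docker-machine ip swarm-node-2):80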

We can also save data to the host machine, as in a regular container; there are two options for this:

docker service create --mount type=bind,src=…,dst=… --name … … # bind a host directory
docker service create --mount type=volume,src=…,dst=… --name … … # a named volume on the host
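A concrete sketch of the volume variant (the volume name, mount path and service name are illustrative):

docker service create --mount type=volume,src=logs,dst=/var/log --name logger busybox sleep 3000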

The application is deployed via Docker Compose running on the nodes (replicas). When the Docker Compose configuration is updated, you need to update the Docker stack, and the cluster will be updated consistently: one replica will be deleted and a new one created in its place in accordance with the new config, then the next one. If an error occurs, a rollback to the previous configuration will be made. Well, let's get started:

docker stack deploy -c docker-compose.yml test_stack
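For reference, a minimal docker-compose.yml such a stack could be deployed from (a sketch, not the book's file; the image and replica count are illustrative):

version: "3.7"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    deploy:
      replicas: 2
      update_config:
        parallelism: 1 # replicas are replaced one at a time
      restart_policy:
        condition: on-failure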

docker service update --label-add foo=bar Nginx
docker service update --label-rm foo Nginx
docker service update --publish-rm 80 Nginx
docker node update --availability=drain swarm-node-3

Docker swarm

$ sudo docker pull swarm

$ sudo docker run --rm swarm create

docker secret create test_secret
docker service create --secret test_secret …
cat /run/secrets/test_secret

Health check: hello-check-cobbalt. Example pipeline: TravisCI -> Jenkins -> config -> https://www.youtube.com/watch?v=UgUuF_qZmWc , https://www.youtube.com/watch?v=6uVgR9WPjYM

Docker company-wide

Let's take a company-wide view: we have containers and we have servers. It doesn't matter whether these are two virtual machines and several containers or hundreds of servers and thousands of containers; the problem is to distribute the containers over the servers, and for that you need a system administrator and time. If there is little time and a lot of containers, you need a lot of system administrators, otherwise the containers will be distributed suboptimally, that is, the servers (virtual machines) will be working, but not at full capacity, and resources will be wasted. In this situation, container orchestration systems are designed to optimize the distribution and save human resources.

Consider evolution:

* The developer creates the necessary containers by hand.

* The developer creates the necessary containers using previously prepared scripts.

* The system administrator, using a configuration and deployment management system such as Chef, Puppet, Ansible or Salt, sets the topology of the system. The topology indicates which container is located in which place.

* Orchestrators (schedulers) – semi-automatic distribution, maintenance of the state, and reaction to system changes. For example: Google Kubernetes, Apache Mesos, Hashicorp Nomad, Docker Swarm mode, and YARN, which we'll cover. New ones also keep appearing: Flocker (https://github.com/ClusterHQ/Flocker/), Helios (https://github.com/spotify/helios/).

There is the native solution, Docker Swarm. Of the mature systems, Kubernetes and Mesos have shown the most popularity: the former is a universal and completely ready-to-use system, while the latter is a complex of various projects combined into a single package that allows replacing or changing its components. There is also a huge number of less popular solutions that are not promoted by such giants as Google, Twitter and others: Nomad, Scheduling, Scaling, Upgrades, Service Discovery, but we will not consider them. Here we will consider the most ready-made solution, Kubernetes, which has gained great popularity for its low entry threshold, support and sufficient flexibility in most cases, pushing Mesos into the niche of customizable solutions where customization and development are economically justified.

Kubernetes has several ready-made configurations:

* MiniKube – a single-machine local cluster intended for lowering the entry threshold and for experiments (see the quick-start sketch after this list);

* kubeadm;

* kops;

* Kubernetes-Ansible;

* microKubernetes;

* OKD;

* MicroK8s.
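For instance, the Minikube path takes two commands (a sketch, assuming Minikube and kubectl are already installed):

minikube start
kubectl get nodes # should show one node in the Ready state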

To start the cluster yourself, you can use KubeSai (free Kubernetes).

The smallest structural unit is called a POD, which corresponds to the YML file in Docker Compose. The process of creating a POD, like other entities, is declarative: you write or change a configuration YML file and apply it to the cluster. So, let's create a POD:

# test_pod.yml
# kubectl create -f test_pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: debian
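After applying the file, the result can be checked with the standard kubectl commands:

kubectl get pods
kubectl describe pod test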

To run multiple replicas:

# test_replica_controller.yml
# kubectl create -f test_replica_controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx