Preface:

This blog post is based on the Docker Swarm cluster environment and extends the functions of Docker Swarm.

Blog Outline:
I. docker swarm network management
II. service management and version update of Swarm

The environment for this post is the one set up in the previous post; for details, refer to: Docker Swarm cluster configuration practice (1). Before proceeding, make sure that when you access the Web UI of docker swarm you can see the following interface:

I. docker swarm network management

A Swarm cluster produces two different types of traffic:

  • Control and management plane: Swarm management messages, such as requests to join or leave the Swarm. This traffic is always encrypted (it involves the hostnames, IP addresses, subnets, gateways, etc. in the cluster);
  • Application data plane: traffic between containers, and between containers and external clients (related to firewalls, port mappings, VIPs, etc.)

There are three important network concepts in Swarm service:

  • Overlay networks manage communication between the docker daemons in the Swarm. Containers can be attached to one or more existing overlay networks to enable communication between containers.
  • The ingress network is a special overlay network used for load balancing among the nodes of a service. When any Swarm node receives a request on a published port, it hands the request to a module called IPVS. IPVS tracks all the IP addresses participating in the service, selects one of them, and routes the request to it through the ingress network;
    The ingress network is created automatically when you initialize or join a Swarm cluster. In most cases, users do not need to customize its configuration, but Docker 17.05 and later allow you to do so.
  • docker_gwbridge is a bridge network that connects the overlay networks (including the ingress network) to an individual docker daemon's physical network. By default, each container that a service runs is connected to the docker_gwbridge network of its local docker daemon host.
    The docker_gwbridge network is created automatically when you initialize or join a Swarm cluster. In most cases, users do not need to customize its configuration, but Docker allows customization.

Check the default networks on docker01 as follows (note the SCOPE column to confirm each network's effective scope):
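
For reference, a minimal sketch of the listing (network IDs omitted; your output may differ):

[root@docker01 ~]# docker network ls
# NAME              DRIVER    SCOPE
# bridge            bridge    local
# docker_gwbridge   bridge    local
# host              host      local
# ingress           overlay   swarm
# none              null      local
# The ingress network has swarm scope, while docker_gwbridge is a local bridge on each host.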

In addition to the two networks that the Swarm cluster creates by default, we can also create overlay networks ourselves. Containers attached to such a network can communicate with each other. Note, however, that an overlay network created on the docker01 manager is not visible in the output of "docker network ls" on the other nodes until a container on that node joins the network.

Create a custom overlay network and verify

[root@docker01 ~]# docker network create -d overlay --subnet 192.168.22.0/24 --gateway 192.168.22.1 --attachable my_net1
# Create an overlay network named my_net1;
# "-- subnet": specify its network segment (optional); "-- gateway": specify its gateway (optional);
# But when you create an overlay network in the docker swarm cluster, you must add the "- attachable" option.
# Otherwise, the network cannot be used when the container of other nodes is running

After creation, the new overlay network still cannot be seen on the other docker nodes. You can nevertheless use it: specify it directly when running a container, and once the container is running the network becomes visible on that node.
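
A quick way to verify this, sketched here as an assumption about what you would run on docker02:

[root@docker02 ~]# docker network ls | grep my_net1
# Returns nothing before any container on docker02 joins my_net1; after a container
# is started with "--network my_net1", the network shows up in the listing.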

Test whether the overlay network just created works. Run one container attached to it on docker01 and another on docker02, then ping one container from the other to confirm connectivity:

#Create a container based on the overlay network on the docker01 host:
[root@docker01 ~]# docker run -tid --network my_net1 --name test1 busybox
#Do the same on docker02 as on docker01:
[root@docker02 ~]# docker run -tid --network my_net1 --name test2 busybox

After the containers are created, ping container test1 from test2 on the docker02 host. The test results are shown below (because this is a custom network, you can ping the other container by its container name directly):
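
A minimal sketch of the ping test (output omitted; container names resolve through Docker's embedded DNS on the custom overlay network):

[root@docker02 ~]# docker exec -it test2 ping -c 4 test1
# The ping should succeed, confirming that test2 on docker02 can reach test1 on docker01
# over the my_net1 overlay network.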

II. service management and version update of Swarm

1. Make a service run on a specific docker server

As tested in the first blog post, when the manager in a Swarm cluster issues a service task, the tasks are distributed randomly across the docker servers in the cluster. If we need a unified, standardized production configuration, for example one docker server should run only web services and another docker host should run only PHP services, how do we achieve that?

Solution 1:

[root@docker01 ~]# docker service create --replicas 3 --constraint node.hostname==docker03 --name test nginx
#On the docker03 host, run three replicas of a service named test, based on the nginx image

The above command is executed as follows:
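
If you prefer to confirm the placement from the CLI as well, a simple check (a sketch, not part of the original steps):

[root@docker01 ~]# docker service ps test
# Every task of the "test" service should show docker03 in the NODE column,
# because of the node.hostname constraint.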

Solution 2:

[root@docker01 ~]# docker node update --label-add mem=max docker02
#Add the label "mem=max" to the docker02 host as a key-value pair; the content on either side of the equals sign can be customized.
[root@docker01 ~]# docker service create --name test01 --replicas 3 --constraint 'node.labels.mem==max' nginx
#Run three replicas of a service named test01, based on the nginx image, on hosts carrying the label "mem=max"
[root@docker01 ~]# docker node inspect docker02   #You can run this command to view the labels of the docker02 host
#The label information is in the Spec {} section

Check the Web UI interface to confirm:
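
A CLI alternative to the Web UI check (sketched here for convenience):

[root@docker01 ~]# docker service ps test01
# All three test01 tasks should be scheduled on docker02, the only node carrying
# the "mem=max" label.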

2. Update a service version

1) prepare the image to be used and run the service based on the image

[root@docker01 aa]# cat html/index.html    #Preparing a web page file
127.0.0.1
[root@docker01 aa]# cat Dockerfile      
#Based on the nginx image, copy the html directory under the current directory into the nginx web root
FROM nginx
ADD html /usr/share/nginx/html
[root@docker01 aa]# docker build -t 192.168.20.6:5000/testnginx:1.0 .     #Build the image
[root@docker01 aa]# docker push 192.168.20.6:5000/testnginx:1.0
#Push the newly built image to the private registry
[root@docker01 aa]# docker service create --name newnginx -p 80:80 --replicas 3  192.168.20.6:5000/testnginx:1.0 
#Based on the image pushed to the private registry, run three replicas of the service and map them to port 80 on the hosts
#Once the above command succeeds, any docker host on which the service is running can serve nginx on port 80.

After running, the Web UI interface is displayed as follows:

You can see that each node is running the service; in other words, no matter which node's port 80 you visit, you see the same page, as follows:
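
To test from the command line rather than a browser, a simple sketch (substitute any node's IP address for your environment):

[root@docker01 aa]# curl 127.0.0.1
# Should print "127.0.0.1", the index.html prepared above; the ingress routing mesh
# returns the same page from port 80 of every node.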

View the service details on docker01 as follows:

[root@docker01 aa]# docker service ps newnginx    #View service details

The result of the command (note the image tag, i.e. which image each task is based on):
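
If you only want to know which image the service currently uses, a hedged one-liner (optional, not part of the original steps):

[root@docker01 aa]# docker service inspect newnginx --format '{{.Spec.TaskTemplate.ContainerSpec.Image}}'
# Prints only the image reference of the newnginx service, which is handy for
# comparing before and after an update.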

2) prepare version 2.0 of the image (simulate online version upgrade):

[root@docker01 aa]# docker tag nginx:latest 192.168.20.6:5000/testnginx:2.0 
#Preparing a 2.0 version of the image
[root@docker01 aa]# docker push 192.168.20.6:5000/testnginx:2.0 
#Push it to the private registry
[root@docker01 aa]# docker service update --image 192.168.20.6:5000/testnginx:2.0 newnginx 
#Upgrade the image of the newnginx service to 2.0
[root@docker01 aa]# docker service ps newnginx    #View the service details again

The results of the command are as follows: the newnginx tasks that were running the 1.0 image are now in the shutdown state, and the tasks running the 2.0 image are in the running state, as shown below:

At this point, if you visit the web page again, it shows the default nginx homepage (because our 2.0 image simply re-tags the nginx image and does not include the modified page file), as follows:

Its Web UI interface can view the last upgrade time of the service.
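
The same information is available from the CLI; a minimal sketch:

[root@docker01 aa]# docker service inspect --pretty newnginx
# The "UpdateStatus" section of the pretty output shows the state of the last update
# and when it started and completed.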

3) upgrade 2.0 to 3.0 (with fine-grained control over the upgrade)

[root@docker01 aa]# docker tag nginx:latest 192.168.20.6:5000/testnginx:3.0 
#Preparing a 3.0 version of the image
[root@docker01 aa]# docker push 192.168.20.6:5000/testnginx:3.0 
#Push it to the private registry
[root@docker01 ~]# docker service update --replicas 6 --image 192.168.20.6:5000/testnginx:3.0 --update-parallelism 3 --update-delay 1m newnginx
#The meaning of the above options is as follows:
# "-- replicas 6": the number of updated service s is 6 (originally 3)
# "-- update parallelism 2": sets the number of copies of parallel updates.
# "-- update delay 1m": specifies a rolling update interval of 1 minute
[root@docker01 ~]# docker service ps newnginx    #Compare the details of the newnginx service with the earlier output yourself
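
To watch the rolling update as it happens, one simple approach (a sketch; assumes the watch command is available on the host):

[root@docker01 ~]# watch -n 5 docker service ps newnginx
# With parallelism 3 and a 1-minute delay, the tasks move to the 3.0 image in
# batches of three, with a pause between batches.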

4) version rollback operation

If, after upgrading to a new version, we find problems with the new image and have to return to the previous version, we can do the following:

[root@docker01 ~]# docker service update --rollback newnginx   #Roll back the newnginx service to the previous version
[root@docker01 ~]# docker service ps newnginx   #Check the result yourself

After executing the rollback command, the rollback process is as follows:

After the rollback succeeds, the image changes from 3.0 back to 2.0. Although the number of replicas was set to 6 when upgrading to 3.0, there were only 3 before, so after the rollback the number of replicas also goes back to 3.

Note: a rollback returns to the version in place before the last operation; it cannot be repeated to roll back to still earlier versions.
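
If you want Swarm to fall back automatically when an update goes wrong, instead of rolling back by hand, the update command accepts a failure action; a hedged sketch:

[root@docker01 ~]# docker service update --image 192.168.20.6:5000/testnginx:3.0 --update-failure-action rollback newnginx
# "--update-failure-action rollback" tells Swarm to roll the service back on its own
# if the newly started tasks fail.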

————————Thank you for reading————————