Environment Version Description:

  • Three VMware virtual machines running CentOS 7.6.
  • Kubernetes 1.16.0, the latest version at the time of writing.
  • Flannel v0.11
  • Docker 18.09

Using kubeadm makes it easy to build a k8s cluster without having to worry about the details of installation and deployment. Since k8s also releases new versions very quickly, this method is highly recommended.

Related preparation

Note: the related operations in this section should be performed on all nodes.

Hardware environment

Three VMware virtual machines are used; configure their networking so that the machines can reach each other.

  • k8s-master: 4 GB RAM, 4 cores, CentOS 7, 192.168.10.20
  • k8s-node1: 2 GB RAM, 2 cores, CentOS 7, 192.168.10.21
  • k8s-node2: 2 GB RAM, 2 cores, CentOS 7, 192.168.10.22

Host roles

  • k8s-master as the cluster management node: etcd, kubeadm, kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, flanneld, docker
  • k8s-node1 as a worker node: kubeadm, kubelet, flanneld, docker
  • k8s-node2 as a worker node: kubeadm, kubelet, flanneld, docker

Preparation

Install the necessary RPM software:

 yum install -y wget vim net-tools epel-release

Turn off the firewall

systemctl disable firewalld
systemctl stop firewalld

Shut down SELinux

#Temporarily disable SELinux
setenforce 0
#Permanently disable it by modifying the /etc/sysconfig/selinux and /etc/selinux/config files
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable swap partition

swapoff -a

#Disable it permanently: comment out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab
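
To double-check that swap is really off, you can look at the memory and swap summary (a quick sanity check, not part of the original steps); the Swap line should show all zeros and the swap summary should be empty:

free -m
swapon -s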

Modify /etc/hosts

cat <<EOF >> /etc/hosts

192.168.10.20 k8s-master
192.168.10.21 k8s-node1
192.168.10.22 k8s-node2
EOF
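
With the entries in place, a quick connectivity check by hostname confirms both the hosts file and the network (optional; run from any node):

ping -c 1 k8s-node1
ping -c 1 k8s-node2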

Modify kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
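
If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. Loading it (and persisting it across reboots) is an extra step not listed above, but commonly needed on CentOS 7:

modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl --system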

Install Docker 18.09

Configure the yum sources

##Configure default source
##Backup
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

##Download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

##Refresh
yum makecache fast

##Configure k8s source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

##Rebuild Yum cache
yum clean all
yum makecache fast
yum -y update

Install docker

Download Docker's yum repo file

yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

A specific Docker version will be pinned here. First, list the versions available:

[root@k8s-master ~]# yum list docker-ce --showduplicates | sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
 * extras: mirrors.aliyun.com
 * epel: hkg.mirror.rackspace.com
docker-ce.x86_64            3:19.03.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:19.03.0-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.9-3.el7                     docker-ce-stable
...
 * base: mirrors.aliyun.com

At the time of writing the latest version is 19.03; we install the pinned 18.09 release:

yum install -y docker-ce-18.09.9-3.el7
systemctl enable docker
systemctl start docker

Modify Docker's startup parameters (replace the xxxx registry mirror below with your own Aliyun accelerator address):

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl restart docker
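
To confirm Docker picked up the new settings, check the cgroup driver; it should report systemd:

docker info | grep -i "cgroup driver"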

Install k8s

Management node configuration

First set up the management components on k8s-master.

Install kubeadm and kubelet

yum install -y kubeadm kubelet
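
The bare yum install above pulls whatever version is newest in the repo. To make sure the 1.16.0 packages described in this article are installed, you can pin them explicitly and enable kubelet on boot (assuming the 1.16.0 packages are still available in the mirror):

yum install -y kubeadm-1.16.0 kubelet-1.16.0 kubectl-1.16.0
systemctl enable kubelet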

Initialize kubeadm

Do not initialize right away: users in China cannot pull the required images from k8s.gcr.io directly, so first list the image versions that kubeadm needs:

[root@k8s-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.16.0
k8s.gcr.io/kube-controller-manager:v1.16.0
k8s.gcr.io/kube-scheduler:v1.16.0
k8s.gcr.io/kube-proxy:v1.16.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2

Based on the required versions, pull the images from a domestic mirror and retag them:

# vim kubeadm.sh

#!/bin/bash

##Use the following script to download the images from a domestic mirror and retag them with the k8s.gcr.io names
set -e

KUBE_VERSION=v1.16.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.15-0
CORE_DNS_VERSION=1.6.2

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done

Run the script to pull the images:

sh ./kubeadm.sh
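
Before initializing, you can verify that the retagged images are present locally; the list should match the output of kubeadm config images list:

docker images | grep k8s.gcr.io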

Execute on the master node:

sudo kubeadm init \
 --apiserver-advertise-address 192.168.10.20 \
 --kubernetes-version=v1.16.0 \
 --pod-network-cidr=10.244.0.0/16

Note: do not change the --pod-network-cidr value here; the flannel configuration in the later steps depends on it.

The returned output:

...
...

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
...

##To add a node, execute the following command; it can be regenerated later with kubeadm token create --print-join-command
 
kubeadm join 192.168.10.20:6443 --token lixsl8.v1auqmf91ty0xl0k \
    --discovery-token-ca-cert-hash sha256:c3f92a6ed9149ead327342f48a545e7e127a455d5b338129feac85893d918a55
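
If you lose this output or the token expires (it is valid for 24 hours by default), a fresh join command can be regenerated on the master at any time:

kubeadm token create --print-join-command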

If you want to install multiple master nodes, use the following initialization command instead:

kubeadm init  --apiserver-advertise-address 192.168.10.20 --control-plane-endpoint 192.168.10.20  --kubernetes-version=v1.16.0  --pod-network-cidr=10.244.0.0/16  --upload-certs

To add a master node, use the following command:

kubeadm join 192.168.10.20:6443 --token z34zii.ur84appk8h9r3yik --discovery-token-ca-cert-hash sha256:dae426820f2c6073763a3697abeb14d8418c9268288e37b8fc25674153702801     --control-plane --certificate-key 1b9b0f1fdc0959a9decef7d812a2f606faf69ca44ca24d2e557b3ea81f415afe

Note: do not copy these tokens verbatim. After kubeadm init succeeds, the actual command for adding a master node is printed in its output.

Worker node configuration

Install kubeadm and kubelet

Both worker nodes need to run this (the master already has these packages installed):

yum install -y kubeadm kubelet

Add node

Run the join command on both worker nodes. Preflight errors are ignored here:

kubeadm join 192.168.10.20:6443 --token lixsl8.v1auqmf91ty0xl0k \
    --discovery-token-ca-cert-hash sha256:c3f92a6ed9149ead327342f48a545e7e127a455d5b338129feac85893d918a55 \
   --ignore-preflight-errors=all 

If adding a node fails or you want to add it again, you can use the command

kubeadm reset

Note: do not run kubeadm reset lightly on the master; it deletes all of kubeadm's configuration.
Now you can view the joined nodes from the master:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   45m   v1.16.0
k8s-node1    NotReady   <none>   26s   v1.16.0
k8s-node2    NotReady   <none>   12s   v1.16.0

But the nodes are still in the NotReady state; a few more steps are required.

Install flanneld

Operate on the master: copy the admin config so that kubectl is usable.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
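
Alternatively, when working as root you can simply point kubectl at the admin config without copying it (this shortcut only lasts for the current shell session):

export KUBECONFIG=/etc/kubernetes/admin.conf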

Download the flannel configuration file

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The images referenced in kube-flannel.yml are hosted on quay.io, which cannot be pulled from inside China, so download them from a domestic mirror first and then retag them. The script is as follows:

# vim flanneld.sh

#!/bin/bash

set -e

FLANNEL_VERSION=v0.11.0

#Modify the source here
QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
  docker pull $QINIU_URL/$imageName
  docker tag  $QINIU_URL/$imageName $QUAY_URL/$imageName
  docker rmi $QINIU_URL/$imageName
done

Run the script, which needs to be executed on each node

sh flanneld.sh

Install flanneld

kubectl apply -f kube-flannel.yml

Flanneld is installed into the kube-system namespace by default. Use the following command to view it:

# kubectl -n kube-system get pods
NAME                                 READY   STATUS         RESTARTS   AGE
coredns-5644d7b6d9-h9bxt             0/1     Pending        0          57m
coredns-5644d7b6d9-pkhls             0/1     Pending        0          57m
etcd-k8s-master                      1/1     Running        0          57m
kube-apiserver-k8s-master            1/1     Running        0          57m
kube-controller-manager-k8s-master   1/1     Running        0          57m
kube-flannel-ds-amd64-c4hnf          1/1     Running        1          38s
kube-flannel-ds-amd64-djzmx          1/1     Running        0          38s
kube-flannel-ds-amd64-mdg8b          1/1     Running        1          38s
kube-flannel-ds-amd64-tjxql          0/1     Terminating    0          5m34s
kube-proxy-4n5dr                     0/1     ErrImagePull   0          13m
kube-proxy-dc68d                     1/1     Running        0          57m
kube-proxy-zplgt                     0/1     ErrImagePull   0          13m
kube-scheduler-k8s-master            1/1     Running        0          57m

Errors occurred because the two worker nodes cannot pull the pause and kube-proxy images. They can be exported from the master with docker save and loaded on the nodes:

##Execute on master
docker save -o pause.tar k8s.gcr.io/pause:3.1
docker save -o kube-proxy.tar k8s.gcr.io/kube-proxy

##Execute on node
docker load -i pause.tar 
docker load -i kube-proxy.tar
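
The tar files have to be transferred from the master to each worker before they can be loaded, for example with scp (assuming root SSH access and the node IPs from the host table above):

scp pause.tar kube-proxy.tar root@192.168.10.21:~/
scp pause.tar kube-proxy.tar root@192.168.10.22:~/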

Reinstall flanneld

 kubectl delete -f kube-flannel.yml 
 kubectl create -f kube-flannel.yml 

Modify kubelet

After joining the nodes with kubeadm, they stay in the NotReady status with the error message:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized Addresses:

The workaround is to modify /var/lib/kubelet/kubeadm-flags.env and delete the --network-plugin=cni parameter:

cat << EOF > /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF

systemctl restart kubelet
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   79m   v1.16.0
k8s-node1    Ready    <none>   34m   v1.16.0
k8s-node2    Ready    <none>   34m   v1.16.0

Error resolution

About the error cni config uninitialized: above, the --network-plugin=cni parameter was simply deleted. That only flips the node status to Ready; the network between pods still does not work.

The right solution: modify kube-flannel.yml and add the cniVersion parameter to the CNI config (around line 111):

vim kube-flannel.yml

{
      "name": "cbr0",
      "cniVersion": "0.3.1",
      ....

Install flannel

##If it was installed before, remove it first
## kubectl delete -f kube-flannel.yml

kubectl apply -f kube-flannel.yml
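
Once flannel is re-applied, the flannel pods and the node status can be checked one more time (a final sanity check; the exact pod names will differ):

kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes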