This guide uses macOS as the example local development environment.
GitHub repository: github.com/jxlwqq/kubernetes-examples
Install VirtualBox and Vagrant
- Install VirtualBox
- Install Vagrant
Start the virtual machines
Three virtual machines are configured: k8s-1 (192.168.205.10), k8s-2 (192.168.205.11), and k8s-3 (192.168.205.12). See the Vagrantfile for the details.
Docker is installed automatically when each virtual machine is provisioned; see the config.vm.provision "shell" block in the Vagrantfile. Vagrantfiles are written in Ruby, but the syntax is largely self-explanatory, and you can follow this guide without understanding it.
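For reference, a minimal Vagrantfile for this setup might look like the sketch below. This is an illustration, not the repository's actual file: the IP addresses and box name come from the text above, while the memory, CPU, and provisioning script are assumptions (kubeadm requires at least 2 CPUs and 2 GB of RAM per machine).

# Vagrantfile (illustrative sketch; see the repository for the real one)
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  (1..3).each do |i|
    config.vm.define "k8s-#{i}" do |node|
      node.vm.hostname = "k8s-#{i}"
      # k8s-1 -> 192.168.205.10, k8s-2 -> .11, k8s-3 -> .12
      node.vm.network "private_network", ip: "192.168.205.#{9 + i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 2048 # kubeadm needs at least 2 GB of RAM
        vb.cpus = 2      # and at least 2 CPUs
      end
      # Install Docker during provisioning (abbreviated)
      node.vm.provision "shell", inline: <<-SHELL
        yum install -y yum-utils
        yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
        yum install -y docker-ce docker-ce-cli containerd.io
        systemctl enable --now docker
      SHELL
    end
  end
end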
git clone https://github.com/jxlwqq/kubernetes-examples.git # Clone the repository
cd kubernetes-examples/installing-kubernetes-with-deployment-tools # Enter this directory
vagrant box add centos/7 # Download the OS image ahead of time to speed up startup
vagrant up # Start the virtual machines
Log in to the virtual machines
Open three terminal windows and log in to one virtual machine in each:
cd installing-kubernetes-with-deployment-tools # Be sure to run this from the directory containing the Vagrantfile
vagrant ssh k8s-1 # the master
vagrant ssh k8s-2 # a node
vagrant ssh k8s-3 # a node
Let iptables see bridged traffic
Tip: run these commands on k8s-1, k8s-2, and k8s-3.
Ensure the br_netfilter module is loaded by running lsmod | grep br_netfilter. To load the module explicitly, execute sudo modprobe br_netfilter.
For iptables on your Linux nodes to correctly see bridged traffic, ensure that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
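To confirm that the settings took effect, an optional check (not part of the original steps):

lsmod | grep br_netfilter                 # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables # should print "net.bridge.bridge-nf-call-iptables = 1"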
Install kubeadm, kubelet, and kubectl
Tip: run these commands on k8s-1, k8s-2, and k8s-3.
Install from the Alibaba Cloud mirror:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Turn off swap
sudo swapoff -a

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
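To verify the installation, an optional check (not part of the original steps):

kubeadm version -o short         # e.g. v1.21.0
kubectl version --client --short # client version only; there is no cluster yet
kubelet --version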
Configure cgroup driver
Tip: run these commands on k8s-1, k8s-2, and k8s-3.
Configure the Docker daemon, in particular to use systemd for managing the containers' cgroups:
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
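To confirm that Docker is now using the systemd cgroup driver, a quick optional check:

sudo docker info | grep -i 'cgroup driver' # should print "Cgroup Driver: systemd"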
Verify connectivity to the gcr.io container image registry
Tip: run these commands on k8s-1, k8s-2, and k8s-3.
Use kubeadm config images pull to check connectivity to the gcr.io container image registry; in this environment the pull will fail.
We work around this by pulling the images from Alibaba Cloud's registry and retagging them with the k8s.gcr.io names that kubeadm expects.
Some of these images are not needed on the worker nodes, but for now we simply pull them all.
kubeadm config images list # List the required images

# Replace kube-apiserver
sudo docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0 k8s.gcr.io/kube-apiserver:v1.21.0
sudo docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0

# Replace kube-controller-manager
sudo docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0 k8s.gcr.io/kube-controller-manager:v1.21.0
sudo docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0

# Replace kube-scheduler
sudo docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0 k8s.gcr.io/kube-scheduler:v1.21.0
sudo docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0

# Replace kube-proxy
sudo docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
sudo docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0 k8s.gcr.io/kube-proxy:v1.21.0
sudo docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0

# Replace pause
sudo docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
sudo docker tag registry.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
sudo docker rmi registry.aliyuncs.com/google_containers/pause:3.4.1

# Replace etcd
sudo docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
sudo docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
sudo docker rmi registry.aliyuncs.com/google_containers/etcd:3.4.13-0

# Replace coredns
sudo docker pull coredns/coredns:1.8.0
sudo docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
sudo docker rmi coredns/coredns:1.8.0
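Since the same pull/tag/rmi pattern repeats for every image, the block above can also be written as a loop. This is a hypothetical condensed equivalent, not part of the original tutorial; it uses the same images and versions:

# Pull each image from the Alibaba Cloud mirror, retag it under the
# k8s.gcr.io name that kubeadm expects, then drop the mirror tag.
for img in kube-apiserver:v1.21.0 kube-controller-manager:v1.21.0 \
           kube-scheduler:v1.21.0 kube-proxy:v1.21.0 pause:3.4.1 etcd:3.4.13-0; do
  sudo docker pull "registry.aliyuncs.com/google_containers/${img}"
  sudo docker tag "registry.aliyuncs.com/google_containers/${img}" "k8s.gcr.io/${img}"
  sudo docker rmi "registry.aliyuncs.com/google_containers/${img}"
done

# coredns comes from Docker Hub and has a different target path
sudo docker pull coredns/coredns:1.8.0
sudo docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
sudo docker rmi coredns/coredns:1.8.0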
Initialize the master node
Tip: run these commands on k8s-1 only.
sudo kubeadm init --kubernetes-version=v1.21.0 --apiserver-advertise-address=192.168.205.10 --pod-network-cidr=10.244.0.0/16
Following the instructions in the output, set up kubectl for your regular user:
# To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
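At this point the control plane should respond; a quick optional sanity check:

kubectl cluster-info # the API server should answer at https://192.168.205.10:6443
kubectl get nodes    # k8s-1 reports NotReady until a Pod network add-on is installed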
The output ends with the join command for the worker nodes:
# Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.205.10:6443 --token g012n6.65ete4bw7ys92tuv \
    --discovery-token-ca-cert-hash sha256:fdae044c194ed166f7b1b0746f5106008660ede517dd4cf436dfe68cc446c878
Join the nodes to the cluster
Tip: run these commands on k8s-2 and k8s-3.
Replace the token and discovery-token-ca-cert-hash values with the ones returned by your own cluster:
sudo kubeadm join 192.168.205.10:6443 --token g012n6.65ete4bw7ys92tuv \
    --discovery-token-ca-cert-hash sha256:fdae044c194ed166f7b1b0746f5106008660ede517dd4cf436dfe68cc446c878
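If you no longer have the original init output (bootstrap tokens expire after 24 hours by default), you can generate a fresh join command on the master; this step is not in the original tutorial:

kubeadm token create --print-join-command # run on k8s-1; prints a complete "kubeadm join ..." command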
Install a Pod network add-on
Tip: run this command on k8s-1.
We choose Flannel here:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
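Before checking node status, you can optionally watch the Flannel and CoreDNS Pods come up:

kubectl get pods -n kube-system # wait until the kube-flannel-ds-* and coredns-* Pods are Running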
View node status
Tip: run this command on k8s-1.
kubectl get nodes
Output:
NAME    STATUS   ROLES                  AGE   VERSION
k8s-1   Ready    control-plane,master   12m   v1.21.0
k8s-2   Ready    <none>                 12m   v1.21.0
k8s-3   Ready    <none>                 11m   v1.21.0
Cleanup
Destroy the virtual machines:
vagrant destroy
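If you would rather tear down the Kubernetes cluster but keep the virtual machines, kubeadm offers a reset command (an alternative to destroying the VMs, not part of the original steps):

sudo kubeadm reset # run on each node; undoes the changes made by "kubeadm init" or "kubeadm join"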