Kubernetes offline installation package, in only three steps
Basic environment
Disable the firewall and SELinux
$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
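Note that sysctl -w only lasts until reboot. To persist the setting, a minimal sketch (you could equally add it to the k8s.conf file created later):

# persist IP forwarding across reboots
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p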
<!--more-->
Disable swap
swapoff -a
Delete any swap line in the /etc/fstab file (ignore this step if there is none).
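For example, the swap entry can be commented out in place (a sketch; review /etc/fstab first, since the pattern assumes a standard swap line):

# comment out any fstab line whose filesystem type is swap
sed -i '/\sswap\s/ s/^/#/' /etc/fstab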
Install these two tools if they are not installed
yum install -y ebtables socat
The IPv4 iptables chain setting that the CNI plugin needs:
sysctl net.bridge.bridge-nf-call-iptables=1
Online installation (outside the Great Firewall)
Installing this way from inside China is very difficult; the offline installation scheme below is recommended instead.
Install docker
yum install -y docker
systemctl enable docker && systemctl start docker
Install kubeadm kubectl kubelet
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
Disable SELinux
setenforce 0
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Then initialize the master with kubeadm init, the same as in the offline installation below.
Offline Installation
As a bonus, I have packaged all the dependency images, binaries, and configuration files to cover every dependency. It took a lot of time to sort out, and it is published on the Aliyun marketplace; I hope you can support it.
Buy me a cup of coffee
Most of the steps in this package are wrapped in simple scripts: run init-master.sh on the master node, init-node.sh on each worker node, and init-dashboard.sh to install the dashboard.
Then run the join command printed by the master on each node; a usage sketch follows. The greatest value of the package is that it has no external dependencies: no more reaching foreign networks, and no more headaches.
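A usage sketch of those scripts, assuming the package has been unpacked on each machine:

# on the master node
./init-master.sh
# on each worker node
./init-node.sh
# optional: install the dashboard
./init-dashboard.sh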
Install the kubelet service and kubeadm
Download the binaries from the download address.
Copy the downloaded kubelet, kubectl, and kubeadm directly into /usr/bin.
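For example, assuming the three binaries sit in the current directory:

cp kubelet kubectl kubeadm /usr/bin/
chmod +x /usr/bin/kubelet /usr/bin/kubectl /usr/bin/kubeadm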
Configure the kubelet systemd service
cat <<EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
cat <<'EOF' > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
EOF
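After writing the unit files, reload systemd and enable kubelet (standard systemd steps):

systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet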
The key point here is to make sure Docker's cgroup driver is consistent with the kubelet's --cgroup-driver flag. Check it with docker info | grep Cgroup; it will be either systemd or cgroupfs.
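For example (sample output; yours may show systemd instead):

$ docker info | grep -i cgroup
Cgroup Driver: cgroupfs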
Add hostname resolution
To prevent the hostname from failing to resolve, edit /etc/hosts and write the mapping between hostname and IP.
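For example, with a hypothetical hostname and IP:

# map this machine's hostname to its IP (values are examples)
echo "10.1.86.202 dev-86-202" >> /etc/hosts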
Start master node
This assumes the Google images are already available (pulled, or loaded from the offline package):
kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version v1.8.0 --skip-preflight-checks
- --pod-network-cidr is required for the Calico network installation
- Without --kubernetes-version, kubeadm queries the public network for version information
- --skip-preflight-checks works around a small bug where a non-empty kubelet directory fails the check
When you see this output, you have succeeded:
To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
Then execute:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install calico network
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
Join the nodes
Install kubelet and kubeadm on each node as well, exactly as on the master, so the steps are not repeated here.
Then execute the join command printed by kubeadm init on the master:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
After it runs, verify the cluster state with kubectl on the master:
[root@dev-86-202 ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
dev-86-202   NotReady   master    17h       v1.8.1
Note that the master does not run workloads by default, and it is not recommended to use it as a node. If you do need to treat the master as a node:
[root@dev-86-202 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
Install dashboard
Installing the dashboard is not difficult, but using it is a bit convoluted, mainly because of RBAC. Here is a brief walkthrough.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
After installation, access it in NodePort mode:
kubectl -n kube-system edit service kubernetes-dashboard
Change type: ClusterIP to type: NodePort and save
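If you prefer a non-interactive change, a kubectl patch achieves the same thing (a sketch):

kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'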
$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   10.100.124.90   <nodes>       443:31707/TCP   21h
Now https://<master-ip>:31707 can reach the dashboard. However, you are not done yet.
Create a dashboard-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
kubectl create -f dashboard-admin.yaml
Then just click Skip on the login page. But as you can imagine, this is not secure; for genuinely safe practice, follow my further discussion at: https://github.com/fanux
Adding roles to nodes
kubectl label node node1 kubernetes.io/role=node
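Afterwards the role shows up in kubectl get nodes (sample output, assuming a node named node1):

$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     node      1h        v1.8.1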
Common problems
The kubelet service won't start?
The cgroup driver configuration must match.
Check Docker's cgroup driver:
docker info|grep Cgroup
There are two possibilities, systemd and cgroupfs. Change the kubelet service configuration to match Docker's:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# keep this consistent with Docker's cgroup driver
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
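Then reload systemd and restart kubelet for the change to take effect:

systemctl daemon-reload
systemctl restart kubelet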
Node not ready?
Installing the Calico network is recommended. And if you want to treat the master node as a regular node, you need to run:
[root@dev-86-202 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
Can't access the dashboard?
If you access it in NodePort mode, you need to know which node the dashboard pod was scheduled to, and use that node's IP rather than the master's IP.
If not, try changing https to http.
To see which node it is on:
kubectl get pod -n kube-system -o wide
Failed to pull images?
Load the offline images on the master and on every node.
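A sketch, assuming the offline package ships the images as a tar archive (the file name here is hypothetical):

# run on the master and on every node
docker load -i images.tar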
Dashboard crashes, DNS won't come up?
Same fix: load the offline images on the master and on every node.
192.168 subnet conflicts with Calico's subnet?
If your hosts happen to be on a 192.168 subnet, it is recommended to change Calico's subnet.
So init like this:
kubeadm init --pod-network-cidr=192.168.122.0/24 --kubernetes-version v1.8.1
Modify calico.yaml
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
  value: "ACCEPT"
# Configure the IP Pool from which Pod IPs will be chosen.
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.122.0/24"
- name: CALICO_IPV4POOL_IPIP
  value: "always"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
  value: "false"
DNS can't come up after a long wait?
If the images loaded successfully, the machine spec may simply be too low, so the DNS container starts very slowly; one user on a single-core 2G machine saw it fail to start for 15 minutes. Dual-core 4G or more is recommended.
If it still won't come up, try kubeadm reset and start over; some users have solved the problem this way.
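While waiting, you can watch the DNS pod's progress (standard kubectl usage):

kubectl get pods -n kube-system -w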
kubelet unhealthy?
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
The manifests directory may already exist. Delete it:
[root@dev-86-205 kubeadm]# rm -rf /etc/kubernetes/manifests
Can't add nodes after 24 hours have passed?
[root@dev-86-208 test]# kubeadm token create
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --ttl 0)
887ac7.e82f0e13ad72c367
The command above regenerates a token; substitute the new value for the token when executing kubeadm join. If you want tokens that never expire, set the TTL to 0 at init time with:
--token-ttl duration
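For example, joining with the freshly generated token (token value from the output above; the other values remain placeholders):

kubeadm join --token 887ac7.e82f0e13ad72c367 <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>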
Calico node 'xxx' is already using the IPv4 address 192.168.152.65?
rm -rf /var/etcd/
kubeadm reset

Then run the installation again.
Stuck pulling images?
Disable the firewall and SELinux:
$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
If swap is still enabled, also tell the kubelet not to fail on swap:

$ echo 'Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' > /etc/systemd/system/kubelet.service.d/90-local-extras.conf
$ systemctl daemon-reload
$ systemctl restart kubelet
Failed to get system container stats for "/system.slice/docker.service"
Add these kubelet startup parameters:
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
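One way to apply them, following the drop-in convention used earlier (the file name is hypothetical; if KUBELET_EXTRA_ARGS is already set elsewhere, merge the flags instead):

cat <<EOF > /etc/systemd/system/kubelet.service.d/11-cgroups.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
EOF
systemctl daemon-reload
systemctl restart kubelet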
Nodes cannot join
This can happen when joining while DNS is not yet up, or when the server clocks are not synchronized.
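Clock skew can be fixed with ntp on every machine (a sketch, assuming yum and network access to an NTP pool):

yum install -y ntp
ntpdate pool.ntp.org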
Specify an external etcd cluster using a configuration file
config.yaml:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - http://10.1.245.94:2379
networking:
  podSubnet: 192.168.0.0/16
kubernetesVersion: v1.8.1
etcd.yaml:
version: '2'
services:
  etcd:
    container_name: etcd_infra0
    image: quay.io/coreos/etcd:v3.1.10
    command: |
      etcd --name infra0
      --initial-advertise-peer-urls http://10.1.245.94:2380
      --listen-peer-urls http://10.1.245.94:2380
      --listen-client-urls http://10.1.245.94:2379,http://127.0.0.1:2379
      --advertise-client-urls http://10.1.245.94:2379
      --data-dir /etcd-data.etcd
      --initial-cluster-token etcd-cluster-1
      --initial-cluster infra0=http://10.1.245.94:2380
      --initial-cluster-state new
    volumes:
      - /data/etcd-data.etcd:/etcd-data.etcd
    network_mode: "host"
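After bringing etcd up (next block), you can sanity-check it before running kubeadm init; etcd v3 exposes a /health endpoint (the IP comes from the example above):

curl http://10.1.245.94:2379/health
# expected: {"health": "true"}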
$ pip install docker-compose
$ docker-compose -f etcd.yaml up -d
$ kubeadm init --config config.yaml

Scan the QR code to follow sealyun:
![](https://sealyun.com/img/qrcode1.jpg)