Building a Kubernetes cluster on CentOS 7

 

1, Machine information

[root@kube-gmg-03-master-1 ~]# uname -a
Linux kube-gmg-03-master-1 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@kube-gmg-03-master-1 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

2, Host information

Three machines are used to deploy k8s in this article. The details are as follows:

 

Role                     Host name     IP
Master, etcd, registry   k8s-master    10.255.61.1
Node1                    k8s-node-1    10.255.61.2
Node2                    k8s-node-2    10.255.61.3

 

3, Set the host names of the three machines

Execute on Master:

hostnamectl --static set-hostname  k8s-master

Execute on Node1:

hostnamectl --static set-hostname  k8s-node-1

Execute on Node2:

hostnamectl --static set-hostname  k8s-node-2
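
A quick check (not in the original article) that the new name took effect on each machine:

hostnamectl status
hostname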

4, Set hosts

Execute the following command on all three machines:

echo '10.255.61.1    k8s-master
10.255.61.1   etcd
10.255.61.1   registry
10.255.61.2   k8s-node-1
10.255.61.3    k8s-node-2' >> /etc/hosts
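
Each of these names should now resolve on every machine; a quick sanity check:

ping -c 1 k8s-master
ping -c 1 etcd
ping -c 1 registry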

5, Turn off the firewall on the three machines

systemctl disable firewalld.service
systemctl stop firewalld.service
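
To confirm the firewall is stopped and will stay off after a reboot:

systemctl is-active firewalld
systemctl is-enabled firewalld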

6, Deploy etcd

Kubernetes depends on etcd, so etcd must be deployed first. This article installs it with yum:

yum install etcd -y

The etcd configuration file installed by yum is located at /etc/etcd/etcd.conf. Edit it and change the settings shown below (ETCD_NAME, ETCD_LISTEN_CLIENT_URLS, and ETCD_ADVERTISE_CLIENT_URLS):

vi /etc/etcd/etcd.conf
# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""

Start etcd and verify its status:

systemctl start etcd
etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0 
etcdctl -C http://etcd:4001 cluster-health
etcdctl -C http://etcd:2379 cluster-health
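
If the set/get round trip returns 0 and both health checks report the cluster as healthy, etcd is ready. Optionally, remove the test key and list the members (etcdctl v2 syntax, matching the commands above):

etcdctl rm testdir/testkey0
etcdctl member list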

See also: etcd cluster deployment, http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html

7, Deploy the master

7.1 Install Docker

yum install docker -y

Edit the Docker configuration file so that images can be pulled from the private registry:

vim /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs.
# Append --insecure-registry to the existing OPTIONS line; a second
# OPTIONS= assignment would silently override the first.
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

Enable the service at boot and start it:

systemctl enable docker.service
systemctl start docker
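
The hosts entries above point registry at the master, and the Docker options allow pulls from registry:5000, but starting the registry itself is not covered above. A minimal sketch, assuming the official registry image, is to run it as a container on the master:

docker run -d --name registry --restart=always -p 5000:5000 registry:2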

7.2 Install Kubernetes

yum install kubernetes -y

7.3 Configure and start Kubernetes

The following components need to be installed and run on the Kubernetes master:

Kubernetes API Server
Kubernetes Controller Manager
Kubernetes Scheduler

7.4 Edit the following configuration files, changing the settings shown below:

$ vim /etc/kubernetes/apiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

$ vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

7.5 Enable the services at boot and start them by running the following commands:

 systemctl enable kube-apiserver.service
 systemctl start kube-apiserver.service
 systemctl enable kube-controller-manager.service
 systemctl start kube-controller-manager.service
 systemctl enable kube-scheduler.service
 systemctl start kube-scheduler.service
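
To verify that all three components are running, you can query the component status through the API server (the -s flag is the same one used in section 8.4 below):

kubectl -s http://k8s-master:8080 get componentstatuses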

8, Deploy the nodes

8.1 Install Docker and Kubernetes as described in sections 7.1 and 7.2

8.2 Start Kubernetes on the nodes

The following components need to run on each Kubernetes node:

    Kubelet

    Kubernetes Proxy

8.2.1 Configuration files

$ vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

[root@k8s-node-1 ~]# vim /etc/kubernetes/kubelet

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# (on k8s-node-2, set this to k8s-node-2 instead)
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

8.3 Enable the services at boot and start them

systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
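
A quick check on each node that both services came up:

systemctl is-active kubelet kube-proxy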

8.4 Check the status

On the master, view the nodes in the cluster and their status:

$  kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     16s
$ kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     3m
k8s-node-2   Ready     43s

At this point, a Kubernetes cluster has been built, but it cannot work properly yet; continue with the following steps.

9, Create the overlay network with Flannel

9.1 Install Flannel

Execute the following command on the master and each node:

[root@k8s-master ~]# yum install flannel

9.2 Configure Flannel

Edit /etc/sysconfig/flanneld on the master and each node, and modify FLANNEL_ETCD_ENDPOINTS as shown below:

[root@k8s-master ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

9.3 Configure the Flannel key in etcd

Flannel stores its configuration in etcd so that multiple Flannel instances stay consistent, so the following key must be created in etcd. The '/atomic.io/network/config' key corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start.

[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }

9.4 Start the services

After starting Flannel, Docker and the Kubernetes services need to be restarted in turn.

Execute on the master:

systemctl enable flanneld.service 
systemctl start flanneld.service 
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

Execute on node:

systemctl enable flanneld.service 
systemctl start flanneld.service 
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
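
After the restarts, each machine should have a flannel interface with an address from the 10.0.0.0/16 range configured above, and docker0 should have moved into the subnet flannel assigned to that host. A quick check, assuming flannel's default subnet file location:

ip addr show flannel0
cat /run/flannel/subnet.env
ip addr show docker0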

Related books: Kubernetes in Action; The Definitive Guide to Kubernetes: From Docker to Kubernetes in Practice; Play with Docker Container Technology in 5 Minutes a Day; Docker Containers: Build and Deploy with Kubernetes, Flannel, Cockpit, and Atomic.

Thanks to https://www.cnblogs.com/zhenyuyaodidiao/p/6500830.html

end!
