Defining HAProxy load balancing with k8s resource manifests
Core k8s resources
Workload resources
- Pod
- ReplicaSet
- Deployment
- StatefulSet
- DaemonSet
- Job
- CronJob
Service discovery and load balancing resources
- Service
- Ingress
Storage and configuration resources
- Volume:
  - cloud storage
  - Amazon Elastic Block Store
  - SAN
  - Gluster (distributed storage)
  - NFS
- CSI (Container Storage Interface)
- ConfigMap (configuration center; a minimal example follows this list)
- DownwardAPI (exposes Pod and environment information to containers)
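As a quick illustration of the ConfigMap resource, here is a minimal sketch; the name demo-config and the key/value are made up for this example and are not used by the manifests later in this article:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config        # hypothetical name, for illustration only
data:
  app.mode: "production"   # arbitrary key/value consumed by a Pod as env vars or a mounted file
```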
Cluster-level resources
- Namespace
- Node
- Role
- ClusterRole
- RoleBinding
- ClusterRoleBinding
Metadata resources
- HPA
- PodTemplate (the template a controller uses to create Pods)
- LimitRange
How to create resources
Create with an imperative resource manifest (kubectl create)
- The apiserver only accepts resource definitions in JSON format
- Manifests are written in YAML; the apiserver converts them to JSON automatically before executing them
Create with a declarative resource manifest (kubectl apply)
- A declarative manifest declares the desired state; the declaration can be changed and re-applied at any time, as the comparison below shows
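For comparison, the two approaches look like this (deploy.yml is just a placeholder file name):

```bash
# Imperative: create the objects defined in the manifest; fails if they already exist
kubectl create -f deploy.yml

# Declarative: apply the manifest; edit the file and re-run to apply the new declaration
kubectl apply -f deploy.yml
```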
Creating resource objects from YAML files
| Field | Description |
|---|---|
| apiVersion | API version |
| kind | Resource type |
| metadata | Resource metadata |
| spec | Resource specification |
| spec.replicas | Number of replicas |
| spec.selector | Label selector |
| spec.template | Pod template |
| spec.template.metadata | Pod metadata |
| spec.template.spec | Pod specification |
| spec.template.spec.containers | Container configuration |
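Taken together, these fields map onto a manifest like the following minimal Deployment; the names and image are placeholders:

```yaml
apiVersion: apps/v1       # API version
kind: Deployment          # Resource type
metadata:                 # Resource metadata
  name: demo
spec:                     # Resource specification
  replicas: 2             # Number of replicas
  selector:               # Label selector
    matchLabels:
      app: demo
  template:               # Pod template
    metadata:             # Pod metadata
      labels:
        app: demo
    spec:                 # Pod specification
      containers:         # Container configuration
      - name: demo
        image: nginx:latest
```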
YAML file format description
K8s is a container orchestration engine that uses YAML files to deploy applications, so it helps to understand the YAML syntax first:
- Indentation expresses hierarchy
- Tab indentation is not supported; use spaces, normally 2 spaces per level
- Put one space after characters such as colons and commas
- "---" marks the start of a YAML document; a file can contain several documents
- "#" starts a comment
Restart policy:
Always: always restart the container after it exits. This is the default policy.
OnFailure: restart the container only when it exits abnormally (non-zero exit status code).
Never: never restart the container after it exits.
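The policy is set with the restartPolicy field of the Pod spec; a minimal sketch, where the busybox image and the failing command are only for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo               # hypothetical name
spec:
  restartPolicy: OnFailure         # Always (default) | OnFailure | Never
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "exit 1"]   # non-zero exit, so OnFailure restarts the container
```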
Types of health checks:
livenessProbe (liveness check): if the check fails, the container is killed and handled according to the Pod's restart policy.
readinessProbe (readiness check): if the check fails, Kubernetes removes the Pod from the Service's endpoints.
Supported check methods:
httpGet: sends an HTTP request; a status code of at least 200 and below 400 counts as success.
exec: runs a shell command; an exit status of 0 counts as success.
tcpSocket: success if the TCP connection can be established.
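A sketch of a container spec combining the two probe types and the check methods; the path /healthz, the ports, and the timings are placeholders:

```yaml
containers:
- name: web
  image: nginx
  livenessProbe:            # failure kills the container; restartPolicy decides what happens next
    httpGet:                # success if the HTTP status is >= 200 and < 400
      path: /healthz
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:           # failure removes the Pod from the Service endpoints
    tcpSocket:              # success if the TCP connection can be established
      port: 80
  # exec form: success if the command exits with status 0
  # readinessProbe:
  #   exec:
  #     command: ["cat", "/tmp/ready"]
```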
Init containers
InitContainer: as the name suggests, an init container does initialization work and exits when it is done; think of it as a one-off task.
Most application-container settings are supported, but health checks are not.
Init containers run before the application containers.
Application scenarios:
- Environment check: for example, make sure a service the application container depends on is up before the application container starts (see the sketch below).
- Initialize configuration: for example, prepare a configuration file for the application container.
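A minimal sketch of the environment-check scenario, assuming a Service named mysql that the application depends on (the names and images here are hypothetical):

```yaml
spec:
  initContainers:
  - name: wait-for-db        # runs to completion before the app container starts
    image: busybox
    command: ["sh", "-c", "until nslookup mysql; do sleep 2; done"]
  containers:
  - name: app
    image: nginx
```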
haproxy
```
[root@master haproxy]# cat haproxy.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      labels:
        app: haproxy
    spec:
      initContainers:
      - name: cp
        volumeMounts:
        - name: haproxy-cfg
          mountPath: /tmp/
      restartPolicy: Always        # Restart the container if the health check fails
      containers:
      - name: haproxy
        image: best2001/haproxy:v3.0
        imagePullPolicy: Always
        env:
        - name: RSs
          value: "10.97.0.10 10.97.0.50"
        ports:
        - containerPort: 80
          hostPort: 80
        livenessProbe:             # Check whether port 80 is listening
          tcpSocket:
            port: 80
      volumes:
      - name: haproxy-cfg
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: haproxy
  type: NodePort
```
nginx
```
[root@master haproxy]# cat nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
  labels:
    app: nginx1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      initContainers:
      - name: in
        command:
        - "wget"
        - "-O"
        - "/usr/local/nginx/html"
        - "http://www.baidu.com"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/usr/local/nginx/html"
          name: document-root
      containers:
      - name: nginx1
        image: best2001/nginx:v0.3
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx1
  labels:
    app: nginx1
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx1
  clusterIP: 10.97.0.50
```
httpd
```
[root@master haproxy]# cat apache1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd1
  labels:
    app: httpd1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd1
  template:
    metadata:
      labels:
        app: httpd1
    spec:
      containers:
      - name: httpd1
        image: best2001/httpd
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: httpd1
  labels:
    app: httpd1
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: httpd1
  clusterIP: 10.97.0.10
```
Create the containers
```
# Create the nginx deployment and service
[root@master haproxy]# kubectl create -f nginx.yml
deployment.apps/nginx1 created
service/nginx1 created

# Create the httpd deployment and service
[root@master haproxy]# kubectl create -f apache1.yml
deployment.apps/httpd1 created
service/httpd1 created

# Create the haproxy deployment and service
[root@master haproxy]# kubectl create -f haproxy.yml
deployment.apps/haproxy created
service/haproxy created

# Check the result
[root@master haproxy]# kubectl get pod,svc
NAME                           READY   STATUS    RESTARTS   AGE
pod/haproxy-7565dc6587-h8sdg   1/1     Running   0          15s
pod/httpd1-57c7b6f7cb-sk86h    1/1     Running   0          31s
pod/nginx1-7cf8bc594f-t5btg    1/1     Running   0          45s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/haproxy      NodePort    10.97.0.68   <none>        80:31884/TCP   11s
service/httpd        ClusterIP   10.97.0.10   <none>        80/TCP         47s
service/kubernetes   ClusterIP   10.97.0.1    <none>        443/TCP        50m
service/nginx        ClusterIP   10.97.0.50   <none>        80/TCP         152s
```
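Assuming the best2001/haproxy image uses the RSs variable to configure its backends, as the manifest suggests, the load balancing can be checked from outside the cluster through the NodePort shown above; 192.168.0.10 is a placeholder, replace it with a real node IP:

```bash
# Repeated requests should alternate between the nginx1 and httpd1 backends
curl http://192.168.0.10:31884/
curl http://192.168.0.10:31884/
```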