Cluster deployment of Consul

1, Understanding Consul

1. Service registration and discovery

Service registration and discovery is an indispensable component of a microservice architecture. Initially, services ran as single nodes, which offered no high availability and took no account of load; calls between services were simple, direct interface calls. When distributed, multi-node architectures appeared, the first solution was to put a load balancer in front of the services. With this approach, the front end must know the network location of every back-end service and keep them in its configuration file, which raises a few problems:
● if the front end needs to call back-end services A through N, it must configure the network locations of all N services, which is tedious;
● whenever a back-end service's network location changes, every caller's configuration must be updated.

Service registration and discovery solves these problems. Back-end services A through N register their current network locations with the service discovery module, which records them as key/value pairs: the key is usually the service name, and the value is IP:PORT. The service discovery module performs periodic health checks, polling these back-end services to verify they are reachable. When the front end wants to call back-end services A through N, it asks the service discovery module for their network locations and then calls them. This solves the problems above: the front end no longer needs to record the back ends' network locations, and the front end and back end are fully decoupled!
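As a loose analogy (plain bash, not Consul itself), the lookup table at the heart of this idea can be sketched like so; the service name and address are illustrative:

```shell
# Minimal sketch of the key/value idea: the key is the service name,
# the value is IP:PORT. (Bash stand-in for the service discovery module.)
declare -A registry
registry[nginx]="192.168.116.80:80"    # a back-end service registers itself
echo "${registry[nginx]}"              # a caller asks for its network location
```

In Consul, registration, the health checks, and the lookup all happen over its HTTP or DNS interfaces rather than an in-process table.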

2. What is Consul

Consul is an open-source service management tool developed by HashiCorp in the Go language.
It supports multiple data centers, distributed high availability, service discovery, and configuration sharing, and uses the Raft algorithm to guarantee high availability of the service.
It has built-in service registration and discovery, a distributed consensus protocol implementation, health checks, Key/Value storage, and a multi-data-center scheme, with no need to depend on other tools (such as ZooKeeper).

Deployment is simple: there is only a single runnable binary. Every node runs an agent, which has two modes: server and client. The official recommendation is 3 or 5 server nodes per data center, to guarantee data safety and ensure that server leader election can proceed correctly.

• in client mode, all services registered with the current node are forwarded to a server node; the information is not persisted
• in server mode, the function is similar to client mode, except that all information is persisted locally, so the information survives a failure
• the server leader is the head of all server nodes; unlike the other server nodes, it is responsible for synchronizing registered information to them and monitoring the health of each node
Some key features of Consul:
① Service registration and discovery: Consul makes service registration and discovery easy via DNS or HTTP interfaces. External services, such as those provided by SaaS, can also be registered
② Health checks: health checking lets Consul quickly alert on problems in the cluster. Integrated with service discovery, it prevents traffic from being forwarded to failed services
③ Key/Value storage: a store for dynamic configuration. It provides a simple HTTP interface and can be operated from anywhere
④ Multiple data centers: any number of regions can be supported without complex configuration
Consul is used here for service registration: a container registers information about itself in Consul, and other programs can then obtain the registered service information through Consul. That is service registration and discovery.
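To make "registering" concrete, here is a sketch of what a registration looks like on the wire; the service name, address, port, and check URL below are illustrative assumptions, not values from this deployment. A container (or a helper such as Registrator) PUTs a JSON service definition to the agent's HTTP API:

```shell
# Illustrative service definition (name/address/port are assumptions)
cat > /tmp/nginx-service.json <<'EOF'
{
  "Name": "nginx",
  "Address": "192.168.116.80",
  "Port": 80,
  "Check": { "HTTP": "http://192.168.116.80:80/", "Interval": "10s" }
}
EOF
# Register it with a running Consul agent (requires the agent set up below):
# curl -X PUT --data @/tmp/nginx-service.json http://127.0.0.1:8500/v1/agent/service/register
grep '"Name": "nginx"' /tmp/nginx-service.json
```

Once registered, the service shows up in the catalog queries demonstrated later in this article.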

2, Deployment process of the Consul cluster

Deployment host environment preparation

Consumer server: 192.168 one hundred and sixteen point nine zero
Run the consum1 service, nginx service, and the consum - template daemon
Registrar server: 192.168 one hundred and sixteen point eight zero
Run the Registrar container, run the nginx container

systemctl stop firewalld.service
setenforce 0

================= Consul server ===================

1. Create the Consul service
mkdir /opt/consul
cp consul_0.9.2_linux_amd64.zip /opt/consul
cd /opt/consul
unzip consul_0.9.2_linux_amd64.zip
mv consul /usr/local/bin/

//Set up the agent and start the Consul server in the background
consul agent \
-server \
-bootstrap \
-ui \
-data-dir=/var/lib/consul-data \
-bind=192.168.116.90 \
-client=0.0.0.0 \
-node=consul-server01 &> /var/log/consul.log &
=======================================================
-server : start as a server. The default is client.
-bootstrap : controls whether a server is in bootstrap mode. Only one server per data center may be in bootstrap mode; a server in bootstrap mode can elect itself server-leader.
-bootstrap-expect=2 : the minimum number of servers the cluster requires. Below this number, the cluster fails.
-ui : enable the web UI, so that the web UI provided by Consul can be reached at an address like http://localhost:8500/ui.
-data-dir : specify the data storage directory.
-bind : specify the communication address within the cluster; all nodes in the cluster must be able to reach this address. The default is 0.0.0.0.
-client : specify the client address Consul binds to. This address serves HTTP, DNS, RPC, and other services. The default is 127.0.0.1.
-node : the name of the node in the cluster; it must be unique within a cluster. The default is the node's hostname.
-datacenter : specify the data center name. The default is dc1.
========================================================

netstat -natp | grep consul

After startup, consul listens on five ports by default:
8300: replication, leader forwarding
8301: LAN gossip
8302: WAN gossip
8500: web UI interface
8600: DNS port for looking up node information

2. View cluster information
#View member status
consul members
Node               Address              Status  Type    Build  Protocol  DC
consul-server01    192.168.116.90:8301  alive   server  0.9.2  2         dc1

#View cluster status
consul operator raft list-peers

consul info | grep leader
     leader = true
  leader_addr = 192.168.116.90:8300
  
3. Get cluster information via the HTTP API
curl 127.0.0.1:8500/v1/status/peers
#View cluster server members
curl 127.0.0.1:8500/v1/status/leader
#Cluster server leader
curl 127.0.0.1:8500/v1/catalog/services
#All registered services
curl 127.0.0.1:8500/v1/catalog/service/nginx
#View nginx service information
curl 127.0.0.1:8500/v1/catalog/nodes
#Cluster node details
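Besides the HTTP API, registered services can also be looked up through Consul's DNS interface on port 8600. The dig query below assumes a service named nginx has been registered and an agent is running locally, so it is shown commented out; the runnable lines only build the name such a query resolves:

```shell
# Query Consul's DNS interface for a registered service (requires a running agent):
# dig @127.0.0.1 -p 8600 nginx.service.consul A
svc=nginx
echo "${svc}.service.consul"   # the name the dig query above would resolve
```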

Keywords: Docker

Added by buddymoore on Wed, 29 Dec 2021 04:39:10 +0200