3. RabbitMQ cluster construction
Abstract: in practical production applications, a message queue is deployed as a cluster. If RabbitMQ is selected, you need to understand the principles of its clustering scheme.
Generally speaking, if you only want to learn RabbitMQ or verify the correctness of business code, a single-instance deployment in a local or test environment is sufficient. However, considering the reliability, concurrency, throughput, and message-accumulation capacity required of MQ middleware, a RabbitMQ cluster is generally used in production environments.
3.1 Principle of the cluster scheme
RabbitMQ, a message queue middleware product, is written in Erlang. Erlang is natively distributed (nodes authenticate to each other by sharing an Erlang magic cookie), so RabbitMQ naturally supports clustering. This means RabbitMQ does not need to implement an HA scheme or store cluster metadata through ZooKeeper, as ActiveMQ and Kafka do. Clustering is a way to ensure reliability, and at the same time it can increase message throughput through horizontal scaling.
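To make the cookie mechanism concrete: every node in a cluster must hold the same Erlang cookie value. In the single-machine setup below this happens automatically, because all instances run as the same user; in a multi-machine cluster you have to align the cookies yourself. A minimal sketch, assuming the typical Linux install path and a placeholder hostname node2:

```
# On typical Linux installs the cookie lives at /var/lib/rabbitmq/.erlang.cookie
# (or ~/.erlang.cookie of the user running the node); every cluster member must
# hold the same value, and the file must be readable only by its owner.
cat /var/lib/rabbitmq/.erlang.cookie

# Hypothetical: copy the cookie from this node to node2 before clustering
scp /var/lib/rabbitmq/.erlang.cookie root@node2:/var/lib/rabbitmq/.erlang.cookie
ssh root@node2 "chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie"
```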
3.2 Single-machine multi-instance deployment
Due to various factors, sometimes you have to build a RabbitMQ cluster on a single machine, somewhat like running a standalone ZooKeeper ensemble. A real production environment should be configured as a multi-machine cluster; for how to set one up, refer to other materials. Here we mainly discuss how to configure multiple RabbitMQ instances on one machine.
Main reference, the official documentation: https://www.rabbitmq.com/clustering.html
First, make sure RabbitMQ is running properly:
```
[root@super ~]# rabbitmqctl status
Status of node rabbit@super ...
[{pid,10232},
 {running_applications,
     [{rabbitmq_management,"RabbitMQ Management Console","3.6.5"},
      {rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.6.5"},
      {webmachine,"webmachine","1.10.3"},
      {mochiweb,"MochiMedia Web Server","2.13.1"},
      {rabbitmq_management_agent,"RabbitMQ Management Agent","3.6.5"},
      {rabbit,"RabbitMQ","3.6.5"},
      {os_mon,"CPO CXC 138 46","2.4"},
      {syntax_tools,"Syntax tools","1.7"},
      {inets,"INETS CXC 138 49","6.2"},
      {amqp_client,"RabbitMQ AMQP Client","3.6.5"},
      {rabbit_common,[],"3.6.5"},
      {ssl,"Erlang/OTP SSL application","7.3"},
      {public_key,"Public key infrastructure","1.1.1"},
      {asn1,"The Erlang ASN1 compiler version 4.0.2","4.0.2"},
      {ranch,"Socket acceptor pool for TCP protocols.","1.2.1"},
      {mnesia,"MNESIA CXC 138 12","4.13.3"},
      {compiler,"ERTS CXC 138 10","6.0.3"},
      {crypto,"CRYPTO","3.6.3"},
      {xmerl,"XML parser","1.3.10"},
      {sasl,"SASL CXC 138 11","2.7"},
      {stdlib,"ERTS CXC 138 10","2.8"},
      {kernel,"ERTS CXC 138 10","4.2"}]},
 {os,{unix,linux}},
 {erlang_version,
     "Erlang/OTP 18 [erts-7.3] [source] [64-bit] [async-threads:64] [hipe] [kernel-poll:true]\n"},
 {memory,
     [{total,56066752},
      {connection_readers,0},
      {connection_writers,0},
      {connection_channels,0},
      {connection_other,2680},
      {queue_procs,268248},
      {queue_slave_procs,0},
      {plugins,1131936},
      {other_proc,18144280},
      {mnesia,125304},
      {mgmt_db,921312},
      {msg_index,69440},
      {other_ets,1413664},
      {binary,755736},
      {code,27824046},
      {atom,1000601},
      {other_system,4409505}]},
 {alarms,[]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,411294105},
 {disk_free_limit,50000000},
 {disk_free,13270233088},
 {file_descriptors,
     [{total_limit,924},{total_used,6},{sockets_limit,829},{sockets_used,0}]},
 {processes,[{limit,1048576},{used,262}]},
 {run_queue,0},
 {uptime,43651},
 {kernel,{net_ticktime,60}}]
```
Stop the RabbitMQ service:
```
[root@super sbin]# service rabbitmq-server stop
Stopping rabbitmq-server: rabbitmq-server.
```
Start the first node:
```
[root@super sbin]# RABBITMQ_NODE_PORT=5673 RABBITMQ_NODENAME=rabbit1 rabbitmq-server start

              RabbitMQ 3.6.5. Copyright (C) 2007-2016 Pivotal Software, Inc.
  ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
  ##  ##
  ##########  Logs: /var/log/rabbitmq/rabbit1.log
  ######  ##        /var/log/rabbitmq/rabbit1-sasl.log
  ##########
              Starting broker... completed with 6 plugins.
```
Start the second node:
The first node already occupies the web management plugin's port, so for the second node you must also specify a different port for its web plugin.
```
[root@super ~]# RABBITMQ_NODE_PORT=5674 RABBITMQ_SERVER_START_ARGS="-rabbitmq_management listener [{port,15674}]" RABBITMQ_NODENAME=rabbit2 rabbitmq-server start

              RabbitMQ 3.6.5. Copyright (C) 2007-2016 Pivotal Software, Inc.
  ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
  ##  ##
  ##########  Logs: /var/log/rabbitmq/rabbit2.log
  ######  ##        /var/log/rabbitmq/rabbit2-sasl.log
  ##########
              Starting broker... completed with 6 plugins.
```
To stop the two nodes:
```
rabbitmqctl -n rabbit1 stop
rabbitmqctl -n rabbit2 stop
```
Operate rabbit1 as the master node:
```
[root@super ~]# rabbitmqctl -n rabbit1 stop_app
Stopping node rabbit1@super ...
[root@super ~]# rabbitmqctl -n rabbit1 reset
Resetting node rabbit1@super ...
[root@super ~]# rabbitmqctl -n rabbit1 start_app
Starting node rabbit1@super ...
[root@super ~]#
```
Operate rabbit2 as a slave node:
```
[root@super ~]# rabbitmqctl -n rabbit2 stop_app
Stopping node rabbit2@super ...
[root@super ~]# rabbitmqctl -n rabbit2 reset
Resetting node rabbit2@super ...
[root@super ~]# rabbitmqctl -n rabbit2 join_cluster rabbit1@'super'   ### replace the hostname in quotes with your own
Clustering node rabbit2@super with rabbit1@super ...
[root@super ~]# rabbitmqctl -n rabbit2 start_app
Starting node rabbit2@super ...
```
View cluster status:
```
[root@super ~]# rabbitmqctl cluster_status -n rabbit1
Cluster status of node rabbit1@super ...
[{nodes,[{disc,[rabbit1@super,rabbit2@super]}]},
 {running_nodes,[rabbit2@super,rabbit1@super]},
 {cluster_name,<<"rabbit1@super">>},
 {partitions,[]},
 {alarms,[{rabbit2@super,[]},{rabbit1@super,[]}]}]
```
Web monitoring: both nodes now appear in the web management console (screenshot omitted).
3.3 Cluster management
rabbitmqctl join_cluster {cluster_node} [--ram]
Joins the node to the specified cluster. Before executing this command, the RabbitMQ application must be stopped on the node and the node must be reset.
rabbitmqctl cluster_status
Displays the status of the cluster.
rabbitmqctl change_cluster_node_type {disc|ram}
Modifies the type of a cluster node (disc or RAM). The RabbitMQ application must be stopped before executing this command; an example is given after this command list.
rabbitmqctl forget_cluster_node [--offline]
Removes the node from the cluster; with --offline, the command can be executed even while the executing node is down.
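For instance, rabbit2 could be removed from the cluster built above like this (a sketch; the node being forgotten must already be stopped):

```
# Run from the remaining node: drop rabbit2@super from the cluster
rabbitmqctl -n rabbit1 forget_cluster_node rabbit2@super
```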
rabbitmqctl update_cluster_nodes {clusternode}
Before starting the node's application, consults clusternode for the latest cluster information and updates the node's own view of the cluster accordingly. Unlike join_cluster, this does not join a cluster. Consider this situation: nodes A and B are both in a cluster. While A is offline, node C clusters with node B, and then node B leaves the cluster. When A wakes up, it will try to contact node B, but this fails because B is no longer in the cluster; update_cluster_nodes lets A obtain the current membership from C instead.
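In that scenario, node A would run something like the following before restarting its application (a minimal sketch; rabbit@A and rabbit@C are placeholder node names):

```
# A's application is still stopped; fetch the current membership from C
rabbitmqctl -n rabbit@A update_cluster_nodes rabbit@C
rabbitmqctl -n rabbit@A start_app
```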
rabbitmqctl cancel_sync_queue [-p vhost] {queue}
Cancels an in-progress synchronization of a mirrored queue.
rabbitmqctl set_cluster_name {name}
Sets the cluster name, which is announced to clients when they connect. The Federation and Shovel plugins also make use of the cluster name. The cluster name defaults to the name of the first node in the cluster; this command lets you reset it.
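As an example of change_cluster_node_type (a sketch against the two-node cluster built in 3.2):

```
# Stop the application first, switch rabbit2 to a RAM node, then restart and verify
rabbitmqctl -n rabbit2 stop_app
rabbitmqctl -n rabbit2 change_cluster_node_type ram
rabbitmqctl -n rabbit2 start_app
rabbitmqctl -n rabbit1 cluster_status
```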
3.4 RabbitMQ mirrored queue cluster configuration
The steps above complete RabbitMQ's default cluster mode, which does not guarantee the high availability of queues. Although exchanges and bindings are replicated to every node in the cluster, queue contents are not. This mode relieves the pressure on individual nodes, but when the node hosting a queue goes down, the queue cannot be used and must wait for that node to restart. Therefore, for applications to keep working when a queue's node goes down or fails, the queue contents have to be replicated to the other nodes of the cluster, i.e. mirrored queues must be created.
Mirrored queues are built on top of the normal cluster mode by adding policies, so you still have to configure a normal cluster first and then set up the mirrored queues. We continue with the cluster above.
Mirrored queues can be configured through Admin -> Policies in the web management console, or by command:
```
rabbitmqctl set_policy my_ha "^" '{"ha-mode":"all"}'
```
- Name: the policy name
- Pattern: the matching rule; `^` matches all queues
- Definition: the mirroring definition; `ha-mode: all` mirrors every matched queue to all nodes. The question mark links to the help documentation.
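Policies can also be more selective than the catch-all above. A hedged sketch (the `two.` name prefix and the parameter values are illustrative, not from the original text):

```
# Mirror only queues whose names start with "two." to exactly 2 nodes,
# and let newly added mirrors synchronize automatically
rabbitmqctl set_policy ha-two "^two\." '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'

# List the policies currently in effect
rabbitmqctl list_policies
```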
3.5 Load balancing - HAProxy
HAProxy provides high availability, load balancing, and proxying for TCP- and HTTP-based applications, and supports virtual hosts. It is a free, fast, and reliable solution used by many well-known Internet companies, including Twitter, Reddit, StackOverflow, and GitHub. HAProxy implements an event-driven, single-process model that supports a very large number of concurrent connections.
3.5.1 Installing HAProxy
```
# Install build dependencies
yum install gcc vim wget

# Upload the haproxy source package, then unpack it
tar -zxvf haproxy-1.6.5.tar.gz -C /usr/local

# Enter the directory, compile and install
cd /usr/local/haproxy-1.6.5
make TARGET=linux31 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy
mkdir /etc/haproxy

# Create an unprivileged group and user for haproxy
groupadd -r -g 149 haproxy
useradd -g haproxy -r -s /sbin/nologin -u 149 haproxy

# Create the haproxy configuration file
vim /etc/haproxy/haproxy.cfg
```
3.5.2 Configuring HAProxy
Configuration file path: /etc/haproxy/haproxy.cfg
```
#logging options
global
	log 127.0.0.1 local0 info
	maxconn 5120
	chroot /usr/local/haproxy
	uid 99
	gid 99
	daemon
	quiet
	nbproc 20
	pidfile /var/run/haproxy.pid

defaults
	log global
	mode tcp
	option tcplog
	option dontlognull
	retries 3
	option redispatch
	maxconn 2000
	contimeout 5s
	clitimeout 60s
	srvtimeout 15s

#front-end IP for consumers and producers
listen rabbitmq_cluster
	bind 0.0.0.0:5672
	mode tcp
	#balance url_param userid
	#balance url_param session_id check_post 64
	#balance hdr(User-Agent)
	#balance hdr(host)
	#balance hdr(Host) use_domain_only
	#balance rdp-cookie
	#balance leastconn
	#balance source  # by source IP
	balance roundrobin
	server node1 127.0.0.1:5673 check inter 5000 rise 2 fall 2
	server node2 127.0.0.1:5674 check inter 5000 rise 2 fall 2

listen stats
	bind 172.16.98.133:8100
	mode http
	option httplog
	stats enable
	stats uri /rabbitmq-stats
	stats refresh 5s
```
Start the HAProxy load balancer:
```
/usr/local/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg

# Check the haproxy process status
ps -ef | grep haproxy
```

Visit the following address to monitor the MQ nodes: http://172.16.98.133:8100/rabbitmq-stats
When application code connects to the MQ cluster, it should use the HAProxy address and port 5672 rather than an individual node address.
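A quick way to confirm the front end is reachable (a hedged sketch; 172.16.98.133 is the HAProxy host from the configuration above, and nc/curl are assumed to be installed):

```
# Probe the proxied AMQP port that clients connect to
nc -zv 172.16.98.133 5672

# The stats page should list both backend servers (node1 and node2)
curl -s http://172.16.98.133:8100/rabbitmq-stats | grep -o 'node[12]'
```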