Author: Yang Taotao
Senior database expert with more than ten years of MySQL experience. He specializes in backup and recovery, SQL tuning, monitoring, operations, and high-availability architecture design for MySQL, PostgreSQL, MongoDB, and other open-source databases. He currently works at aikesheng, providing MySQL technical support and training for major telecom operators and banking and financial enterprises.
Source: original contribution
*Produced by the aikesheng open source community. The original content may not be used without authorization. For reprints, please contact the editor and indicate the source.
The previous chapter introduced how ProxySQL implements features such as read-write splitting and failover on top of MySQL master-slave and group replication architectures, but it did not cover the details of the related ProxySQL configuration tables. This chapter expands on the material omitted there.
Let's first look at the list of ProxySQL's built-in databases:
ytt:admin> show databases;
+-----+---------------+-------------------------------------+
| seq | name          | file                                |
+-----+---------------+-------------------------------------+
| 0   | main          |                                     |
| 2   | disk          | /var/lib/proxysql/proxysql.db       |
| 3   | stats         |                                     |
| 4   | monitor       |                                     |
| 5   | stats_history | /var/lib/proxysql/proxysql_stats.db |
+-----+---------------+-------------------------------------+
5 rows in set (0.00 sec)
In the list above, main is the runtime library (the in-memory configuration); disk is the persistence library; stats is the statistics library; monitor is the monitoring library; and stats_history is the statistics archive library.
Each of these databases contains the configuration tables for MySQL master-slave replication, group replication, and read-write splitting; the different databases represent different operational layers.
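These layers matter because every change follows the same promotion workflow: edit the tables in main, load them to runtime to take effect, and save them to disk to survive a restart. A minimal sketch of that workflow for the server table (the values here are illustrative):

```sql
-- Edit the in-memory (main) configuration first; nothing takes effect yet.
UPDATE mysql_servers SET max_connections = 500 WHERE hostgroup_id = 2;

-- Push the change to the runtime layer so ProxySQL starts using it.
LOAD MYSQL SERVERS TO RUNTIME;

-- Persist it to the disk layer so it survives a restart.
SAVE MYSQL SERVERS TO DISK;
```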
First, the backend host metadata table
mysql_servers: the base table storing metadata about the backend database instances. All subsequent operations read and update this table.
The main fields are as follows:
hostgroup_id, the hostgroup identifier of the backend MySQL instance. Each instance can be given the same or a different value; it is recommended that instances in the same group share one value.
gtid_port, the port the ProxySQL Binlog Reader component listens on. If this component is not used, keep the default.
status, the instance status value.
- ONLINE, the default, means the instance is serving requests normally;
- OFFLINE_SOFT, graceful offline: the instance stops accepting new requests once its current sessions finish;
- OFFLINE_HARD, forced offline: all sessions on the target are closed immediately and no new requests are accepted;
- SHUNNED: the backend instance is temporarily disabled because of too many connection errors, or because its replication lag exceeded the configured threshold.
weight, the instance priority within a group. The higher the weight, the more likely the instance is to be selected. For example, with multiple slave instances, raising one node's weight directs a larger share of traffic to it.
compression, whether to compress traffic on the connection. Off by default; set to 1 to enable compression.
max_connections, the maximum number of connections ProxySQL will open to this instance through its traffic port.
max_replication_lag, the maximum allowed replication lag, in seconds. When a node's lag exceeds this value, its status changes from ONLINE to SHUNNED until the lag falls back below the threshold.
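As a sketch of how these fields fit together (the hosts and ports match this environment, but the weight and lag values are illustrative), two read replicas could be registered so that one receives most of the read traffic:

```sql
-- Register two read replicas in hostgroup 2; the first has ten times the
-- weight, so it receives a proportionally larger share of read traffic.
-- A node lagging more than 30 seconds is temporarily SHUNNED.
INSERT INTO mysql_servers
  (hostgroup_id, hostname, port, weight, max_connections, max_replication_lag)
VALUES (2, '127.0.0.1', 3341, 10, 1000, 30),
       (2, '127.0.0.1', 3342, 1,  1000, 30);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```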
For example, the following runtime_mysql_servers records: since these nodes are not yet running, their status is SHUNNED:
ytt:admin> select hostgroup_id,hostname,port,status,max_connections from runtime_mysql_servers where hostgroup_id in (1,2);
+--------------+-----------+------+---------+-----------------+
| hostgroup_id | hostname  | port | status  | max_connections |
+--------------+-----------+------+---------+-----------------+
| 2            | 127.0.0.1 | 3341 | SHUNNED | 1000            |
| 2            | 127.0.0.1 | 3342 | SHUNNED | 1000            |
| 2            | 127.0.0.1 | 3340 | SHUNNED | 1000            |
+--------------+-----------+------+---------+-----------------+
3 rows in set (0.00 sec)
After I start the three master-slave nodes, their status is automatically updated to ONLINE:
ytt:admin> select hostgroup_id,hostname,port,status,max_connections from runtime_mysql_servers where hostgroup_id in (1,2);
+--------------+-----------+------+--------+-----------------+
| hostgroup_id | hostname  | port | status | max_connections |
+--------------+-----------+------+--------+-----------------+
| 2            | 127.0.0.1 | 3341 | ONLINE | 1000            |
| 1            | 127.0.0.1 | 3341 | ONLINE | 1000            |
| 2            | 127.0.0.1 | 3342 | ONLINE | 1000            |
| 2            | 127.0.0.1 | 3340 | ONLINE | 1000            |
+--------------+-----------+------+--------+-----------------+
4 rows in set (0.00 sec)
Similarly, after starting the group replication instances, the status of the three nodes is as follows:
ytt:admin> select hostgroup_id,hostname,port,status from runtime_mysql_servers where hostgroup_id > 2;
+--------------+-----------+------+--------+
| hostgroup_id | hostname  | port | status |
+--------------+-----------+------+--------+
| 3            | 127.0.0.1 | 3343 | ONLINE |
| 5            | 127.0.0.1 | 3343 | ONLINE |
| 5            | 127.0.0.1 | 3344 | ONLINE |
| 5            | 127.0.0.1 | 3345 | ONLINE |
+--------------+-----------+------+--------+
4 rows in set (0.00 sec)
Second, user metadata table
mysql_users: this table stores the credentials of the traffic users. Its main fields are:
transaction_persistent, whether a transaction is routed as a whole. Set to 1, all statements of a transaction are routed to the user's default hostgroup at transaction granularity; set to 0, each SQL statement inside the transaction is routed individually. Apart from read-only transactions, transactions should generally be treated as a whole to preserve their original semantics.
default_hostgroup, the default hostgroup. SQL that matches no query rule is routed to this hostgroup.
frontend, a frontend user, i.e. one used to connect to the ProxySQL instance.
backend, a backend user, i.e. one ProxySQL uses to connect to the MySQL instances.
Both fields default to 1. Usually you define a user for the backend MySQL instances, and it is automatically mapped to a frontend user on the ProxySQL instance.
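As an illustrative sketch (the password shown is a placeholder), the traffic user queried below could be created with admin statements like these:

```sql
-- Define a traffic user whose unmatched SQL goes to hostgroup 1 and whose
-- transactions stay on one hostgroup (transaction_persistent = 1).
INSERT INTO mysql_users (username, password, active, default_hostgroup, transaction_persistent)
VALUES ('dev_user', 'dev_user', 1, 1, 1);

LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
```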
For example, the following master-slave traffic user is automatically mapped from the mysql_users table into the runtime_mysql_users table: a single user acts as both frontend and backend.
ytt:admin> select username,active,default_hostgroup,frontend,backend from mysql_users where username = 'dev_user';
+----------+--------+-------------------+----------+---------+
| username | active | default_hostgroup | frontend | backend |
+----------+--------+-------------------+----------+---------+
| dev_user | 1      | 1                 | 1        | 1       |
+----------+--------+-------------------+----------+---------+
1 row in set (0.00 sec)

ytt:admin> select username,active,default_hostgroup,frontend,backend from runtime_mysql_users where username = 'dev_user';
+----------+--------+-------------------+----------+---------+
| username | active | default_hostgroup | frontend | backend |
+----------+--------+-------------------+----------+---------+
| dev_user | 1      | 1                 | 0        | 1       |
| dev_user | 1      | 1                 | 1        | 0       |
+----------+--------+-------------------+----------+---------+
2 rows in set (0.00 sec)
Third, master-slave metadata table
mysql_replication_hostgroups: this table configures the hostgroup information for master-slave instances.
ProxySQL routes frontend requests according to the contents of this table and, together with the mysql_servers table, achieves automatic master-slave failover.
writer_hostgroup, the write hostgroup ID. In our example it is set to 1, meaning hostgroup 1 handles write requests.
reader_hostgroup, the read hostgroup ID. In our example it is set to 2, meaning hostgroup 2 handles read requests.
check_type, which MySQL read-only variable to check; one of read_only, innodb_read_only, and super_read_only.
For example, with super_read_only checked, a value of 1 marks the node as a reader, and 0 marks it as a writer.
ytt:admin> select * from mysql_replication_hostgroups;
+------------------+------------------+-----------------+---------------------------------+
| writer_hostgroup | reader_hostgroup | check_type      | comment                         |
+------------------+------------------+-----------------+---------------------------------+
| 1                | 2                | super_read_only | MySQL Replication failover test |
+------------------+------------------+-----------------+---------------------------------+
1 row in set (0.00 sec)
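A row like this could be created with admin statements along these lines (note that replication hostgroup changes are activated with LOAD MYSQL SERVERS TO RUNTIME, not a separate command):

```sql
-- Hostgroup 1 serves writes, hostgroup 2 serves reads; ProxySQL decides
-- which group each node belongs to by polling super_read_only on the backend.
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, check_type, comment)
VALUES (1, 2, 'super_read_only', 'MySQL Replication failover test');

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```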
Fourth, group replication metadata table
mysql_group_replication_hostgroups: this table configures the group replication hostgroup information. Together with the mysql_servers table, it provides transparent fault tolerance for group replication nodes, similar to mysql_replication_hostgroups.
writer_hostgroup, reader_hostgroup, which represent the write and read traffic hostgroups, respectively.
offline_hostgroup, the offline hostgroup. Nodes in an abnormal state are placed in this group.
max_writers, backup_writer_hostgroup, which are used in multi-writer mode. If the number of write instances exceeds max_writers, the surplus ones are placed in the hostgroup backup_writer_hostgroup.
max_transactions_behind, similar to shunning on replication lag in master-slave mode. It sets the number of transactions a node may fall behind; once reached, the node's status is set to SHUNNED, and it returns to normal after the backlog is fully processed.
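Putting these fields together, a single-writer setup like the one used here could be sketched as follows (the offline hostgroup ID and the max_transactions_behind value are illustrative, not taken from this environment):

```sql
-- Hostgroup 3 takes writes, 5 takes reads, 4 holds surplus writer
-- candidates, and 6 collects nodes in an abnormal state. A node more
-- than 100 transactions behind is temporarily SHUNNED.
INSERT INTO mysql_group_replication_hostgroups
  (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup,
   offline_hostgroup, active, max_writers, writer_is_also_reader, max_transactions_behind)
VALUES (3, 4, 5, 6, 1, 1, 0, 100);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```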
The configuration table of the current group replication environment is as follows:
ytt:admin> select writer_hostgroup,backup_writer_hostgroup,reader_hostgroup from mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup |
+------------------+-------------------------+------------------+
| 3                | 4                       | 5                |
+------------------+-------------------------+------------------+
1 row in set (0.00 sec)
Fifth, the read-write splitting metadata table
mysql_query_rules: configures the read-write splitting rules. It is very flexible: you can route traffic through a single port by matching regular expressions, or split it across different ports. (The regular-expression dialect is determined by the mysql-query_processor_regex variable.) The main fields are as follows:
active, whether this rule is active.
username, the traffic user name.
schemaname, the database name.
match_pattern, the regular expression to match.
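For instance, rule 13 used in this environment (route SELECT statements from dev_user on schema ytt to reader hostgroup 2) can be created like this:

```sql
-- SELECTs from dev_user on schema ytt go to reader hostgroup 2;
-- everything else falls through to the user's default hostgroup.
INSERT INTO mysql_query_rules
  (rule_id, active, username, schemaname, match_pattern, destination_hostgroup, apply)
VALUES (13, 1, 'dev_user', 'ytt', '^select', 2, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```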
Besides routing read and write traffic on a single port with regular expressions, as described in the previous chapter, you can also set up multiple ports to distinguish different instance groups. For example, to send master-slave traffic through port 6401 and group replication traffic through port 6402, configure as follows:
First add the ports ProxySQL should listen on to the variable mysql-interfaces, then restart the ProxySQL service:
ytt:admin> SET mysql-interfaces='0.0.0.0:6033;0.0.0.0:6401;0.0.0.0:6402';
Query OK, 1 row affected (0.00 sec)

ytt:admin> SAVE MYSQL VARIABLES TO DISK;
Query OK, 140 rows affected (0.02 sec)
Then insert the two ports into the mysql_query_rules table:
ytt:admin> INSERT INTO mysql_query_rules (rule_id,active,proxy_port,destination_hostgroup,apply)
    -> VALUES (1,1,6401,1,1), (2,1,6402,3,1);
Query OK, 2 rows affected (0.00 sec)

ytt:admin> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

ytt:admin> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.08 sec)
The table now contains, in addition to the regex-based rules, two extra records that route by the specified port (6401 is the master-slave port and 6402 the group replication port):
ytt:admin> select rule_id,active,username,schemaname,match_pattern,destination_hostgroup,proxy_port from runtime_mysql_query_rules;
+---------+--------+--------------+------------+---------------+-----------------------+------------+
| rule_id | active | username     | schemaname | match_pattern | destination_hostgroup | proxy_port |
+---------+--------+--------------+------------+---------------+-----------------------+------------+
| 1       | 1      | NULL         | NULL       | NULL          | 1                     | 6401       |
| 2       | 1      | NULL         | NULL       | NULL          | 3                     | 6402       |
| 13      | 1      | dev_user     | ytt        | ^select       | 2                     | NULL       |
| 14      | 1      | dev_user_mgr | ytt        | ^select       | 4                     | NULL       |
+---------+--------+--------------+------------+---------------+-----------------------+------------+
4 rows in set (0.00 sec)
To verify the routing strategy: connect as user dev_user to port 6401, and as user dev_user_mgr to port 6402.
root@ytt-ubuntu:~# mysql -udev_user -pdev_user -P6401 -h ytt-ubuntu -e "use ytt;select count(*) from t1";
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+
| count(*) |
+----------+
|        5 |
+----------+
root@ytt-ubuntu:~# mysql -udev_user_mgr -pdev_user_mgr -P6402 -h ytt-ubuntu -e "use ytt;select count(*) from t1";
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+
| count(*) |
+----------+
|        1 |
+----------+
On the ProxySQL admin side, check the query digest table: the different users and ports above were routed to the expected hostgroups.
ytt:admin> select hostgroup,schemaname,username,digest_text,count_star from stats_mysql_query_digest where schemaname = 'ytt';
+-----------+------------+--------------+-------------------------+------------+
| hostgroup | schemaname | username     | digest_text             | count_star |
+-----------+------------+--------------+-------------------------+------------+
| 3         | ytt        | dev_user_mgr | select count(*) from t1 | 1          |
| 1         | ytt        | dev_user     | select count(*) from t1 | 1          |
+-----------+------------+--------------+-------------------------+------------+
2 rows in set (0.00 sec)
By this point, you should have a solid understanding of how ProxySQL fits together with MySQL HA architectures.