Restoring MySQL cloud database physical backups locally

In China, whether you use the managed MySQL offering of Alibaba Cloud, Tencent Cloud, or Huawei Cloud, sooner or later you will need the Percona XtraBackup tool to back up and restore your data.

Most of the backup-and-recovery guides floating around the Internet are verbose and outdated, which I find tiresome. Having just helped a friend with yet another data migration, I took the opportunity to tidy up my practical notes and share them, in the hope that they help anyone with the same need.

Before we begin

Cloud platform vendors in China are few and the mature solutions are fairly fixed, so it is no surprise that the backup and recovery strategies, and even the documentation, of the "big three" look very much alike:

  • Alibaba Cloud: Restoring RDS MySQL physical backup files to a self-built database
  • Tencent Cloud: MySQL cloud database - restoring a database using a physical backup
  • Huawei Cloud: Restoring to a self-built database (MySQL) from backup files

In this article, the data recovery will be done with container tools, avoiding unnecessary software dependencies on the host.

Writing a database recovery instance configuration file

In the container era, unless you do operations work in a container environment, you do not have to worry much about system configuration: we can directly use the image officially provided by Percona. I use MySQL 5.7 as the example; adjust the version number to your own needs.

# https://hub.docker.com/r/percona/percona-xtradb-cluster/

version: "3"

services:
  percona:
    image: percona/percona-xtradb-cluster:5.7
    container_name: percona
    restart: always
    # Declare exposed ports according to your needs
    # ports:
    #   - 3306:3306
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=1
    volumes:
      - ./node.cnf:/etc/mysql/node.cnf
      - ./data:/var/lib/mysql:rw
      - ./restore:/var/lib/mysql-files:rw

In the configuration above I declare two directories for data: the restore directory stores the cloud database backup, and the data directory temporarily holds the restored database files. Save the content above as docker-compose.yml, to be used later.
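Since the compose file bind-mounts ./data and ./restore from the host, it helps to create them up front. A minimal sketch (the parent directory name percona-restore is my own choice):

```shell
# Create the two host directories the compose file bind-mounts:
# ./restore will hold the cloud backup, ./data the restored datadir.
mkdir -p percona-restore/data percona-restore/restore
ls percona-restore
```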

Next, write a database configuration file that can be used for restore:

[mysqld]

skip-grant-tables

ignore-db-dir=lost+found
datadir=/var/lib/mysql
socket=/tmp/mysql.sock
skip-host-cache

#coredumper
#server_id=0
binlog_format=ROW
default_storage_engine=InnoDB

innodb_flush_log_at_trx_commit  = 0
innodb_flush_method             = O_DIRECT
innodb_file_per_table           = 1
innodb_autoinc_lock_mode=2

bind_address = 0.0.0.0

wsrep_slave_threads=2
wsrep_cluster_address=gcomm://
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so

wsrep_cluster_name=noname
wsrep_node_address=172.20.12.2
wsrep_node_incoming_address=0cdb19fc56e4:3306

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth='xtrabackup:xtrabackup'

[client]
socket=/tmp/mysql.sock

[sst]
progress=/var/lib/mysql/sst_in_progress

Save the above configuration as node.cnf in the same directory as the earlier docker-compose.yml, then use the familiar docker-compose up -d to start the database instance for the data restore.

...
2021-10-12T06:08:37.329788Z 0 [Note] Server socket created on IP: '0.0.0.0'.
2021-10-12T06:08:37.385234Z 0 [Note] InnoDB: Buffer pool(s) load completed at 211012  6:08:37
2021-10-12T06:08:37.665867Z 0 [Note] mysqld: ready for connections.
Version: '5.7.33-36-57'  socket: '/tmp/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release rel36, Revision a1ed9c3, WSREP version 31.49, wsrep_31.49
2021-10-12T06:08:37.666282Z 2 [Note] WSREP: Initialized wsrep sidno 2
...

Use docker-compose logs -f to watch the logs. After a short while you should see output like the above, including "ready for connections"; at that point you can start the data recovery.
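Rather than eyeballing the log output, you can poll for the readiness line. A small helper sketch (the function name wait_for_log is my own; it is demonstrated here with echo, but with the tutorial's container you would pass it `docker-compose logs percona`):

```shell
# Poll a command's output until a pattern shows up, or give up after N tries.
# Real use in this tutorial would be:
#   wait_for_log "docker-compose logs percona" "ready for connections"
wait_for_log() {
  local cmd="$1" pattern="$2" tries="${3:-30}"
  local i
  for i in $(seq "$tries"); do
    if eval "$cmd" 2>&1 | grep -q "$pattern"; then
      echo "ready"
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for: $pattern" >&2
  return 1
}

# Self-contained demo: the "log source" is just an echo.
wait_for_log "echo 'mysqld: ready for connections.'" "ready for connections"
```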

Data recovery

Copy the data you need to restore into the local restore directory (which maps to /var/lib/mysql-files/ inside the container). You can also use the docker cp command to copy it straight into the container, although for large files that is not a friendly experience.

Data "decompression"

After preparing the data backup file, we enter the container for subsequent operations:

docker exec -it percona bash

After entering the container, first switch to the working directory:

cd /var/lib/mysql-files/

Assuming the backup is a tar archive, we need to unpack it first. If it is in another format, such as the xb format, the container has the qpress and xbstream tools built in; consult the documentation provided by your cloud platform and use them directly.

tar xvf *.tar
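One note on the extraction: GNU tar auto-detects gzip when reading an archive, so a plain `tar xvf` covers both .tar and .tar.gz backups without an explicit -z. A throwaway demo of that behaviour (all file names here are made up; your real backup name comes from the cloud console):

```shell
# Build a dummy gzip-compressed archive, then extract it without -z:
mkdir -p demo_src && echo "placeholder" > demo_src/ibdata1
tar czf demo_backup.tar.gz demo_src
mkdir -p extracted
tar xvf demo_backup.tar.gz -C extracted   # GNU tar detects the gzip layer itself
ls extracted/demo_src
```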

After decompressing the backup files, we can officially start the data recovery operation.

innobackupex --defaults-file=/etc/mysql/node.cnf --apply-log /var/lib/mysql-files/

Data recovery time depends on the size of your backup file.

InnoDB: 5.7.32 started; log sequence number 3006781461
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 3006781480
211013 07:57:02 completed OK!

When you see the log output above, the data files of the normal MySQL instance are ready.

However, in order to export complete data, we need to do some additional operations.

Export data file

In the process above, to keep the database instance running stably, we did not restore the data directly into the /var/lib/mysql directory; instead we unpacked it into the mysql-files directory.

To export the data correctly, the database instance must be able to read the recovered data, so we completely overwrite the instance's data with the unpacked files:

cp -r /var/lib/mysql-files/* /var/lib/mysql/
rm -rf /var/lib/mysql-files/*

After these commands finish, exit the container and run docker-compose down && docker-compose up -d to delete the previous container and recreate a clean new one to continue the data recovery. Then enter the container again:

docker exec -it percona bash

Use the default user name to enter the MySQL interactive terminal:

mysql -u xtrabackup

Try to list the databases that can be read at present:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| YOUR_DATABASE      |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

You will find that the MySQL database in the cloud has been correctly restored locally.

However, if you try to export the data directly with mysqldump, you may receive a PXC error:

mysqldump: Got error: 1105: Percona-XtraDB-Cluster prohibits use of LOCK TABLE/FLUSH TABLE <table> WITH READ LOCK/FOR EXPORT with pxc_strict_mode = ENFORCING when using LOCK TABLES

To solve this problem, we need to change a global setting in the MySQL interactive terminal:

mysql> set global pxc_strict_mode=DISABLED;
Query OK, 0 rows affected (0.00 sec)

Exporting the database then works without further problems:

mysqldump -u xtrabackup YOUR_DATABASE > backup.sql
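For large databases it can help to compress the dump on the fly. The pipeline below is a sketch; since no MySQL server runs in this demo, an echo stands in for the mysqldump output:

```shell
# In practice:  mysqldump -u xtrabackup YOUR_DATABASE | gzip > backup.sql.gz
# Demo of the same gzip round-trip with a stand-in line:
echo "CREATE TABLE demo (id INT);" | gzip > backup_demo.sql.gz
gunzip -c backup_demo.sql.gz
```

Restoring from a compressed dump is then a matter of `gunzip -c backup.sql.gz | mysql -u USER -p DATABASE_NAME`.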

Because we export standard database backups, it is also easy to continue the migration, such as:

mysql -u USER -p DATABASE_NAME < backup.sql

Or load the file to quickly restore and rebuild the database.

Finally

For engineers, laziness is a virtue, but the prerequisite of laziness is being able to locate and solve problems correctly and simply. Let us encourage each other.

-- EOF

Added by silverphpd on Tue, 25 Jan 2022 12:33:29 +0200