Preface
redis-migrate-tool is an open source Redis data migration tool from vipshop. It is based on Redis replication and is fast and stable. GitHub address: https://github.com/vipshop/redis-migrate-tool
- Fast.
- Multithreaded.
- Based on Redis replication.
- Live migration: during the migration, the source cluster keeps serving external requests without interruption.
- Heterogeneous migration.
- Supports twemproxy clusters, redis cluster clusters, RDB files and AOF files.
- Key filtering.
- When the target is a twemproxy cluster, data skips twemproxy and is imported directly into the backend Redis instances.
- Migration status display.
- Thorough data sampling check (-C redis_check).
Install redis-migrate-tool
1. Dependencies
$ yum -y install automake libtool autoconf bzip2 git
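If you have not fetched the source yet, clone the repository first (URL from the preface); the directory name in the build step assumes the default clone path:

$ git clone https://github.com/vipshop/redis-migrate-tool.git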
2. Build
$ cd redis-migrate-tool
$ autoreconf -fvi
$ ./configure
$ make
$ src/redis-migrate-tool -h
Warning:
Before running the tool, make sure the machine hosting the source Redis has enough memory for at least one Redis instance to generate an RDB file. If the source machine has enough memory for all of its Redis instances to generate RDB files at once, you can set source_safe: false in the rmt.conf configuration file.
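A minimal sketch of the relevant setting, assuming every source machine has enough memory for all of its Redis instances to dump RDB files at the same time:

[common]
source_safe: false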
The following commands are not propagated to the target redis group, because the keys they operate on may map to different target redis nodes:
RENAME,RENAMENX,RPOPLPUSH,BRPOPLPUSH,FLUSHALL,FLUSHDB,BITOP,MOVE,GEORADIUS,GEORADIUSBYMEMBER,EVAL,EVALSHA,SCRIPT,PFMERGE
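As a rough illustration of the underlying constraint when the target is a redis cluster (the key names are made up, and this assumes the two keys hash to different slots), such a multi-key command is rejected by the cluster itself:

127.0.0.1:7379> RPOPLPUSH list:a list:b
(error) CROSSSLOT Keys in request don't hash to the same slot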
Detailed explanation of redis-migrate-tool commands
If the following help output appears, the installation succeeded:
This is redis-migrate-tool-0.1.0

Usage: redis-migrate-tool [-?hVdIn] [-v verbosity level] [-o output file]
                  [-c conf file] [-C command]
                  [-f source address] [-t target address]
                  [-p pid file] [-m mbuf size] [-r target role]
                  [-T thread number] [-b buffer size]

Options:
  -h, --help             : this help
  -V, --version          : show version and exit
  -d, --daemonize        : run as a daemon
  -I, --information      : print some useful information
  -n, --noreply          : don't receive the target redis reply
  -v, --verbosity=N      : set logging level (default: 5, min: 0, max: 11)
  -o, --output=S         : set logging file (default: stderr)
  -c, --conf-file=S      : set configuration file (default: rmt.conf)
  -p, --pid-file=S       : set pid file (default: off)
  -m, --mbuf-size=N      : set mbuf size (default: 512)
  -C, --command=S        : set command to execute (default: redis_migrate)
  -r, --source-role=S    : set the source role (default: single, you can input: single, twemproxy or redis_cluster)
  -R, --target-role=S    : set the target role (default: single, you can input: single, twemproxy or redis_cluster)
  -T, --thread=N         : set how many threads to run the job (default: 4)
  -b, --buffer=S         : set buffer size to run the job (default: 140720309534720 byte, unit:G/M/K)
  -f, --from=S           : set source redis address (default: 127.0.0.1:6379)
  -t, --to=S             : set target redis group address (default: 127.0.0.1:6380)
  -s, --step=N           : set step (default: 1)

Commands:
  redis_migrate          : Migrate data from source group to target group.
  redis_check            : Compare data between source group and target group. Default compare 1000 keys. You can set a key count behind.
  redis_testinsert       : Just for test! Insert some string, list, set, zset and hash keys into the source redis group. Default 1000 keys. You can set key type and key count behind.
Explanation of selected options:
- -h, --help: show this help
- -V, --version: show the version
- -d, --daemonize: run as a background daemon
- -I, --information: print useful information, including the supported commands (126), unsupported commands (14), etc.
- -v, --verbosity=N: set the log level (default: 5, min: 0, max: 11)
- -o, --output=S: set the output log file
- -c, --conf-file=S: set the configuration file (default: rmt.conf)
- -C, --command=S: set the command to run (default: redis_migrate, which performs the migration). redis_check compares the source and the target, sampling 1000 keys by default. redis_testinsert inserts 1000 test keys in total by default.
- -T, --thread=N: set how many threads the tool uses (default: 4)
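For example, a hedged combination of the options above (the thread count of 8 is only an illustration; pick a value that suits your machine):

$ src/redis-migrate-tool -c rmt.conf -o log -d -T 8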
1. Run the migration
$ src/redis-migrate-tool -c rmt.conf -o log -d
Note: -d runs the tool in the background. If you need to run the tool again, you may first have to kill the process still occupying the listen port: use netstat -tnulp to find the PID of the redis-migrate-tool process and run kill -9 [pid] before starting it again.
-o specifies the output log file (named log here); you can inspect it with, for example, tail -200 log.
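A sketch of that cleanup, assuming the tool listens on the default port 8888 set under [common] (the PID shown by netstat will differ on your machine):

$ netstat -tnulp | grep 8888    # find the PID of the redis-migrate-tool process
$ kill -9 [pid]                 # replace [pid] with the process id from the previous output
$ src/redis-migrate-tool -c rmt.conf -o log -d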
2. Sampling check
$ src/redis-migrate-tool -c rmt.conf -o log -C redis_check
Check job is running...

Checked keys: 1000
Inconsistent value keys: 0
Inconsistent expire keys : 0
Other check error keys: 0
Checked OK keys: 1000

All keys checked OK!
Check job finished, used 1.041s
This samples and compares data between the source group and the target group; 1000 keys are checked by default. If you need to check more keys:
$ src/redis-migrate-tool -c rmt.conf -o log -C "redis_check 200000"
Check job is running...

Checked keys: 200000
Inconsistent value keys: 0
Inconsistent expire keys : 0
Other check error keys: 0
Checked OK keys: 200000

All keys checked OK!
Check job finished, used 11.962s
3. Insert test data
$ src/redis-migrate-tool -c rmt.conf -o log -C "redis_testinsert"
Test insert job is running...

Insert string keys: 200
Insert list keys : 200
Insert set keys : 200
Insert zset keys : 200
Insert hash keys : 200
Insert total keys : 1000

Correct inserted keys: 1000
Test insert job finished, used 0.525s
By default, 200 keys of each type (string, list, set, zset and hash) are inserted, split evenly, for a total of 1000. If you need to insert more keys:
$ src/redis-migrate-tool -c rmt.conf -o log -C "redis_testinsert 30000"
Test insert job is running...

Insert string keys: 6000
Insert list keys : 6000
Insert set keys : 6000
Insert zset keys : 6000
Insert hash keys : 6000
Insert total keys : 30000

Correct inserted keys: 30000
Test insert job finished, used 15.486s
If you only want to insert keys of type string (1000 by default):
$ src/redis-migrate-tool -c rmt.conf -o log -C "redis_testinsert string"
If you want to insert only certain types and specify the total count:
$ src/redis-migrate-tool -c rmt.conf -o log -C "redis_testinsert string|set|list 10000"
Test insert job is running...

Insert string keys: 3336
Insert list keys : 3336
Insert set keys : 3328
Insert zset keys : 0
Insert hash keys : 0
Insert total keys : 10000

Correct inserted keys: 10000
Test insert job finished, used 5.539s
The data generated by redis_testinsert is not cleaned up afterwards, so keep the number of inserted keys as small as possible when testing.
rmt.conf configuration file
The configuration file consists of three parts: [source], [target] and [common]
The source of the migration tool can be: a standalone redis instance, a twemproxy cluster, a redis cluster, an RDB file, or an AOF file.
The target of the migration tool can be: a standalone redis instance, a twemproxy cluster, a redis cluster, or an RDB file.
[source]/[target]:
- type: the type of the group:
  - single: a standalone redis instance
  - twemproxy: a twemproxy cluster
  - redis cluster: a redis cluster
  - rdb file: an .rdb file
  - aof file: an .aof file
- servers: the list of redis addresses; if type is twemproxy, this is the twemproxy configuration file; if type is rdb file, this is the rdb file name.
- redis_auth: the auth password used to connect to the redis services.
- timeout: read/write timeout for the redis services, in ms. The default is 120000 ms.
- hash: hash method name. Only effective when type is twemproxy. Can be one_at_a_time, md5, crc16, crc32, crc32a, fnv1_64, fnv1a_64, fnv1_32, fnv1a_32, hsieh, murmur, jenkins.
- hash_tag: two characters, for example "{}" or "$$", that delimit the part of the key used for hashing. Only effective when type is twemproxy. Keys with the same tagged part map to the same server.
- distribution: key distribution mode. Only effective when type is twemproxy. Can be ketama, modula, random.

[common]:
- listen: listen address and port. The default is 127.0.0.1:8888.
- max_clients: maximum number of connections on the listen port. The default is 100.
- threads: maximum number of threads the tool can use. Defaults to the number of CPU cores.
- step: number of parse steps per request. The default is 1. The larger the number, the faster the migration and the more memory required.
- mbuf_size: request buffer (mbuf) size. The default is 512.
- noreply: do not check the replies from the target group. The default is false.
- source_safe: whether to protect the memory of the source group machines. Defaults to true; in that case the tool allows only one redis instance per source machine to generate an rdb file at a time.
- dir: working directory, used to store files such as rdb files. Defaults to the current directory.
- filter: only keys matching this expression are migrated; keys that do not match are filtered out. Defaults to NULL. glob-style wildcards are supported:
  - ? : matches any single character. For example, h?llo matches hello, hallo and hxllo.
  - * : matches zero or more arbitrary characters. For example, h*llo matches hllo and heeeello.
  - [characters] : matches any character inside the brackets, e.g. [abc] matches a, b or c. For example, h[ae]llo matches hello and hallo, but not hillo.
  - [^character] : excludes the characters inside the brackets. For example, h[^e]llo matches hallo, hbllo, ... but not hello.
  - [character-character] : matches any character in the given range, e.g. [a-z], [0-9]. For example, h[a-b]llo matches hallo and hbllo.
  - \ : used to escape special characters.
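As a hedged illustration of the filter syntax (the key pattern user:[0-9]* is made up for this example), the following [common] section would migrate only keys such as user:1001 and drop everything else:

[common]
listen: 127.0.0.1:8888
filter: user:[0-9]*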
Configuration file examples:
(1) Migrate data from single redis instances to a twemproxy cluster:
[source]
type: single
servers:
 - 127.0.0.1:6379
 - 127.0.0.1:6380
 - 127.0.0.1:6381
 - 127.0.0.1:6382

[target]
type: twemproxy
hash: fnv1a_64
hash_tag: "{}"
distribution: ketama
servers:
 - 127.0.0.1:6380:1 server1
 - 127.0.0.1:6381:1 server2
 - 127.0.0.1:6382:1 server3
 - 127.0.0.1:6383:1 server4

[common]
listen: 0.0.0.0:8888
threads: 2
step: 1
mbuf_size: 1024
source_safe: true
(2) Migrate data from a twemproxy cluster to a redis cluster:
[source]
type: twemproxy
hash: fnv1a_64
hash_tag: "{}"
distribution: ketama
servers:
 - 127.0.0.1:6379
 - 127.0.0.1:6380
 - 127.0.0.1:6381
 - 127.0.0.1:6382

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
step: 1
mbuf_size: 512
(3) Migrate data from one redis cluster to another redis cluster, with a filter so that only keys starting with "abc" are migrated:
[source]
type: redis cluster
servers:
 - 127.0.0.1:8379

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
filter: abc*
(4) From RDB files to a redis cluster:
[source]
type: rdb file
servers:
 - /data/redis/dump1.rdb
 - /data/redis/dump2.rdb

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
step: 2
mbuf_size: 512
source_safe: false
(5) Save the data of a redis cluster to an RDB file (redis cluster to rdb file):
[source]
type: redis cluster
servers:
 - 127.0.0.1:7379

[target]
type: rdb file

[common]
listen: 0.0.0.0:8888
source_safe: true
(6) From AOF files to a redis cluster:
[source]
type: aof file
servers:
 - /data/redis/appendonly1.aof
 - /data/redis/appendonly2.aof

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
step: 2
(7) Migrate data from a redis cluster to a single instance:
[source]
type: redis cluster
servers:
 - 127.0.0.1:8379

[target]
type: single
servers:
 - 127.0.0.1:6379

[common]
listen: 0.0.0.0:8888
(8) Migrate data from a single instance to a redis cluster:
[source]
type: single
servers:
 - 127.0.0.1:6379

[target]
type: redis cluster
servers:
 - 127.0.0.1:7379

[common]
listen: 0.0.0.0:8888
Monitoring redis-migrate-tool
You can connect to the tool with redis-cli, using the listen address and port set under [common] in the configuration file. The default is 127.0.0.1:8888.
1. The info command
$ redis-cli -h 127.0.0.1 -p 8888
127.0.0.1:8888> info
# Server
version:0.1.0                         # Version of the tool
os:Linux 2.6.32-573.12.1.el6.x86_64 x86_64   # Operating system information
multiplexing_api:epoll                # Multiplexing interface
gcc_version:4.4.7                     # gcc version
process_id:9199                       # Process id of the tool
tcp_port:8888                         # TCP port the tool listens on
uptime_in_seconds:1662                # Tool uptime (seconds)
uptime_in_days:0                      # Tool uptime (days)
config_file:/ect/rmt.conf             # Configuration file the tool is running with

# Clients
connected_clients:1                   # Number of currently connected clients
max_clients_limit:100                 # Maximum number of simultaneous client connections
total_connections_received:3          # Total connections received so far

# Memory
mem_allocator:jemalloc-4.0.4

# Group
source_nodes_count:32                 # Number of nodes in the source redis group
target_nodes_count:48                 # Number of nodes in the target redis group

# Stats
all_rdb_received:1                    # Whether rdb files from all source group nodes have been received
all_rdb_parsed:1                      # Whether rdb files from all source group nodes have been parsed
all_aof_loaded:0                      # Whether aof files from all source group nodes have been loaded
rdb_received_count:32                 # Number of source group nodes whose rdb file has been received
rdb_parsed_count:32                   # Number of source group nodes whose rdb file has been parsed
aof_loaded_count:0                    # Number of source group nodes whose aof file has been loaded
total_msgs_recv:7753587               # Number of messages received from the source group nodes
total_msgs_sent:7753587               # Number of messages sent to the target nodes for which a response has been received
total_net_input_bytes:234636318       # Total bytes received from the source group
total_net_output_bytes:255384129      # Total bytes sent to the target group
total_net_input_bytes_human:223.77M   # Same as total_net_input_bytes, human readable
total_net_output_bytes_human:243.55M  # Same as total_net_output_bytes, human readable
total_mbufs_inqueue:0                 # Command data from the source group cached in mbufs (excluding rdb data)
total_msgs_outqueue:0                 # Messages waiting to be sent to the target group plus messages sent but awaiting a response
127.0.0.1:8888>
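A convenience sketch for following the migration progress from the shell (the field names come from the info output above; the grep pattern is only an example):

$ watch -n 1 'redis-cli -h 127.0.0.1 -p 8888 info | grep -E "total_msgs|all_rdb"'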
2. shutdown [seconds|asap]
When this command is executed, the tool will:
- Stop replicating from the source redis
- Try to send the data cached in the tool to the target redis
- Stop and exit
Parameters:
- seconds: the maximum amount of time the tool may spend sending its cached data to the target redis before exiting. The default is 10 seconds.
- asap: discard the cached data and exit immediately.
For example:
$ redis-cli -h 127.0.0.1 -p 8888
127.0.0.1:8888> shutdown 5
OK
(5.00s)
Summary
- Not applicable to Redis 4.0.x and above (a follow-up on migrating newer versions will be published as soon as possible).
- When the source contains multiple databases, it is better to migrate by other means to avoid keys from different databases overwriting each other.
- The source instances must either all have no password or all share the same password, otherwise the tool cannot start. The password can be changed online with:
config set requirepass [password]
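A hedged sketch of aligning the passwords online on each source instance before starting the tool (addresses and the password value are placeholders; set the same value as redis_auth under [source] in rmt.conf):

$ redis-cli -h 127.0.0.1 -p 6379 config set requirepass "migrate-pass"                 # instance without a password
$ redis-cli -h 127.0.0.1 -p 6380 -a "old-pass" config set requirepass "migrate-pass"   # instance with an existing password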