Five data types
Key operations
- View all keys in the current database
keys *
- Determine whether a key exists
exists key
- View the type of key
type key
- Delete key
del key
unlink key --asynchronous (non-blocking) deletion
- Set the expiration time of the key
expire key 10
- See how many seconds are left to expire
ttl key --returns -1 if the key never expires, -2 if it has already expired or does not exist
String
- The maximum length of a string value is 512MB
- Add data
set key value --setting an existing key overwrites its value
setnx key value --set only if the key does not exist
mset key1 value1 key2 value2
msetnx key1 value1 key2 value2
setrange key 2 ab3 --overwrite part of the value starting at offset 2
setex key 20 value --set the value with a 20-second expiration
- Query data
get key
mget key1 key2
getrange key 0 3 --get the substring from index 0 to 3
getset key value --return the old value and set the new one
- Append string
append key value
- Get data length
strlen key
- Increase numeric value
incr key --increment by 1
incrby key 10 --increment by 10
- Reduce numeric value
decr key --decrement by 1
decrby key 10 --decrement by 10
List
- Insert data
lpush key value1 value2 --push from the left; stored order is value2 value1
rpush key value1 value2 --push from the right; stored order is value1 value2
rpoplpush key1 key2 --pop a value from the right of key1 and push it onto the left of key2
linsert key before value value1 --insert value1 before value
linsert key after value value1 --insert value1 after value
lset key 1 value --set the element at index 1 to value
- Fetch data
lpop key --pop from the left
rpop key --pop from the right
lrem key n value --remove n occurrences of value, counting from the left
--once all values are popped, the key itself is deleted
- Query data
lrange key 0 -1 --get the elements in the index range
lindex key 1 --get the element at the given index
- Get list length
llen key
Set
- Elements are unique and unordered
- Insert data
sadd key value1 value2 --values already in the set are ignored
smove key key1 value --move value from key to key1
- Query data
smembers key --list all members
sismember key value --check whether value is in the set: 1 means yes, 0 means no
srandmember key n --randomly return n members of the set
- Get the number of elements
scard key
- Delete data
srem key value value1
spop key --randomly remove and return a member
- Intersection, union, and difference
sinter key key1 --intersection
sunion key key1 --union
sdiff key key1 --difference
Hash
- A key-value type whose value is itself a collection of field-value pairs
- Insert data
hset key field value
hset key field1 value1 field2 value2
hsetnx key field value --set only if the field does not exist
- Get data
hget key field
- Determine whether the field exists
hexists key field
- View all fields
hkeys key
- View all values
hvals key
- Increment a field's numeric value
hincrby key field 2 --increase field by 2
Zset
- An ordered set of non-repeating elements
- Insert data
zadd key score1 value1 score2 value2 --members are sorted by score
- Query data
zrange key 0 -1
zrange key 0 -1 withscores --also show the scores
zrangebyscore key min max --members with scores from min to max, ascending
zrangebyscore key min max withscores
zrevrangebyscore key max min --members with scores from max down to min, descending
zrevrangebyscore key max min withscores
- Score increase
zincrby key 10 value --increase the score of value by 10
- Delete element
zrem key value
- Count elements in a score range
zcount key min max --count members with scores between min and max
- Get a member's rank
zrank key value --return the 0-based rank of value, ordered by ascending score
Publish And Subscribe
- Subscribers subscribe to channels
subscribe channel
- The publisher publishes the message
publish channel message
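The subscribe/publish flow above can be sketched as an in-process observer pattern. This Python toy (a plain dict of callbacks, invented for illustration — real Redis pushes messages over client connections) only shows the fan-out idea:

```python
from collections import defaultdict

subscribers = defaultdict(list)    # channel -> list of callbacks

def subscribe(channel, callback):
    """Register a callback for a channel, like SUBSCRIBE."""
    subscribers[channel].append(callback)

def publish(channel, message):
    """Deliver the message to every subscriber; return how many received it."""
    for callback in subscribers[channel]:
        callback(message)
    return len(subscribers[channel])

received = []
subscribe("news", received.append)
print(publish("news", "hello"))    # 1
print(received)                    # ['hello']
```

As in Redis, a publish to a channel with no subscribers simply returns 0; messages are not stored.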
New data types
Bitmaps
- Each stored value is a single bit: 0 or 1
- Insert data
setbit key offset value --offset is the bit offset
- Get data
getbit key offset
- Count the number of bits set to 1
bitcount key
bitcount key start end --limit the count to the byte range from start to end
- Bit operations
bitop and destkey key1 key2 --AND (intersection), stored in destkey
bitop or destkey key1 key2 --OR (union), stored in destkey
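The setbit/getbit/bitcount behaviour can be sketched in plain Python with a bytearray. This is an illustration of the semantics only, not Redis's implementation:

```python
def setbit(bits: bytearray, offset: int, value: int) -> None:
    """Set the bit at `offset` to 0 or 1, growing the buffer as needed."""
    byte, bit = divmod(offset, 8)
    if byte >= len(bits):
        bits.extend(b"\x00" * (byte - len(bits) + 1))
    if value:
        bits[byte] |= 1 << (7 - bit)    # Redis numbers bits from the high end
    else:
        bits[byte] &= ~(1 << (7 - bit))

def getbit(bits: bytearray, offset: int) -> int:
    """Return the bit at `offset`; unset positions read as 0."""
    byte, bit = divmod(offset, 8)
    if byte >= len(bits):
        return 0
    return (bits[byte] >> (7 - bit)) & 1

def bitcount(bits: bytearray) -> int:
    """Count bits set to 1, like BITCOUNT over the whole key."""
    return sum(bin(b).count("1") for b in bits)

# Example: mark that users 0, 5 and 11 were active today.
active = bytearray()
for user_id in (0, 5, 11):
    setbit(active, user_id, 1)
print(getbit(active, 5))    # 1
print(getbit(active, 6))    # 0
print(bitcount(active))     # 3
```

This is why Bitmaps are so compact for "was user N active" flags: a million user ids fit in about 125KB.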
HyperLogLog
- Insert data
pfadd key value
pfadd key value1 value2 --add multiple values
- Count the approximate cardinality
pfcount key
- Merge
pfmerge key key1 --merge key1 into key
Geospatial
- Insert data
geoadd key longitude latitude value
geoadd key longitude1 latitude1 value1 longitude2 latitude2 value2 --insert multiple members
- Get data
geopos key value
- Get distance
geodist key value value1 [m|km]
- Get data within radius
georadius key longitude latitude radius m|km
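geodist is computed with the haversine (great-circle) formula over a spherical Earth model. A sketch, where the Earth-radius constant is an approximation of the one Redis uses and the coordinates are illustrative sample values:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lon1, lat1, lon2, lat2, radius_m=6_372_797.56):
    """Great-circle distance in meters between two lon/lat points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_m * asin(sqrt(h))

# Approximate distance between Palermo and Catania.
d = haversine_m(13.361389, 38.115556, 15.087269, 37.502669)
print(round(d / 1000, 1), "km")   # roughly 166 km
```

Because the model is a sphere rather than an ellipsoid, results can be off by a fraction of a percent; that is usually fine for radius queries.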
Transactions
- The main purpose of a Redis transaction is to execute a group of commands as a single serialized unit, preventing commands from other clients from being interleaved.
- All commands in a transaction are serialized and executed sequentially.
- If a command produces an error while being queued, the whole queue is discarded when exec is called.
- If a command errors during the execution phase, only that command fails; the other commands still run and nothing is rolled back.
- Transaction has three characteristics:
- Isolated operation: all commands in the transaction are serialized and executed in order. While the transaction is executing, it is not interrupted by command requests from other clients.
- No isolation levels: commands in the queue are not actually executed until exec is called, so changes made inside a transaction cannot be observed before it is committed.
- Atomicity is not guaranteed: if a command fails to execute in a transaction, the subsequent commands will still be executed without rollback.
- Basic operations
multi --start the transaction; subsequent commands are queued, not executed
exec --execute the commands in the queue
discard --abandon the transaction
watch key --monitor key; if it is changed by another command before exec, the transaction is aborted
unwatch --cancel monitoring
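How multi/exec/watch fit together can be sketched with a small in-memory model. MiniTx and its dict store are invented names for illustration; real Redis queues the commands server-side:

```python
class MiniTx:
    """Toy model of MULTI/EXEC/WATCH against a plain dict."""

    def __init__(self, store: dict):
        self.store = store
        self.queue = []      # commands queued between MULTI and EXEC
        self.watched = {}    # key -> value snapshot at WATCH time

    def watch(self, key):
        self.watched[key] = self.store.get(key)

    def multi(self):
        self.queue = []

    def enqueue(self, fn, *args):
        self.queue.append((fn, args))

    def exec(self):
        # If any watched key changed since WATCH, abort the whole transaction.
        for key, old in self.watched.items():
            if self.store.get(key) != old:
                self.queue = []
                return None
        results = [fn(*args) for fn, args in self.queue]
        self.queue = []
        return results

store = {"balance": 100}
tx = MiniTx(store)
tx.watch("balance")
tx.multi()
tx.enqueue(store.__setitem__, "balance", 90)
store["balance"] = 50     # another client changes the key before EXEC
print(tx.exec())          # None: the transaction was aborted
print(store["balance"])   # 50
```

This is the optimistic-locking (check-and-set) pattern: watch the key, queue the update, and retry the whole thing if exec reports the key was touched in the meantime.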
Persistence
RDB
- Write the data set snapshot in memory to disk within the specified time interval, and read the snapshot file directly to memory during recovery.
- Redis forks a separate child process to perform the persistence. The child first writes the data to a temporary file; once persistence is complete, Redis replaces the previous snapshot file with this temporary file. The main process performs no disk I/O during the whole procedure, which keeps performance very high. When large-scale data recovery is needed and strict integrity of the recovered data is not critical, RDB is more efficient than AOF. The drawback of RDB is that data written after the last snapshot may be lost.
- Advantages:
- Suitable for large-scale data recovery.
- More suitable when the requirements on data integrity and consistency are not strict.
- Save disk space
- Fast recovery.
- Disadvantages:
- Forking clones the in-memory data into the child process, so roughly double the memory footprint must be planned for.
- Although Redis uses copy on write technology in fork, it still consumes performance when the data is huge.
- A backup is performed at a certain interval during the backup cycle, so if Redis unexpectedly goes down, all modifications after the last snapshot will be lost.
AOF
- Every write operation is recorded as a log entry (incremental saving): all write commands executed by Redis are appended (reads are not recorded), and the file may only be appended to, never modified. On startup, Redis reads the file and reconstructs the data; in other words, when Redis restarts it replays the write commands in the log from front to back to recover the data.
- When AOF and RDB are both enabled, the system uses the AOF data by default.
- Advantages:
- The backup mechanism is more robust and the probability of data loss is lower.
- The log file is human-readable, and mistaken operations can be undone by editing the AOF file.
- Disadvantages:
- It takes more disk space than RDB.
- Restoring from the backup is slower.
- If every write is synchronized to disk, there is some performance pressure.
- Occasional bugs can cause recovery to fail.
Problem solving
Cache penetration
- The data for the key does not exist in the data source, so every request for this key misses the cache and is pushed down to the data source, which may crush it.
Solution
- Cache null values: if a query returns null (the data does not exist), still cache the null result, but give it a very short expiration time, no more than five minutes.
- Set an allow list: use the bitmaps type to define an accessible allow list, with the list id as the bitmaps offset. Each access is checked against the ids in the bitmap; if the accessing id is not present, the request is intercepted and denied.
- Bloom filter is adopted.
- Real-time monitoring: when Redis's hit rate starts to drop sharply, inspect the accessed objects and data, work with operations staff, and set up a blacklist to restrict the service.
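The "cache null values" fix above can be sketched with a dict standing in for Redis; db_lookup, the key name, and the TTL numbers are assumptions for illustration:

```python
import time

cache = {}     # key -> (value, expires_at)
db_hits = 0

def db_lookup(key):
    """Stand-in for the real database; here nothing exists."""
    global db_hits
    db_hits += 1
    return None

def get(key, null_ttl=60, normal_ttl=3600):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit, possibly a cached null
    value = db_lookup(key)
    ttl = null_ttl if value is None else normal_ttl
    cache[key] = (value, time.time() + ttl)  # cache even the missing result
    return value

get("no-such-user")
get("no-such-user")    # served from cache: the database is hit only once
print(db_hits)         # 1
```

The short TTL on the null entry is what keeps the cache from serving stale "not found" answers for long after the data is eventually created.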
Cache breakdown
- The data for the key exists, but its entry in Redis has expired. If a huge number of concurrent requests arrive at that moment, they all find the cache expired, load the data from the back-end database, and set it back into the cache; this burst can crush the back-end database instantly.
Solution
- Preload hot data: store hot data in Redis ahead of the access peak and lengthen the TTL of these hot keys.
- Real-time adjustment: monitor which data is currently hot and adjust key expiration times on the fly.
- Use locks:
- When the cache misses (the value is found to be empty), do not query the database immediately.
- Use an operation of the caching tool that reports success (such as setnx in Redis) to set a mutex key.
- If the set succeeds, query the database, rebuild the cache, and finally delete the mutex key.
- If the set fails, another thread is already rebuilding the cache; the current thread sleeps briefly and then retries the whole get-from-cache method.
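The lock steps above can be sketched in a single process, emulating setnx with a set guarded by a threading.Lock; every name here is illustrative, not a real Redis API:

```python
import threading
import time

cache = {}
mutex_keys = set()
guard = threading.Lock()   # makes the setnx emulation atomic
db_loads = 0

def try_setnx(key):
    """Emulate Redis SETNX: atomically claim key, True on success."""
    with guard:
        if key in mutex_keys:
            return False
        mutex_keys.add(key)
        return True

def db_load(key):
    """Stand-in for a slow database query."""
    global db_loads
    db_loads += 1
    time.sleep(0.05)
    return f"value-of-{key}"

def get(key):
    while key not in cache:
        if try_setnx("lock:" + key):          # step 2: try to set the mutex key
            try:
                if key not in cache:          # re-check after winning the lock
                    cache[key] = db_load(key) # step 3: query db, rebuild cache
            finally:
                mutex_keys.discard("lock:" + key)  # step 3: delete the mutex key
        else:
            time.sleep(0.01)                  # step 4: sleep briefly, retry
    return cache[key]

threads = [threading.Thread(target=get, args=("hot",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db_loads)   # 1: only one thread hit the database
```

In a real deployment the mutex key would also get a short TTL, so a crashed rebuilder cannot hold the lock forever.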
Cache avalanche
- A large number of keys expire at the same moment (or the Redis instance goes down), so massive concurrent requests all miss the cache and pour onto the back-end database at once, which can crush it instantly.
- The difference between cache avalanche and cache breakdown is that cache avalanche is for many key caches, while cache breakdown is only for one.
Solution
- Build a multi-level cache architecture: Nginx cache + Redis cache + other caches.
- Use locks or queues: a lock or queue guarantees that large numbers of threads do not read and write the database all at once, so concurrent requests do not all fall on the database when the cache fails. It is not well suited to high concurrency, though.
- Set the expiration flag to update the cache: record whether the cache data expires. If it expires, it will trigger another thread to update the cache of the actual key in the background.
- Disperse cache expiration times: add a random value to the base expiration time so that keys rarely share the same expiry moment, making collective failure unlikely.
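Dispersing expiration times usually amounts to base TTL plus a random offset; a sketch, with assumed TTL and jitter values:

```python
import random

BASE_TTL = 3600   # one hour, an assumed base expiration

def jittered_ttl(base=BASE_TTL, jitter=300):
    """Base TTL plus up to `jitter` extra seconds, chosen at random."""
    return base + random.randint(0, jitter)

# Even if 1000 keys are cached in the same instant, their expiry
# moments are spread across a five-minute window.
ttls = [jittered_ttl() for _ in range(1000)]
print(min(ttls) >= BASE_TTL, max(ttls) <= BASE_TTL + 300)   # True True
```

Each key would then be set with `setex key <ttl> value` using its own jittered TTL instead of the shared base value.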