Crazy God video address
https://www.bilibili.com/video/BV1S54y1R7SB
1,NoSQL
1.1 Why use NoSQL
Have some chicken soup for the soul first:
- We are now in 2020, the era of big data;
- Big data is beyond what ordinary databases can analyze!
- Force yourself to keep learning! Continuous learning is the only law of survival in this society!
- Study for yourself and for your family, to live a more decent life!
(1) The age of stand-alone MySQL!
In the 1990s, a typical website did not get much traffic, and a single database was more than enough!
Back then pages were mostly static HTML, so the servers were under little pressure!
Think about it: what is the bottleneck of such a website?
- The amount of data becomes too large for one machine!
- The data index (B+ Tree) no longer fits in one machine's memory
- The access volume (mixed reads and writes) becomes more than one server can bear
Once any of these three situations appears, you have to upgrade!
(2) Memcached + MySQL + vertical splitting (read/write separation)
80% of a website's traffic is reads. Hitting the database on every query is painful! To reduce the pressure on the database, we can use a cache to keep things fast!
Evolution: optimize data structures and indexes -> file caching (IO) -> Memcached (the hottest technology of its day!)
1.2 Splitting databases and tables + horizontal splitting
As technology and business evolved, the demands on developers grew higher and higher!
Essence: the database (reads, writes)
MyISAM in the early years: table locks, which hurt efficiency badly! Serious lock problems appear under high concurrency
Switch to InnoDB: row locks
People gradually began splitting databases and tables to relieve write pressure! MySQL also introduced table partitioning, though not many companies use it!
MySQL Cluster satisfied all the needs of that era quite well!
(3) Today's era
The world changed dramatically between 2010 and 2020 (location data, music, trending lists — these are all data!)
MySQL and other relational databases are no longer enough! There is a lot of data and it changes fast!
If MySQL is used to store large files, blog posts and pictures, the tables get huge and efficiency drops! If there were a database dedicated to this kind of data,
the pressure on MySQL would become very small (study how to solve these problems!). Under big-data IO pressure, such a large table is almost impossible to restructure!
A typical Internet project today!
Why use NoSQL?
Users' personal information, social networks, geographic locations, user-generated data, user logs and so on are all growing explosively.
This is where NoSQL databases come in; NoSQL handles all of the above situations well.
1.3 What is NoSQL
As long as studying doesn't kill you, study like hell!
NoSQL = Not Only SQL (not just SQL)
Relational databases: tables, rows, columns
NoSQL broadly refers to non-relational databases, born with the rise of Web 2.0! Traditional relational databases struggle to cope with the Web 2.0 era, especially very large, highly concurrent communities! Many insurmountable problems were exposed. NoSQL is developing rapidly in today's big-data environment, and Redis is the fastest growing of all — a technology we must master!
Many kinds of data — users' personal information, social networks, geographic locations — need no fixed schema to store! They can be scaled horizontally without extra work! Like a Map, controlled with simple key-value pairs!
NoSQL features
1. Easy to scale (no relationships between records, so scaling out is easy!)
2. Large data volumes, high performance (Redis: 80,000 writes and 110,000 reads per second. NoSQL caches at record level, a fine-grained cache, so performance is high!)
3. Diverse data types! (no need to design the database up front! Use it as you go! A table with huge amounts of data is something many people simply cannot design in advance!)
4. Traditional RDBMS vs NoSQL

Traditional RDBMS
- Structured organization
- SQL
- Data and relationships stored in separate tables
- Data manipulation and data definition languages
- Strict consistency
- Basic transactions
- ...

NoSQL
- Not just data
- No fixed query language
- Key-value storage, column storage, document storage, graph databases
- Eventual consistency
- CAP theorem and BASE (the basis of multi-site active-active deployments!)
- High performance, high availability, high scalability
- ...
Real practice in companies: NoSQL + RDBMS used together is the strongest combination. Study the evolution of Alibaba's architecture! No technology is inherently superior; it depends on how you use it! (improve your internal skill and your way of thinking!)
1.4 Alibaba evolution analysis
Question to think about: is all of this data really stored in one database?
Don't rush the technology. The more slowly and solidly you learn, the firmer your foundation!
A large number of companies do the same business (competitive agreements);
With that competition, the business keeps maturing, and the demands on developers keep rising!
Agile development, extreme programming
If you become an architect one day: there is nothing that adding one more layer cannot solve!
1. Basic information of goods: name, price, merchant info — a relational database can solve it (MySQL, Oracle) (see "those lunatics at Alibaba Cloud")
2. Description and comments of goods (lots of text) — a document database: MongoDB
3. Pictures — a distributed file system: FastDFS
4. Keywords of goods (search) — a search engine: Solr, Elasticsearch (everyone impressive went through a rough stretch! Keep grinding stubbornly and you will get there!)
5. Hot (trending) product information — an in-memory database: Redis
6. Commodity transactions, external payment interfaces — third-party applications
Problems of large Internet applications:
- too many data types
- too many data sources, frequently refactored
2. Four categories of NoSQL
KV key-value pairs:
- Sina: Redis
- Meituan: Redis + Tair
- Alibaba, Baidu: Redis + Memcache
Document databases (bson format, similar to json):
- MongoDB (generally a must-know)
  - MongoDB is a database based on distributed file storage, written in C++. It is mainly used to handle large numbers of documents!
  - MongoDB sits between relational and non-relational databases! Among non-relational databases it has the richest features and looks the most like a relational database!
- CouchDB
Column-family databases:
- HBase
- distributed file systems
Graph databases:
- They store relationships, not pictures: friend circles, social networks, ad recommendations!
Comparison of the four
3. Getting started with Redis
3.1 What is Redis?
1. Remote Dictionary Server — a remote dictionary service!
2. Open source, written in ANSI C
3. Supports networking
4. In-memory, with log-based persistence
5. A key-value database
6. Provides APIs for many languages.
7. Redis periodically writes updated data to disk, or appends each modification to a log file, and implements master-slave synchronization on top of that.
Free and open source! Known as a structured data store!
What can Redis do?
1. In-memory storage with persistence; memory is lost on power failure, so persistence matters (rdb, aof)
2. Fast, so it works well as a cache
3. Publish/subscribe system
4. Map/geospatial analysis
5. Timers, counters (page views!)
Redis features?
1. Diverse data types
2. Persistence
3. Clustering
4. Transactions
3.2 Installing Redis on Windows
1. Download the installation package
2. You get a zip archive
3. Unzip it into a tools directory on your machine! Redis is tiny, only about 5 MB
4. Start the Redis server
5. Connect to Redis with the redis-cli client
3.3 Linux Installation Redis
1. Download the installation package!
2. Unzip the Redis installation package!
tar -zxvf redis-6.2.3.tar.gz
3. Enter the extracted directory; you can see the redis configuration file
cd redis-6.2.3/
4. Basic environment installation
yum install gcc-c++
Run them one after the other:
make
make install
5. Default installation path of redis
whereis redis
6. Copy the redis configuration file into our working directory
cp /usr/local/redis/redis.conf ./redis.conf
7. Redis does not run in the background by default. Edit the configuration file!
Set daemonize to yes
8. Start Redis service!
cd /usr/local/redis/bin/
(the ../redis.conf below is the config file copied earlier from /home/wrz/redis-6.2.3/redis.conf)
./redis-server ../redis.conf
9. Use redis cli for connection test!
redis-cli -p 6379
10. Check whether the redis process is started!
ps -ef | grep redis
11. How to shut down the Redis service?
Shut down redis:
shutdown
Exit the client:
exit
12. Check again to see if the process exists
ps -ef | grep redis
3.4 Performance testing
redis-benchmark is a stress-testing tool!
The official performance testing tool!
Redis benchmark command parameters
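A few frequently used options (not an exhaustive list; redis-benchmark --help shows them all):
-h    host (default 127.0.0.1)
-p    port (default 6379)
-c    number of parallel connections (default 50)
-n    total number of requests (default 100000)
-d    data size of SET/GET values in bytes (default 3)
-t    run only the given tests, e.g. -t set,get
-q    quiet mode: show only the requests-per-second figures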
Simple test:
# test 100 concurrent connections, 100,000 requests in total
redis-benchmark -h localhost -p 6379 -c 100 -n 100000
3.5 Redis is single-threaded
Understand this first: Redis is fast, and officially Redis is based on in-memory operations. The CPU is not Redis's bottleneck; its bottlenecks are the machine's memory and network bandwidth. Since single-threading can do the job, a single thread is used!
Why is single-threaded Redis so fast?
Redis is written in C. The official figure is 100,000+ QPS, no worse than Memcache, which is likewise key-value based!
Myth 1: a high-performance server must be multi-threaded?
Myth 2: multi-threading (with CPU context switching!) must be more efficient than a single thread!
Understand the relative speeds: CPU > memory > hard disk
Core point: Redis keeps all its data in memory, so operating with a single thread is the most efficient. Multi-threading means CPU context switches, which are expensive!!! For an in-memory system, no context switching gives the highest efficiency! Doing all reads and writes on one CPU is, for in-memory data, the best solution!
4. Redis Basics
Redis has 16 databases by default
Database 0 is used by default
You can switch databases with select!
View the DB size:
dbsize
View all keys:
keys *
Empty the current database:
flushdb
Empty all databases:
flushall
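A quick sketch of these commands in redis-cli:

127.0.0.1:6379> select 3        # switch to database 3
OK
127.0.0.1:6379[3]> dbsize       # the new database is empty
(integer) 0
127.0.0.1:6379[3]> set name xy
OK
127.0.0.1:6379[3]> dbsize
(integer) 1
127.0.0.1:6379[3]> flushdb      # empty only the current database
OK
127.0.0.1:6379[3]> flushall     # empty all 16 databases
OK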
5. Redis operation
5.1 key operation
- set
- get
- exists
- move
- ttl
- type
- keys *
# set
set xxx
# get
get xxx
# check whether the current key exists
exists xxx
# move a key (to database 1)
move xxx 1
# set a key to expire automatically
expire xxx <seconds>
# see how many seconds are left before expiry
ttl xxx
127.0.0.1:6379> keys *              # view all keys
(empty array)
127.0.0.1:6379> set name xianyan    # set key name to xianyan
OK
127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> set age a           # set key age to a
OK
127.0.0.1:6379> keys *
1) "age"
2) "name"
127.0.0.1:6379> exists name         # check whether the key exists; 1 means it does
(integer) 1
127.0.0.1:6379> move name 1         # remove the key from the current db (moves it to database 1)
(integer) 1
127.0.0.1:6379> keys *
1) "age"
127.0.0.1:6379> set name xy
OK
127.0.0.1:6379> get name
"xy"
127.0.0.1:6379> expire name 10      # set the key to expire in 10 seconds
(integer) 1
127.0.0.1:6379> ttl name            # check how long until the key expires
(integer) 4
127.0.0.1:6379> ttl name
(integer) 1
127.0.0.1:6379> ttl name            # already expired: -2
(integer) -2
127.0.0.1:6379> get name
(nil)
A use of expire: single sign-on (expiring sessions)
127.0.0.1:6379> keys *
1) "age"
127.0.0.1:6379> type name       # view the type of the key
none
127.0.0.1:6379> set name xiany
OK
127.0.0.1:6379> type name
string
127.0.0.1:6379> type age
string
5.2 String
90% of Java programmers use only the String type when they use Redis
- set
- get
- mset
- mget
- keys *
- exists
- append
- strlen
- incr
- decr
- incrby
- decrby
- getrange
- setrange
- setex
- setnx
- getset
127.0.0.1:6379> set key1 v1             # set a value
OK
127.0.0.1:6379> get key1                # get the value
"v1"
127.0.0.1:6379> keys *                  # get all keys
1) "key1"
127.0.0.1:6379> exists key1             # check whether a key exists
(integer) 1
127.0.0.1:6379> append key1 "hello"     # append a string; if the key does not exist, this behaves like set
(integer) 7
127.0.0.1:6379> get key1
"v1hello"
127.0.0.1:6379> strlen key1             # get the length of the value
(integer) 7
127.0.0.1:6379> append key1 'xianyan'
(integer) 14
127.0.0.1:6379> get key1
"v1helloxianyan"
127.0.0.1:6379> strlen key1
(integer) 14
Increment / decrement (i++, i += step)

127.0.0.1:6379> set views 0        # initial view count is 0
OK
127.0.0.1:6379> get views
"0"
127.0.0.1:6379> incr views         # increment views by 1
(integer) 1
127.0.0.1:6379> incr views
(integer) 2
127.0.0.1:6379> get views
"2"
127.0.0.1:6379> decr views         # decrement by 1
(integer) 1
127.0.0.1:6379> decr views
(integer) 0
127.0.0.1:6379> get views
"0"
127.0.0.1:6379> incrby views 10    # set a step: increment by the given amount
(integer) 10
127.0.0.1:6379> decrby views 5
(integer) 5
127.0.0.1:6379> incrby views 10
(integer) 15
127.0.0.1:6379> decrby views 1
(integer) 14
127.0.0.1:6379> get views
"14"
String range
127.0.0.1:6379> set key1 "hello,xy"    # set the value of key1
OK
127.0.0.1:6379> get key1
"hello,xy"
127.0.0.1:6379> getrange key1 0 3      # substring over the inclusive range [0,1,2,3]
"hell"
127.0.0.1:6379> getrange key1 0 -1     # 0 -1 returns the whole string, same as get
"hello,xy"

Replace part of a string:

127.0.0.1:6379> set key2 abcdef
OK
127.0.0.1:6379> get key2
"abcdef"
127.0.0.1:6379> setrange key2 1 xx     # overwrite starting at the given offset
(integer) 6
127.0.0.1:6379> get key2
"axxdef"
setex (set with expire): set a value together with an expiration time
setnx (set if not exist): set a value only if the key does not exist (often used in distributed locks!)
127.0.0.1:6379> setex key3 30 "hello"     # set key3 to hello, expiring in 30 seconds
OK
127.0.0.1:6379> get key3
"hello"
127.0.0.1:6379> setnx mykey "redis"       # mykey does not exist, so it is created
(integer) 1
127.0.0.1:6379> keys *
1) "key1"
2) "mykey"
3) "key3"
4) "key2"
127.0.0.1:6379> ttl key3                  # already expired: returns -2
(integer) -2
127.0.0.1:6379> setnx mykey "mongodb"     # mykey already exists, so nothing is set
(integer) 0
127.0.0.1:6379> get mykey                 # the setnx failed, so the value is still redis
"redis"
Batch settings
mset
mget
127.0.0.1:6379> keys *
(empty array)
127.0.0.1:6379> mset key1 v1 key2 v2 key3 v3    # set several values at once
OK
127.0.0.1:6379> keys *
1) "key1"
2) "key3"
3) "key2"
127.0.0.1:6379> mget key1 key2 key3             # get several values at once
1) "v1"
2) "v2"
3) "v3"
# msetnx is atomic: it either succeeds as a whole or fails as a whole!
127.0.0.1:6379> msetnx key1 v1 key4 v4
(integer) 0
127.0.0.1:6379> keys *
1) "key1"
2) "key3"
3) "key2"
127.0.0.1:6379> get key4
(nil)
object
# Set a user:1 object with a json string as its value, to store an object!
127.0.0.1:6379> set user:1 {name:xy,age:3}
OK
127.0.0.1:6379> keys *
1) "key1"
2) "user:1"
3) "key3"
4) "key2"
127.0.0.1:6379> get user:1
"{name:xy,age:3}"
127.0.0.1:6379> mset user:2:name xianyan user:2:age 3
OK
127.0.0.1:6379> get user:2:name
"xianyan"
127.0.0.1:6379> mget user:2:name user:2:age
1) "xianyan"
2) "3"
getset: get the old value, then set the new one
127.0.0.1:6379> getset db redis       # getting a value that does not exist returns nil; the value is then set
(nil)
127.0.0.1:6379> get db
"redis"
127.0.0.1:6379> getset db "mongodb"   # return the old value and set the new one
"redis"
127.0.0.1:6379> get db
"mongodb"
- The underlying data structure is the same!
- String use cases: the value can be not only a string but also a number
  - Counters
  - Counting quantities across multiple units
  - Follower counts
  - Object cache storage!
5.3 List
Basic data type: the list
In Redis, a list can serve as a list, a stack, a queue, or a blocking queue!
All list commands start with l. Redis commands are not case sensitive
- lpush # inserts one or more values into the header of the list
- lrange
- lpop
- rpop
- llen
- lrem
- ltrim
- rpush
- rpoplpush
- linsert
- lset
- lindex
Left (lpush)

127.0.0.1:6379> lpush list one      # insert one value (or several) at the head of the list
(integer) 1
127.0.0.1:6379> lpush list two
(integer) 2
127.0.0.1:6379> lpush list three
(integer) 3
127.0.0.1:6379> lrange list 0 -1    # get the values in the list
1) "three"
2) "two"
3) "one"
127.0.0.1:6379> lrange list 0 1     # get specific values via a range
1) "three"
2) "two"
Right (rpush)

127.0.0.1:6379> rpush list right    # insert one or more values at the tail of the list
(integer) 4
127.0.0.1:6379> lrange list 0 -1
1) "three"
2) "two"
3) "one"
4) "right"
Remove (lpop / rpop)

127.0.0.1:6379> lrange list 0 -1
1) "three"
2) "two"
3) "one"
4) "right"
127.0.0.1:6379> lpop list    # remove the first element of the list
"three"
127.0.0.1:6379> rpop list    # remove the last element of the list
"right"
127.0.0.1:6379> lrange list 0 -1
1) "two"
2) "one"
Get a value in the list by index (lindex)

127.0.0.1:6379> lrange list 0 -1
1) "two"
2) "one"
127.0.0.1:6379> lindex list 1
"one"
127.0.0.1:6379> lindex list 0
"two"
Get the list length (llen)

127.0.0.1:6379> llen list    # get the length of the list
(integer) 2
Remove a specified value (lrem)
Use case: unfollowing (removing a given uid)

127.0.0.1:6379> lrange list 0 -1
1) "four"
2) "three"
3) "three"
4) "two"
5) "one"
127.0.0.1:6379> lrem list 1 one    # remove the given number of occurrences of a value from the list
(integer) 1
127.0.0.1:6379> lrange list 0 -1
1) "four"
2) "three"
3) "three"
4) "two"
127.0.0.1:6379> lrem list 2 three
(integer) 2
127.0.0.1:6379> lrange list 0 -1
1) "four"
2) "two"
ltrim: list truncation

127.0.0.1:6379> lrange mylist 0 -1
1) "hello"
2) "hello1"
3) "hello2"
4) "hello3"
127.0.0.1:6379> ltrim mylist 1 2    # keep only the given index range
OK
127.0.0.1:6379> lrange mylist 0 -1
1) "hello1"
2) "hello2"

The list itself has been changed by the truncation; only the kept elements remain!
rpoplpush: remove the last element of one list and push it onto a new list

127.0.0.1:6379> rpush mylist hello hello1 hello2
(integer) 3
127.0.0.1:6379> rpoplpush mylist myother    # move the last element of mylist onto myother
"hello2"
127.0.0.1:6379> lrange mylist 0 -1          # view the source list
1) "hello"
2) "hello1"
127.0.0.1:6379> lrange myother 0 -1         # the value is indeed in the target list
1) "hello2"

lset: replace the value at a given index with another value. It is an update operation — the list must already exist

127.0.0.1:6379> lpush list value
(integer) 1
127.0.0.1:6379> lrange list 0 0
1) "value"
127.0.0.1:6379> lset list 0 item
OK
127.0.0.1:6379> lrange list 0 0
1) "item"
linsert: insert a specific value before or after an element in the list

127.0.0.1:6379> linsert list before "item" "other"    # insert an element before item
(integer) 2
127.0.0.1:6379> lrange list 0 -1
1) "other"
2) "item"
Summary:
- A list is actually a linked list; values can be inserted before or after a node, at the left or the right
- If the key does not exist, a new linked list is created
- If the key exists, content is added
- If all values are removed, an empty linked list also means the key no longer exists
- Inserting or changing values at either end is the most efficient! Operating on middle elements is relatively slower
- Message queue (lpush + rpop), stack (lpush + lpop)!
5.4 Set
Values in a set cannot repeat
- sadd
- smembers
- sismember
- scard
- srem
- srandmember
- spop
- smove
- sdiff
- sinter
- sunion
127.0.0.1:6379> sadd myset hello       # add an element to the set
(integer) 1
127.0.0.1:6379> sadd myset xianyan
(integer) 1
127.0.0.1:6379> sadd myset xy
(integer) 1
127.0.0.1:6379> smembers myset         # view all values in the given set
1) "xianyan"
2) "xy"
3) "hello"
127.0.0.1:6379> sismember myset hello  # check whether a value is in the set
(integer) 1
127.0.0.1:6379> sismember myset word
(integer) 0
scard: get the number of elements in the set

127.0.0.1:6379> scard myset
(integer) 3
127.0.0.1:6379> sadd myset xy
(integer) 0
127.0.0.1:6379> sadd myset xy2
(integer) 1
127.0.0.1:6379> scard myset
(integer) 4
srem: remove a specified element from the set

127.0.0.1:6379> srem myset hello
(integer) 1
127.0.0.1:6379> scard myset
(integer) 3
127.0.0.1:6379> smembers myset
1) "xy2"
2) "xianyan"
3) "xy"
A set is unordered and contains no duplicate elements
Random sampling (srandmember)

127.0.0.1:6379> srandmember myset      # pick one element at random
"xianyan"
127.0.0.1:6379> srandmember myset
"xianyan"
127.0.0.1:6379> srandmember myset
"xy2"
127.0.0.1:6379> srandmember myset
"xianyan"
127.0.0.1:6379> srandmember myset
"xy2"
127.0.0.1:6379> srandmember myset 2    # pick a given number of elements at random
1) "xy2"
2) "xy"
127.0.0.1:6379> srandmember myset 2
1) "xy2"
2) "xianyan"
127.0.0.1:6379> srandmember myset 2
1) "xy2"
2) "xianyan"
spop: randomly remove an element from the set

127.0.0.1:6379> smembers myset
1) "xy2"
2) "xianyan"
3) "xy"
127.0.0.1:6379> spop myset    # randomly remove an element from the set
"xy2"
127.0.0.1:6379> spop myset
"xy"
127.0.0.1:6379> smembers myset
1) "xianyan"
smove: move a specified value into another set

127.0.0.1:6379> smembers myset
1) "world"
2) "xy"
3) "hello"
127.0.0.1:6379> smembers myset2
1) "set2"
127.0.0.1:6379> smove myset myset2 xy
(integer) 1
127.0.0.1:6379> smembers myset
1) "world"
2) "hello"
127.0.0.1:6379> smembers myset2
1) "xy"
2) "set2"
Weibo, Bilibili: common follows! (intersection)
Set algebra:
1. Difference (sdiff)
2. Intersection (sinter)
3. Union (sunion)
127.0.0.1:6379> sadd key1 a
(integer) 1
127.0.0.1:6379> sadd key1 b
(integer) 1
127.0.0.1:6379> sadd key1 c
(integer) 1
127.0.0.1:6379> sadd key2 c
(integer) 1
127.0.0.1:6379> sadd key2 d
(integer) 1
127.0.0.1:6379> sadd key2 e
(integer) 1
127.0.0.1:6379> sdiff key1 key2     # difference
1) "b"
2) "a"
127.0.0.1:6379> sinter key1 key2    # intersection: this is how mutual friends can be implemented
1) "c"
127.0.0.1:6379> sunion key1 key2    # union
1) "c"
2) "e"
3) "b"
4) "a"
5) "d"
- On Weibo, user A puts everyone they follow into one set and their fans into another
- Common follows
- Common hobbies
- Second-degree friends (six degrees of separation)
- Friend recommendations
5.5 Hash (map)
A map: key-(field-value)! This time the value is a map!
- hset
- hmset
- hmget
- hgetall
- hdel
- hlen
- hexists
- hkeys
- hincrby
- hsetnx
127.0.0.1:6379> hset myhash file1 xy                   # set one concrete field-value pair
(integer) 1
127.0.0.1:6379> hget myhash file1                      # get one field's value
"xy"
127.0.0.1:6379> hmset myhash file1 hello filed2 world  # set several field-value pairs
OK
127.0.0.1:6379> hmget myhash file1 filed2              # get several field values
1) "hello"
2) "world"
127.0.0.1:6379> hgetall myhash                         # get all the data
1) "file1"
2) "hello"
3) "filed2"
4) "world"
Delete (hdel)

127.0.0.1:6379> hgetall myhash
1) "file1"
2) "hello"
3) "filed2"
4) "world"
127.0.0.1:6379> hdel myhash filed2    # delete the given field of the hash; its value disappears with it
(integer) 1
127.0.0.1:6379> hgetall myhash
1) "file1"
2) "hello"
hlen: get the number of fields in the hash

127.0.0.1:6379> hlen myhash
(integer) 1
127.0.0.1:6379> hmset myhash file1 hello file2 world
OK
127.0.0.1:6379> hgetall myhash
1) "file1"
2) "hello"
3) "file2"
4) "world"
127.0.0.1:6379> hlen myhash
(integer) 2
hexists: check whether a given field exists in the hash!

127.0.0.1:6379> hexists myhash file1
(integer) 1
127.0.0.1:6379> hexists myhash file3
(integer) 0
hkeys: get only all the fields
hvals: get only all the values

127.0.0.1:6379> hkeys myhash
1) "file1"
2) "file2"
127.0.0.1:6379> hvals myhash
1) "hello"
2) "world"
hincrby / hsetnx

127.0.0.1:6379> hset myhash file3 5          # a numeric field
(integer) 1
127.0.0.1:6379> hincrby myhash file3 1
(integer) 6
127.0.0.1:6379> hincrby myhash file3 -1
(integer) 5
127.0.0.1:6379> hsetnx myhash file4 hello    # set only if the field does not exist
(integer) 1
127.0.0.1:6379> hsetnx myhash file4 word     # it exists now, so nothing is set
(integer) 0
Hash suits data that changes, such as user info (name, age) — especially frequently changing information! Hash is better for storing objects; String is better for storing strings
127.0.0.1:6379> hset user:1 name xy
(integer) 1
127.0.0.1:6379> keys *
1) "user:1"
2) "myhash"
127.0.0.1:6379> hget user:1 name
"xy"
5.6 Zset (sorted set)
A set with an extra value: where set is `set k1 v1`,
zset is `zadd k1 score1 v1`
- zadd
- zrange
- zrangebyscore
- zrem
- zcount
- zrevrange
127.0.0.1:6379> zadd myset 1 one            # add one value with its score
(integer) 1
127.0.0.1:6379> zadd myset 2 two 3 three    # add several values
(integer) 2
127.0.0.1:6379> zrange myset 0 -1
1) "one"
2) "two"
3) "three"
Sorting with scores (zrangebyscore)

127.0.0.1:6379> zadd salary 2500 xiaohuang            # add three users
(integer) 1
127.0.0.1:6379> zadd salary 5000 zhangsan
(integer) 1
127.0.0.1:6379> zadd salary 500 xy
(integer) 1
127.0.0.1:6379> zrangebyscore salary -inf +inf        # show all users, from low to high
1) "xy"
2) "xiaohuang"
3) "zhangsan"
127.0.0.1:6379> zrangebyscore salary 0 -1             # the arguments are scores, not indexes, so this matches nothing
(empty array)
127.0.0.1:6379> zrangebyscore salary 0 -1 with scores # withscores must be one word
(error) ERR syntax error
127.0.0.1:6379> zrangebyscore salary 0 -1 withscores
(empty array)
127.0.0.1:6379> zrangebyscore salary +inf -inf        # min must come before max
(empty array)
127.0.0.1:6379> zrangebyscore salary -inf +inf
1) "xy"
2) "xiaohuang"
3) "zhangsan"
# show all users with their scores
127.0.0.1:6379> zrangebyscore salary -inf +inf withscores
1) "xy"
2) "500"
3) "xiaohuang"
4) "2500"
5) "zhangsan"
6) "5000"
# ascending list of employees whose salary is at most 2500
127.0.0.1:6379> zrangebyscore salary -inf 2500 withscores
1) "xy"
2) "500"
3) "xiaohuang"
4) "2500"
zrem: remove an element

127.0.0.1:6379> zrange salary 0 -1
1) "xy"
2) "xiaohuang"
3) "zhangsan"
127.0.0.1:6379> zrem salary xiaohuang    # remove the given element from the sorted set
(integer) 1
127.0.0.1:6379> zrange salary 0 -1
1) "xy"
2) "zhangsan"
zcard: get the number of elements in the sorted set

127.0.0.1:6379> zcard salary
(integer) 2
Sort from high to low (zrevrange)

127.0.0.1:6379> zrevrange salary 0 -1    # sort from high to low
1) "zhangsan"
2) "xy"
zcount: get the number of members within a score range

127.0.0.1:6379> zadd myset 1 hello
(integer) 1
127.0.0.1:6379> zadd myset 2 world 3 xy
(integer) 2
127.0.0.1:6379> zcount myset 1 3    # count members whose score is in [1,3]
(integer) 3
127.0.0.1:6379> zcount myset 1 2
(integer) 2
Case ideas:
zset for sorted storage: class grade tables, payroll rankings, leaderboards! (see the sketch below)
Ordinary message = 1, important message = 2: judgments with weights!
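A minimal leaderboard sketch (the player names are hypothetical):

127.0.0.1:6379> zadd board 100 playerA 200 playerB 150 playerC
(integer) 3
127.0.0.1:6379> zrevrange board 0 2 withscores    # top 3, highest score first
1) "playerB"
2) "200"
3) "playerC"
4) "150"
5) "playerA"
6) "100"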
6. Geospatial (geographic location)
Friends' locations, people nearby, taxi distance calculation?
Redis Geo launched in Redis 3.2! It can compute geographic information: the distance between two places, people within a few miles!
There are only six commands (geoadd, geopos, geodist, georadius, georadiusbymember, geohash)
geoadd: add a geographic position
- Rule: the Earth's two poles cannot be added directly. Usually city data is downloaded and imported in one batch through a Java program!
- Parameters: key longitude latitude member
127.0.0.1:6379> geoadd china:city: 116.40 39.90 beijing
(integer) 1
127.0.0.1:6379> geoadd china:city: 121.47 31.23 shanghai
(integer) 1
127.0.0.1:6379> geoadd china:city: 106.50 29.53 chongqing
(integer) 1
127.0.0.1:6379> geoadd china:city: 114.08 22.53 shenzhen
(integer) 1
# note: the key below was typed without the trailing colon, which explains the (nil) for hangzhou later
127.0.0.1:6379> geoadd china:city 120.16 30.24 hangzhou 108.96 34.26 xian
(integer) 2
geopos
Get the stored position — it is guaranteed to be a coordinate pair!

127.0.0.1:6379> geopos china:city: beijing    # get the longitude and latitude of the given city!
1) 1) "116.39999896287918091"
   2) "39.90000009167092543"
127.0.0.1:6379> geopos china:city: shanghai
1) 1) "121.47000163793563843"
   2) "31.22999903975783553"
127.0.0.1:6379> geopos china:city: chongqing shenzhen hangzhou
1) 1) "106.49999767541885376"
   2) "29.52999957900659211"
2) 1) "114.08000081777572632"
   2) "22.52999956292396888"
3) (nil)
The distance between two places (geodist)
- Returns the distance between the two given positions.
- If either of the two positions does not exist, the command returns nil.
- The unit parameter must be one of:
  - m: meters.
  - km: kilometers.
  - mi: miles.
  - ft: feet.
If no unit parameter is given, geodist defaults to meters.

# straight-line distance from Chongqing to Beijing
127.0.0.1:6379> geodist china:city: chongqing beijing km
"1464.0708"
# straight-line distance from Beijing to Shanghai
127.0.0.1:6379> geodist china:city: beijing shanghai km
"1067.3788"
georadius: with a given longitude/latitude as the center, find the elements within a given radius
People near me? (get the addresses and positions of all nearby people!) Query by radius!
All data should be stored under china:city: so the results are accurate!

# with longitude 110, latitude 30 as the center, find cities within a 1000 km radius
127.0.0.1:6379> georadius china:city: 110 30 1000 km
1) "chongqing"
2) "shenzhen"
127.0.0.1:6379> georadius china:city: 110 30 600 km
1) "chongqing"
127.0.0.1:6379> georadius china:city: 110 30 500 km
1) "chongqing"
# also show the distance to the center
127.0.0.1:6379> georadius china:city: 110 30 500 km withdist
1) 1) "chongqing"
   2) "341.9374"
# also show the coordinates
127.0.0.1:6379> georadius china:city: 110 30 1000 km withdist withcoord
1) 1) "chongqing"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"
2) 1) "shenzhen"
   2) "924.9425"
   3) 1) "114.08000081777572632"
      2) "22.52999956292396888"
Limit the number of results (count)

# filter the result down to the specified count
127.0.0.1:6379> georadius china:city: 110 30 1000 km withdist withcoord count 1
1) 1) "chongqing"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"
127.0.0.1:6379> georadius china:city: 110 30 1000 km withdist withcoord count 2
1) 1) "chongqing"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"
2) 1) "shenzhen"
   2) "924.9425"
   3) 1) "114.08000081777572632"
      2) "22.52999956292396888"
127.0.0.1:6379> georadius china:city: 110 30 1000 km withdist withcoord count 3
1) 1) "chongqing"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"
2) 1) "shenzhen"
   2) "924.9425"
   3) 1) "114.08000081777572632"
      2) "22.52999956292396888"
georadiusbymember
Like georadius, this command finds the elements within a given range, but the center point is determined by an existing member rather than by an explicitly entered longitude/latitude: the position of the specified member is used as the center of the query.

# find members around a given member
127.0.0.1:6379> georadiusbymember china:city: beijing 1000 km
1) "beijing"
127.0.0.1:6379> georadiusbymember china:city: shanghai 400 km
1) "shanghai"
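The sixth command is geohash, which converts a 2-D position into an 11-character geohash string; the closer two strings are, the closer the two positions. This was not in the original session; a quick sketch (the output values depend on the stored coordinates):

127.0.0.1:6379> geohash china:city: beijing chongqing
1) "wx4fbxxfke0"
2) "wm5xzrybty0"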
7,HyperLogLog
What is cardinality?
A {1,3,5,7,8,7}
B {1,3,5,7,8}
Cardinality (the number of distinct elements) = 5; a certain margin of error is acceptable!
Brief introduction
Redis 2.8.9 added the HyperLogLog data structure!
Redis HyperLogLog is an algorithm for cardinality statistics!
Advantage: the memory footprint is fixed; counting 2^64 distinct elements takes only 12 KB of memory! If memory is the yardstick, HyperLogLog is the first choice!
Page UV (one person visiting the site many times still counts as one person)
The traditional way: store user ids in a set, then use the number of elements in the set as the count!
Saving large numbers of user ids this way is troublesome (it eats memory) — our goal is to count, not to store user ids;
The standard error rate is 0.81%, negligible for UV statistics!
127.0.0.1:6379> pfadd mykey a b c d e f g    # create the first group of elements: mykey
(integer) 1
127.0.0.1:6379> pfcount mykey                # count the cardinality of mykey
(integer) 7
127.0.0.1:6379> pfadd mykey2 i j k l m n o   # create the second group of elements: mykey2
(integer) 1
127.0.0.1:6379> pfcount mykey2
(integer) 7
127.0.0.1:6379> pfmerge mykey3 mykey mykey2  # merge the two groups: mykey + mykey2 => mykey3
OK
127.0.0.1:6379> pfcount mykey3               # count the union
(integer) 14
If a margin of error is acceptable, definitely use HyperLogLog!
If not, use a set or your own data type
8,Bitmap
There are many application scenarios for this in development; learning it gives you one more idea!
Extra skills are never a burden!
As long as studying doesn't kill you, study like hell!
Bit storage
User-state statistics: active/inactive, logged in/not logged in, clocked in/not clocked in — any two-state data can use a bitmap!
Bitmaps are a bit-level data structure! Everything is recorded by operating on bits; there are only two states, 0 and 1!
Use a bitmap to record clock-ins from Monday to Sunday (a sketch follows):
- set each day's bit, e.g. Monday = offset 0, Tuesday = offset 1, ... (setbit)
- check whether there was a clock-in on a given day (getbit)
- count the days clocked in (bitcount)
In this example only two days were clocked in
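A sketch of the week's clock-ins with setbit/getbit/bitcount (offsets 0-6 stand for Monday-Sunday):

127.0.0.1:6379> setbit sign 0 1    # Monday: clocked in
(integer) 0
127.0.0.1:6379> setbit sign 1 0    # Tuesday: not clocked in
(integer) 0
127.0.0.1:6379> setbit sign 2 0
(integer) 0
127.0.0.1:6379> setbit sign 3 1    # Thursday: clocked in
(integer) 0
127.0.0.1:6379> getbit sign 3      # was there a clock-in on Thursday?
(integer) 1
127.0.0.1:6379> getbit sign 2
(integer) 0
127.0.0.1:6379> bitcount sign      # count the days clocked in
(integer) 2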
9. Transactions and watch
9.1 Transactions
Everything succeeds or everything fails at the same time: atomicity!
A single Redis command is atomic, but a Redis transaction is not atomic!
Redis transactions have no concept of isolation levels!
No command in a transaction is executed immediately! They only run when the exec command is issued!
The essence of a Redis transaction: a set of commands! All commands in a transaction are serialized and executed in order while the transaction runs
- One-shot
- Ordered
- Exclusive
- Executes a series of commands!
------ queue: set set set -> exec ------
A Redis transaction:
- Open the transaction (multi)
- Queue the commands
- Execute the transaction (exec)
Locking: Redis can implement optimistic locking (watch)
Normal transaction execution

127.0.0.1:6379> multi       # open the transaction
OK
# queue the commands
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> get k2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> exec        # execute; the transaction then ends. To run again, open a new transaction
# results of executing the transaction
1) OK
2) OK
3) "v2"
4) OK
Abandoning a transaction (discard)

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> discard     # abandon the transaction; none of the queued commands will run
OK
127.0.0.1:6379> get k4
(nil)
Compile-time error (a code problem! a wrong command): none of the commands in the transaction will be executed!

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> getset k3    # bad command
(error) ERR wrong number of arguments for 'getset' command
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> set k5 v5
QUEUED
127.0.0.1:6379> exec         # executing the transaction fails
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379> get k5       # none of the commands ran
(nil)
Runtime error (e.g. 1/0): if a queued command fails at runtime, the other commands still execute normally; only the failing command throws an error!

127.0.0.1:6379> set k1 "v1"    # store a String value
OK
127.0.0.1:6379> multi          # open the transaction
OK
127.0.0.1:6379> incr k1        # increment by 1 (will fail: the value is not an integer)
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> get k3
QUEUED
127.0.0.1:6379> exec
1) (error) ERR value is not an integer or out of range
2) OK    # the first command failed, but the others ran normally
3) OK
4) "v3"
9.2 Watch (monitoring)
Pessimistic locking:
Very pessimistic — assume something can go wrong at any moment, so lock everything!
Optimistic locking:
Very optimistic — assume nothing will go wrong, so never lock! When updating, check whether anyone modified the data in the meantime: version! (compare the version)
Redis optimistic locking:
- Get the version (no locking)
- Compare the version when updating
- Under concurrency, optimistic locks are generally used instead of pessimistic locks: a pessimistic lock locks on every single operation, so its efficiency is extremely low, while an optimistic lock performs much better.
Redis watch test

127.0.0.1:6379> set money 100
OK
127.0.0.1:6379> set out 0
OK
127.0.0.1:6379> watch money    # watch the money key
OK
# the data was not changed during the transaction, so it executes successfully
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 20
QUEUED
127.0.0.1:6379> incrby out 20
QUEUED
127.0.0.1:6379> exec
1) (integer) 80
2) (integer) 20
Multi-threaded test: another thread modifies the watched value; watch works as Redis's optimistic lock
Thread 1:

127.0.0.1:6379> watch money    # watch the key
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 10
QUEUED
127.0.0.1:6379> incrby out 10
QUEUED
# before exec runs, another thread modifies the value, so the transaction fails
127.0.0.1:6379> exec
(nil)

Thread 2:

127.0.0.1:6379> get money
(nil)
127.0.0.1:6379> get money
(nil)
127.0.0.1:6379> set money 1000
OK
If the transaction fails, just unwatch and watch again to pick up the latest value, then retry!
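A retry sketch, continuing the failed session above:

127.0.0.1:6379> unwatch        # release the old watch first
OK
127.0.0.1:6379> watch money    # watch again to pick up the latest value
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 10
QUEUED
127.0.0.1:6379> incrby out 10
QUEUED
127.0.0.1:6379> exec           # succeeds as long as money was not changed in the meantime
1) (integer) 990
2) (integer) 30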
10,Jedis
What is jedis?
Jedis is the Java client officially recommended by Redis! A middleware for operating Redis from Java! If you want to use Java to operate Redis, you must be thoroughly familiar with Jedis!
1. Import dependency
<dependencies>
    <!-- jedis dependency -->
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>3.6.0</version>
    </dependency>
    <!-- fastjson -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.62</version>
    </dependency>
</dependencies>
2. Coding test
Connect to database
public class TestPing {
    public static void main(String[] args) {
        // 1. just new a Jedis object
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        System.out.println(jedis.ping());
    }
}
output
PONG
10.1 Operating Redis through Jedis
- ping() — test the connection
- exists("name") — check whether a key exists
- set("password","xy") — set a value for a key
- keys("*") — view all keys
- del("password") — delete a key
- type("username") — view the data type stored at a key
- rename("username","name") — rename a key
- select(0) — switch databases by index
- flushDB() — empty the current database
- dbSize() — return the number of keys in the database
- flushAll() — empty the keys of all databases
Jedis jedis = new Jedis("127.0.0.1", 6379);
System.out.println(jedis.ping());
System.out.println("clear database "+jedis.flushDB());
System.out.println("Judge a key Is there:"+jedis.exists("name"));
System.out.println("newly added username,xy Key value pair:"+jedis.set("username","xy"));
System.out.println("newly added password,123 Key value pair:"+jedis.set("password","123"));
System.out.println("All in the system key As follows:");
Set<String> keys = jedis.keys("*");
System.out.println(keys);
System.out.println("delete key password: "+jedis.del("password"));
System.out.println("judge key password Is there:"+jedis.exists("password"));
System.out.println("see key username Type stored:"+jedis.type("username"));
System.out.println("rename key "+jedis.rename("username","name"));
System.out.println("Take out the modified name: "+jedis.get("name"));
System.out.println("Query by index:"+jedis.select(0));
System.out.println("Delete all in the current database key : "+jedis.flushDB());
System.out.println("Returns all data in the database key number:"+jedis.dbSize());
System.out.println("Delete all databases key: "+jedis.flushAll());
Output results
PONG
clear database OK
Judge a key Is there: false
newly added username,xy Key value pair: OK
newly added password,123 Key value pair: OK
All in the system key As follows:
[password, username]
delete key password: 1
judge key password Is there: false
see key username Type stored: string
rename key OK
Take out the modified name: xy
Query by index: OK
Delete all in the current database key : OK
Returns all data in the database key Number: 0
Delete all databases key: OK
10.2 Jedis: operating the String type
Jedis jedis = new Jedis("127.0.0.1", 6379);
// empty the current database
jedis.flushDB();
jedis.set("key1","val1");
jedis.set("key2","val2");
jedis.set("key3","val3");
System.out.println("delete key2: "+jedis.del("key2"));
System.out.println("obtain key2: "+jedis.get("key2"));
System.out.println("modify key1: "+jedis.set("key1","value1Changed"));
System.out.println("stay key3 Add data later:"+jedis.append("key3","Emd"));
System.out.println("key3 Value of:"+jedis.get("key3"));
System.out.println("Add multiple key value pairs:"+jedis.mset("key01","value01","key02","value02","key03","value03","key04","value04"));
System.out.println("Get multiple key value pairs:"+jedis.mget("key01","key02","key03"));
System.out.println("Get multiple key value pairs:"+jedis.mget("key01","key02","key03","key04"));
System.out.println("Delete multiple key value pairs:"+jedis.del("key01","key02"));
System.out.println("Get multiple key value pairs:"+jedis.mget("key01","key02"));
jedis.flushDB();
System.out.println("==========Add a key value pair and set the effective time================");
System.out.println(jedis.setnx("key1","value1"));
System.out.println(jedis.setnx("key2","value2"));
System.out.println(jedis.setnx("key2","value2-new"));
System.out.println(jedis.get("key1"));
System.out.println(jedis.get("key2"));
System.out.println("==========Set effective time================");
System.out.println(jedis.setex("key3",2,"value3"));
System.out.println(jedis.get("key3"));
try {
    TimeUnit.SECONDS.sleep(3);
} catch (Exception e) {
    e.printStackTrace();
}
System.out.println(jedis.get("key3"));
System.out.println("==========Get the original value and update it to the new value================");
System.out.println(jedis.getSet("key2","keyGetSet"));
System.out.println(jedis.get("key2"));
System.out.println("get key2 String of values for:"+jedis.getrange("key2",2,4));
result:
delete key2: 1
obtain key2: null
modify key1: OK
stay key3 Add data later: 7
key3 Value of: val3Emd
Add multiple key value pairs: OK
Get multiple key value pairs:[value01, value02, value03]
Get multiple key value pairs:[value01, value02, value03, value04]
Delete multiple key value pairs: 2
Get multiple key value pairs:[null, null]
==========Add a key value pair and set the effective time================
1
1
0
value1
value2
==========Set effective time================
OK
value3
null
==========Get the original value and update it to the new value================
value2
keyGetSet
get key2 String of values for: yGe
10.3 Jedis: operating the List type
Jedis jedis = new Jedis("127.0.0.1", 6379);
jedis.flushDB();
System.out.println("Add a List");
jedis.lpush("collections","ArrayList","LinkedList","Vector","Stack","Map","HashMap");
jedis.lpush("collections","Set");
jedis.lpush("collections","HashSet");
jedis.lpush("collections","TreeMap");
System.out.println("Collection Collection content:"+jedis.lrange("collections",0,-1)); // -1 means the last element
System.out.println("Collection Interval 0-3 Element of"+jedis.lrange("collections",0,3));
System.out.println("==============================");
// delete the value specified in the list; the second parameter is how many to delete (when there are duplicates).
// values added later are deleted first, like popping a stack
System.out.println("Delete the specified number of elements:"+jedis.lrem("collections",2,"HashMap"));
System.out.println("collections Content:"+jedis.lrange("collections",0,-1));
System.out.println("Delete subscript 0-3 Elements outside the interval:"+jedis.ltrim("collections",0,3));
System.out.println("collections Content:"+jedis.lrange("collections",0,-1));
System.out.println("collections List out of stack (left end):"+jedis.lpop("collections"));
System.out.println("collections Content:"+jedis.lrange("collections",0,-1));
System.out.println("collections Add elements from the right end of the list, and lpush corresponding"+jedis.rpush("collections","Java"));
System.out.println("collections Content:"+jedis.lrange("collections",0,-1));
System.out.println("modify collections Specify the contents of subscript 1:"+jedis.lset("collections",1,"newValue"));
System.out.println("collections Content:"+jedis.lrange("collections",0,-1));
System.out.println("=================================");
System.out.println("collections length"+jedis.llen("collections"));
System.out.println("obtain collections Element at subscript 2 "+jedis.lindex("collections",2));
System.out.println("=================================");
System.out.println(jedis.lpush("sortedList","3","6","2","0","7","4"));
System.out.println("sortedList Before sorting:"+jedis.lrange("sortedList",0,-1));
// sort
List<String> sortedList = jedis.sort("sortedList");
System.out.println("sortedList After sorting:"+sortedList);
result
Add a List
Collection Collection content:[TreeMap, HashSet, Set, HashMap, Map, Stack, Vector, LinkedList, ArrayList]
Collection Interval 0-3 Element of[TreeMap, HashSet, Set, HashMap]
==============================
Delete the specified number of elements: 1
collections Content:[TreeMap, HashSet, Set, Map, Stack, Vector, LinkedList, ArrayList]
Delete subscript 0-3 Elements outside the interval: OK
collections Content:[TreeMap, HashSet, Set, Map]
collections List out of stack (left end): TreeMap
collections Content:[HashSet, Set, Map]
collections Add elements from the right end of the list, and lpush Corresponding 4
collections Content:[HashSet, Set, Map, Java]
modify collections Specify the contents of subscript 1: OK
collections Content:[HashSet, newValue, Map, Java]
=================================
collections Length 4
obtain collections Element at subscript 2 Map
=================================
6
sortedList Before sorting:[4, 7, 0, 2, 6, 3]
sortedList After sorting:[0, 2, 3, 4, 6, 7]
10.4 Jedis: operating the Set type
Jedis jedis = new Jedis("127.0.0.1", 6379);
jedis.flushDB();
System.out.println("=================Add elements to the collection (no repetition)========================");
System.out.println(jedis.sadd("eleSet","e1","e2","e3","e4","e6","e5","e0","e8","e7"));
System.out.println(jedis.sadd("eleSet","e6"));
System.out.println(jedis.sadd("eleSet","e6"));
System.out.println("eleSet All elements of are:"+jedis.smembers("eleSet"));
System.out.println("Delete an element e0:"+jedis.srem("eleSet","e0"));
System.out.println("eleSet All elements of are:"+jedis.smembers("eleSet"));
System.out.println("Delete two elements e7,e6: "+jedis.srem("eleSet","e7","e6"));
System.out.println("eleSet All elements of are:"+jedis.smembers("eleSet"));
System.out.println("Randomly remove an element from the collection:"+jedis.spop("eleSet"));
System.out.println("Randomly remove an element from the collection:"+jedis.spop("eleSet"));
System.out.println("eleSet All elements of are:"+jedis.smembers("eleSet"));
System.out.println("eleSet The number of all elements of is:"+jedis.scard("eleSet"));
System.out.println("e3 Is it eleSet Medium:"+jedis.sismember("eleSet","e3"));
System.out.println("e1 Is it eleSet Medium:"+jedis.sismember("eleSet","e1"));
System.out.println("e5 Is it eleSet Medium:"+jedis.sismember("eleSet","e5"));
System.out.println("=============================================================");
System.out.println(jedis.sadd("eleSet1","e1","e2","e3","e4","e5","e8","e7"));
System.out.println(jedis.sadd("eleSet2","e1","e2","e3","e4","e8"));
System.out.println("take eleSet1 Delete in e1 And deposit eleSet3 Medium:"+jedis.smove("eleSet1","eleSet3","e1"));
System.out.println("take eleSet1 Delete in e2 And deposit eleSet3 Medium:"+jedis.smove("eleSet1","eleSet3","e2"));
System.out.println("eleSet1 Elements in:"+jedis.smembers("eleSet1"));
System.out.println("eleSet3 Elements in:"+jedis.smembers("eleSet3"));
System.out.println("===========================Set operation==================================");
System.out.println("eleSet1 Elements in:"+jedis.smembers("eleSet1"));
System.out.println("eleSet2 Elements in:"+jedis.smembers("eleSet2"));
System.out.println("eleSet1 and eleSet2 Intersection of:"+jedis.sinter("eleSet1","eleSet2"));
System.out.println("eleSet1 and eleSet2 Union of:"+jedis.sunion("eleSet1","eleSet2"));
System.out.println("eleSet1 and eleSet2 Difference set of:"+jedis.sdiff("eleSet1","eleSet2"));
// compute the intersection and save it into the dstkey set
jedis.sinterstore("eleSet4","eleSet1","eleSet2");
System.out.println("eleSet4 Elements:"+jedis.smembers("eleSet4"));
result
=================Add elements to the collection (no repetition)========================
9
0
0
eleSet All elements of are:[e1, e2, e4, e6, e0, e5, e7, e3, e8]
Delete an element e0: 1
eleSet All elements of are:[e6, e5, e4, e1, e7, e3, e8, e2]
Delete two elements e7,e6: 2
eleSet All elements of are:[e4, e1, e5, e3, e8, e2]
Randomly remove an element from the collection: e3
Randomly remove an element from the collection: e4
eleSet All elements of are:[e1, e5, e8, e2]
eleSet The number of all elements of: 4
e3 Is it eleSet Medium: false
e1 Is it eleSet Medium: true
e5 Is it eleSet Medium: true
=============================================================
7
5
take eleSet1 Delete in e1 And deposit eleSet3 Medium: 1
take eleSet1 Delete in e2 And deposit eleSet3 Medium: 1
eleSet1 Elements in:[e3, e8, e4, e7, e5]
eleSet3 Elements in:[e2, e1]
===========================Set operation==================================
eleSet1 Elements in:[e3, e8, e4, e7, e5]
eleSet2 Elements in:[e4, e3, e2, e1, e8]
eleSet1 and eleSet2 Intersection of:[e4, e3, e8]
eleSet1 and eleSet2 Union of:[e3, e8, e1, e2, e4, e7, e5]
eleSet1 and eleSet2 Difference set of:[e5, e7]
eleSet4 Elements:[e4, e3, e8]
10.5 Jedis: operating the Hash type
Jedis jedis = new Jedis("127.0.0.1", 6379);
jedis.flushDB();
Map<String,String> hash = new HashMap<String, String>();
hash.put("k1","v1");
hash.put("k2","v2");
hash.put("k3","v3");
hash.put("k4","v4");
// add the map into a hash named "hash" (the key)
jedis.hmset("hash",hash);
// add field k5 with value v5 into the hash named "hash"
jedis.hset("hash","k5","v5");
System.out.println("hash hash All key value pairs are:"+jedis.hgetAll("hash"));
System.out.println("hash hash All keys for:"+jedis.hkeys("hash"));
System.out.println("hash hash All values for:"+jedis.hvals("hash"));
result
hash hash All key value pairs are:{k3=v3, k4=v4, k5=v5, k1=v1, k2=v2}
hash hash All keys for:[k3, k4, k5, k1, k2]
hash hash All values for:[v3, v2, v1, v4, v5]
10.6 Jedis: transactions
public static void main(String[] args) {
    Jedis jedis = new Jedis("127.0.0.1", 6379);
    jedis.flushDB();
    // open the transaction
    Transaction transaction = jedis.multi();
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("hello", "world");
    jsonObject.put("name", "xy");
    String result = jsonObject.toJSONString();
    try {
        transaction.set("user1", result);
        transaction.set("user2", result);
        // simulate an exception
        int i = 1 / 0;
        transaction.exec();
    } catch (Exception e) {
        // abandon the transaction
        transaction.discard();
        e.printStackTrace();
    } finally {
        System.out.println(jedis.get("user1"));
        System.out.println(jedis.get("user2"));
        jedis.close(); // close the connection
    }
}
result:
java.lang.ArithmeticException: / by zero
	at cn.bloghut.TestTx.main(TestTx.java:30)
null
null
11. SpringBoot integration with Redis
11.1 How SpringBoot integrates Redis (at the source level)
SpringBoot operates databases through Spring Data: JPA, JDBC, MongoDB, Redis
Spring Data is a project as famous as SpringBoot itself
Since SpringBoot 2.x, the original jedis has been replaced by lettuce
jedis:
uses direct connections underneath. Unsafe when multiple threads operate on it; to avoid that, use a jedis pool connection pool! More like a BIO pattern
lettuce:
built on netty underneath; instances can be shared across multiple threads, so there are no thread-safety problems! It reduces the number of threads needed. More like an NIO pattern
spring.factories
RedisTemplate template
- @ConditionalOnMissingBean modifies a bean: once a bean of this type has been registered, registering another of the same type will not succeed. It guarantees there is only one such bean — one instance — and registering several identical beans raises an exception to alert the developer.
- Put simply: the annotated method takes effect only if the bean does not already exist!
- So if we define our own redisTemplate, the default one becomes inactive
@Bean
@ConditionalOnMissingBean
@ConditionalOnSingleCandidate(RedisConnectionFactory.class)
public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) {
    ...
}
Since String is the most commonly used type in Redis, a separate bean (StringRedisTemplate) is provided just for it
11.2 SpringBoot integration with Redis
1. Import dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
2. Configure connections
# All SpringBoot configuration classes have an auto-configuration class
# The auto-configuration class binds a properties configuration file
spring:
  redis:
    host: 127.0.0.1
    port: 6379
3. Test
@Autowired
private RedisTemplate redisTemplate;

@Test
void contextLoads() {
    // operate on the different data types — the five data types:
    // opsForValue()  operates String
    // opsForList()   operates List
    // opsForSet()    operates Set
    // opsForHash()   operates Hash
    // opsForZSet()   operates Zset
    // besides these basics, common methods can be called directly,
    // e.g. transactions and basic CRUD

    // get the low-level redis connection object
    // RedisConnection connection = redisTemplate.getConnectionFactory().getConnection();
    // connection.flushDb();
    // connection.flushAll();

    redisTemplate.opsForValue().set("mykey","csdn_xy");
    Object mykey = redisTemplate.opsForValue().get("mykey");
    System.out.println(mykey);
}
All these Redis operations are actually very simple for Java developers. What matters more is understanding the ideas behind Redis and the use cases of each data structure!
11.3 customize RedisTemplate
1. RedisTemplate uses JdkSerializationRedisSerializer by default
2. RedisTemplate wraps the common Redis operations
3. Jedis is the Java-facing Redis client officially recommended by Redis; RedisTemplate is Spring Data Redis's high-level wrapper around the Jedis API.
@Configuration
public class RedisConfig {

    /**
     * A RedisTemplate we define ourselves
     * @param factory
     * @return
     */
    @Bean
    @SuppressWarnings("all")
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        // For convenience in development, we generally use <String, Object> directly
        RedisTemplate<String, Object> template = new RedisTemplate<String, Object>();
        template.setConnectionFactory(factory);

        // Json serialization configuration
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jackson2JsonRedisSerializer.setObjectMapper(om);

        // String serialization
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();

        // keys are serialized as String
        template.setKeySerializer(stringRedisSerializer);
        // hash keys are also serialized as String
        template.setHashKeySerializer(stringRedisSerializer);
        // values are serialized with jackson
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // hash values are serialized with jackson
        template.setHashValueSerializer(jackson2JsonRedisSerializer);
        template.afterPropertiesSet();

        return template;
    }
}
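With this config in place, the customized template is injected instead of the default one. A minimal usage sketch (the User POJO is a hypothetical example, not from the original code):

@Autowired
@Qualifier("redisTemplate")
private RedisTemplate<String, Object> redisTemplate;

@Test
void testObject() {
    // the POJO is serialized to JSON by the jackson value serializer,
    // so it does not need to implement Serializable
    User user = new User("xy", 3); // hypothetical POJO
    redisTemplate.opsForValue().set("user", user);
    System.out.println(redisTemplate.opsForValue().get("user"));
}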
12. Redis.conf configuration file explained
1. Redis is started through this configuration file!
2. If you are an expert, you will know it
3. The moment an expert makes a move, you know whether they've got the goods
4. Insiders watch the craft; outsiders watch the excitement
5. Small configuration details like these can make you stand out at work
Units: units are case insensitive (1k, 1kb, 1m, ...)
Includes: other configuration files can be included,
like import in Spring
Network

bind 127.0.0.1 -::1    # bound IP
protected-mode yes     # protected mode
port 6379              # port number
General configuration
- daemonize: whether to run as a daemon; the default is no, set it to yes to run in the background
- pidfile: when running in background mode, a pid (process) file must be specified
- Logging: loglevel and logfile
- databases 16: 16 databases by default
- always-show-logo: whether to display the logo
Snapshotting

Persistence: if the configured number of write operations happens within the given time window, the data is persisted to an .rdb (or .aof) file.

- redis is an in-memory database: without persistence, the data is lost as soon as the power goes off
- save 900 1: if at least 1 key is modified within 900 seconds, we persist
- save 300 10: if at least 10 keys are modified within 300 seconds, we persist
- save 60 10000: if at least 10000 keys are modified within 60 seconds, we persist
stop-writes-on-bgsave-error: whether to keep accepting writes when persistence fails
rdbcompression: whether to compress rdb files (the rdb persistence files); compression costs some CPU!
rdbchecksum: check and verify the rdb file when it is saved
dir: the directory where rdb files are saved
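Put together, the snapshot section looks like this (values mirror the stock redis.conf defaults):

```
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
```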
Replication: master-slave replication settings (covered in section 15)
replica-serve-stale-data: whether the replica keeps serving (possibly stale) data while disconnected from the master
replica-read-only: whether the replica is read-only (the default is yes)
Security

A password can be set; there is no password by default.
Without authentication you have no permission: you must log in first.
Set the password with requirepass, either in the config file or at runtime with config set.
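A quick sketch of setting and using a password from redis-cli (the password value is just an example):

```
127.0.0.1:6379> config get requirepass      # empty by default: no password
1) "requirepass"
2) ""
127.0.0.1:6379> config set requirepass "123456"
OK
127.0.0.1:6379> auth 123456                 # log in before running further commands
OK
```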
Client restrictions
maxclients 10000: sets the maximum number of clients that can connect to redis
- maxmemory <bytes>: the maximum amount of memory redis may use
- maxmemory-policy: the strategy applied when memory reaches the limit, for example removing some expired keys or reporting an error
1. volatile-lru: apply LRU only to keys with an expire set
2. allkeys-lru: evict any key using the LRU algorithm
3. volatile-random: randomly evict keys with an expire set
4. allkeys-random: randomly evict any key
5. volatile-ttl: evict the key closest to expiring
6. noeviction: never evict; return an error instead (the default policy)
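A sketch of the corresponding directives as they appear (commented out) in a stock redis.conf:

```
# maxclients 10000             # maximum number of connected clients
# maxmemory <bytes>            # memory usage limit
# maxmemory-policy noeviction  # strategy when the limit is reached
```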
AOF configuration
aof mode is not enabled by default; redis persists with rdb out of the box, and in most cases rdb is entirely sufficient!
appendfilename "appendonly.aof": the name of the AOF persistence file
1. appendfsync everysec: sync once per second; you may lose that second's data
2. appendfsync always: sync on every write, which is safe but slow
3. appendfsync no: never sync; the operating system flushes data on its own, which is fastest!
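A minimal sketch of the AOF section with its default values:

```
appendonly no                      # AOF is off by default; rdb is used instead
appendfilename "appendonly.aof"    # name of the AOF persistence file
appendfsync everysec               # sync policy: always / everysec / no
```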
Performance recommendation: in most situations, rdb is still preferred
13. Redis persistence
Redis persistence – RDB
In interviews and at work, persistence is a key topic.
Redis is an in memory database. If the database state in memory is not saved to disk, the database state in the server will disappear once the server process exits (power failure). So Redis provides persistence.
In master-slave replication, rdb is used for standby
RDB writes the in-memory data-set snapshot to disk within the specified time interval; the jargon for this is a Snapshot. Recovery simply reads the snapshot file straight back into memory.
Redis forks a separate child process for persistence. The child first writes the data to a temporary file; when the persistence pass completes, that temporary file replaces the previous persisted file. The main process performs no IO at all during the whole procedure, which guarantees very high performance. If large-scale data recovery is needed and perfect data integrity is not critical, RDB is more efficient than AOF. RDB's drawback is that the data written after the last snapshot may be lost. RDB is the default, and normally this configuration needs no changes.
the file saved by RDB is dump.rdb
it is all configured in the SNAPSHOTTING section of our configuration file
Test:
1. Set a save rule, then add 5 keys within one minute
2. dump.rdb is generated
3. Shut down redis and check the process list: it is gone
4. Start redis again, check the process list, and connect
5. get k1: its value v1 is still there
Emptying all databases with the flushall command will also automatically generate an rdb file
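A sketch of that round trip from redis-cli (paths and key names are illustrative):

```
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> save                 # force a snapshot: dump.rdb is written
OK
127.0.0.1:6379> shutdown
not connected> exit
[root@localhost bin]# ./redis-server redis.conf    # start redis again
[root@localhost bin]# ./redis-cli
127.0.0.1:6379> get k1               # restored from dump.rdb on startup
"v1"
```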
Trigger mechanism
- If the save rule is satisfied, the rdb rule will be triggered automatically
- Executing the flushall command will also trigger the rdb rules
- Exiting redis will also generate rdb files!
The backup automatically generates a dump.rdb file
How to recover rdb files
- You only need to put the rdb file in the directory Redis starts from; on startup, Redis automatically checks for dump.rdb and recovers the data in it.
- Check the location where the file needs to sit:
- If the dump.rdb file is stored in the /home/wrz/redis-6.2.3/src directory, its data is recovered automatically after startup.
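The storage location can be checked from redis-cli (the directory shown is this machine's example):

```
127.0.0.1:6379> config get dir
1) "dir"
2) "/home/wrz/redis-6.2.3/src"   # a dump.rdb in this directory is loaded on startup
```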
The default configuration is almost always enough!
Advantages
- Suitable for large-scale data recovery!
- Fine when the data-integrity requirements are not strict!
Disadvantages
- Snapshots happen at intervals! If redis dies unexpectedly, the modifications made after the last snapshot are lost!
- Forking the child process takes up some extra memory!
Redis persistence – AOF
AOF records every command we run, like a history; on recovery, the whole file is executed again
each write operation is recorded in the form of a log. All instructions executed by redis are recorded (read operations are not recorded). Only files are allowed to be added, but files cannot be overwritten. Redis will read the file and rebuild the data at the beginning of startup. In other words, if redis restarts, the write instructions will be executed from front to back according to the contents of the log file to complete the data recovery
AOF saves the appendonly.aof file
It is not enabled by default and must be turned on manually: set appendonly to yes
Test AOF:
1. Modify the configuration file (appendonly yes)
2. Restart redis
3. Check the aof file
4. Add some data
5. View the appendonly.aof file: the write commands have been recorded
if the aof file is corrupted, redis will refuse to start; the aof file has to be repaired first
redis provides a repair tool for this:
redis-check-aof --fix
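A sketch of running the repair (the file path depends on your install):

```
[root@localhost bin]# redis-check-aof --fix appendonly.aof
```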
The fix works by discarding the erroneous entries: everything except the broken data is kept.
- appendfsync everysec: sync once per second; you may lose that second's data
- appendfsync always: sync on every write, which is slow
- appendfsync no: never sync; the operating system flushes data on its own, which is fastest!
Advantages
- With always, every modification is synced and file integrity is best!
- With everysec, one sync per second; at most one second of data may be lost
- With no, there is never a sync: most efficient!
Disadvantages
- In terms of data files, aof is far larger than rdb, and recovery is also slower than rdb!
- aof also runs slower than rdb, which is why rdb persistence is redis's default configuration.
Rewrite rule description
- By default the file just keeps growing without limit, getting larger and larger
- If the aof file exceeds 64mb it is considered too large: redis forks a new process to rewrite the file!
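The thresholds are controlled by two directives; a sketch with the stock defaults:

```
auto-aof-rewrite-percentage 100   # rewrite when the file doubles in size
auto-aof-rewrite-min-size 64mb    # but never below this base size
```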
Redis persistence summary
- RDB persistence can snapshot and store your data within a specified time interval
- Aof persistence records every write operation to the server. When the server restarts, these commands will be re executed to recover the original data. The AOF command additionally saves each write operation to the end of the file with redis protocol. Redis can also rewrite the AOF file in the background, so that the volume of the AOF file will not be too large.
- Only cache. If you only want your data to exist when the server is running, you can also not use any persistence
- Enable two persistence methods at the same time
in this case, when redis restarts, AOF files will be loaded first to recover the original data, because normally, the data set saved in AOF files is more complete than that saved in RDB files.
RDB data is not real-time, and when both are used the server only looks for the AOF file on restart. Should AOF be used alone, then? The author suggests not: RDB is better suited to backing up the database (the AOF file changes constantly and is hard to back up), restarts are faster, and it avoids potential AOF bugs, so it is kept as a fallback just in case.

Performance recommendations
because RDB files are only used for backup purposes, it is recommended to persist RDB only on the Slave, and backing up once every 15 minutes is enough; keep only the save 900 1 rule.
if you enable AOF, the benefit is that in the worst case you lose less than two seconds of data, and the startup script is simpler because it only has to load its own AOF file. The cost is continuous IO, plus the AOF rewrite, at the end of which the new data produced during rewriting is written to the new file: that is almost unavoidable. As long as the disks allow, minimize the frequency of AOF rewriting: the default 64mb base size for rewriting is too small and can be raised to 5GB or more, and the default trigger of exceeding 100% of the original size can likewise be tuned to a suitable value.
if AOF is not enabled, high availability can be achieved with master-slave replication alone, saving a lot of IO and avoiding the system jitter that rewriting brings. The price is that if Master and Slave both go down at the same time, ten-odd minutes of data are lost, and the startup script has to compare the RDB files of the two and load the newer one. This is Weibo's architecture.
14. Redis publish and subscribe
Message communication
a queue involves two parties: the sender and the subscriber
Redis publish/subscribe (pub/sub) is a message communication mode:
1. The sender (pub) sends messages
2. The subscriber (sub) receives messages
Typical examples: WeChat and Weibo follow systems, and message queues (MQ)
Redis client can subscribe to any number of channels
first: message sender
second: Channel
third: Message subscriber!
The relationship between channel 1 and the three clients subscribing to this channel - client2, client5 and client1
When a new message is sent to channel 1 through the publish command, the message will be sent to the three clients subscribing to it:
command
Rookie tutorial: https://www.runoob.com/redis/redis-pub-sub.html
test
Subscriber
```
127.0.0.1:6379> subscribe xy        # subscribe to a channel
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "xy"
3) (integer) 1
1) "message"        # a message arrives
2) "xy"             # on which channel
3) "123"            # the message content
1) "message"
2) "xy"
3) "xy"
1) "message"
2) "xy"
3) "helloredis"
```
Sender
```
127.0.0.1:6379> publish xy 123          # the publisher posts a message to the channel
(integer) 1
127.0.0.1:6379> publish xy xy           # the publisher posts a message to the channel
(integer) 1
127.0.0.1:6379> publish xy helloredis   # the publisher posts a message to the channel
(integer) 1
```
Redis is implemented in C; by studying the pubsub.c file in the Redis source code we can understand the underlying implementation of the publish/subscribe mechanism and deepen our understanding of Redis.
Redis implements PUBLISH and SUBSCRIBE functions through PUBLISH, SUBSCRIBE, PSUBSCRIBE and other commands.
Take WeChat subscriptions as an analogy:
after a channel is subscribed to with the SUBSCRIBE command, redis-server maintains a dictionary whose keys are the channels and whose values are linked lists holding every client subscribed to that channel. The key action of the SUBSCRIBE command is appending the client to the subscription list of the given channel.
Usage scenarios
1. Real-time message systems!
2. Real-time chat! (use the channel as a chat room and echo the messages back to everyone)
3. Subscription and follow systems
For slightly more complex scenarios, use real message middleware (MQ) instead
15. Redis cluster
15.1 redis Cluster - master-slave replication
For example: 1 master - 2 servants
concept
master-slave replication means copying data from one Redis server to other Redis servers. The former is called the master/leader, the latter the slave/follower. Replication is one-way: data can only flow from master node to slave node. The master mainly writes, the slaves mainly read.
by default, every Redis server is a master node; one master can have multiple slave nodes (or none), but a slave node can belong to only one master node.
The main functions of master-slave replication include:
- Data redundancy: master-slave replication realizes the hot backup of data, which is a data redundancy method other than persistence.
- Fault recovery: when the master node has problems, the slave node can provide services to achieve rapid fault recovery; It is actually a kind of service redundancy.
- Load balancing: on the basis of master-slave replication, combined with read-write separation, the master node can provide write services, and the slave node can provide read services (that is, the application connects to the master node when writing Redis data, and the application connects to the slave node when reading Redis data), sharing the server load; Especially in the scenario of less writing and more reading, the concurrency of Redis server can be greatly improved by sharing the read load among multiple slave nodes.
- High availability (cluster) cornerstone: in addition to the above functions, master-slave replication is also the basis for sentinel and cluster implementation. Therefore, master-slave replication is the basis for Redis high availability.
Generally speaking, to apply Redis to engineering projects, it is absolutely impossible to use only one Redis (downtime, 1 master and 2 slave). The reasons are as follows:
structurally, a single Redis server will have a single point of failure, and one server needs to handle all request loads, which is under great pressure;
in terms of capacity, the memory of a single Redis server is limited. Even if one server has 256G of memory, not all of it can be used as Redis storage; generally a single Redis instance should use no more than 20G. Goods on an e-commerce site are uploaded once and browsed countless times: in other words, "read more, write less".
Master-slave replication with read/write separation! 80% of operations in practice are reads! It relieves the pressure on a single server!
Often used in architecture! One master and two slaves! (minimum configuration)
Master-slave replication must be used in the company, because Redis cannot be used on a single machine in a real project!
Environment configuration
Only the slave libraries need configuring; the master library needs none!
```
[root@localhost bin]# ./redis-cli
127.0.0.1:6379> info replication     # view replication info for the current library
# Replication
role:master                          # role: master
connected_slaves:0                   # no slaves yet
master_replid:76cfb7376506413b4d6dd71f6da24afc8a61dedc
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
```
Master: 6379; slaves: 6380 and 6381
Copy the configuration file once per instance and change, in each copy:
1. port
2. pidfile name
3. logfile name
4. dbfilename (the dump.rdb name)
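A sketch of the edits for the 6380 copy (the file names here are illustrative):

```
# redis6380.conf
port 6380
pidfile /var/run/redis_6380.pid
logfile "6380.log"
dbfilename dump6380.rdb
```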
start-up
By default, all three instances are masters
By default every Redis server is a master node; generally we only configure the slaves to "recognize the boss"!
Configure one master and two slaves
Make 6380 (this instance) a slave: using slaveof is "finding someone to be your boss"
Make 6381 a slave in the same way (see the sketch below)
Then check the master's replication info again: connected_slaves now shows the slaves
Real master-slave setups should be configured in the configuration file (the replicaof directive), which is permanent; the command used here is temporary!
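A sketch of the temporary, command-line way, run on each slave:

```
127.0.0.1:6380> slaveof 127.0.0.1 6379    # make 6380 a replica of 6379
OK
127.0.0.1:6380> info replication
# Replication
role:slave                                # this node is now a slave
master_host:127.0.0.1
master_port:6379
...
```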
details
1. The master can write; the slave cannot write and can only read!
2. All information and data on the master are automatically saved by the slave!
Trying to write on a slave fails: slaves are read-only
Test: when the host is disconnected, the slave is still connected to the host, but there is no write operation. At this time, if the host returns, the slave can still directly obtain the information written by the host!
If master and slave were configured on the command line, a restarted node reverts to being a master! As soon as it becomes a slave again, it immediately fetches the values from the master!
Replication principle
After Slave is successfully started and connected to the master, it will send a sync synchronization command
After receiving the command, the master starts the background save process and collects all received commands for modifying the dataset. After the background process is executed, the master will transfer the entire data file to the slave and complete a complete synchronization.
Full copy: after receiving the database file data, the slave service saves it and loads it into memory.
Incremental replication: the Master continues to transmit all new collected modification commands to the slave in turn to complete the synchronization
And whenever a slave reconnects to the master, a full synchronization (full replication) is performed automatically! So our data will always be visible on the slave!
The second model: a layer-by-layer chain
this also completes master-slave replication: 6380 is the slave node of 6379 and at the same time the "master" of 6381, yet 6380 itself still cannot accept writes.
If there is no boss (the master node hangs), can a new boss be chosen at this point? Before sentinel mode existed, it had to be configured manually.
Seeking to usurp the throne
if the master is down, a slave can run slaveof no one to make itself the master! The other nodes can then be manually pointed at this newest master node (all by hand)
If the old boss is fixed afterwards, everything has to be reconfigured
it is like usurping the throne: once the emperor has stepped down, he cannot be the boss again just by coming back
Below, 6380 re-recognizes 6379 as the boss
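Sketches of the two manual steps just described, run on 6380:

```
127.0.0.1:6380> slaveof no one            # promote this node to master
OK
127.0.0.1:6380> slaveof 127.0.0.1 6379    # later, re-recognize 6379 as the boss
OK
```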
Sentinel mode
(automatic election of boss)
15.2 redis Cluster - sentinel mode
Concept:
the master-slave switching technique works like this: when the master server goes down, a slave server has to be manually switched over to become the master, which requires human intervention, costs time and effort, and leaves the service unavailable for a while. This is not a recommended approach; more often we prefer sentinel mode. Since version 2.8, Redis has officially provided the Sentinel architecture to solve this problem.
it is the automatic version of "seeking to usurp the throne": sentinels monitor in the background whether the master has failed, and if it has, a slave is automatically promoted to master according to the number of votes.
sentinel mode is a special mode. Redis provides the sentinel commands, and the sentinel is an independent process that runs on its own. The principle: the sentinel sends commands and waits for the Redis servers to respond, thereby monitoring multiple running Redis instances.
The sentry here has two functions
- Send a command to let Redis server return to monitor its running status, including master server and slave server.
- When the sentinel detects that the master is down, it will automatically switch the slave to the master, and then notify other slave servers through publish subscribe mode to modify the configuration file and let them switch hosts.
however, there may be problems when a sentinel process monitors the Redis server. Therefore, we can use multiple sentinels for monitoring. Each sentinel will also be monitored, which forms a multi sentinel mode.
assuming that the main server is down, sentry 1 detects this result first, and the system will not immediately fail over. Only sentry 1 subjectively thinks that the main server is unavailable, which becomes a subjective offline phenomenon. When the following sentinels also detect that the primary server is unavailable and the number reaches a certain value, a vote will be held between sentinels. The voting result is initiated by one sentinel to perform the "failover" operation. After the switch is successful, each sentinel will switch its monitored slave server to the host through the publish and subscribe mode. This process is called objective offline.
Test:
configure one master and two slaves!
1. Create the sentinel configuration file
The file name must be exact: sentinel.conf
```
# sentinel monitor <monitored-name> <host> <port> <quorum>
sentinel monitor myredis 127.0.0.1 6379 1
# the trailing 1 means: when the master fails, the slaves vote on who
# takes over as master, and the one with the most votes becomes master
```
2. Start the sentinel (see the sketch below)
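A sketch of starting it (the binary sits next to redis-server; the config path is illustrative):

```
[root@localhost bin]# ./redis-sentinel sentinel.conf
```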
3. Test what happens when the master goes down
Shut down the master, 6379
Check 6380: it has automatically been converted into the master
Check 6381: it automatically recognizes 6380 as the new boss
Conclusion:
if the master node goes down, a new master is elected from among the slaves (there is a voting algorithm)
coming back at that point is useless: the old master is now a commander without troops
If 6379 comes back, it can only be a slave ("you're back, but now you can only work for me")
if the old master returns, it is merged under the new master as a slave; that is the rule of sentinel mode!
Advantages of sentinel mode:
- A sentinel-mode cluster is based on master-slave replication and keeps all the advantages of the master-slave setup
- Master and slave can be switched automatically and failures are transferred, so system availability is better
- Sentinel mode upgrades master-slave replication from manual to automatic and is more robust
Disadvantages of sentinel mode:
- Redis is hard to scale online; once the cluster capacity reaches its limit, online expansion is very troublesome!
- Configuring sentinel mode is cumbersome: there are many options
Full configuration of sentinel mode
```
# Example sentinel.conf

# The port this sentinel instance runs on (default 26379)
port 26379

# The sentinel's working directory
dir /tmp

# ip and port of the redis master node being monitored.
# The master name can be chosen freely, but may only contain
# letters A-z, digits 0-9 and the three characters ".-_".
# quorum: how many sentinels must agree the master is unreachable
# before it is objectively considered down.
sentinel monitor mymaster 127.0.0.1 6379 2

# When requirepass is enabled on the Redis instances, every client
# connecting to them must supply the password.
# Note: master and slaves must be given the same authentication password.
sentinel auth-pass mymaster MySUPER--secret-0123passw0rd

# After how many milliseconds without a response the sentinel
# subjectively considers the master node down (default 30 seconds)
sentinel down-after-milliseconds mymaster 30000

# How many slaves may synchronize with the new master at the same time
# during a failover. The smaller the number, the longer the failover
# takes; the larger it is, the more slaves are temporarily unable to
# serve requests because they are busy replicating. Setting it to 1
# guarantees that only one slave at a time cannot answer commands.
sentinel parallel-syncs mymaster 1

# Failover timeout. It applies to:
# 1. the interval between two failovers of the same master by the same sentinel;
# 2. the time allowed for a slave replicating from a wrong master to be
#    corrected to replicate from the right master;
# 3. the time needed to cancel an ongoing failover;
# 4. the maximum time for all slaves to be repointed at the new master
#    during a failover. Even past this timeout the slaves will still be
#    correctly pointed at the master, just no longer following the
#    parallel-syncs rule.
# Default: three minutes
sentinel failover-timeout mymaster 180000

# SCRIPTS EXECUTION
# Scripts to run when an event occurs, e.g. to notify the administrator
# by e-mail when something goes wrong. Rules for script results:
# - exit code 1: the script is retried later (currently up to 10 times)
# - exit code 2 or higher: the script is not retried
# - terminated by a system signal: treated like exit code 1
# - a script may run for at most 60s, after which it is killed
#   with SIGKILL and retried

# Notification script: called for every warning-level event generated
# by sentinel (e.g. subjective or objective failure of a redis
# instance). It should notify the system administrator, for instance
# by e-mail or SMS. It receives two arguments: the event type and the
# event description. If a path is configured here, the script must
# exist and be executable, or sentinel will not start.
sentinel notification-script mymaster /var/redis/notify.sh

# Client reconfiguration script: when the master changes because of a
# failover, this script is called to tell clients that the master
# address has changed. It receives these arguments:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# state is always "failover"; role is "leader" or "observer".
# from-ip/from-port and to-ip/to-port address the old and the new master.
# The script should be generic and safe to call multiple times.
sentinel client-reconfig-script mymaster /var/redis/reconfig.sh

# Usually configured by the ops team!
```
These days the market is saturated with junior and mid-level programmers, while senior programmers are still hard to find! Keep improving yourself.
16. Redis cache penetration and avalanche
the use of Redis cache greatly improves the performance and efficiency of applications, especially in data query. But at the same time, it also brings some problems. Among them, the most crucial problem is the consistency of data. Strictly speaking, this problem has no solution. If data consistency is required, caching cannot be used.
other typical problems are cache penetration, cache avalanche and cache breakdown; for these, the industry has fairly mature and popular solutions.
Cache penetration
concept
the concept of cache penetration is simple: a user queries some data and the redis in-memory database does not have it, i.e. the cache misses, so the query goes on to the persistence-layer database, which finds nothing either, and the query fails. When there are many users and the cache keeps missing (think of a flash sale!), they all hit the persistence-layer database, putting enormous pressure on it. The requests have "penetrated" the cache.
Solution
Bloom filter
a Bloom filter is a data structure that stores all possibly-queried parameters in hashed form. Requests are checked against it at the control layer first and discarded if they cannot match, which avoids query pressure on the underlying storage system;
Cache empty objects
when the storage layer misses, even the returned empty object is cached, with an expiration time set; subsequent accesses to that key are then served from the cache, protecting the back-end data source;
However, there are two problems with this method:
1. If null values can be cached, it means that the cache needs more space to store more keys, because there may be many null keys;
2. Even if the expiration time is set for a null value, there will be inconsistency between the data of the cache layer and the storage layer for a period of time, which will affect the business that needs to maintain consistency.
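A sketch of caching an empty result with an expiry from redis-cli (the key name is illustrative):

```
127.0.0.1:6379> set user:999 "" EX 60    # cache the empty result for 60 seconds
OK
127.0.0.1:6379> ttl user:999
(integer) 60
```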
Cache breakdown (the traffic is too heavy and a cached key expires!)
note the difference between cache breakdown and cache penetration here. Cache breakdown means one key is extremely hot and constantly carries heavy concurrency, all focused on this single point. The moment the key expires, the continuous flood of requests punches through the cache and goes straight to the database, like drilling a hole through a barrier.
when a key expires, a large number of requests are accessed concurrently. This kind of data is generally hot data. Because the cache expires, the database will be accessed at the same time to query the latest data and write back to the cache, which will lead to excessive pressure on the database.
1. Set the hotspot data to never expire
from the cache level, no expiration time is set, so there will be no problems after the hot key expires.
2. Add mutex lock
distributed lock: distributed lock is used to ensure that there is only one thread for each key to query the back-end service at the same time, and other threads do not have the permission to obtain the distributed lock, so they only need to wait. This method transfers the pressure of high concurrency to distributed locks, so it is a great test for distributed locks.
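A sketch of the mutex idea using plain redis commands (the lock key name is illustrative; the atomic SET ... NX EX form is preferred over separate SETNX + EXPIRE calls):

```
127.0.0.1:6379> set lock:hotkey thread-1 NX EX 10   # only one client succeeds
OK
# ...the winner queries the database and rebuilds the cache, then releases:
127.0.0.1:6379> del lock:hotkey
(integer) 1
# clients that got (nil) simply wait and retry the cache
```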
Cache avalanche
cache avalanche means that a whole batch of cached keys expires within the same window, or that Redis itself goes down!
one cause of an avalanche: for example, as this article is being written, Double Twelve midnight is approaching and a wave of panic buying is about to start. That batch of goods is put into the cache at roughly the same time with, say, a one-hour expiry. At one o'clock in the morning, the cache for all those goods expires simultaneously, every query for them lands on the database, and a periodic pressure spike hits the storage layer: its call volume surges and it may simply collapse.
actually, concentrated expiry is not that fatal; it merely puts periodic pressure on the database, which can usually bear it. The truly fatal cache avalanche is a cache server node going down or losing its network: that puts unpredictable pressure on the database server and is likely to crush the database in an instant.
Solution
1.redis high availability
the idea: since one redis may go down, add several more; when one dies, the others keep working. This is in fact a cluster (multi-site active-active!)
2. Rate limiting and degradation (explained in Spring Cloud!)
the idea of this solution is to control the number of threads reading and writing to the database cache by locking or queuing after the cache fails. For example, for a key, only one thread is allowed to query data and write cache, while other threads wait.
3. Cache warming
cache warming means that before the official launch, the likely-hot data is accessed in advance so that it gets loaded into the cache; just before a burst of heavy concurrent access, manually trigger the loading of the various cache keys and give them different expiration times, so that the cache-expiry moments are spread as evenly as possible.
Double Eleven practice: shut down some non-core services (to guarantee that the main services stay available)
summary
- The five data types
- Operating Redis from Java (Jedis, RedisTemplate)
- Persistence
- Master-slave replication
- Cache avalanche and cache penetration