2. Redis distributed lock: is the Redis distributed lock secure?

We use local locks and distributed locks in many programming scenarios, but have we ever considered how these locks actually work? This article discusses the common ways of implementing distributed locks and their underlying principles.

1, Principles of using locks

Local locks and distributed locks both exist to prevent dirty data caused by concurrency. The highest level of using locks is to avoid them altogether through process design, because any lock sacrifices some system performance.

2, Common distributed lock implementation

Distributed lock summary:

  • Performance: redis > zookeeper > mysql; lock acquisition success rate: mysql > zk > redis
    |Lock implementation|Implementation method|Performance|Selection caution|Selection focus|
    | — | — | — | — | — |
    |mysql|optimistic lock|good|lock is ineffective under high concurrency|low-cost implementation|
    |mysql|pessimistic lock|poor|may lead to table locking|extreme scenarios|
    |zk|sequential node|medium|performance and reliability both moderate|a balance of performance and reliability|
    |redis|setNx|low|lock has no unique mark|simple but not recommended|
    |redis|lua script|highest|easy to misuse, with worse results|for experts; everyone else should use redisson|
    |redis|redisson|medium-high|good balance|pay attention to performance|

2.1. Principle of mysql distributed lock

2.1.1 implementation of optimistic lock

  • Optimistic lock implementation method:

Optimistic locking is implemented by adding a version number or an updatetime timestamp column in mysql. The following mainly walks through the insert-then-edit flow; the pure modification flow is simpler and is not covered. When a transaction rollback happens under optimistic locking, the invalid data created in the insert scenario has to be cleaned up. Strictly speaking, optimistic locking does not use the concept of a lock at all; it is really a technique for version synchronization. A sketch follows below.
**Version-based implementation:** when adding a record, first insert the data, then read it back and return it to the edit page. When the edit page is submitted, the versions are compared: if they match, the data is valid; if not, it is rejected. On a successful update the version is incremented and written back to the database.
**updatetime-based implementation:** same rule as version.
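
Below is a minimal sketch of the version-based flow over JDBC. The connection URL, the table t_order and its columns are hypothetical; the point is that the UPDATE succeeds only when the submitted version still matches the stored one.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

//Version-based optimistic locking over JDBC (hypothetical table t_order)
public class OptimisticLockDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "pwd")) {
            long id = 1L;          //the row being modified
            int readVersion = 3;   //the version read when the edit page was opened
            PreparedStatement ps = conn.prepareStatement(
                    "UPDATE t_order SET status = ?, version = version + 1 " +
                    "WHERE id = ? AND version = ?");
            ps.setString(1, "PAID");
            ps.setLong(2, id);
            ps.setInt(3, readVersion);
            //0 updated rows means another writer bumped the version first: the data is stale
            if (ps.executeUpdate() == 0) {
                System.out.println("Optimistic lock conflict: re-read and retry");
            }
        }
    }
}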

  • Problems optimistic locking cannot solve

Optimistic locking does not block concurrent threads, so it cannot by itself solve multi-threaded concurrency; it is suitable for keeping database data consistent in low-concurrency scenarios.

2.1.2 implementation of pessimistic lock

  • **Pessimistic lock implementation:** the pessimistic lock is simply select * from table for update. In theory it is the most reliable of all locks, but its performance is poor, and it is not recommended for high-concurrency scenarios (a minimal sketch follows below);
  • **Pessimistic lock problems:** poor performance; may lock the entire table
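
A minimal sketch of the pessimistic flow, under the same hypothetical table and connection as above. Note that select ... for update only holds the row lock inside an explicit transaction, and that in mysql (InnoDB) a WHERE clause that misses an index can lock far more than one row, which is the table-locking risk mentioned above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

//Pessimistic locking with SELECT ... FOR UPDATE (hypothetical table t_order)
public class PessimisticLockDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "pwd")) {
            conn.setAutoCommit(false);  //FOR UPDATE only locks inside a transaction
            try {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, status FROM t_order WHERE id = ? FOR UPDATE");
                ps.setLong(1, 1L);
                ResultSet rs = ps.executeQuery();  //blocks other writers on this row
                //... do the business update while the row lock is held ...
                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}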

2.2 implementation and principle of zookeeper distributed lock

2.2.1 zookeeper distributed lock implementation

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessLock;
import org.apache.curator.framework.recipes.locks.InterProcessSemaphoreMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

/**
 * Function: ZK - Curator client - distributed lock test
 * Author: Ding Zhichao
 */
public class ZkCuratorLock {

    //Instantiate the client
    private static RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
    private static CuratorFramework client = CuratorFrameworkFactory.builder()
            .connectString("ip:2181")
            .sessionTimeoutMs(3000)
            .connectionTimeoutMs(5000)
            .retryPolicy(retryPolicy)
            .build();

    //The lock nodes are created as temporary (ephemeral) nodes under the /zklock directory
    static String lockPath = "/zklock";
    //Instantiate the distributed lock
    final static InterProcessLock lock = new InterProcessSemaphoreMutex(client, lockPath);

    public static void main(String[] args) {
        //The client must be started before it can be used
        client.start();
        try {
            //Acquire lock
            lock.acquire();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            //Release lock
            try {
                lock.release();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

}

<!-- zookeeper curator client  -->
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>2.12.0</version>
</dependency>

2.2.2 implementation principle of zookeeper distributed lock

2.2.2.1 implementation principle of sequential nodes

  • Create a lock directory lock
  • When thread A wants the lock, it creates a temporary (ephemeral) sequential node in the lock directory
  • It obtains all child nodes under the lock directory and looks for sibling nodes with a smaller sequence number. If none exist, the current thread's sequence number is the smallest and it acquires the lock
  • Thread B creates its temporary node, obtains all sibling nodes, and listens only to changes of the node currently holding the lock, for efficiency
  • After finishing its work, thread A deletes its own node. Thread B receives the change event, determines that it is now the smallest node, and acquires the lock

Due to the ephemeral property of the node, if the client that created the znode crashes, the corresponding znode is deleted automatically. This avoids the problem of having to set an expiration time. The Curator client's distributed lock is based on sequential nodes.
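
A minimal sketch of the primitive behind this recipe, assuming a started CuratorFramework like the one in the earlier example; the path /zklock/lock- is just an illustration, since the recipe manages its own node names.

import org.apache.curator.framework.CuratorFramework;
import org.apache.zookeeper.CreateMode;

public class ZkLockPrimitive {
    static String createLockNode(CuratorFramework client) throws Exception {
        //Each contender creates an ephemeral sequential child, e.g. /zklock/lock-0000000003;
        //ZooKeeper itself appends the monotonically increasing sequence number.
        return client.create()
                .creatingParentContainersIfNeeded()
                .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
                .forPath("/zklock/lock-");
        //If this client crashes, its session ends and ZooKeeper deletes the node
        //automatically, which is what frees the next waiter without any expiry time.
    }
}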

2.3. Problems faced by zookeeper in implementing distributed locks

2.3.1. Is there a split-brain problem?

Since zookeeper is deployed in HA mode, all writes go through the leader node. As long as the leader is healthy, there is no split-brain problem.

2.3.2. Heartbeat timeout problem?

The zookeeper client and server rely on a heartbeat mechanism to keep the session alive. If a GC pause on the client or server causes the heartbeat to time out, the ephemeral node will be removed by zookeeper. zookeeper's ephemeral nodes are bound to the session: once the session is gone, all ephemeral nodes created by that client are deleted.
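
As a mitigation, a client can at least observe session trouble and stop trusting its lock. A minimal sketch with Curator's connection-state listener (the handler logic is hypothetical):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.state.ConnectionState;

public class SessionWatchDemo {
    static void watchSession(CuratorFramework client) {
        client.getConnectionStateListenable().addListener((c, newState) -> {
            if (newState == ConnectionState.SUSPENDED || newState == ConnectionState.LOST) {
                //The session may be expiring: our ephemeral lock node can be deleted
                //on the server, so stop doing work that assumes we still hold the lock.
                System.out.println("ZooKeeper session in doubt: " + newState);
            }
        });
    }
}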

2.4 source code analysis of the distributed lock implemented by zookeeper Curator

//1. Curator lock data structure
private static class LockData
{
        //The thread that currently owns the lock
        final Thread owningThread;
        //The path of the current lock
        final String lockPath;
        //Lock counter
        final AtomicInteger lockCount = new AtomicInteger(1);
 }


//2. This section is the essence of Curator: check whether we hold the lock; if not, watch the previous node and wait, re-checking after each wakeup.
private boolean internalLockLoop(long startMillis, Long millisToWait, String ourPath) throws Exception
    {
        boolean     haveTheLock = false;
        boolean     doDelete = false;
        try
        {
            if ( revocable.get() != null )
            {
                client.getData().usingWatcher(revocableWatcher).forPath(ourPath);
            }

            while ( (client.getState() == CuratorFrameworkState.STARTED) && !haveTheLock )
            {
                List<String>        children = getSortedChildren();
                String              sequenceNodeName = ourPath.substring(basePath.length() + 1); // +1 to include the slash
                
                //Check whether this node now holds the lock
                PredicateResults    predicateResults = driver.getsTheLock(client, children, sequenceNodeName, maxLeases);
                if ( predicateResults.getsTheLock() )
                {
                    haveTheLock = true;
                }
                else
                {
                    //If the lock is not acquired, monitor the change of the node with the lock
                    String  previousSequencePath = basePath + "/" + predicateResults.getPathToWatch();
                    
                    //After a failed acquisition, wait: up to the remaining timeout if one was set, otherwise indefinitely
                    synchronized(this)
                    {
                        try 
                        {
                            // use getData() instead of exists() to avoid leaving unneeded watchers which is a type of resource leak
                            client.getData().usingWatcher(watcher).forPath(previousSequencePath);
                            if ( millisToWait != null )
                            {
                                millisToWait -= (System.currentTimeMillis() - startMillis);
                                startMillis = System.currentTimeMillis();
                                if ( millisToWait <= 0 )
                                {
                                    doDelete = true;    // timed out - delete our node
                                    break;
                                }

                                wait(millisToWait);
                            }
                            else
                            {
                                wait();
                            }
                        }
                        catch ( KeeperException.NoNodeException e ) 
                        {
                            // it has been deleted (i.e. lock released). Try to acquire again
                        }
                    }
                }
            }
        }
        catch ( Exception e )
        {
            ThreadUtils.checkInterrupted(e);
            doDelete = true;
            throw e;
        }
        finally
        {
            if ( doDelete )
            {
                deleteOurPath(ourPath);
            }
        }
        return haveTheLock;
    }
    
//3. The lock-acquisition predicate - maxLeases defaults to 1, so the thread that acquires the lock is always the first in the sorted list, which guarantees the order of acquisition
public PredicateResults getsTheLock(CuratorFramework client, List<String> children, String sequenceNodeName, int maxLeases) throws Exception
    {
        int             ourIndex = children.indexOf(sequenceNodeName);
        validateOurIndex(sequenceNodeName, ourIndex);

        boolean         getsTheLock = ourIndex < maxLeases;
        String          pathToWatch = getsTheLock ? null : children.get(ourIndex - maxLeases);

        return new PredicateResults(pathToWatch, getsTheLock);
    }
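
To make the predicate concrete, a hypothetical walkthrough: suppose the sorted children are [lock-0001, lock-0002, lock-0003] and maxLeases is 1. For lock-0001, ourIndex = 0 < 1, so getsTheLock is true and pathToWatch is null. For lock-0002, ourIndex = 1, so getsTheLock is false and it watches children.get(1 - 1) = lock-0001, i.e. exactly its predecessor. Each waiter watching only its predecessor is what avoids a herd of wakeups when the lock is released.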
    


2.5 redis implements distributed locks

2.5.1 redis distributed lock implementations

2.5.1.1. redis setNx implementation

import redis.clients.jedis.Jedis;

//setNx implements a (naive) distributed lock
public class SetNxLock {
		public static void main(String[] args) {
			Jedis jedis = new Jedis("localhost");
			//setnx returns 1 only if the key did not exist, i.e. only if we acquired the lock
			long acquired = jedis.setnx("key", "value");
			if (acquired == 1) {
				try {
					//Set an expiry so a crashed holder cannot block everyone forever.
					//Note: setnx + expire is not atomic; a crash in between leaves a key with no expiry.
					jedis.expire("key", 10);
					System.out.println("I got the lock. Do some work!");
				} finally {
					//Release the lock. Since the value carries no unique mark per client,
					//this may delete a lock that has expired and is now owned by someone else.
					jedis.del("key");
				}
			}
		}
}

2.5.1.2 redis lua script implementation

import java.util.Arrays;
import java.util.Collections;

import redis.clients.jedis.Jedis;

/**
 * Function: redis - lua - distributed lock via lua script, using the Jedis client
 * Author: Ding Zhichao
 */  
public class LuaLock {

		public static void main(String[] args) {
			 lock("122333", "33331","10000" );
			 unlock("122333", "33331");
	}
		
	/**
	 * Lock
	 * key: redis key
	 * value: redis value
	 * timeOut: the lock expiration time in milliseconds, generally greater than the time
	 *          consumed by the most time-consuming business operation
	 * Syntax reference document: https://www.runoob.com/redis/redis-scripting.html
	 * */	
	public static String lock(String key, String value, String timeOut) {
			/**
	         *  -- Lock script, where KEYS[] and ARGV[] are parameters passed in from outside
	         *  -- KEYS[1] is the key
	         *  -- ARGV[1] is the value
	         *  -- ARGV[2] is the expiration time
	         */
		    String lua_getlock_script =
		            "if redis.call('SETNX', KEYS[1], ARGV[1]) == 1 then" +
		            "    return redis.call('pexpire', KEYS[1], ARGV[2]) " +
		            "else" +
		            "    return 0 " +
		            "end";

			Jedis jedis = new Jedis("localhost");
			//Add the script to the server-side script cache without executing it
			String scriptId = jedis.scriptLoad(lua_getlock_script);
			//Execute the cached script, passing key/value/timeout as KEYS and ARGV;
			//returns 1 for success and 0 for failure
			Object num = jedis.evalsha(scriptId, Collections.singletonList(key),
					Arrays.asList(value, timeOut));
			return String.valueOf(num);
	}
	
	
	
	/**
	 * Release lock
	 * key: redis key
	 * value: redis value, used to check that we still own the lock before deleting it
	 * Syntax reference document: https://www.runoob.com/redis/redis-scripting.html
	 * */	
	public static String unlock(String key, String value) {
			/**
	         *  -- Unlock script, where KEYS[] and ARGV[] are parameters passed in from outside
	         *  -- KEYS[1] is the key
	         *  -- ARGV[1] is the value
	         */
		    String lua_unlock_script =
		              "if redis.call('get', KEYS[1]) == ARGV[1] then " +
		                      " return redis.call('del', KEYS[1]) " +
		                      "else  return 0 " +
		                      "end";

			Jedis jedis = new Jedis("localhost");
			//Add the script to the server-side script cache without executing it
			String scriptId = jedis.scriptLoad(lua_unlock_script);
			//Execute the cached script; returns 1 for success and 0 for failure
			Object num = jedis.evalsha(scriptId, Collections.singletonList(key),
					Collections.singletonList(value));
			return String.valueOf(num);
	}
}  

2.5.1.3. redis redisson implementation

import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

/**
 * Function: redis - Redisson - Redisson implements distributed locks
 * Author: Ding Zhichao
 */  
public class RedissonLock {

	public static void main(String[] args) throws InterruptedException {
		Config config = new Config();
		config.useSingleServer().setAddress("redis://localhost:6379");
		RedissonClient redissonClient = Redisson.create(config);
		RLock rLock = redissonClient.getLock("key");
		//Wait up to 10 seconds to acquire the lock; tryLock reports whether we got it
		boolean acquired = rLock.tryLock(10, TimeUnit.SECONDS);
		if (acquired) {
			try {
				System.out.println("I got the lock. It's my turn to work.");
			} finally {
				//Only release a lock held by the current thread
				if (rLock.isHeldByCurrentThread()) {
					rLock.unlock();
				}
			}
		}
	}
}

2.5.2 distributed lock implementation principle of single node redis

2.5.2.1 implementation principle of single node setNx and lua script distributed lock

**setNx thread model:** for setNx to be free of concurrency problems, the redis command path (receiving, parsing and processing) must be single-threaded. This is a conjecture I have not verified against the source code or related documents; see the redis thread model article.

 SET resource_name my_random_value NX PX 30000

**lua script:** a lua script is essentially a stored procedure, similar to mysql's. Its greatest benefit is reducing network round trips, so its performance is better than issuing individual redis commands one by one.

if redis.call("get",KEYS[1]) == ARGV[1] then
    return redis.call("del",KEYS[1])
else
    return 0
end
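
A minimal Jedis sketch combining the two ideas above, i.e. the official single-node pattern: an atomic SET ... NX PX to lock (the random value is the unique mark that plain setNx lacks) and the compare-and-delete lua script to unlock. Key name and timings are illustrative, and a Jedis 3.x client with SetParams is assumed.

import java.util.Collections;
import java.util.UUID;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class SingleNodeRedisLock {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost");
        String randomValue = UUID.randomUUID().toString();
        //Equivalent to: SET resource_name my_random_value NX PX 30000, in one atomic command
        String ok = jedis.set("resource_name", randomValue,
                SetParams.setParams().nx().px(30000));
        if ("OK".equals(ok)) {
            try {
                System.out.println("lock acquired, doing work");
            } finally {
                //Delete the key only if we still own it (value matches)
                String unlockScript =
                        "if redis.call('get', KEYS[1]) == ARGV[1] then " +
                        "    return redis.call('del', KEYS[1]) " +
                        "else return 0 end";
                jedis.eval(unlockScript,
                        Collections.singletonList("resource_name"),
                        Collections.singletonList(randomValue));
            }
        }
    }
}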

2.5.2.2 implementation principle of redisson distributed lock

//Redisson lock entry data structure
public class RedissonLockEntry  {
    
    //Counter
    private int counter;
    
    //The semaphore controls how many threads may acquire the lock at the same time
    private final Semaphore latch;
    private final RPromise<RedissonLockEntry> promise;
    //Thread queue
    private final ConcurrentLinkedQueue<Runnable> listeners = new ConcurrentLinkedQueue<Runnable>();
}


//Source class location: redisson RedissonLock class
//redisson implementation of distributed lock source code analysis
public boolean tryLock(long waitTime, long leaseTime, TimeUnit unit) throws InterruptedException {
        long time = unit.toMillis(waitTime);
        long current = System.currentTimeMillis();
        long threadId = Thread.currentThread().getId();
        //Try to obtain the lock. See the following method for its implementation
        Long ttl = tryAcquire(waitTime, leaseTime, unit, threadId);
        // lock acquired
        if (ttl == null) {
            return true;
        }
        
        //If the wait time is already exhausted, return failure directly
        time -= System.currentTimeMillis() - current;
        if (time <= 0) {
            acquireFailed(waitTime, unit, threadId);
            return false;
        }
        
        
        current = System.currentTimeMillis();   
        //Subscribe to this lock's unlock notifications for the current thread
        RFuture<RedissonLockEntry> subscribeFuture = subscribe(threadId);
        if (!subscribeFuture.await(time, TimeUnit.MILLISECONDS)) {
            if (!subscribeFuture.cancel(false)) {
                subscribeFuture.onComplete((res, e) -> {
                    if (e == null) {
                        unsubscribe(subscribeFuture, threadId);
                    }
                });
            }
            acquireFailed(waitTime, unit, threadId);
            return false;
        }

        try {
            //Re-check the remaining wait time; if exhausted, fail
            time -= System.currentTimeMillis() - current;
            if (time <= 0) {
                acquireFailed(waitTime, unit, threadId);
                return false;
            }
            
            //Spin, retrying the acquisition while wait time remains
            while (true) {
                long currentTime = System.currentTimeMillis();
                ttl = tryAcquire(waitTime, leaseTime, unit, threadId);
                // lock acquired
                if (ttl == null) {
                    return true;
                }
                
                //Wait time exhausted; fail
                time -= System.currentTimeMillis() - currentTime;
                if (time <= 0) {
                    acquireFailed(waitTime, unit, threadId);
                    return false;
                }

                // waiting for message
                currentTime = System.currentTimeMillis();
                if (ttl >= 0 && ttl < time) {
                    subscribeFuture.getNow().getLatch().tryAcquire(ttl, TimeUnit.MILLISECONDS);
                } else {
                    subscribeFuture.getNow().getLatch().tryAcquire(time, TimeUnit.MILLISECONDS);
                }

                time -= System.currentTimeMillis() - currentTime;
                if (time <= 0) {
                    acquireFailed(waitTime, unit, threadId);
                    return false;
                }
            }
        } finally {
            unsubscribe(subscribeFuture, threadId);
        }
//        return get(tryLockAsync(waitTime, leaseTime, unit));
    }


//Redisson's lowest-level lock acquisition, implemented as a lua script. If the key already exists and is held by another, the remaining time-to-live of the key is returned
<T> RFuture<T> tryLockInnerAsync(long waitTime, long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand<T> command) {
        return evalWriteAsync(getRawName(), LongCodec.INSTANCE, command,
                //If the key does not exist, nobody holds the lock: take it
                "if (redis.call('exists', KEYS[1]) == 0) then " +
                        //Record this thread as the holder with a reentrant count of 1
                        "redis.call('hincrby', KEYS[1], ARGV[2], 1); " +
                        //Set the life cycle of the key      
                        "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                        "return nil; " +
                        "end; " +
                        //If this thread already holds the lock, re-enter: increment the count
                        "if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then " +
                        "redis.call('hincrby', KEYS[1], ARGV[2], 1); " +
                        "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                        "return nil; " +
                        "end; " +
                        "return redis.call('pttl', KEYS[1]);",
                Collections.singletonList(getRawName()), unit.toMillis(leaseTime), getLockName(threadId));
} 
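
In other words, a Redisson lock lives in redis as a hash: the key is the lock name, the field is a per-client owner id (getLockName(threadId) yields something like "clientUUID:threadId"), and the value is the reentrant hold count. A hypothetical view from redis-cli, for an illustrative lock named mylock:

127.0.0.1:6379> HGETALL mylock
1) "b3a27d15-...:17"
2) "1"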

//Cleanup after a failed acquisition attempt (fair lock): remove this thread from the wait queue
@Override
protected RFuture<Void> acquireFailedAsync(long waitTime, TimeUnit unit, long threadId) {
        long wait = threadWaitTime;
        if (waitTime != -1) {
            wait = unit.toMillis(waitTime);
        }

        //(Author's note: I'm not fully sure about this part yet; still learning lua)
        return evalWriteAsync(getRawName(), LongCodec.INSTANCE, RedisCommands.EVAL_VOID,
                // Remove the list timeout key and the corresponding thread
                "local queue = redis.call('lrange', KEYS[1], 0, -1);" +
                // find the location in the queue where the thread is
                "local i = 1;" +
                "while i <= #queue and queue[i] ~= ARGV[1] do " +
                    "i = i + 1;" +
                "end;" +
                // go to the next index which will exist after the current thread is removed
                "i = i + 1;" +
                // decrement the timeout for the rest of the queue after the thread being removed
                "while i <= #queue do " +
                    "redis.call('zincrby', KEYS[2], -tonumber(ARGV[2]), queue[i]);" +
                    "i = i + 1;" +
                "end;" +
                // remove the thread from the queue and timeouts set
                //Remove timed out threads              
                "redis.call('zrem', KEYS[2], ARGV[1]);" +
                "redis.call('lrem', KEYS[1], 0, ARGV[1]);",
                Arrays.<Object>asList(threadsQueueName, timeoutSetName),
                getLockName(threadId), wait);
}

2.5.3 implementation principle of redis cluster distributed lock

The setNx and lua implementations of distributed locks described above are based on a single redis node, where only local thread concurrency has to be handled. What about distributed locks across multiple redis nodes? A multi-node redis deployment implements the distributed lock with the Redlock algorithm; its principle and defects are introduced below.

2.5.3.1 prerequisites for Redlock implementation

In the distributed version of the algorithm, we assume we have N redis master nodes that are completely independent of each other. How a single node acquires and releases a lock has been described above, and each independent redis node acquires and releases the lock in the same way. We assume there are five redis nodes; this value is not fixed, but a choice the business makes.

2.5.3.2 implementation steps of Redlock

  1. It gets the current time in milliseconds.
  2. It tries to acquire the lock in all N instances sequentially, using the same key name and random value in every instance. During this step, when setting the lock in each instance, the client uses a timeout that is small compared with the total lock auto-release time. For example, if the auto-release time is 10 seconds, the timeout may be in the ~5-50 millisecond range. This prevents the client from staying blocked for a long time trying to talk to a redis node that is down: if an instance is unavailable, we should move on to the next instance as soon as possible.
  3. The client computes how long it took to acquire the lock by subtracting the timestamp obtained in step 1 from the current time. If and only if the client was able to acquire the lock in a majority of instances (at least 3), and the total time spent acquiring the lock is less than the lock validity time, the lock is considered acquired.
  4. If the lock was acquired, its validity time is taken to be the initial validity time minus the elapsed time, as computed in step 3.
  5. If the client failed to acquire the lock for some reason (either it could not lock N/2+1 instances, or the validity time is negative), it tries to unlock all the instances (even the ones it believes it did not lock).

**Summary:** the above is the official redis description, which I translated to keep the original flavor. Below is a summary in my own words; the points where my understanding differs from the official text are called out.

  1. Redis requires high time precision: it takes the time in milliseconds. This also reveals how sensitive Redlock is to the system clock, which is one of the points Martin questions.
  2. redis tries to acquire the lock on the N nodes one after another; if a node cannot grant the lock within the per-node timeout, the client moves on, to avoid blocking on a dead node. Martin also questions this point: he believes it leads to more round trips and higher server cost.
  3. If the client can acquire the lock on (N/2+1) of the N redis nodes, and the total time spent acquiring the lock is less than the lock's validity time, the client is considered to hold the lock.
  4. The effective validity of an acquired lock equals the lock's validity time minus the time spent acquiring it, which also cancels out the time differences between cluster nodes. How to understand this? Think of the barrel effect: the lock lifetime is 10 seconds, the fastest node grants it in 1 second and the slowest in 3 seconds; the remaining lifetime is 10 - (3 - 1) = 8. Subtracting the acquisition time shields the cluster's time skew.
  5. Failing to lock N/2+1 instances, or timing out, means the acquisition failed. A Redisson-based sketch of this algorithm follows.
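
For completeness, here is a minimal sketch (hypothetical node addresses) of driving this algorithm from Java with Redisson 3.x's RedissonRedLock, which considers the lock held only when a majority of the independent masters grant it.

import org.redisson.Redisson;
import org.redisson.RedissonRedLock;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedLockDemo {

    public static void main(String[] args) {
        //Each RLock must come from a client pointing at an independent master
        RLock l1 = createClient("redis://host1:6379").getLock("resource");
        RLock l2 = createClient("redis://host2:6379").getLock("resource");
        RLock l3 = createClient("redis://host3:6379").getLock("resource");
        //The composite lock is held only if a majority of the masters grant it
        RLock redLock = new RedissonRedLock(l1, l2, l3);
        redLock.lock();
        try {
            System.out.println("a majority of nodes granted the lock");
        } finally {
            redLock.unlock();
        }
    }

    private static RedissonClient createClient(String address) {
        Config config = new Config();
        config.useSingleServer().setAddress(address);
        return Redisson.create(config);
    }
}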

2.5.3.3 how does Redlock ensure security

Is the algorithm safe? We can look at what happens in different scenarios.
First, assume a client is able to acquire the lock on the majority of instances. Every instance will hold a key with the same lifetime. However, the keys were set at different moments, so they will also expire at different moments. If the first key was set, at worst, at time T1 (the time sampled before contacting the first server) and the last key was set, at worst, at time T2 (the time we got the reply from the last server), we are sure the first key to expire will still live for at least MIN_VALIDITY = TTL - (T2 - T1) - CLOCK_DRIFT. All the other keys expire later, so we are sure the keys are simultaneously set for at least this amount of time.
While the majority of the keys are set, another client is unable to acquire the lock, since N/2+1 SET NX operations cannot succeed if N/2+1 keys already exist. So once a lock has been acquired, it cannot be acquired again at the same time (which would violate the mutual-exclusion property).

However, we also want to make sure that multiple clients trying to acquire the lock at the same time cannot all succeed at the same time.
If a client locked the majority of instances using a time close to, or greater than, the lock's maximum validity time (the TTL we basically use for SET), it considers the lock invalid and unlocks the instances. So we only need to consider the case where a client locked the majority of instances in a time less than the validity time. In that case, by the argument above, for at least MIN_VALIDITY no client should be able to re-acquire the lock. So multiple clients could lock N/2+1 instances "at the same time" (with "time" being the end of step 2) only when the time to lock the majority was greater than the TTL, which would make the lock invalid anyway.

**Summary:** the above is the official redis explanation of how Redlock guarantees safety; here is my own reading.
The key point is the formula MIN_VALIDITY = TTL - (T2 - T1) - CLOCK_DRIFT: the (T2 - T1) term shields the time differences between cluster nodes. If the resulting lifetime is less than 0, or the lock cannot be obtained on N/2+1 nodes, the acquisition is considered to have failed.

2.5.3.4 Martin Kleppmann questions the Redlock algorithm

I think the Redlock algorithm is a poor choice because it is "neither fish nor fowl": it is unnecessarily heavyweight and expensive when the lock is merely an efficiency optimization, yet not safe enough when correctness depends on the lock.
In particular, the algorithm makes dangerous assumptions about timing and the system clock (essentially assuming a synchronous system with bounded network delay and bounded execution time of operations). If those assumptions are violated, its safety properties are violated. It also lacks a facility for generating fencing tokens (to protect the system against long network delays or paused processes).

If you only need best-effort locking (as an efficiency optimization, not for correctness), I would stick with the simple single-node Redis locking algorithm (set-if-not-exists to acquire the lock, atomic delete-if-value-matches to release it), and document clearly in your code that the lock is only approximate and may occasionally fail. Don't bother setting up a cluster of five Redis nodes.

On the other hand, if you need the lock for correctness, do not use Redlock. Instead, use a proper consensus system such as ZooKeeper, probably via one of the Curator recipes that implement a lock. (At the very least, use a database with reasonable transactional guarantees.) And enforce the use of fencing tokens on all resource accesses made under the lock.

As I said at the beginning, Redis is a good tool if you use it properly. None of the above diminishes the usefulness of Redis for its intended purposes. Salvatore has been committed to the project for years, and it deserves its success. But every tool has limitations, and it is important to know them and to plan accordingly.

**Summary:** Martin Kleppmann has a point too. For example, suppose the client acquired the lock on three of five redis nodes, but one of the three crashes and restarts without the lock key; another client can then also assemble a majority, while the first still believes it holds the lock. Redlock is a consistency solution for distributed scenarios and is subject to the CAP theorem: it favors availability and partition tolerance (AP). Performance has always been the first principle redis insists on. Redlock does break down under extremely harsh conditions, but that does not make it unscientific: zk has scenarios where it can lose data, and so does mysql. What matters is the probability of such events, so personally I do not quite support Martin Kleppmann's view.

3, Distributed lock performance showdown

Talking without practicing means you don't really understand it; talking without practicing also means you lack a systematic, complete view. So finally we put the various distributed locks through a performance test. Given the resources of my test environment, the measured values may differ from yours; what we pursue is a scientifically sound test method rather than absolute numbers. This test uses a single locally deployed application; for the multi-node setups of real production environments you should test against your own environment's characteristics. Test code and pressure test script
Test setup:
1. Single application - locally deployed (i5/8G Mac); jmeter stress-tests this application
2. redis master cluster - 4-core 8G × 3 nodes
3. zk cluster - 4-core 8G × 3 nodes

|Lock implementation|Cluster mode|Implementation|Lock acquisition success rate|TPS|Sampling times|
| — | — | — | — | — | — |
|mysql| |optimistic lock|To be tested|To be tested|10000-30000 times|
|mysql| |pessimistic lock|To be tested|To be tested|10000-30000 times|
|zk|single node|curator|100%|1066|10000-30000 times|
|zk|cluster|curator|100%|50-70|10000-30000 times|
|redis|cluster|setNx|100%|100-120|10000-30000 times|
|redis|cluster|lua script|100%|200|10000-30000 times|
|redis|cluster|redisson|50-100%|1100|10000-100000 times|
|redis|sentinel|setNx|To be tested|To be tested|10000-30000 times|
|redis|sentinel|lua script|To be tested|To be tested|10000-30000 times|
|redis|sentinel|redisson|To be tested|To be tested|10000-30000 times|
|redis|single node|setNx|To be tested|To be tested|10000-30000 times|
|redis|single node|lua script|To be tested|To be tested|10000-30000 times|
|redis|single node|redisson|To be tested|To be tested|10000-30000 times|

4, Related documents

4.1, redis distributed lock implementation principle, official document

4.2, Martin Kleppmann questions Redlock
