JavaSE concurrent programming

Articles Catalogue

Concurrent programming

Concurrent programming mainly involves multi-process and multi-threaded programming; in Java it usually means multi-threaded programming. Concurrency is not parallelism: at any given moment, a single processor core can run only one execution unit. Concurrent programming not only improves response time but also makes the most of processor resources, especially on multi-core processors.

Thread Foundation

In Java, the data structure corresponding to a thread is the Thread class.
A thread has six states: new, runnable (which covers both ready and running), blocked, waiting, timed waiting, and terminated. A newly created thread is in the new state. A thread in the ready state may acquire the CPU and enter the running state; which ready thread gets the CPU depends on the scheduler. Blocked and waiting threads consume the fewest system resources; they cannot be scheduled directly and must first obtain some resource other than the CPU. When the run method returns normally or throws an exception, the thread enters the terminated state.
Transitions between states: a thread enters the new state only via new Thread(); it enters the terminated state only when run returns normally or throws an exception; a ready thread enters the running state only when it is granted the CPU; a running thread may enter the blocked state only when a requested resource (an I/O resource, a synchronization lock, etc.) is unavailable; a running thread enters the waiting state only when it chooses to wait for some event; a running thread enters the timed-waiting state only when it waits with a timeout. A thread returns to the ready state when its timeout expires, the awaited event occurs, the requested resource is obtained, a running thread gives up the CPU voluntarily or involuntarily, or a thread in the new state calls start.
Each thread has a priority, expressed as a number from 1 to 10; the larger the number, the higher the priority. By default a thread inherits the priority of the thread that created it, and the main thread's default is 5. Thread priority is highly platform-dependent: on Linux, for example, Java thread priorities have no effect and all threads share the same priority.
Threads are divided into user threads and daemon threads. Daemon threads serve other threads. If only daemon threads remain in a virtual machine, the virtual machine exits even if those daemon threads have not finished executing. Therefore daemon threads should not access resources such as files or databases, to avoid leaving those resources in a corrupted state.
The most basic way to start a thread is to create a new Thread, configure its attributes, and call start, which starts an OS-level thread underneath. When the thread enters the running state, the Thread object's run method executes in that thread; by default, run delegates to the run method of its target member, a Runnable. Consequently, if the Thread class's run method is overridden, the Runnable target's run method is never invoked. So why have both Thread's run method and the Runnable interface? Because Thread is a class and Java has no multiple inheritance: if a thread's logic is complex and must extend another class, implementing Runnable is the only option, which makes Runnable the more extensible choice.
Calling a Thread object's interrupt() method merely sets the thread's interrupt flag to true; it does not terminate the thread immediately. The thread's own code should poll the interrupt flag to control itself. Calling interrupt() on a thread in the new or terminated state has no effect and does not change the flag (if the flag is true when the thread terminates, it is reset to false). Calling interrupt() on a runnable thread sets the flag to true, and the thread's code can then poll the flag and react. If a thread has entered a blocked or waiting state via a method that declares InterruptedException, calling interrupt() makes the thread return from that state by throwing InterruptedException; if the flag is already true when such a method is called, the method throws InterruptedException immediately and clears the flag. Note that the exception is thrown from the blocking or waiting method, not from interrupt() itself. In principle, interrupt() works like a hand-rolled global flag, but it has the advantage that interruptible waits and blocks respond promptly. The common methods that enter an interruptible blocked or waiting state are Thread.sleep, Object.wait, Thread.join, and some interruptible I/O operations. For other blocking operations, such as contending for a synchronized lock, interrupt() only sets the flag to true: the thread does not return immediately from the blocked state, let alone throw an exception.
Thread's stop method was deprecated because stop works by making the target thread throw a ThreadDeath error at an unpredictable point, so the consistency of the data the thread operates on cannot be guaranteed. With interrupt, by contrast, consistency can be maintained between two consecutive checks of the interrupt flag. The suspend method was likewise deprecated because it easily causes deadlock, and its counterpart resume was deprecated along with it.
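As a sketch of the interrupt-flag pattern described above (class and message strings are illustrative):

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Poll the interrupt flag instead of relying on stop().
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(100); // interruptible timed wait
                } catch (InterruptedException e) {
                    // sleep() cleared the flag; restore it so the loop exits.
                    Thread.currentThread().interrupt();
                }
            }
            System.out.println("worker exiting");
        });
        worker.start();
        Thread.sleep(300);
        worker.interrupt(); // sets the flag; the sleeping thread throws InterruptedException
        worker.join();
    }
}
```

Note that the worker re-asserts the flag in the catch block: swallowing the exception silently would make the loop condition stay true and the thread would never stop.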

public interface Runnable {
  public abstract void run();
}

public class Thread implements Runnable {
  public Thread(ThreadGroup group, Runnable target, String name, long stackSize); // Specifies the thread group, execution code, thread name, and stack size to which it belongs.
  public Thread(ThreadGroup group, Runnable target, String name);
  public Thread(ThreadGroup group, Runnable target);
  public Thread(ThreadGroup group, String name);
  public Thread(Runnable target, String name);
  public Thread(Runnable target);
  public Thread(String name);
  public Thread();
  public static void setDefaultUncaughtExceptionHandler(UncaughtExceptionHandler eh); // Set the default exception handling method for all threads.
  public static UncaughtExceptionHandler getDefaultUncaughtExceptionHandler(); 
  public static native Thread currentThread(); // Gets the currently running thread.
  public static native void yield(); // The current thread voluntarily gives up the CPU and returns to the ready state; the CPU is then reallocated, and the scheduler may well pick the same thread again.
  public static native void sleep(long millis) throws InterruptedException; // Enter an interruptible timed wait state.
  public static void sleep(long millis, int nanos) throws InterruptedException; // The waiting time is milliseconds + nanoseconds.
  public static native boolean holdsLock(Object obj); // Does the current thread hold an obj object lock?
  public void run(); // The run method of target is called by default.
  public synchronized void start(); // Open a thread to execute the run method.
  public final void stop/suspend/resume(); // All deprecated methods.
  public final void interrupt(); // Set interrupt status.
  public boolean isInterrupted(); // Returns the interrupt state of the thread.
  public static boolean interrupted(); // Returns the interrupt status of the current thread and sets the current thread interrupt to false.
  public final void setDaemon(boolean on); // Set threads to daemon threads.
  public final boolean isDaemon(); 
  public final native boolean isAlive(); // A thread is alive if it is in neither the new nor the terminated state.
  public final void setPriority(int newPriority); // Setting thread priority is not supported by JVMs in all operating systems.
  public final int getPriority();
  public final synchronized void setName(String name); // Set the thread name.
  public final String getName();
  public final ThreadGroup getThreadGroup(); // Gets the group to which the thread belongs.
  public static int activeCount(); // currentThread().getThreadGroup().activeCount(). 
  public final synchronized void join(long millis) throws InterruptedException;  // The thread calling the method waits for the thread to terminate or timeout.
  public final synchronized void join(long millis, int nanos) throws InterruptedException;
  public final void join() throws InterruptedException;
  public State getState(); // Gets the thread state.
  public void setUncaughtExceptionHandler(UncaughtExceptionHandler eh); // Set the uncaught-exception handler for this thread.
  public UncaughtExceptionHandler getUncaughtExceptionHandler();
}

Thread group

ThreadGroup is a collection of threads. A ThreadGroup can manage the threads of the whole group (including threads in its subgroups), for example interrupting them in batch.
Threads and thread groups in the virtual machine form a tree in which threads can only be leaf nodes. The root thread group of the virtual machine is named system. When a program starts from the main function, a thread group named main is created whose parent is the system group, and the main thread belongs to the main group. The only way to assign a thread to a group is in the constructor; it cannot be changed afterwards, and when unspecified it defaults to the group of the creating thread. A thread is actually placed in its group only after it calls start. Likewise, the only way to assign a parent group to a thread group is in the constructor; it cannot be changed afterwards, and it defaults to the group of the current thread when unspecified.
A thread's run method cannot throw checked exceptions, but it can throw unchecked exceptions. When an unchecked exception is thrown the thread terminates, but before it terminates the exception is handed to a handler. A handler implements the Thread.UncaughtExceptionHandler interface, which has a single method, void uncaughtException(Thread t, Throwable e). If the thread registered its own handler, that handler is used; otherwise its thread group acts as the handler (ThreadGroup itself implements Thread.UncaughtExceptionHandler, so a thread group is itself a handler). If the group's handling method has not been overridden, its default behavior is to delegate to the parent group; if there is no parent group (i.e. we have reached the system group), the default handler registered via the static method Thread.setDefaultUncaughtExceptionHandler is used; and if no default handler is registered, the exception is printed to standard error, which is the stack trace we usually see. Since thread groups form a chain of handlers, each delegating to its parent, once a group overrides its handling method it no longer delegates upward but handles the exception itself.
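A minimal sketch of the fallback chain described above, registering a default handler via Thread.setDefaultUncaughtExceptionHandler (class and message strings are illustrative):

```java
public class HandlerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Default handler: used only when neither the thread itself nor a
        // group in its ThreadGroup chain handles the exception.
        Thread.setDefaultUncaughtExceptionHandler((t, e) ->
                System.out.println(t.getName() + " died: " + e.getMessage()));
        Thread worker = new Thread(
                () -> { throw new IllegalStateException("boom"); }, "worker");
        worker.start();
        worker.join(); // prints "worker died: boom"
    }
}
```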

public class ThreadGroup implements Thread.UncaughtExceptionHandler {
  public ThreadGroup(ThreadGroup parent, String name); // Create a new thread group and specify the name of the parent thread group and the thread group.
  public ThreadGroup(String name); // Equivalent to ThreadGroup(Thread.currentThread().getThreadGroup(), name).
  public final String getName(); 
  public final ThreadGroup getParent(); 
  public final void destroy(); // Destroys the thread group and its subgroups, provided the group contains no live threads.
  public synchronized boolean isDestroyed();
  public final boolean parentOf(ThreadGroup g); // Whether g is the thread group or the sub-thread group of the thread group.
  public final int getMaxPriority(); 
  public final void setDaemon(boolean daemon); // A daemon thread group is automatically destroyed when its last thread stops or its last subgroup is destroyed.
  public final boolean isDaemon(); 
  public int activeCount(); // The number of active threads in a group (including the active threads under the sub-thread group).
  public int activeGroupCount(); // Number of active subthread groups (including subthread groups of subthread groups...).
  public final void interrupt/resume/stop/suspend(); // Applies the operation to every thread in the group; all but interrupt are deprecated.
  public void uncaughtException(Thread t, Throwable e); // Exception handling method.
  public int enumerate(Thread list[], boolean recurse); // Copy the active threads in the thread group to the array, and if recurse is true, the threads in the sub-thread group are also included.
  public int enumerate(Thread list[]); // enumerate(list, true). 
  public int enumerate(ThreadGroup[] list, boolean recurse); // Copy references to all active subgroups in this thread group to the specified array, and recurse decides whether to copy the thread group under the subthread group.
  public int enumerate(ThreadGroup list[]); // enumerate(list, true). 
}

Atomics

Self-increment of a variable actually consists of reading the value from memory into a register, incrementing it, and writing it back, so it is not an atomic operation. Atomic classes exist to operate on the corresponding data atomically: every method of an atomic class is atomic. The most important of these is CAS, the compareAndSet method, which can be used to implement mutual exclusion and synchronization.
Atomic classes come in four kinds: basic types, array types, reference types, and field updaters. The basic types are AtomicInteger, AtomicLong, and AtomicBoolean; the array types are AtomicIntegerArray, AtomicLongArray, and AtomicReferenceArray; the reference types are AtomicReference, AtomicStampedReference, and AtomicMarkableReference; the field updaters are AtomicIntegerFieldUpdater, AtomicLongFieldUpdater, and AtomicReferenceFieldUpdater.
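A small illustration of why the atomic classes matter: two threads incrementing an AtomicLong always produce the exact total, whereas a plain long++ could lose updates (class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    // Returns the final counter value after two threads each add 10,000.
    static long run() throws InterruptedException {
        AtomicLong counter = new AtomicLong();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementAndGet(); // atomic ++, no lock needed
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return counter.get(); // always 20000, unlike a plain long++
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```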

// AtomicInteger's methods are very similar to this class's.
public class AtomicLong extends Number implements java.io.Serializable {
  public AtomicLong(long initialValue);
  public AtomicLong(); // Initialization is 0.
  public final long get();
  public final void set/lazySet(long newValue); // lazySet performs better than set, but it does not guarantee that the change is immediately visible to other threads, even though the value is declared volatile.
  public final long getAndSet(long newValue); // Returns the old value and sets the new value.
  public final boolean compareAndSet/weakCompareAndSet(long expect, long update); // If the current value equals expect, the non-weak method sets the value to update and returns true; otherwise it does nothing and returns false. The weak method may spuriously fail to set the value and return false.
                      // That is, when the non-weak method returns false the current value was definitely not equal to expect, whereas when the weak method returns false the value may have equaled expect and the set simply failed.
  public final long getAndIncrement/getAndDecrement/incrementAndGet/decrementAndGet(); // They correspond to i++, i--, ++i, --i respectively.
  public final long getAndAdd/addAndGet(long delta); // getAndAdd returns the old value and sets the value to the current value plus delta; addAndGet sets the value to the current value plus delta and returns the new value. For subtraction, pass a negative delta.
  public int/long/float/double intValue/longValue/floatValue/doubleValue(); // Returns the value after a cast to the corresponding type.
}    
    
// Internally the boolean is represented by an int: 1 means true, 0 means false.
public class AtomicBoolean implements java.io.Serializable {
  public AtomicBoolean(boolean initialValue);
  public AtomicBoolean(); // Default false.
  public final boolean get();
  public final void set/lazySet(boolean newValue);
  public final boolean compareAndSet/weakCompareAndSet(boolean expect, boolean update);
  public final boolean getAndSet(boolean newValue);
}    
    
// AtomicIntegerArray's methods are similar to this class's.
public class AtomicLongArray implements java.io.Serializable {
  public AtomicLongArray(int length);
  public AtomicLongArray(long[] array);
  public final int length();
  public final long get(int i);
  public final void set/lazySet(int i, long newValue);
  public final long getAndSet(int i, long newValue);
  public final boolean compareAndSet/weakCompareAndSet(int i, long expect, long update);
  public final long getAndIncrement/getAndDecrement/incrementAndGet/decrementAndGet(int i);
  public final long getAndAdd/addAndGet(int i, long delta);
}    

public class AtomicReferenceArray<E> implements java.io.Serializable {
  public AtomicReferenceArray(int length);
  public AtomicReferenceArray(E[] array);
  public final int length();
  public final E get(int i);
  public final void set/lazySet(int i, E newValue);
  public final E getAndSet(int i, E newValue);
  public final boolean compareAndSet/weakCompareAndSet(int i, E expect, E update);
}    
    
public class AtomicReference<V> implements java.io.Serializable {
  public AtomicReference(V initialValue);
  public AtomicReference();
  public final V get();
  public final void set/lazySet(V newValue);
  public final boolean compareAndSet/weakCompareAndSet(V expect, V update);
  public final V getAndSet(V newValue);
}

// Consists of an int stamp and a V reference. AtomicMarkableReference is very similar, but pairs a boolean with a V.
public class AtomicStampedReference<V> {
  public AtomicStampedReference(V initialRef, int initialStamp);
  public V getReference();
  public int getStamp();
  public void set(V newReference, int newStamp);
  public boolean compareAndSet/weakCompareAndSet(V expectedReference, V newReference, int expectedStamp, int newStamp); // The stamp can be used to detect modifications made elsewhere (the ABA problem); the update succeeds only if both the reference and the stamp match the expected values.
}
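A small sketch of how the stamp guards against the ABA problem (class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class StampDemo {
    // Returns whether a CAS with a stale stamp succeeds after an A->B->A cycle.
    static boolean staleCasSucceeds() {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int stamp = ref.getStamp();
        // Another party changes A -> B -> A, bumping the stamp each time.
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);
        // A plain CAS on the value alone would succeed here (the ABA problem);
        // the stale stamp makes this attempt fail.
        return ref.compareAndSet("A", "C", stamp, stamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(staleCasSucceeds()); // false
    }
}
```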
    
// AtomicIntegerFieldUpdater is similar in usage.
public abstract class AtomicLongFieldUpdater<T> {
  public static <U> AtomicLongFieldUpdater<U> newUpdater(Class<U> tclass, String fieldName);
  public boolean compareAndSet/weakCompareAndSet(T obj, long expect, long update);
  public void set/lazySet(T obj, long newValue);
  public long get(T obj);
  public long getAndSet(T obj, long newValue);
  public long getAndIncrement/getAndDecrement/incrementAndGet/decrementAndGet(T obj);
  public long getAndAdd/addAndGet(T obj, long delta);
}    

// T is the holder object type and V the type of the member field.
public abstract class AtomicReferenceFieldUpdater<T,V> {  
  public static <U,W> AtomicReferenceFieldUpdater<U,W> newUpdater(Class<U> tclass, Class<W> vclass, String fieldName);
  public boolean compareAndSet/weakCompareAndSet(T obj, V expect, V update);
  public abstract void set/lazySet(T obj, V newValue);
  public abstract V get(T obj);
  public V getAndSet(T obj, V newValue);
}
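A minimal sketch of using a field updater to get atomic operations on an ordinary volatile field without wrapping it in an AtomicLong (class and field names are illustrative):

```java
import java.util.concurrent.atomic.AtomicLongFieldUpdater;

public class UpdaterDemo {
    // The target field must be volatile and accessible to the updater's caller.
    static class Counter {
        volatile long hits;
    }

    private static final AtomicLongFieldUpdater<Counter> HITS =
            AtomicLongFieldUpdater.newUpdater(Counter.class, "hits");

    static long demo() {
        Counter c = new Counter();
        HITS.incrementAndGet(c); // atomic ++ on c.hits
        HITS.addAndGet(c, 5);    // atomic += 5
        return c.hits;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 6
    }
}
```

Field updaters save one object header per instance compared with embedding an AtomicLong, which matters when many small objects each need one atomic counter.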

JMM (Java Memory Model)

JMM is a set of mechanisms and specifications that ensure Java programs accessing memory on different platforms obtain consistent results; it shields the access differences between hardware and operating systems. The JMM is distinct from the Java memory structure (heap, stack, permanent generation, etc.), which many materials also call the "Java memory model". The JMM mainly addresses the problems that arise when multiple threads communicate through shared memory (e.g. several threads accessing the same variable, or message passing between threads): inconsistency between local working memories, instruction reordering by the compiler, and out-of-order execution by the processor. The Java memory structure, by contrast, mainly addresses memory allocation and reclamation.
In the hardware memory model, to resolve the mismatch between slow memory access and fast CPU execution, each CPU core has its own cache. This improves instruction throughput but creates the problem that the per-core caches and main memory can hold inconsistent data (a problem that does not arise on single-core CPUs). In addition, when a CPU executes a sequence of instructions it may execute them out of order to reduce memory accesses; the CPU only guarantees that out-of-order execution does not affect the final result within a single thread.
The Java memory model stipulates that all variables are stored in main memory and that each thread has its own working memory, which holds copies of the main-memory variables the thread uses. All of a thread's operations on variables must be performed in working memory rather than directly on main memory, and threads cannot directly access the variables in each other's working memory; passing a variable between threads requires synchronizing data between each thread's working memory and main memory. The JMM governs this synchronization process: it specifies how and when data is synchronized. To improve efficiency, the Java compiler may also reorder instructions during compilation, just as the processor executes out of order.
In JMM, there are three important characteristics: atomicity, visibility and orderliness.
Atomicity means that a group of operations, from start to completion, neither affects nor is affected by outside operations. Atomicity in the JMM is not the same as atomicity in transactions; it is closer to transaction isolation.
Visibility means that changes made by one thread are immediately visible to other threads. Because each thread has its own working memory (which may cache the same main-memory location), two threads accessing the same memory address at the same time may observe different values; this is the visibility problem.
Orderliness means that even though the compiler reorders instructions and the processor executes them out of order, the final expected result is not affected.
The JVM guarantees all three properties in the single-threaded case.
Processors provide a special instruction called a memory barrier (or memory fence; different processors implement it with different physical instructions), which flushes and invalidates the local processor cache. The Java virtual machine likewise has memory-barrier instructions that synchronize working memory with main memory to achieve visibility; instructions before and after a barrier cannot be reordered across it. For high-level languages the memory barrier is transparent to the programmer: bytecode contains no explicit barrier instructions, but the underlying implementation of a bytecode instruction may emit them, which is one more manifestation of platform independence. Reading or writing a volatile field, for example, involves memory-barrier instructions, which also shows that a single bytecode instruction is not atomic and may correspond to several machine instructions.
volatile modifies variables (member variables only, not local variables). It forces every access to the variable to read or write main memory directly, disabling caching and CPU instruction-reordering optimizations for that variable. volatile guarantees that a single read or write of the variable is atomic, even for the 64-bit types long and double: without volatile, reads and writes of those types are not necessarily atomic and may be split into separate high-32-bit and low-32-bit operations, while volatile ensures the whole 64 bits are read or written at once (though modern commercial JVMs read and write 64-bit values atomically even without volatile). However, volatile does not make compound operations such as ++ or += atomic: two threads each performing ++ may leave the variable increased by only 1. volatile is cheaper than a lock, but because of these limitations it is typically used for status flags, or for counters where strict accuracy is not required (such as a rough count of server accesses). It can also be combined with a lock for low overhead: for example, declare a variable volatile, leave the get method unlocked, and synchronize the set method.
Publishing an object means making it usable outside its current scope, i.e. handing out a handle to it, for example by passing a reference as a parameter or exposing it through a field. When an object holds a handle to another object and the holder is published, the held object is published as well. When an object that should not be published is published, this is called escape. We should not publish this from a constructor, because the object is not yet fully constructed at that point; assigning this (or a holder of this) to a member of a constructor parameter does exactly that. Because of instruction reordering, even the plain act of assigning a newly constructed object to a shared variable can let the object escape.
Creating a new object involves allocating memory, initializing it, and assigning the address to a reference, and the order of initialization and assignment is not guaranteed: memory may be allocated and the reference published before initialization runs, so another thread can use the published object while the constructing thread has not yet finished (or even started) initializing it. If the reference is declared volatile, however, the assignment cannot be reordered with initialization: other threads will either see the reference as null or see a fully constructed object. This is why, in the double-checked-locking singleton pattern, the singleton field must be declared volatile to prevent escape.
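The double-checked-locking singleton can be sketched as follows; the volatile modifier on the field is what prevents a partially constructed instance from escaping:

```java
public class Singleton {
    // volatile forbids reordering of "allocate / initialize / publish",
    // so other threads never observe a partially constructed instance.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```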
Writing a final field in the constructor (a non-static final member variable can only be written in the <init> method) and the subsequent assignment of the constructed object's reference to a reference variable cannot be reordered with each other.
The JVM guarantees as-if-serial semantics and the happens-before principle. As-if-serial semantics means that within a single thread, no matter how instructions are reordered, the result of execution is unchanged; concretely, the JVM never reorders instructions with data dependencies. The happens-before principle says that if operation A happens-before operation B, then when A actually executes before B, the effects of A (shared-memory writes, messages sent, and so on) are visible to B; note that A happens-before B does not by itself guarantee that A executes before B. As-if-serial constrains reordering, while happens-before constrains visibility: if operation A is visible to operation B, then A happens-before B must hold and A must actually have executed before B. Eight rules determine the happens-before relation (two operations are related by happens-before only if one or more of the rules applies):

  • Transitivity rule: if A happens-before B and B happens-before C, then A happens-before C.
  • Program order rule: any two operations within one thread have a happens-before relationship. For example, for operations A and B, when A executes before B, A is visible to B because A happens-before B; if out-of-order execution makes B execute before A, then B is visible to A because B happens-before A also holds (A happens-before B is still satisfied, but since A did not actually execute before B, A is not visible to B).
  • Monitor lock rule: an unlock of a lock happens-before every subsequent lock of that same lock.
  • Volatile variable rule: a write to a volatile field happens-before every subsequent read of that field.
  • Thread start rule: a call to a Thread object's start() method happens-before every action of the started thread.
  • Thread termination rule: every operation in a thread happens-before the detection of that thread's termination; termination may be detected by Thread.join() returning or by Thread.isAlive() returning false.
  • Thread interruption rule: a call to a thread's interrupt() method happens-before the interrupted thread's code detecting the interrupt.
  • Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method.

Lock mechanism

Mutual exclusion means exclusive access to some resource. For example, when one thread increments a variable while another thread performs the same operation, only one of the two write-backs may take effect because increment is not atomic; incrementing the variable therefore needs to be a mutually exclusive operation. Synchronization means making multiple threads execute in the order the business logic requires. Whenever a member variable of an object may be accessed by multiple threads, great care is needed.
ReentrantLock provides mutual exclusion. Before the critical operation a thread calls the lock object's lock method; if the lock is available (not held by another thread) the thread acquires it, otherwise the thread blocks until the holder has fully released it via unlock. We usually put the unlock call in a finally block and the lock call just before the try block, so the try block forms a critical section containing the mutually exclusive operations. The lock is reentrant: a thread that already holds the lock can acquire it again, raising the lock's hold count to 2, and it must then release it twice before other threads can acquire it. Locks come in fair and unfair variants: when a fair lock is released it is granted to the thread at the head of the wait queue (the thread that has waited longest), while an unfair lock may grant it to any waiting thread. Fair locks are much less efficient than unfair ones, so unless there is a specific requirement we use unfair locks.
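The lock/try/finally idiom described above, sketched with an illustrative counter class:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock(); // unfair by default
    private int value;

    public void increment() {
        lock.lock();       // just before try: if lock() fails, there is nothing to unlock
        try {
            value++;       // critical section
        } finally {
            lock.unlock(); // always released, even if the body throws
        }
    }

    public int get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```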
The read-write lock ReentrantReadWriteLock holds two Lock objects: the read lock (a shared lock) is obtained via readLock (read locks cannot create conditions) and the write lock via writeLock (write locks can). When a thread calls lock on the write lock, it waits if the read lock or the write lock is held by another thread; when a thread calls lock on the read lock, it waits only if the write lock is held.
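A minimal read-write-lock sketch: a map protected so that reads proceed concurrently while writes are exclusive (class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CacheDemo {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private final Map<String, String> map = new HashMap<>();

    public String get(String key) {
        rw.readLock().lock();        // many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();       // exclusive: waits for all readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```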
One or more conditions can be waited on for a mutex or write lock; the lock's newCondition method returns a Condition associated with the lock. Calling await on a Condition makes the current thread release the lock and wait on the condition; it stays blocked until another thread releases the waiters with signal or signalAll, and an awakened thread does not return from await until it has re-acquired the lock. In the critical section, condition waiting is usually written as a loop, while (!conditionHolds) condition.await(), re-checking the predicate each time the thread wakes. A thread may wait on a lock's Condition only while it holds the lock.
Every object carries a built-in lock (monitor), similar to a built-in mutex, and that lock also carries one wait condition. A synchronized block is effectively a lock call on the lock object at the start of the block and an unlock call at the end, while an object's wait, notify and notifyAll methods correspond to await, signal and signalAll on the wait condition of the object's lock. synchronized on a non-static method locks the whole method body with the current object's lock; synchronized on a static method locks the whole method body with the lock of the class's Class object.
Working memory is refreshed from main memory when a lock is acquired and written back to main memory when it is released. volatile only guarantees visibility; locks guarantee both mutual exclusion and visibility.
State fields of a class that may be accessed by multiple threads should be protected inside the class itself (for example by synchronizing the get and set methods and making the state fields private so they cannot be touched directly from outside), rather than having every caller guard them separately; in other words, an object that may be accessed by multiple threads should be thread-safe by itself, as in the design of servlets.
For non-thread-safe classes such as SimpleDateFormat (java 8 adds the thread-safe DateTimeFormatter), multiple threads calling methods on a shared singleton can produce corrupted results. Mutually exclusive access works (lock on the singleton and make the span from initialization to the end of use a critical section) but is inefficient, while making the object a local variable creates too many instances. Giving each thread its own instance is the better idea. ThreadLocal<T> is a thread-local variable: define one global ThreadLocal and use its get and set methods; each thread then operates on its own copy without affecting other threads' copies. Internally, each Thread object (corresponding to one thread) holds a map field named threadLocals whose keys are the ThreadLocal variables themselves; a ThreadLocal's get and set ultimately read and write the entry keyed by that ThreadLocal in the current thread's threadLocals map.

// Lock interface
public interface Lock {
  void lock(); // Acquire the lock and return; if it cannot be acquired, enter an uninterruptible blocking wait.
  void lockInterruptibly() throws InterruptedException; // Acquire the lock and return; if it cannot be acquired, enter an interruptible blocking wait.
  boolean tryLock(); // If the lock can be acquired, acquire it and return true; otherwise return false immediately.
  boolean tryLock(long time, TimeUnit unit) throws InterruptedException; // Returns true if the lock is acquired immediately or within the waiting time, false on timeout; throws InterruptedException if interrupted while waiting.
  void unlock(); // Release the lock.
  Condition newCondition(); // Create a condition bound to this lock. 
}    
    
// exclusive lock
public class ReentrantLock implements Lock, java.io.Serializable {
  public ReentrantLock(boolean fair);  
  public ReentrantLock(); // Default unfair lock.
  public int getHoldCount(); // Get the reentrant count.
  public boolean isHeldByCurrentThread(); // Does the current thread hold the lock?
  public boolean isLocked(); // Is there a thread holding the lock?
  public final boolean isFair(); // Whether the lock is fair or not.
  public final boolean hasQueuedThreads(); // Is there a thread waiting for the lock?
  public final boolean hasQueuedThread(Thread thread); // Whether a thread is waiting for the lock.
  public final int getQueueLength(); // Number of threads waiting for the lock.
  public boolean hasWaiters(Condition condition); // Are there threads waiting on the condition?
  public int getWaitQueueLength(Condition condition); // Number of threads waiting on the condition.
}
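The lock/try/finally pattern described above can be sketched as follows (the class and method names here, Counter, increment and demo, are illustrative, not from the JDK):

```java
import java.util.concurrent.locks.ReentrantLock;

// A counter guarded by a ReentrantLock.
public class Counter {
    private final ReentrantLock lock = new ReentrantLock(); // unfair by default
    private int value = 0;

    public void increment() {
        lock.lock();          // lock() goes just before the try block
        try {
            value++;          // critical section: the increment is now atomic
        } finally {
            lock.unlock();    // unlock() always goes in finally
        }
    }

    public int get() {
        lock.lock();
        try { return value; } finally { lock.unlock(); }
    }

    public static int demo() {
        Counter c = new Counter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10000; i++) c.increment(); });
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return c.get(); // always 20000, because the increments are mutually exclusive
    }
}
```

Without the lock, the two threads' increments could interleave and some updates would be lost.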
  
// Shared lock, containing a ReadLock (cannot create conditions) and a WriteLock (can create conditions); both implement the Lock interface with no additional methods
public class ReentrantReadWriteLock implements ReadWriteLock, java.io.Serializable { 
  public ReentrantReadWriteLock(boolean fair);
  public ReentrantReadWriteLock(); // The default is an unfair lock.
  public ReentrantReadWriteLock.ReadLock readLock(); // Get the read lock.
  public ReentrantReadWriteLock.WriteLock writeLock(); // Get the write lock.
  public final boolean isFair(); // Whether the lock is fair or not.
  public int getReadLockCount(); // How many threads have acquired read locks?
  public boolean isWriteLocked(); // Are there threads that have access to write locks?
  public boolean isWriteLockedByCurrentThread(); // Does the current thread get a write lock?
  public int getWriteHoldCount(); // Number of write locks reentered by the current thread.
  public int getReadHoldCount(); // Number of read locks reentry for the current thread.
  public final boolean hasQueuedThreads(); // Is there a thread blocking on the shared lock?
  public final int getQueueLength(); // Number of threads waiting for shared locks.
  public boolean hasWaiters(Condition condition); // Are there threads waiting on the condition?
  public int getWaitQueueLength(Condition condition); // Number of threads waiting on condition.
}    
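A typical use of the read-write lock is a cache that is read far more often than it is written; a minimal sketch (RwCache is an illustrative name):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A simple map protected by a read-write lock: many concurrent readers,
// but writers get exclusive access.
public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();            // many readers may hold this at once
        try { return map.get(key); } finally { rw.readLock().unlock(); }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();           // exclusive: blocks readers and writers
        try { map.put(key, value); } finally { rw.writeLock().unlock(); }
    }
}
```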
    
// Lock condition
public interface Condition {
  void await() throws InterruptedException; // Enter the interruptible wait state until awakened.
  void awaitUninterruptibly(); // Enter an uninterruptible wait state.
  long awaitNanos(long nanosTimeout) throws InterruptedException; // Wait at most nanosTimeout nanoseconds; returns an estimate of the remaining time, with a value <= 0 meaning it timed out.
  boolean await(long time, TimeUnit unit) throws InterruptedException; // Wait to return true if awakened and false if timed out.
  boolean awaitUntil(Date deadline) throws InterruptedException; // If you wait for the condition before the deadline, return true, otherwise return false.
  void signal(); // Wake up one thread waiting on this condition (chosen arbitrarily).
  void signalAll(); // Wake up all threads waiting on this condition; more general and less deadlock-prone than signal.
}
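The while-loop condition wait described above is the heart of the classic bounded buffer; a minimal sketch with two conditions on one lock (BoundedBuffer and its members are illustrative names, and demo is just a single-threaded smoke test):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// A bounded buffer: put blocks while full, take blocks while empty.
public class BoundedBuffer<E> {
    private final Deque<E> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity)  // always re-check in a while loop
                notFull.await();              // releases the lock while waiting
            items.addLast(e);
            notEmpty.signalAll();             // wake consumers
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty())
                notEmpty.await();
            E e = items.removeFirst();
            notFull.signalAll();              // wake producers
            return e;
        } finally {
            lock.unlock();
        }
    }

    public static int demo() {
        BoundedBuffer<Integer> b = new BoundedBuffer<>(2);
        try {
            b.put(1); b.put(2);
            return b.take() + b.take();       // elements come out in FIFO order
        } catch (InterruptedException e) {
            return -1;
        }
    }
}
```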

// Object implicitly comes with a lock and condition
public class Object {
  public final void wait() throws InterruptedException;
  public final native void wait(long timeout) throws InterruptedException;
  public final void wait(long timeout, int nanos) throws InterruptedException;
  public final native void notify();  
  public final native void notifyAll();
}
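The same pattern using only the built-in monitor, i.e. synchronized with wait and notifyAll, can be sketched as a one-slot mailbox (Mailbox and demo are illustrative names):

```java
// A one-slot mailbox: put waits while the slot is occupied,
// take waits while it is empty.
public class Mailbox {
    private Object slot;          // null means empty

    public synchronized void put(Object msg) throws InterruptedException {
        while (slot != null)      // re-check in a while loop, as with Condition
            wait();               // releases the monitor while waiting
        slot = msg;
        notifyAll();              // wake threads blocked in take()
    }

    public synchronized Object take() throws InterruptedException {
        while (slot == null)
            wait();
        Object msg = slot;
        slot = null;
        notifyAll();              // wake threads blocked in put()
        return msg;
    }

    public static Object demo() {
        Mailbox m = new Mailbox();
        try {
            m.put("hi");
            return m.take();
        } catch (InterruptedException e) {
            return null;
        }
    }
}
```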
    
// Thread-local variable: ensures each thread has its own instance
public class ThreadLocal<T> {
  public ThreadLocal(); 
  protected T initialValue(); // Creates the thread-local value; override it to supply an initial value (or use ThreadLocal.withInitial on java 8).
  public T get(); // Returns the current thread's copy of the variable, creating it with initialValue if absent.
  public void set(T value); // Set the local variable of the current thread to value.
  public void remove(); // Remove the local variable of the current thread.
}
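The SimpleDateFormat scenario from above can be sketched like this (DateFormats and FORMAT are illustrative names; the java 8 withInitial factory replaces overriding initialValue):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// One SimpleDateFormat per thread instead of a shared, mutex-guarded singleton.
public class DateFormats {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date d) {
        return FORMAT.get().format(d); // each thread uses its own instance
    }
}
```

Because no instance is ever shared across threads, no lock is needed and no extra objects are created beyond one per thread.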

Synchronization strategy

Semaphore manages a number of permits. A thread that needs a permit obtains one with the acquire method; if none is available it waits on the semaphore until one is. Elsewhere a permit can be returned with release, and because release is unconditional (the caller need not hold a permit), the number of available permits can grow beyond the initial count.
CountDownLatch, a countdown latch, lets calling threads wait until its count reaches zero; it can be used to make one or more threads wait until other threads have completed a given number of tasks before proceeding. After initialization the count can only be decremented, never increased, i.e. a latch is not reusable.
CyclicBarrier, a barrier, has a party count. When the number of threads blocked at the barrier reaches that count, an optional barrier action runs and the barrier opens, waking all of them. If a blocked thread times out or is interrupted, the other blocked threads throw BrokenBarrierException, meaning the barrier is broken. After releasing all waiting threads the barrier can intercept threads again until a new full count arrives, i.e. unlike the countdown latch it is reusable. It is useful when multiple threads must all be ready before any of them proceeds: each thread finishes preparing, then waits at the barrier.
Exchanger serves exactly two threads exchanging data of the same type. When both threads have called the exchange method of the same Exchanger, the two values are swapped; if only one has called exchange, it waits for the other to call it.
SynchronousQueue is equivalent to a blocking queue of capacity 0: putting or taking blocks until another thread arrives to take or put, i.e. producers wait for consumers and vice versa, with no buffer in between.
Most of these synchronization mechanisms, like the mutexes, are implemented on top of AQS (AbstractQueuedSynchronizer), which in turn relies on the atomic classes, chiefly their CAS methods. Each of these classes contains a member that is an implementation of AQS, usually a concrete AQS subclass defined as an inner class, and the class's public methods mostly delegate to that member.

public class Semaphore implements java.io.Serializable {
  public Semaphore(int permits); // Default unfair semaphore.
  public Semaphore(int permits, boolean fair); // Sets the initial number of permits and whether the semaphore is fair; a fair semaphore hands out permits in waiting order when they become available.
  public void acquire() throws InterruptedException; // Acquire a permit; if none is available, enter an interruptible wait until one is.
  public void acquireUninterruptibly(); // Acquire a permit; if none is available, enter an uninterruptible wait.
  public boolean tryAcquire(); // Try to acquire a permit; returns whether one was acquired.
  public boolean tryAcquire(long timeout, TimeUnit unit) throws InterruptedException; // Try to acquire a permit, waiting up to the given time; returns whether one was acquired.
  public void release(); // Add an available permit.
  public void acquire(int permits) throws InterruptedException; // Acquire the given number of permits.
  public void acquireUninterruptibly(int permits); 
  public boolean tryAcquire(int permits);
  public boolean tryAcquire(int permits, long timeout, TimeUnit unit) throws InterruptedException;
  public void release(int permits);
  public int availablePermits(); // Returns the number of available permits.
  public int drainPermits(); // Takes all available permits and returns how many were taken.
  public boolean isFair(); // Whether the semaphore is fair.
  public final boolean hasQueuedThreads(); // Are there threads waiting for permits?
  public final int getQueueLength(); // Number of threads waiting for permits.
}    
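A minimal sketch of how permits behave (LimitedSection and demo are illustrative names): with two permits, a third acquisition fails until a permit is released.

```java
import java.util.concurrent.Semaphore;

// Demonstrates permit counting: at most two holders at a time.
public class LimitedSection {
    public static int demo() {
        Semaphore sem = new Semaphore(2);        // two permits
        sem.acquireUninterruptibly();            // takes the first permit
        sem.acquireUninterruptibly();            // takes the second
        boolean third = sem.tryAcquire();        // no permits left -> false
        sem.release();                           // give one back
        boolean afterRelease = sem.tryAcquire(); // now succeeds
        return (third ? 1 : 0) + (afterRelease ? 10 : 0);
    }
}
```

In a real program the acquire/release pair would bracket the section whose concurrency is being limited, with release in a finally block.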
    
public class CountDownLatch {
  public CountDownLatch(int count); // Its count can only be specified at creation time.
  public void await() throws InterruptedException; // Wait until the count reaches 0.
  public boolean await(long timeout, TimeUnit unit) throws InterruptedException; // Wait for the count to be zero or timeout, if wait for the count to be zero to return true, otherwise return false.
  public void countDown(); // Count minus one.
  public long getCount(); // Gets the current count.
}    
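The wait-for-workers use of the latch can be sketched as follows (LatchDemo, demo and the worker bodies are illustrative):

```java
import java.util.concurrent.CountDownLatch;

// The calling thread waits until two worker threads have counted down.
public class LatchDemo {
    public static int demo() {
        CountDownLatch done = new CountDownLatch(2);
        int[] results = new int[2];
        for (int i = 0; i < 2; i++) {
            final int idx = i;
            new Thread(() -> {
                results[idx] = idx + 1;   // do some work
                done.countDown();         // signal completion
            }).start();
        }
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return results[0] + results[1];   // both workers have finished here
    }
}
```

countDown/await also establishes a happens-before edge, so the workers' writes to results are visible after await returns.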
    
public class CyclicBarrier {
  public CyclicBarrier(int parties, Runnable barrierAction);
  public CyclicBarrier(int parties);
  public int getParties(); // Returns the number of parties required to trip the barrier.
  public int getNumberWaiting(); // Returns the number of threads currently waiting at the barrier.
  public int await() throws InterruptedException, BrokenBarrierException; // Wait at the barrier; returns this thread's arrival index (getParties() - 1 for the first arrival, 0 for the last).
  public int await(long timeout, TimeUnit unit) throws InterruptedException, BrokenBarrierException, TimeoutException; // Same, but throws TimeoutException (and breaks the barrier) if the timeout elapses.
  public boolean isBroken(); // Has the barrier been broken?
  public void reset(); // Reset the barrier to its initial state; any waiting threads receive a BrokenBarrierException.
}    
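A minimal sketch of three threads meeting at a barrier, with the barrier action counting trips (BarrierDemo, trips and demo are illustrative names):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

// Three parties must arrive before any proceeds; the barrier action
// runs exactly once per trip, in the last thread to arrive.
public class BarrierDemo {
    public static int demo() {
        AtomicInteger trips = new AtomicInteger();
        CyclicBarrier barrier = new CyclicBarrier(3, trips::incrementAndGet);
        Runnable worker = () -> {
            try {
                barrier.await();   // blocks until all three threads have arrived
            } catch (InterruptedException | BrokenBarrierException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        worker.run();              // the calling thread is the third party
        try { t1.join(); t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return trips.get();        // the barrier action ran exactly once
    }
}
```

Because the barrier resets after each trip, the same three threads could rendezvous again, which is exactly what CountDownLatch cannot do.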
    
//  Type parameters are data structures for exchanging content
public class Exchanger<V> {
  public Exchanger(); 
  public V exchange(V x) throws InterruptedException; // Hands x to the other thread and returns the value the other thread handed over.
  public V exchange(V x, long timeout, TimeUnit unit) throws InterruptedException, TimeoutException; 
}
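A two-thread swap through one Exchanger can be sketched like this (SwapDemo and the exchanged strings are illustrative):

```java
import java.util.concurrent.Exchanger;

// Each side gives one value to exchange() and receives the other side's value.
public class SwapDemo {
    public static String demo() {
        Exchanger<String> exchanger = new Exchanger<>();
        final String[] fromOther = new String[1];
        Thread other = new Thread(() -> {
            try {
                fromOther[0] = exchanger.exchange("from-other"); // gives its value, gets ours
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        other.start();
        String got;
        try {
            got = exchanger.exchange("from-main"); // blocks until both sides have called exchange
            other.join();
        } catch (InterruptedException e) {
            return null;
        }
        return got + "/" + fromOther[0];
    }
}
```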

// Always empty and always full, usually only put and take methods are used.
public class SynchronousQueue<E> extends AbstractQueue<E> {
  public SynchronousQueue(); // Default is unfair.
  public SynchronousQueue(boolean fair); // If fair, waiting threads are served in FIFO (arrival) order.
  public void put(E e) throws InterruptedException; // If a thread is already waiting to take, hand it the element and wake it; otherwise block until another thread takes it.
  public E take() throws InterruptedException; // If a thread is already waiting to put, take its element and wake it; otherwise block until another thread puts one.
}

Blocking queue

A blocking queue is a thread-safe queue; it is the embodiment of the producer-consumer model.

public interface BlockingQueue<E> extends Queue<E> {
  boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException; // Insert an element, blocking the current thread if the queue is full; returns false on timeout, true if inserted.
  E poll(long timeout, TimeUnit unit) throws InterruptedException; // Remove an element, blocking the current thread if the queue is empty; returns null on timeout.
  void put(E e) throws InterruptedException; // Add a new element from the end of the queue and block the current thread if the queue is full.
  E take() throws InterruptedException; // Remove the element from the queue head and block the current thread if the queue is empty.
}

public interface BlockingDeque<E> extends BlockingQueue<E>, Deque<E> {
  boolean offerFirst(E e, long timeout, TimeUnit unit) throws InterruptedException;
  boolean offerLast(E e, long timeout, TimeUnit unit) throws InterruptedException;
  E pollFirst(long timeout, TimeUnit unit) throws InterruptedException;
  E pollLast(long timeout, TimeUnit unit) throws InterruptedException;
  void putFirst(E e) throws InterruptedException;
  void putLast(E e) throws InterruptedException;
  E takeFirst() throws InterruptedException;
  E takeLast() throws InterruptedException;
}    
    
// Mutually exclusive access through two locks (full and empty) and two conditions (full and empty)
public class LinkedBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
  public LinkedBlockingQueue(); // Builds a queue with capacity Integer.MAX_VALUE.
  public LinkedBlockingQueue(int capacity);
  public LinkedBlockingQueue(Collection<? extends E> c);
}    
    
// Mutually exclusive access through locks and two conditions (full and empty)
public class ArrayBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
  public ArrayBlockingQueue(int capacity);
  public ArrayBlockingQueue(int capacity, boolean fair); // Fair determines whether the lock is a fair lock.
  public ArrayBlockingQueue(int capacity, boolean fair, Collection<? extends E> c);  
}    
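The producer-consumer model mentioned above can be sketched with a bounded ArrayBlockingQueue (ProducerConsumer and demo are illustrative names; the small capacity forces the producer to block):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A producer puts numbers into a bounded queue; the caller consumes and sums them.
public class ProducerConsumer {
    public static int demo() {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // capacity 2
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) queue.put(i); // blocks while the queue is full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < 5; i++) sum += queue.take(); // blocks while the queue is empty
            producer.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum; // 1+2+3+4+5
    }
}
```

All the blocking and waking is handled by the queue itself; neither side touches a lock or condition directly.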
    
// Elements are not taken in first-in-first-out order but smallest first.
public class PriorityBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable {
  public PriorityBlockingQueue(); // Builds a queue with the default initial capacity.
  public PriorityBlockingQueue(int initialCapacity);
  public PriorityBlockingQueue(int initialCapacity, Comparator<? super E> comparator);
  public PriorityBlockingQueue(Collection<? extends E> c);
  public Comparator<? super E> comparator(); // Returns the comparator.
  public void put(E e); // Equivalent to offer(e); never blocks (the queue is unbounded).
  public boolean offer(E e, long timeout, TimeUnit unit); // Equivalent to offer(e); never blocks, so timeout and unit are effectively unused.
}    
    
// Double-ended queue
public class LinkedBlockingDeque<E> extends AbstractQueue<E> implements BlockingDeque<E>, java.io.Serializable {
  public LinkedBlockingDeque(); // Builds a deque with capacity Integer.MAX_VALUE.
  public LinkedBlockingDeque(int capacity); // Specify the queue length.
  public LinkedBlockingDeque(Collection<? extends E> c);
}    

Executors

Executor has a single execute method for running Runnable tasks; depending on the concrete implementation, a task may be run in different ways, for example on the current thread or on a new thread. ExecutorService is a sub-interface of Executor that extends its functionality and is the more useful interface: it can run not only Runnable tasks but also Callable tasks, and it lets tasks report status and be controlled. The abstract class AbstractExecutorService implements some of the ExecutorService methods. ScheduledExecutorService extends the ExecutorService interface for delayed and periodic execution of tasks; the ThreadPoolExecutor class extends AbstractExecutorService and executes tasks with a thread pool.
A thread pool has two main advantages. First, threads are reused, avoiding the large overhead of creating and destroying them, which matters most when many short-lived threads are needed. Second, it bounds the number of concurrent threads, with a maximum chosen according to system capacity (beyond a certain number of threads, performance degrades sharply and the system may even crash); excess tasks wait until an idle thread appears in the pool.
A task submitted to an executor service can be a class that implements Runnable or Callable. The Runnable interface has a single void run() method, so such a task returns nothing, while the Callable interface has a single V call() method, so a task implementing Callable can return a value of type V.
Submitting a task to an executor service returns a Future object, through which the task can be controlled and its result retrieved.
Executors, java's executor factory class, provides common executors through static methods such as newCachedThreadPool, newFixedThreadPool, newSingleThreadExecutor, newScheduledThreadPool and newSingleThreadScheduledExecutor.

// Executor interface
public interface Executor {
  void execute(Runnable command); // Execute a task; whether the current thread or a new thread runs it depends on the executor's implementation.
}    
    
// The executor service interface extends Executor with additional functionality.
public interface ExecutorService extends Executor {
  void shutdown(); // Shut down the executor: finish previously submitted tasks but accept no new ones.
  List<Runnable> shutdownNow(); // Shut down the executor: try to stop all actively executing tasks (by interrupting them), stop processing waiting tasks and return the list of tasks awaiting execution; no new tasks are accepted.
  boolean isShutdown(); // Is it closed?
  boolean isTerminated(); // Returns true only if the executor has been shut down and all tasks have completed.
  boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException; // Wait until isTerminated becomes true (returns true) or the timeout elapses (returns false).
  <T> Future<T> submit(Callable<T> task); // Submit a task; the returned Future yields the task's result of type T.
  <T> Future<T> submit(Runnable task, T result); // Submit a task; the Future returns the given result when the task completes.
  Future<?> submit(Runnable task); // Submit a task; the Future's get blocks until the task completes and then returns null.
  <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks) throws InterruptedException; // Execute the given tasks, blocking until all complete; returns a list of Futures holding their status and results.
  <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit) throws InterruptedException; // Same, but on timeout returns and cancels the unfinished tasks.
  <T> T invokeAny(Collection<? extends Callable<T>> tasks) throws InterruptedException, ExecutionException; // Returns the result of the first task to complete and cancels the rest; useful e.g. for a partitioned search, one task per partition.
  <T> T invokeAny(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException;
}    
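Submitting a Callable and reading its result through the Future can be sketched as follows (SubmitDemo and demo are illustrative names):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Submit a Callable to a fixed-size pool and retrieve its result.
public class SubmitDemo {
    public static int demo() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> future = pool.submit(() -> 21 * 2); // Callable<Integer>
            return future.get(); // blocks the caller until the task finishes
        } catch (InterruptedException | ExecutionException e) {
            return -1;
        } finally {
            pool.shutdown(); // stop accepting new tasks, let submitted ones finish
        }
    }
}
```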

// The scheduled executor service interface extends ExecutorService with one-shot delayed and periodic task execution; it is preferable to Timer.
public interface ScheduledExecutorService extends ExecutorService {
  public ScheduledFuture<?> schedule(Runnable command, long delay, TimeUnit unit); // Executes the command task after the given delay.
  public <V> ScheduledFuture<V> schedule(Callable<V> callable, long delay, TimeUnit unit); // Executes the callable task after the given delay.
  public ScheduledFuture<?> scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit); // First runs command after initialDelay, then every period, measured from the start of each run.
  public ScheduledFuture<?> scheduleWithFixedDelay(Runnable command, long initialDelay, long delay, TimeUnit unit); // First runs command after initialDelay, then again delay time after each run completes.
}
}

// The thread pool executor class extends the abstract class AbstractExecutorService
public class ThreadPoolExecutor extends AbstractExecutorService {
  public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler);
      // If fewer than corePoolSize threads are running, a newly submitted task starts a new thread, even if other threads are idle. Otherwise the task waits in the queue for an idle thread; if the queue is full and fewer than maximumPoolSize threads are running, a new thread is started for it.
      // If the queue is full and maximumPoolSize threads are already running, the task is rejected (passed to the handler). Idle threads in excess of corePoolSize are destroyed after keepAliveTime.
  public boolean isTerminating();
  public boolean remove(Runnable task); // Remove the task if it has not started yet, so it will never run.
  public long getTaskCount();
  public long getCompletedTaskCount();
  // ... getters and setters corresponding to the constructor parameters. 
}    
    
// The interface returned when a task is submitted to an executor service; V is the type of the task's result
public interface Future<V> {
  boolean cancel(boolean mayInterruptIfRunning); // If the task has not started, it will never run; if it has started and mayInterruptIfRunning is true, an interrupt is sent to the thread executing it.
  boolean isCancelled(); // Whether the task was cancelled; a cancelled task may nonetheless still be running.
  boolean isDone(); // Whether the task is done, by normal completion, cancellation or exception (a cancelled task may still be running if it ignores the interrupt signal).
  V get() throws InterruptedException, ExecutionException; // Block the calling thread until the task completes, then return its result.
  V get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException; // Block the calling thread until the task completes, or throw TimeoutException if the timeout elapses.
}

// Executor factory class
public class Executors {
  public static ExecutorService newFixedThreadPool(int nThreads); // A thread pool of fixed size; if no thread is free, submitted tasks wait for one to become available.
  public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory); // ThreadFactory has a single newThread(Runnable) method that returns a thread.
  public static ExecutorService newSingleThreadExecutor(); // All tasks are executed serially on one new thread.
  public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory); // Only one thread is ever created, so threadFactory's newThread is called only once.
  public static ExecutorService newCachedThreadPool(); // Create threads if necessary, and idle threads will remain for 60 seconds, which is suitable for situations where task requests are frequent but the total amount of tasks is not too large.
  public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory); 
  public static ScheduledExecutorService newSingleThreadScheduledExecutor(); // All scheduled tasks are executed serially in the same thread.
  public static ScheduledExecutorService newSingleThreadScheduledExecutor(ThreadFactory threadFactory);
  public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize); // Take a thread from the thread pool of a specified size to perform the scheduled task. If there are no available threads, let the task wait.
  public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize, ThreadFactory threadFactory); 
  public static ExecutorService unconfigurableExecutorService(ExecutorService executor); // Returns a proxy wrapping the executor.
                          // Whatever the executor's concrete class, the returned object cannot be downcast to it; only ExecutorService methods can be called, i.e. the concrete class's configuration methods are frozen.
  public static ScheduledExecutorService unconfigurableScheduledExecutorService(ScheduledExecutorService executor);
  public static ThreadFactory defaultThreadFactory();
  public static <T> Callable<T> callable(Runnable task, T result); // In fact, it calls the run method of task in the call method of Callable and returns the result.
  public static Callable<Object> callable(Runnable task); // The return value is null.
}

// A drawback of batch execution (invokeAll) is that it returns only after all tasks finish. A completion service keeps a queue: as submitted tasks complete, their results are placed on it, which suits processing each result as soon as its task finishes.
public class ExecutorCompletionService<V> implements CompletionService<V> {
  public ExecutorCompletionService(Executor executor);
  public ExecutorCompletionService(Executor executor, BlockingQueue<Future<V>> completionQueue);
  public Future<V> submit(Callable<V> task);
  public Future<V> submit(Runnable task, V result);
  public Future<V> take() throws InterruptedException; // Gets and removes the first completed task return handle.
  public Future<V> poll();
  public Future<V> poll(long timeout, TimeUnit unit) throws InterruptedException;  
}
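Consuming results in completion order can be sketched like this (CompletionDemo and demo are illustrative names):

```java
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Submit three tasks and consume their results as each one finishes,
// regardless of submission order.
public class CompletionDemo {
    public static int demo() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        ExecutorCompletionService<Integer> service = new ExecutorCompletionService<>(pool);
        for (int i = 1; i <= 3; i++) {
            final int n = i;
            service.submit(() -> n * n);         // Callable<Integer>
        }
        int sum = 0;
        try {
            for (int i = 0; i < 3; i++)
                sum += service.take().get();     // take() yields the next finished task
        } catch (Exception e) {
            Thread.currentThread().interrupt();
        } finally {
            pool.shutdown();
        }
        return sum; // 1 + 4 + 9, whatever order the tasks finished in
    }
}
```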


Added by areid on Sun, 04 Aug 2019 16:32:52 +0300