ConcurrentHashMap in Java 7
Concurrency Level: the parallelism level, i.e. the number of Segments. The default is 16, meaning a ConcurrentHashMap has 16 Segments and can, in theory, support up to 16 threads writing concurrently, as long as their operations land on different Segments. This value can be set to something else at construction time, but once set it cannot be changed.
Within each Segment, the structure is similar to that of a HashMap, but the handling is more involved because it must be thread-safe.
Initialization
Initial Capacity: the initial capacity of the entire ConcurrentHashMap; in practice it is divided evenly among the Segments.
Load Factor: the load factor. Since the Segment array cannot be resized, the load factor is applied inside each Segment individually.
public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1;
    // Compute the parallelism level ssize: it must be a power of 2
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    // By default, concurrencyLevel is 16 and sshift is 4,
    // so segmentShift is 28 and segmentMask is 15; both are used later.
    this.segmentShift = 32 - sshift;
    this.segmentMask = ssize - 1;
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    // initialCapacity sets the initial size of the entire map.
    // Here the size of each Segment's internal array is derived from it:
    // if initialCapacity is 64, each Segment (or "slot") gets 4.
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    // The default MIN_SEGMENT_TABLE_CAPACITY is 2. This value is deliberate:
    // with it, inserting the first element into a given slot does not trigger
    // a resize, but inserting the second one does.
    int cap = MIN_SEGMENT_TABLE_CAPACITY;
    while (cap < c)
        cap <<= 1;
    // Create the Segment array and its first element, segment[0]
    Segment<K,V> s0 = new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                                       (HashEntry<K,V>[])new HashEntry[cap]);
    Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
    // Write segment[0] into the array
    UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
    this.segments = ss;
}
Once initialization completes, we have a Segment array. If the no-argument constructor new ConcurrentHashMap() is used, then at this point:
1) The Segment array has length 16 (concurrencyLevel) and cannot be resized.
2) The default size of segment[i] is 2 and the load factor is 0.75, giving an initial threshold of 1.5, so inserting the first element does not trigger a resize, but inserting the second element triggers the first one.
3) segment[0] is initialized; the other slots are still null.
4) segmentShift is 32 - sshift (4) = 28, where sshift is the number of left shifts of 1 needed to reach ssize, the smallest power of 2 greater than or equal to concurrencyLevel. segmentMask is ssize - 1 = 16 - 1 = 15. Both values are combined with hash values to obtain the Segment array index and, within a Segment, the internal array index.
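The shift-and-mask arithmetic above can be reproduced in isolation. The sketch below uses a hypothetical helper class, SegmentMath (not JDK code), to show how sshift, segmentShift and segmentMask fall out of the concurrency level, and how the top bits of a hash select a segment:

```java
// Hypothetical helper reproducing the Java 7 segment-index arithmetic.
public class SegmentMath {
    // Round concurrencyLevel up to a power of two, tracking the shift count,
    // exactly as the constructor's while loop does.
    public static int[] shiftAndMask(int concurrencyLevel) {
        int sshift = 0, ssize = 1;
        while (ssize < concurrencyLevel) { ++sshift; ssize <<= 1; }
        int segmentShift = 32 - sshift;   // 28 for the default level of 16
        int segmentMask = ssize - 1;      // 15 for the default level of 16
        return new int[] { segmentShift, segmentMask };
    }

    // Segment index = high sshift bits of the 32-bit hash.
    public static int segmentIndex(int hash, int segmentShift, int segmentMask) {
        return (hash >>> segmentShift) & segmentMask;
    }

    public static void main(String[] args) {
        int[] sm = shiftAndMask(16);
        System.out.println("shift=" + sm[0] + " mask=" + sm[1]); // shift=28 mask=15
        // 0xABCD1234 >>> 28 keeps the top nibble 0xA, i.e. segment 10
        System.out.println("index=" + segmentIndex(0xABCD1234, sm[0], sm[1]));
    }
}
```

Note that a non-power-of-two level such as 10 is rounded up to 16 segments, which is why the mask is always ssize - 1.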
put process
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    // 1. Compute the hash of the key
    int hash = hash(key);
    // 2. Find the position j in the Segment array from the hash value:
    //    the hash is 32 bits; shifting right (unsigned) by segmentShift (28)
    //    leaves the high 4 bits, which are then masked with segmentMask (15).
    //    So j is taken from the top four bits of the hash.
    int j = (hash >>> segmentShift) & segmentMask;
    // As noted above, only segment[0] is initialized in the constructor;
    // the other slots are still null. ensureSegment(j) initializes segment[j].
    if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
         (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
        s = ensureSegment(j);
    // 3. Insert the new value into segment s
    return s.put(key, hash, value, false);
}
This layer is simple: compute the 32-bit hash of the key, shift it right (unsigned) by segmentShift (28 = 32 - sshift), and mask with segmentMask (15 = ssize - 1) to obtain the Segment array index from the high 4 bits of the hash. Since only segment[0] was initialized up front, the target slot may need to be initialized first; then the new value is inserted into that Segment.
Insertion inside Segment
The internal structure of a Segment is similar to a HashMap: an array combined with linked lists. The difference is that when the newly inserted key-value pair has no matching key in the corresponding list, ConcurrentHashMap sets the new pair as the head of that list.
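The head-insertion just described can be sketched in isolation. This is a minimal illustration with a hypothetical Node class, not the JDK's HashEntry:

```java
// Minimal sketch of head insertion into a bucket's singly linked list,
// the strategy Segment.put uses for new keys.
public class HeadInsert {
    static final class Node {
        final String key;
        String value;
        Node next;
        Node(String key, String value, Node next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    // The new node's next points at the old head, so the new node becomes the head.
    static Node insertAtHead(Node first, String key, String value) {
        return new Node(key, value, first);
    }

    static String keysFrontToBack(Node head) {
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) sb.append(n.key);
        return sb.toString();
    }

    public static void main(String[] args) {
        Node head = null;
        head = insertAtHead(head, "a", "1");
        head = insertAtHead(head, "b", "2");
        System.out.println(keysFrontToBack(head)); // "ba": most recent key first
    }
}
```

Because readers start from whatever head they observed, a concurrent get that began before the insertion simply traverses the old list unchanged.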
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    // Acquire the Segment's exclusive lock before writing. If tryLock() fails,
    // scanAndLockForPut acquires the lock (and may speculatively create the
    // node to insert); that method is analyzed below.
    HashEntry<K,V> node = tryLock() ? null :
        scanAndLockForPut(key, hash, value);
    V oldValue;
    try {
        // The array inside this segment
        HashEntry<K,V>[] tab = table;
        // Use the hash to find the array index for this key
        int index = (tab.length - 1) & hash;
        // first is the head of the linked list at that index
        HashEntry<K,V> first = entryAt(tab, index);
        // The for loop below is long but straightforward: it handles both the
        // case where the slot is empty and the case where a list already exists.
        for (HashEntry<K,V> e = first;;) {
            if (e != null) {
                K k;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {
                    oldValue = e.value;
                    if (!onlyIfAbsent) {
                        // Overwrite the old value
                        e.value = value;
                        ++modCount;
                    }
                    break;
                }
                // Keep walking the list
                e = e.next;
            }
            else {
                // Whether node is null depends on how the lock was acquired;
                // either way the new node becomes the list head here: if it is
                // not null, just link it in front; if null, create it first.
                if (node != null)
                    node.setNext(first);
                else
                    node = new HashEntry<K,V>(hash, key, value, first);
                int c = count + 1;
                // If the segment exceeds its threshold, it must be resized
                if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                    rehash(node); // resizing is analyzed in detail below
                else
                    // Below the threshold: place node at index in tab,
                    // i.e. make the new node the head of the original list
                    setEntryAt(tab, index, node);
                ++modCount;
                count = c;
                oldValue = null;
                break;
            }
        }
    } finally {
        // Unlock
        unlock();
    }
    return oldValue;
}
The overall put flow is similar to HashMap's put; because the exclusive lock protects the whole operation, the work inside the segment is not complicated.
Here are some key steps.
Initializing a Segment: ensureSegment
segment[0] is created when the ConcurrentHashMap itself is initialized; every other Segment is created lazily, when the first value is inserted into it.
Concurrency must be considered here, because multiple threads may try to initialize the same Segment slot at the same time.
private Segment<K,V> ensureSegment(int k) {
    final Segment<K,V>[] ss = this.segments;
    long u = (k << SSHIFT) + SBASE; // raw offset
    Segment<K,V> seg;
    if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
        // Here you can see why segment[0] was initialized up front:
        // segment[k] is initialized using the *current* array length and load
        // factor of segment[0] — "current" because segment[0] may long since
        // have been resized.
        Segment<K,V> proto = ss[0];
        int cap = proto.table.length;
        float lf = proto.loadFactor;
        int threshold = (int)(cap * lf);
        // Initialize the array inside segment[k]
        HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
        if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
            // Recheck whether the slot has been initialized by another thread
            Segment<K,V> s = new Segment<K,V>(lf, threshold, tab);
            // CAS in a loop: exit once this thread, or some other thread,
            // has successfully installed a value
            while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
                if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                    break;
            }
        }
    }
    return seg;
}
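The read, re-check, CAS pattern above can be sketched with AtomicReferenceArray instead of Unsafe. LazySlots is a hypothetical class, not JDK code; only one thread's candidate object wins, and every caller ends up with the same instance:

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

// Sketch of the ensureSegment pattern: read the slot, re-check it, then CAS,
// so only one thread's instance is installed; losers adopt the winner's object.
public class LazySlots {
    private final AtomicReferenceArray<Object> slots =
        new AtomicReferenceArray<>(16);

    public Object ensureSlot(int k) {
        Object seg = slots.get(k);
        if (seg == null) {
            // Built outside the CAS, like new Segment(...) in the JDK code
            Object candidate = new Object();
            while ((seg = slots.get(k)) == null) {
                if (slots.compareAndSet(k, null, candidate)) {
                    seg = candidate;
                    break;
                }
            }
        }
        return seg;
    }

    public static void main(String[] args) {
        LazySlots ls = new LazySlots();
        // Both calls return the identical object
        System.out.println(ls.ensureSlot(3) == ls.ensureSlot(3));
    }
}
```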
Get the write lock: scanAndLockForPut
When putting into a Segment, the code first runs node = tryLock() ? null : scanAndLockForPut(key, hash, value); that is, it first tries a quick tryLock() to acquire the segment's exclusive lock, and on failure falls into scanAndLockForPut to acquire it.
private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
    HashEntry<K,V> first = entryForHash(this, hash);
    HashEntry<K,V> e = first;
    HashEntry<K,V> node = null;
    int retries = -1; // negative while locating node
    // Loop trying to acquire the lock
    while (!tryLock()) {
        HashEntry<K,V> f; // to recheck first below
        if (retries < 0) {
            if (e == null) {
                if (node == null) // speculatively create node
                    // Reaching here means the list at this index is empty.
                    // (We also only got here because tryLock() failed, so there
                    // is contention on this segment, if not on this exact slot.)
                    node = new HashEntry<K,V>(hash, key, value, null);
                retries = 0;
            }
            else if (key.equals(e.key))
                retries = 0;
            else
                // Walk down the list
                e = e.next;
        }
        // If retries exceeds MAX_SCAN_RETRIES (1 on a single core, 64 on
        // multi-core), stop spinning, enter the wait queue, and block on
        // lock() until the lock is acquired.
        else if (++retries > MAX_SCAN_RETRIES) {
            lock();
            break;
        }
        else if ((retries & 1) == 0 &&
                 // A new element has been inserted and become the new head,
                 // so the strategy is to start the traversal over.
                 (f = entryForHash(this, hash)) != first) {
            e = first = f; // re-traverse if entry changed
            retries = -1;
        }
    }
    return node;
}
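The locking strategy itself, spin on tryLock() a bounded number of times, then fall back to the blocking lock(), can be shown with a plain ReentrantLock. SpinThenBlock and MAX_SPINS are illustrative names, not the JDK's:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of scanAndLockForPut's acquisition strategy: bounded spinning on
// tryLock(), then a blocking lock() as the fallback.
public class SpinThenBlock {
    static final int MAX_SPINS = 64; // illustrative; the JDK uses MAX_SCAN_RETRIES

    // Returns the number of failed tryLock() attempts before the lock was held.
    static int acquire(ReentrantLock lock) {
        int retries = 0;
        while (!lock.tryLock()) {
            if (++retries > MAX_SPINS) {
                lock.lock(); // give up spinning; block until the lock is free
                break;
            }
        }
        return retries;
    }

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        int retries = acquire(lock); // uncontended: succeeds immediately
        System.out.println("retries=" + retries);
        lock.unlock();
    }
}
```

The point of the spin phase in the real code is that the thread uses the waiting time productively, walking the bucket and pre-building the node it will insert.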
Expansion: rehash
Since the Segment array cannot be resized after initialization, resizing can only be applied to the HashEntry array inside each Segment.
During a put, if inserting the key-value pair would push the current size past the threshold, the segment is resized first and the value inserted afterwards.
There is no concurrency to worry about during the resize, because the Segment's exclusive lock is held at that point.
The array capacity is doubled. Traversing the original array, the list at index i is split between index i and index i + oldCap of the new array: if the old capacity is 16, the elements of the list at index 3 can only end up at positions 3 and 19 of the new array. In addition, a lastRun node is located, and lastRun together with all nodes after it is moved in one step, since they all map to the same new index; those nodes need no copying (though if lastRun happens to be near the end of the list, the scan that finds it is largely wasted effort).
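The lastRun scan can be reproduced on its own. This sketch uses a hypothetical Node class; given old capacity 16 (new sizeMask 31), a list with hashes 3, 19, 19, 19 has its lastRun at the second node, so that whole suffix moves as one unit:

```java
// Sketch of the rehash lastRun optimization: find the start of the longest
// tail run of nodes that all map to the same new index, so the whole suffix
// can be relinked without cloning.
public class LastRun {
    static final class Node {
        final int hash;
        Node next;
        Node(int hash, Node next) { this.hash = hash; this.next = next; }
    }

    static Node findLastRun(Node head, int sizeMask) {
        Node lastRun = head;
        int lastIdx = head.hash & sizeMask;
        for (Node p = head.next; p != null; p = p.next) {
            int k = p.hash & sizeMask;
            if (k != lastIdx) {       // index changed: the run restarts here
                lastIdx = k;
                lastRun = p;
            }
        }
        return lastRun;
    }

    public static void main(String[] args) {
        // hashes 3, 19, 19, 19 with new capacity 32 (sizeMask 31)
        Node d = new Node(19, null), c = new Node(19, d),
             b = new Node(19, c), a = new Node(3, b);
        System.out.println(findLastRun(a, 31) == b); // true: suffix b->c->d moves intact
    }
}
```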
// The node parameter is the entry that triggered this resize and must be
// added to the new array afterwards.
private void rehash(HashEntry<K,V> node) {
    HashEntry<K,V>[] oldTable = table;
    int oldCapacity = oldTable.length;
    // Double the capacity
    int newCapacity = oldCapacity << 1;
    threshold = (int)(newCapacity * loadFactor);
    // Create the new array
    HashEntry<K,V>[] newTable = (HashEntry<K,V>[]) new HashEntry[newCapacity];
    // If we grow from 16 to 32, sizeMask is 31, i.e. binary `000...00011111`
    int sizeMask = newCapacity - 1;
    // Traverse the original array, splitting the list at index i
    // across indices i and i + oldCap of the new array
    for (int i = 0; i < oldCapacity; i++) {
        // e is the first element of the list at index i
        HashEntry<K,V> e = oldTable[i];
        if (e != null) {
            HashEntry<K,V> next = e.next;
            // Compute where e belongs in the new array.
            // If the old length is 16 and e sits at oldTable[3],
            // idx can only be 3 or 3 + 16 = 19.
            int idx = e.hash & sizeMask;
            if (next == null)
                // Only one element at this index — the easy case
                newTable[idx] = e;
            else { // Reuse consecutive sequence at same slot
                // e is the list head
                HashEntry<K,V> lastRun = e;
                // idx is the new index of the current head node e
                int lastIdx = idx;
                // This loop finds lastRun, the node from which onward all
                // elements map to the same new index
                for (HashEntry<K,V> last = next; last != null; last = last.next) {
                    int k = last.hash & sizeMask;
                    if (k != lastIdx) {
                        lastIdx = k;
                        lastRun = last;
                    }
                }
                // Put lastRun and everything after it at lastIdx in one step
                newTable[lastIdx] = lastRun;
                // Now handle the nodes before lastRun: each is cloned into
                // one of the two lists, either the one above or the other one
                for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
                    V v = p.value;
                    int h = p.hash;
                    int k = h & sizeMask;
                    HashEntry<K,V> n = newTable[k];
                    newTable[k] = new HashEntry<K,V>(h, p.key, v, n);
                }
            }
        }
    }
    // Place the new node at the head of one of the two lists just created
    int nodeIndex = node.hash & sizeMask; // add the new node
    node.setNext(newTable[nodeIndex]);
    newTable[nodeIndex] = node;
    table = newTable;
}
get process:
The get process is simpler than put.
1. Calculate the hash value to find the location in the segment array
2. Find the subscript of the internal array of segment based on the hash value
3. Traversal list
public V get(Object key) {
    Segment<K,V> s; // manually integrate access methods to reduce overhead
    HashEntry<K,V>[] tab;
    // 1. Compute the hash value
    int h = hash(key);
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    // 2. Find the corresponding segment from the hash
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        // 3. Find the linked list at the corresponding index of the
        //    segment's internal array and traverse it
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            K k;
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return e.value;
        }
    }
    return null;
}
Analysis of Concurrent Problems
As shown, the get process takes no lock, while put (adding nodes) and remove (deleting nodes) both take the segment's exclusive lock. The question to consider is what happens when a get runs concurrently with a put or remove in the same segment.
1. Thread safety of put
ensureSegment initializing a slot: the slot is initialized with the parameters of the current segment[0] and installed via CAS.
A put inserts its node at the head of the list, while a concurrent get traverses starting from the old head, so the get is unaffected. If the get happens after the put, it must be guaranteed to see the freshly inserted head node; this relies on the UNSAFE ordered write used in setEntryAt.
Resize: a resize creates a new array and then migrates the data. If the get runs first, it simply queries the old table. If the put's resize finishes first, the volatile table field guarantees the get sees the new table.
2. Thread safety of remove
A get traverses the list, while a remove restructures it. If the get has already passed the node being removed, there is no problem. If the remove restructures the list first, there are two cases. 1) If the removed node is the head, the head's next must be installed as the element at that array index; table is volatile, but volatile gives no visibility guarantee for writes to array elements, so the source code uses UNSAFE to operate on the array — see setEntryAt. 2) If the removed node is not the head, its successor is linked to its predecessor; the concurrency guarantee here is that the next field is volatile.
ConcurrentHashMap in Java 8
Basic structure
In Java 8, ConcurrentHashMap abandons the segmented-lock structure, adopts a structure similar to HashMap's, and introduces red-black trees. To guarantee thread safety, the source code is more complex than HashMap's.
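From the caller's side, none of this internal rework changes the API: the class still honors the ConcurrentMap contract, including atomic compound operations. A small usage example before diving into the internals:

```java
import java.util.concurrent.ConcurrentHashMap;

// Caller-facing behavior of the Java 8 ConcurrentHashMap: plain puts plus
// atomic compound operations such as putIfAbsent and compute.
public class Chm8Demo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.putIfAbsent("a", 99);          // no effect: "a" is already present
        map.compute("a", (k, v) -> v + 1); // atomic read-modify-write
        System.out.println(map.get("a"));  // 2
    }
}
```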
Initialization
// This constructor does nothing
public ConcurrentHashMap() {
}

public ConcurrentHashMap(int initialCapacity) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException();
    int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
               MAXIMUM_CAPACITY :
               tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
    this.sizeCtl = cap;
}
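The sizing rule in that constructor (1.5 times the requested capacity, plus 1, rounded up to a power of two) can be checked directly. The sketch below reimplements tableSizeFor with the classic bit-smearing trick for illustration and omits the MAXIMUM_CAPACITY clamp; SizeCtlDemo is a hypothetical class name:

```java
// Sketch of the Java 8 constructor's sizing rule:
// table size = tableSizeFor(initialCapacity + initialCapacity/2 + 1).
public class SizeCtlDemo {
    // Smallest power of two >= c (the well-known bit-smearing idiom).
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16;
        return n + 1;
    }

    static int initialCap(int initialCapacity) {
        return tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1);
    }

    public static void main(String[] args) {
        System.out.println(initialCap(16)); // tableSizeFor(25) = 32
        System.out.println(initialCap(10)); // tableSizeFor(16) = 16
    }
}
```

Note the asymmetry with Java 7: asking for capacity 16 produces a table of length 32, because the constructor pads the request before rounding up.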
put process
public V put(K key, V value) {
    return putVal(key, value, false);
}

final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    // Compute the hash value
    int hash = spread(key.hashCode());
    // Used to record the length of the corresponding linked list
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // If the array is empty, initialize it (initTable is covered below)
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // Find the array index for this hash and load the head node f
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // That slot is empty: put the new value there with a single CAS,
            // and the put operation is essentially done.
            // If the CAS fails, another thread raced us; retry in the next loop.
            if (casTabAt(tab, i, null,
                         new Node<K,V>(hash, key, value, null)))
                break; // no lock when adding to empty bin
        }
        // A hash of MOVED means, as one can guess from the name and as the
        // migration section confirms, that a resize is in progress
        else if ((fh = f.hash) == MOVED)
            // Help with the data migration; this is easy to follow after
            // reading the data-migration section below
            tab = helpTransfer(tab, f);
        else { // f is the head node at this position, and it is not empty
            V oldVal = null;
            // Take the monitor lock on the bin's head node
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) { // head hash >= 0: this bin is a linked list
                        // Record the length of the list as we go
                        binCount = 1;
                        // Traverse the list
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            // If an "equal" key is found, overwrite the value
                            // if required, then break
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            // At the end of the list, append the new node
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    else if (f instanceof TreeBin) { // red-black tree
                        Node<K,V> p;
                        binCount = 2;
                        // Call the red-black tree's insertion method
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                              value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            // binCount != 0 means the list operation above was performed
            if (binCount != 0) {
                // Decide whether to convert the list to a red-black tree;
                // the threshold is 8, just as in HashMap
                if (binCount >= TREEIFY_THRESHOLD)
                    // Unlike HashMap, this does not necessarily treeify:
                    // if the current array length is below 64, the array is
                    // resized instead. Details are in the resize section below.
                    treeifyBin(tab, i);
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    // Update the element count and possibly trigger a resize
    addCount(1L, binCount);
    return null;
}
Initializing the array: initTable
This initializes an array of a suitable size and sets sizeCtl. The concurrency problem is controlled through CAS operations on sizeCtl.
sizeCtl = -1 indicates that the table is being initialized by some thread; during a resize, sizeCtl is negative and its low bits encode the number of threads currently helping with the expansion.
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    while ((tab = table) == null || tab.length == 0) {
        // Another thread has already "won" the right to initialize
        if ((sc = sizeCtl) < 0)
            Thread.yield(); // lost initialization race; just spin
        // CAS sizeCtl to -1, which amounts to grabbing the lock
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            try {
                if ((tab = table) == null || tab.length == 0) {
                    // DEFAULT_CAPACITY is 16
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    // Initialize the array: 16, or the length given at construction
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    // Assign the array to table, which is volatile
                    table = tab = nt;
                    // If n is 16, sc is 12 — in effect 0.75 * n
                    sc = n - (n >>> 2);
                }
            } finally {
                // Set sizeCtl to sc (12 in this example)
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}
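The initTable handshake can be sketched with an AtomicInteger standing in for the Unsafe CAS on sizeCtl. InitRace is a hypothetical class: whichever thread CASes sizeCtl from its non-negative value to -1 builds the table; everyone else yields until it appears:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the initTable pattern: CAS sizeCtl to -1 to claim the right to
// initialize; leave the 0.75*n threshold in sizeCtl afterwards.
public class InitRace {
    public final AtomicInteger sizeCtl = new AtomicInteger(0);
    public volatile Object[] table;

    public Object[] initTable() {
        while (table == null) {
            int sc = sizeCtl.get();
            if (sc < 0) {
                Thread.yield();               // another thread is initializing
            } else if (sizeCtl.compareAndSet(sc, -1)) {
                try {
                    if (table == null) {      // recheck under the "lock"
                        int n = (sc > 0) ? sc : 16;
                        table = new Object[n];
                        sc = n - (n >>> 2);   // 0.75 * n, the next threshold
                    }
                } finally {
                    sizeCtl.set(sc);
                }
            }
        }
        return table;
    }

    public static void main(String[] args) {
        InitRace r = new InitRace();
        System.out.println(r.initTable().length); // 16
        System.out.println(r.sizeCtl.get());      // 12
    }
}
```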
Linked list to red-black tree: treeifyBin
treeifyBin does not necessarily convert the list to a red-black tree; it may simply resize the array instead.
private final void treeifyBin(Node<K,V>[] tab, int index) {
    Node<K,V> b; int n, sc;
    if (tab != null) {
        // MIN_TREEIFY_CAPACITY is 64, so if the array length is below 64
        // (i.e. 32, 16, or smaller), the array is resized instead
        if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
            tryPresize(n << 1); // analyzed in detail below
        // b is the head node
        else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
            // Lock the bin
            synchronized (b) {
                if (tabAt(tab, index) == b) {
                    // Traverse the list and build a red-black tree
                    TreeNode<K,V> hd = null, tl = null;
                    for (Node<K,V> e = b; e != null; e = e.next) {
                        TreeNode<K,V> p =
                            new TreeNode<K,V>(e.hash, e.key, e.val,
                                              null, null);
                        if ((p.prev = tl) == null)
                            hd = p;
                        else
                            tl.next = p;
                        tl = p;
                    }
                    // Install the red-black tree at this index of the array
                    setTabAt(tab, index, new TreeBin<K,V>(hd));
                }
            }
        }
    }
}
Expansion: tryPresize
// Note: the size parameter was already doubled by the caller
private final void tryPresize(int size) {
    // c: 1.5 times size, plus 1, rounded up to the nearest power of 2
    int c = (size >= (MAXIMUM_CAPACITY >>> 1)) ? MAXIMUM_CAPACITY :
        tableSizeFor(size + (size >>> 1) + 1);
    int sc;
    while ((sc = sizeCtl) >= 0) {
        Node<K,V>[] tab = table; int n;
        // This branch is essentially the same code as the array
        // initialization shown earlier, so we can skip over it
        if (tab == null || (n = tab.length) == 0) {
            n = (sc > c) ? sc : c;
            if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                try {
                    if (table == tab) {
                        @SuppressWarnings("unchecked")
                        Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                        table = nt;
                        sc = n - (n >>> 2); // 0.75 * n
                    }
                } finally {
                    sizeCtl = sc;
                }
            }
        }
        else if (c <= sc || n >= MAXIMUM_CAPACITY)
            break;
        else if (tab == table) {
            // rs is a stamp derived from n that identifies this resize
            int rs = resizeStamp(n);
            if (sc < 0) {
                Node<K,V>[] nt;
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                    transferIndex <= 0)
                    break;
                // 2. CAS sizeCtl up by 1, then execute the transfer method;
                //    at this point nextTab is not null
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                    transfer(tab, nt);
            }
            // 1. Set sizeCtl to (rs << RESIZE_STAMP_SHIFT) + 2.
            //    Whatever the exact meaning of the stamp, one can verify that
            //    this result is a fairly large negative number. Then call
            //    transfer with a null nextTab parameter.
            else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                         (rs << RESIZE_STAMP_SHIFT) + 2))
                transfer(tab, null);
        }
    }
}
The core of this method is the manipulation of sizeCtl: first it is set to a large negative number and transfer(tab, null) is executed; on later loop iterations, sizeCtl is incremented by 1 and transfer(tab, nt) is executed, possibly several times.
So the possible sequence is one transfer(tab, null) followed by multiple transfer(tab, nt) calls; how the loop terminates only becomes clear from the transfer source code.
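The "large negative number" can be verified with a sketch of the stamp arithmetic. resizeStamp(n) is reimplemented here from its JDK 8 definition, Integer.numberOfLeadingZeros(n) with bit 15 set; because bit 15 is set, shifting left by RESIZE_STAMP_SHIFT (16) puts a 1 into the sign bit:

```java
// Sketch of the resize-stamp arithmetic: a negative sizeCtl whose high 16 bits
// hold resizeStamp(n) marks a resize in progress; the low bits count threads.
public class ResizeStamp {
    public static final int RESIZE_STAMP_BITS = 16;
    public static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    public static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static void main(String[] args) {
        int rs = resizeStamp(16);
        // First resizer sets sizeCtl to this value
        int sc = (rs << RESIZE_STAMP_SHIFT) + 2;
        System.out.println(rs);       // 27 | 32768 = 32795
        System.out.println(sc < 0);   // true: signals "resize in progress"
        // The stamp can be recovered from sizeCtl's high 16 bits
        System.out.println((sc >>> RESIZE_STAMP_SHIFT) == rs); // true
    }
}
```

This explains the checks in tryPresize: (sc >>> RESIZE_STAMP_SHIFT) != rs detects a stale resize of a different table length, and sc + 1 / sc - 1 adjust the helper-thread count held in the low bits.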
Data migration: transfer
Although the tryPresize method discussed above does not itself involve multithreading, the transfer method can be invoked from other places. Typically, as we saw in the put method earlier, helpTransfer is called there, and helpTransfer in turn calls transfer.
This method supports multithreaded execution. The callers guarantee that the first thread to initiate the data migration passes nextTab == null; when the method is called later by threads joining the migration, nextTab is not null.
Before reading the source, it helps to understand the concurrency mechanism. The original array has length n, so there are n migration tasks. The simplest scheme is for each thread to claim a small batch of tasks at a time and, after finishing a batch, check whether unfinished tasks remain and help migrate them. Doug Lea uses a stride, which can be understood as a step size: each thread claims responsibility for a stride of positions per batch, for example 16 positions per claim. This requires a global scheduler to decide which thread performs which tasks, and that is the role of the transferIndex field.
The first thread to initiate the migration points transferIndex at the last position of the original array. The stride tasks counting backward from there belong to the first claimant; transferIndex is then moved back by stride, and the next stride of tasks belongs to the second claimant, and so on. The "second claimant" here is not necessarily a second thread; it can be the same thread coming back for more work. In effect, the one large migration job is divided into task packages.
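The claiming scheme can be sketched with an AtomicInteger playing the role of transferIndex. StrideClaim is a hypothetical class: each successful CAS hands the caller the next stride of bucket indices, walking from the end of the table toward 0:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of how transferIndex hands out migration work: each claim takes the
// next `stride` bucket indices, from the back of the table toward the front.
public class StrideClaim {
    public final AtomicInteger transferIndex;
    public final int stride;

    public StrideClaim(int tableLength, int stride) {
        this.transferIndex = new AtomicInteger(tableLength);
        this.stride = stride;
    }

    // Returns {bound, i}: the claimed range is bound..i inclusive,
    // or null when no work remains.
    public int[] claim() {
        for (;;) {
            int nextIndex = transferIndex.get();
            if (nextIndex <= 0)
                return null; // every position has been claimed
            int nextBound = (nextIndex > stride) ? nextIndex - stride : 0;
            if (transferIndex.compareAndSet(nextIndex, nextBound))
                return new int[] { nextBound, nextIndex - 1 };
        }
    }

    public static void main(String[] args) {
        StrideClaim sc = new StrideClaim(32, 16);
        int[] first = sc.claim();   // buckets 16..31
        int[] second = sc.claim();  // buckets 0..15
        System.out.println(first[0] + ".." + first[1] + " then "
                           + second[0] + ".." + second[1]);
        System.out.println(sc.claim() == null); // true: nothing left
    }
}
```

The inner while loop of the real transfer method is exactly this claim logic, inlined alongside the per-bucket migration work.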
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
    int n = tab.length, stride;
    // stride equals n on a single core and (n >>> 3) / NCPU on multi-core,
    // with a minimum of 16. It can be read as a "step size": there are n
    // positions to migrate, divided into task packages of stride tasks each.
    if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
        stride = MIN_TRANSFER_STRIDE; // subdivide range
    // If nextTab is null, initialize it first. As noted earlier, the callers
    // guarantee that the first thread to initiate the migration passes
    // nextTab == null; threads joining the migration pass a non-null one.
    if (nextTab == null) {
        try {
            // Double the capacity
            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
            nextTab = nt;
        } catch (Throwable ex) { // try to cope with OOME
            sizeCtl = Integer.MAX_VALUE;
            return;
        }
        // nextTable is a field of ConcurrentHashMap
        nextTable = nextTab;
        // transferIndex is also a field, used to schedule the migration
        transferIndex = n;
    }
    int nextn = nextTab.length;
    // A ForwardingNode is a marker node with null key, value and next, whose
    // hash is MOVED. Once the node at index i of the original array has been
    // migrated, index i is set to this ForwardingNode to tell other threads
    // that the position has already been handled.
    ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
    // advance == true means the current position is done and the thread
    // can move on to the next one
    boolean advance = true;
    boolean finishing = false; // to ensure sweep before committing nextTab
    /*
     * The for loop below is the hardest part to follow on a first read;
     * it may help to read the bucket-migration code at the bottom first,
     * then come back.
     */
    // i is the position index, bound is the boundary;
    // note that migration runs from the back of the array toward the front
    for (int i = 0, bound = 0;;) {
        Node<K,V> f; int fh;
        // This while loop claims work. In short, when it exits:
        // i points at transferIndex - 1, bound points at transferIndex - stride
        while (advance) {
            int nextIndex, nextBound;
            if (--i >= bound || finishing)
                advance = false;
            // Once transferIndex <= 0, every position of the original array
            // has been claimed by some thread
            else if ((nextIndex = transferIndex) <= 0) {
                i = -1;
                advance = false;
            }
            else if (U.compareAndSwapInt
                     (this, TRANSFERINDEX, nextIndex,
                      nextBound = (nextIndex > stride ?
                                   nextIndex - stride : 0))) {
                // nextBound is the boundary of this thread's claimed task
                // package; again, work proceeds from back to front
                bound = nextBound;
                i = nextIndex - 1;
                advance = false;
            }
        }
        if (i < 0 || i >= n || i + n >= nextn) {
            int sc;
            if (finishing) {
                // All migration work is finished
                nextTable = null;
                // Publish the new array as table, completing the migration
                table = nextTab;
                // Recompute sizeCtl: n is the old length, so this is
                // 0.75 times the new length
                sizeCtl = (n << 1) - (n >>> 1);
                return;
            }
            // As noted earlier, sizeCtl was set to
            // (rs << RESIZE_STAMP_SHIFT) + 2 before the migration began, and
            // each joining thread incremented it by 1. Here CAS decrements
            // sizeCtl by 1, meaning this thread's task package is done.
            if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                // Not the last active thread: the task ends, exit the method
                if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                    return;
                // Here (sc - 2) == resizeStamp(n) << RESIZE_STAMP_SHIFT,
                // meaning all migration tasks are complete; the next
                // iteration enters the if (finishing) branch above
                finishing = advance = true;
                i = n; // recheck before commit
            }
        }
        // If position i is empty, install the ForwardingNode created above
        else if ((f = tabAt(tab, i)) == null)
            advance = casTabAt(tab, i, null, fwd);
        // A ForwardingNode here means the position has already been migrated
        else if ((fh = f.hash) == MOVED)
            advance = true; // already processed
        else {
            // Lock the head node at this position and migrate the bucket
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    Node<K,V> ln, hn;
                    // A head hash >= 0 means this is a linked-list Node
                    if (fh >= 0) {
                        // The following is similar to the Java 7 migration:
                        // the list is split in two. lastRun and the nodes
                        // after it migrate together; the nodes before
                        // lastRun are cloned into the two new lists.
                        int runBit = fh & n;
                        Node<K,V> lastRun = f;
                        for (Node<K,V> p = f.next; p != null; p = p.next) {
                            int b = p.hash & n;
                            if (b != runBit) {
                                runBit = b;
                                lastRun = p;
                            }
                        }
                        if (runBit == 0) {
                            ln = lastRun;
                            hn = null;
                        }
                        else {
                            hn = lastRun;
                            ln = null;
                        }
                        for (Node<K,V> p = f; p != lastRun; p = p.next) {
                            int ph = p.hash; K pk = p.key; V pv = p.val;
                            if ((ph & n) == 0)
                                ln = new Node<K,V>(ph, pk, pv, ln);
                            else
                                hn = new Node<K,V>(ph, pk, pv, hn);
                        }
                        // One list goes to index i of the new array
                        setTabAt(nextTab, i, ln);
                        // The other goes to index i + n of the new array
                        setTabAt(nextTab, i + n, hn);
                        // Set the old position to fwd, meaning it has been
                        // handled; other threads that see the MOVED hash
                        // here will not migrate it again
                        setTabAt(tab, i, fwd);
                        // advance = true: this position is done
                        advance = true;
                    }
                    else if (f instanceof TreeBin) {
                        // Migration of a red-black tree
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> lo = null, loTail = null;
                        TreeNode<K,V> hi = null, hiTail = null;
                        int lc = 0, hc = 0;
                        for (Node<K,V> e = t.first; e != null; e = e.next) {
                            int h = e.hash;
                            TreeNode<K,V> p = new TreeNode<K,V>
                                (h, e.key, e.val, null, null);
                            if ((h & n) == 0) {
                                if ((p.prev = loTail) == null)
                                    lo = p;
                                else
                                    loTail.next = p;
                                loTail = p;
                                ++lc;
                            }
                            else {
                                if ((p.prev = hiTail) == null)
                                    hi = p;
                                else
                                    hiTail.next = p;
                                hiTail = p;
                                ++hc;
                            }
                        }
                        // If either half has no more than 6 nodes
                        // (UNTREEIFY_THRESHOLD), convert it back to a list
                        ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                            (hc != 0) ? new TreeBin<K,V>(lo) : t;
                        hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                            (lc != 0) ? new TreeBin<K,V>(hi) : t;
                        // Place ln at index i of the new array
                        setTabAt(nextTab, i, ln);
                        // Place hn at index i + n of the new array
                        setTabAt(nextTab, i + n, hn);
                        // Set the old position to fwd, meaning it has been
                        // handled; other threads that see the MOVED hash
                        // here will not migrate it again
                        setTabAt(tab, i, fwd);
                        // advance = true: this position is done
                        advance = true;
                    }
                }
            }
        }
    }
}
In the end, the transfer method does not perform the whole migration by itself: each call only migrates the stride positions in front of transferIndex, working backward. The rest is driven by the surrounding control logic.
At this point, it may be clearer to go back and look at the tryPresize method.
get process analysis
The get method has always been the simplest, and this is no exception:
- Calculate hash value
- Find the corresponding position in the array according to the hash value: (n - 1) & h
- Perform the lookup according to the kind of node found at that position:
If the location is null, return null directly.
If the node at that location is exactly what we need, return the value of that node.
If the hash value of the node at this position is less than 0, either a resize is in progress or the bin is a red-black tree; the node's find method handles both cases.
If none of the above three items is satisfied, it is a linked list, which can be compared by traversal.
public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    int h = spread(key.hashCode());
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        // Check whether the head node is the one we need
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;
        }
        // A head hash below 0 means a resize is in progress,
        // or this position holds a red-black tree
        else if (eh < 0)
            // See ForwardingNode.find(int h, Object k)
            // and TreeBin.find(int h, Object k)
            return (p = e.find(h, key)) != null ? p.val : null;
        // Traverse the list
        while ((e = e.next) != null) {
            if (e.hash == h &&
                ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}
Simply put, most of this method is very simple; only the case where a resize happens to be in progress, handled by ForwardingNode.find(int h, Object k), is a little more complex. But after understanding the data migration process, it is not difficult either, so it is not expanded here.