Source code analysis of ConcurrentHashMap

6. Member methods

6.1 Some auxiliary methods

spread(int h) method

This method computes the hash used to locate a Node. It mixes the high 16 bits of h into the low bits, so that keys whose hash codes differ only in the high bits still end up in different buckets, making the hash table more scattered.

// 0x7fffffff in binary is 0111...111 (a 0 sign bit followed by 31 ones)
// HASH_BITS has the value 0x7fffffff; ANDing with it clears the sign bit, keeping the hash non-negative
// h ^ (h >>> 16) folds the high 16 bits of h into the low 16 bits, so the hash table is more scattered
static final int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }
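The effect can be reproduced standalone. The sketch below (the class name and sample hash values are my own; HASH_BITS = 0x7fffffff is taken from the JDK 8 source) shows two hash codes that differ only in their high 16 bits colliding without spread() but separating with it:

```java
public class SpreadDemo {
    // HASH_BITS = 0x7fffffff: clears the sign bit so ordinary node hashes stay non-negative
    static final int HASH_BITS = 0x7fffffff;

    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        // Two hashes that differ only in their high 16 bits
        int h1 = 0x0001_0000, h2 = 0x0002_0000;
        int n = 16; // a small table: bucket index = (n - 1) & hash
        // Without spread() both keys would collide in bucket 0
        System.out.println(((n - 1) & h1) + " " + ((n - 1) & h2));                 // 0 0
        // With spread() the high bits influence the bucket index
        System.out.println(((n - 1) & spread(h1)) + " " + ((n - 1) & spread(h2))); // 1 2
    }
}
```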

tabAt(Node<K,V>[] tab, int i) method

This method obtains, with a volatile read, the Node at index i of the table

static final <K,V> Node<K,V> tabAt(Node<K,V>[] tab, int i) {
    return (Node<K,V>)U.getObjectVolatile(tab, ((long)i << ASHIFT) + ABASE);
}

casTabAt(Node<K,V>[] tab, int i, Node<K,V> c, Node<K,V> v)

Atomically replaces the Node at index i of the table via CAS: the swap happens only if the current value equals the expected value c

// c: Expected value v: the node value to be set
static final <K,V> boolean casTabAt(Node<K,V>[] tab, int i, Node<K,V> c, Node<K,V> v) {
    return U.compareAndSwapObject(tab, ((long)i << ASHIFT) + ABASE, c, v);
}

setTabAt(Node<K,V>[] tab, int i, Node<K,V> v)

Sets the Node at index i of the table to v with a volatile write

static final <K,V> void setTabAt(Node<K,V>[] tab, int i, Node<K,V> v) {
    U.putObjectVolatile(tab, ((long)i << ASHIFT) + ABASE, v);
}
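Application code cannot call Unsafe, but the same three volatile-array primitives (volatile read, CAS, volatile write on one slot) are available on java.util.concurrent.atomic.AtomicReferenceArray. A rough stand-in sketch, where the Node class and method names are simplified placeholders rather than the real ConcurrentHashMap internals:

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class TabAtDemo {
    // A stripped-down stand-in for ConcurrentHashMap.Node, for illustration only
    static class Node {
        final int hash; final Object key; volatile Object val;
        Node(int hash, Object key, Object val) { this.hash = hash; this.key = key; this.val = val; }
    }

    static final AtomicReferenceArray<Node> tab = new AtomicReferenceArray<>(16);

    static Node tabAt(int i)                       { return tab.get(i); }                // ~ U.getObjectVolatile
    static boolean casTabAt(int i, Node c, Node v) { return tab.compareAndSet(i, c, v); } // ~ U.compareAndSwapObject
    static void setTabAt(int i, Node v)            { tab.set(i, v); }                    // ~ U.putObjectVolatile

    public static void main(String[] args) {
        Node first = new Node(5, "k", "v");
        System.out.println(casTabAt(5, null, first)); // true: empty bin, CAS publishes the node
        System.out.println(tabAt(5) == first);        // true: the volatile read sees it
        setTabAt(5, new Node(5, "k", "v2"));          // volatile write (done under the bin lock in the real map)
        System.out.println(tabAt(5).val);             // v2
    }
}
```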

resizeStamp(int n)

Generates a resize stamp (a version number) for resizing a table of length n. Every thread that participates in a given resize must carry the same stamp.

static final int resizeStamp(int n) {
    return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
}
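A minimal sketch of what the stamp looks like, assuming RESIZE_STAMP_BITS = 16 as in the JDK 8 source (the class name is illustrative):

```java
public class ResizeStampDemo {
    static final int RESIZE_STAMP_BITS = 16;                  // value used in JDK 8
    static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static void main(String[] args) {
        int rs = resizeStamp(16);                  // stamp for resizing a 16-bucket table
        System.out.println(rs);                    // 27 | 0x8000 = 32795
        // The first resizing thread stores (rs << 16) + 2 into sizeCtl
        int sc = (rs << RESIZE_STAMP_SHIFT) + 2;
        // Bit 15 of the stamp becomes the sign bit, so sizeCtl < 0 signals "resizing"
        System.out.println(sc < 0);                                // true
        // The high 16 bits recover the stamp; the low 16 bits count resizer threads + 1
        System.out.println((sc >>> RESIZE_STAMP_SHIFT) == rs);     // true
    }
}
```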

tableSizeFor(int c) method

Returns the smallest power of two that is greater than or equal to the argument c

The trick is to smear the highest set bit of c - 1 into every lower bit, producing a binary value of the form 0...0111...1, and then add 1 to reach the power of two

// Smear the highest set bit of c - 1 downwards so every lower bit becomes 1, then add 1 to get the smallest power of two >= c
private static final int tableSizeFor(int c) {
    int n = c - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
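The bit-smearing can be traced with a standalone copy of the method (MAXIMUM_CAPACITY copied from the JDK source; the class name is mine):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int c) {
        int n = c - 1;          // subtract 1 first so an exact power of two maps to itself
        n |= n >>> 1;           // smear the highest set bit into every lower position...
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;          // ...after five steps n looks like 0b0...0111...1
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16 (already a power of two)
        System.out.println(tableSizeFor(17)); // 32
    }
}
```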

6.2 main methods

put(K key, V value) method

Inserts a key/value pair into the table

public V put(K key, V value) {
    // putVal(key, value, false) method was called
    return putVal(key, value, false);
}

putVal(K key, V value, boolean onlyIfAbsent) method

// Parameter key: indicates the added key
// Parameter value: indicates the added value
/* 
   Parameter onlyIfAbsent: indicates whether to update the old value to the new value if the keys are the same
			false: Update old value to new value
            true: Do not update old values   
*/
final V putVal(K key, V value, boolean onlyIfAbsent) {
    // key and value must both be non-null; otherwise throw NullPointerException immediately
    // (ConcurrentHashMap allows neither null keys nor null values)
    if (key == null || value == null) throw new NullPointerException();
    // Calculate the hash value of the key
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // If the table is null or has length 0, it has not been created yet:
        // call initTable() to create it, then retry the loop
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // The table exists: compute the bucket index for this key; if that bucket is null, the pair can be inserted there directly
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // Try to insert with a lock-free CAS; on success break out of the loop, on failure retry
            if (casTabAt(tab, i, null,
                         new Node<K,V>(hash, key, value, null)))
                break;                   // no lock when adding to empty bin
        }
        // The bucket is not empty; if the first node's hash is MOVED, the table is being resized, so help migrate elements first
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        // The bucket has elements and is not being migrated: lock its first node, then insert into the linked list or red-black tree
        else {
            V oldVal = null;
            synchronized (f) {
                // Double-check that the first node of the bucket has not changed since the lock was taken
                if (tabAt(tab, i) == f) {
                    // If fh (f.hash) > = 0, it indicates that it is a linked list. Try to add data to the linked list
                    if (fh >= 0) {
                        // binCount must be at least 1
                        binCount = 1;
                        // binCount records the number of nodes in the linked list
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            // A node with the same hash and an equal key was found
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                // Remember the old value before possibly overwriting it
                                oldVal = e.val;
                                // If onlyIfAbsent is false, the old value is updated to the new value
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            // If the next node is null, it indicates that the same key is not found, and a new node is added at the end of the linked list
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    // If the storage location is a TreeBin type node, it indicates that the storage location is a red black tree
                    else if (f instanceof TreeBin) {
                        Node<K,V> p;
                        // Set binCount to 2
                        binCount = 2;
                        // Try to add a node in the red black tree
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                       value)) != null) {
                            // Get old value
                            oldVal = p.val;
                            // If the parameter onlyIfAbsent is false, update the old value to the new value
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            // If binCount is not 0, either a new node was inserted or a node with the same key was found
            if (binCount != 0) {
                // If binCount is greater than or equal to the treeing threshold, try treeing
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                // If the element exists, the old value is returned
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    // Add 1 to the element count and, if the count reaches the resize threshold, start or assist a resize
    addCount(1L, binCount);
    return null;
}

The process is as follows:

  1. If the bucket array is not initialized, it is initialized first.

  2. If the bucket array is not empty, check whether the bucket at the computed index is empty. If it is, try to insert directly with CAS (on success break out of the loop; on failure, retry).

  3. If the cell at the specified index is not empty, judge whether the element is being migrated. If the element is being migrated, go to help expand the capacity first.

  4. If the cell to be inserted is not empty and is not migrating elements, lock the cell

  5. If the current cell is stored as a linked list, try updating or inserting elements in the linked list

  6. If the current cell is stored as a red black tree, try updating or inserting elements in the red black tree

  7. Returns the old value if the element exists

  8. If a new element was inserted, add 1 to the map's element count, check whether a resize is needed, and return null
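The onlyIfAbsent flag is visible through the public API, since put() passes false and putIfAbsent() passes true. A quick usage check:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        System.out.println(map.put("a", 1));          // null: key was absent
        System.out.println(map.put("a", 2));          // 1: old value returned, value replaced
        // putIfAbsent calls putVal(key, value, true): an existing value is NOT replaced
        System.out.println(map.putIfAbsent("a", 3));  // 2
        System.out.println(map.get("a"));             // 2

        // Neither null keys nor null values are allowed (the first check in putVal)
        try {
            map.put(null, 1);
        } catch (NullPointerException e) {
            System.out.println("NPE for null key");
        }
    }
}
```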

initTable() method

Initializes the hash table (i.e., creates the underlying Node array; the table is stored as an array of Node buckets). Note: this method is only invoked lazily, on the first insertion.

private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    // Determine whether the table has not been initialized
    while ((tab = table) == null || tab.length == 0) {
        // If sizeCtl is less than 0, another thread is already initializing, so the current thread yields the CPU
        if ((sc = sizeCtl) < 0)
            Thread.yield(); // lost initialization race; just spin
        // Otherwise the current thread tries to CAS sizeCtl to -1, claiming the right to initialize.
        // If the CAS succeeds, this thread initializes the table itself; if it fails, another
        // thread won the race and will initialize, so loop again (spin)
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            try {
                	
                if ((tab = table) == null || tab.length == 0) {
                    // If sc > 0 it is the initial capacity recorded by the constructor; otherwise use DEFAULT_CAPACITY (the default capacity, 16)
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    @SuppressWarnings("unchecked")
                    // new an array of nodes
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    // table refers to the array of new
                    table = tab = nt;
                    // This computes n - n/4, i.e., 0.75 * n, without floating point
                    sc = n - (n >>> 2);
                }
            } finally {
                // Set sizeCtl to the capacity of the table * 0.75, that is, the threshold that the number of elements in the table needs to reach when the table is expanded
                sizeCtl = sc;
            }
            break;
        }
    }
    // Return the new table
    return tab;
}

The process is as follows:

  1. First check whether the hash table has been initialized. If not, continue; if it has, return the table reference directly.

  2. Check whether sizeCtl is less than 0. If so, another thread is initializing and the current thread yields the CPU; otherwise the current thread attempts to CAS sizeCtl to -1, meaning it intends to initialize the hash table itself. If the CAS succeeds, proceed to the next step.

  3. If sizeCtl was 0, use the default capacity (16) as the table capacity; otherwise use sizeCtl as the capacity (in that case sizeCtl holds the initial capacity and must be greater than 0)

  4. After that, new a Node array specifying the table capacity, and point the table to this array

  5. Then set sizeCtl to table capacity * 0.75, which is the threshold value that should be reached in the next capacity expansion.
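The threshold in step 5 is plain integer arithmetic; a quick check that n - (n >>> 2) really equals 0.75 * n (the class name is illustrative):

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        int n = 16;                    // DEFAULT_CAPACITY
        int sc = n - (n >>> 2);        // n - n/4 == 0.75 * n, all in integer math
        System.out.println(sc);        // 12: a 16-bucket table resizes once it holds 12 elements
        System.out.println(sc == (int) (n * 0.75f)); // true
    }
}
```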

addCount(long x, int check) method

Add 1 to the number of hash table elements and judge whether capacity expansion is required.

Deciding whether to resize requires the total number of elements in the hash table. If a single baseCount variable recorded that number, then with n threads trying to modify it concurrently only one CAS would succeed per round and the other n - 1 would fail, so the conflict rate would be too high. To reduce contention, ConcurrentHashMap borrows the LongAdder idea: a counterCells array spreads the increments across multiple cells. When the total is needed, summing baseCount and every value in the counterCells array yields the number of elements in the hash table.
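The striping idea can be seen directly with java.util.concurrent.atomic.LongAdder, which the baseCount/counterCells scheme mirrors (the thread count and iteration numbers below are arbitrary):

```java
import java.util.concurrent.atomic.LongAdder;

public class StripedCountDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder count = new LongAdder();      // plays the role of baseCount + counterCells combined
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++)
                    count.increment();          // contended increments land in separate cells
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        // sum() adds the base plus every cell, just like ConcurrentHashMap.sumCount()
        System.out.println(count.sum());        // 400000
    }
}
```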

private final void addCount(long x, int check) {
    CounterCell[] as; long b, s;
    // In fact, the same as the optimization strategy of LongAdder compared with AtomicLong, CounterCell array is used to disperse the hot data of baseCount
    // If the counterCells array is not empty, try adding a value to the counterCells array first
    // Otherwise, try to compete for baseCount (directly try to add a value at baseCount)
    if ((as = counterCells) != null ||
        !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
        // Reaching here means the fast path failed: either counterCells already exists, or the CAS on baseCount lost a race
        CounterCell a; long v; int m;
        boolean uncontended = true;
        if (as == null || (m = as.length - 1) < 0 ||
            (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
            // When the first three if conditions are not true, it means that the CounterCell array is not empty and the corresponding CounterCell cell has been created
            // Then try adding data at the cell
            !(uncontended =
              U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
            // If it still fails, execute the fullAddCount method and return directly
            fullAddCount(x, uncontended);
            return;
        }
        if (check <= 1)
            return;
        // Calculate the number of elements
        s = sumCount();
    }
    
    if (check >= 0) {
        Node<K,V>[] tab, nt; int n, sc;
        // If the number of elements reaches the capacity expansion threshold, try capacity expansion
        while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
               (n = tab.length) < MAXIMUM_CAPACITY) {
            // rs is the resize stamp (version marker) for the current table length
            int rs = resizeStamp(n);
            // If capacity expansion is in progress
            if (sc < 0) {
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                    transferIndex <= 0)
                    // The resize has already finished, or no further help is possible; exit the loop
                    break;
                // Otherwise join the migration: CAS the resizer count (low 16 bits of sizeCtl) up by 1 and help transfer
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                    transfer(tab, nt);
            }
            // No resize is in progress yet: the current thread starts one
            // The upper 16 bits of sizeCtl store the resize stamp; the lower 16 bits store the number of resizing threads + 1
            else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                         (rs << RESIZE_STAMP_SHIFT) + 2))
                // Enter migration element
                transfer(tab, null);
            // Calculate the total number of elements
            s = sumCount();
        }
    }
}

The process is as follows:

  1. If the counterCells array is empty, try to CAS baseCount directly; otherwise try to CAS the cell this thread maps to. If the cell does not exist or the CAS also fails, call fullAddCount(x, uncontended) and return.
  2. Use the sumCount() method to compute the number of elements in the hash table (baseCount plus every value in the counterCells array).
  3. Check whether a resize is needed; the trigger is the element count reaching sizeCtl. During a resize, the upper 16 bits of sizeCtl hold the resize stamp and the lower 16 bits hold the number of resizing threads + 1. If a resize is already in progress, first check whether it has finished; if so, break out of the loop. If it has not finished, help migrate elements. If no resize is in progress, the current thread starts one and migrates elements itself.
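A small sketch of the sizeCtl bookkeeping described in step 3, with the JDK 8 constants and the CAS steps simulated as plain arithmetic (class and variable names are mine):

```java
public class ResizerCountDemo {
    static final int RESIZE_STAMP_SHIFT = 16;   // 32 - RESIZE_STAMP_BITS in JDK 8

    public static void main(String[] args) {
        int rs = Integer.numberOfLeadingZeros(16) | (1 << 15); // resize stamp for n = 16
        int base = rs << RESIZE_STAMP_SHIFT;

        int sc = base + 2;                 // first resizer starts: low 16 bits = threads + 1 = 2
        sc += 1;                           // a helper joins (CAS sizeCtl -> sc + 1)
        System.out.println(sc & 0xffff);   // 3: two threads are now resizing
        sc -= 1;                           // the helper exits (CAS sizeCtl -> sc - 1)
        // An exiting thread is the last one iff the value it read satisfies sc - 2 == rs << SHIFT
        System.out.println((sc - 2) == base); // true: this exit would complete the resize
    }
}
```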

transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) method

The resize method: migrates all elements into a new array with twice the capacity of the old one.

private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
    int n = tab.length, stride;
    if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
        stride = MIN_TRANSFER_STRIDE; // subdivide range
    if (nextTab == null) {            // initiating
        try {
            @SuppressWarnings("unchecked")
            // When the capacity is expanded, the capacity becomes twice that of the original
            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
            nextTab = nt;
        } catch (Throwable ex) {      // try to cope with OOME
            sizeCtl = Integer.MAX_VALUE;
            return;
        }
        // Reference to new array
        nextTable = nextTab;
        transferIndex = n;
    }
    // The capacity size of the new array
    int nextn = nextTab.length;
    // Create a ForwardingNode node and store the reference of the new array in it
    ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
    boolean advance = true;
    boolean finishing = false; // to ensure sweep before committing nextTab
    for (int i = 0, bound = 0;;) {
        Node<K,V> f; int fh;
        // Over the iterations of this while loop, i walks down from n - 1 to 0 (n is the old array length),
        // so every bucket of the old array gets migrated to the new array
        while (advance) {
            int nextIndex, nextBound;
            if (--i >= bound || finishing)
                advance = false;
            // nextIndex takes the current transferIndex (initialized to n above)
            else if ((nextIndex = transferIndex) <= 0) {
                i = -1;
                advance = false;
            }
            else if (U.compareAndSwapInt
                     (this, TRANSFERINDEX, nextIndex,
                      nextBound = (nextIndex > stride ?
                                   nextIndex - stride : 0))) {
                bound = nextBound;
                // Assignment nextIndex-1
                i = nextIndex - 1;
                advance = false;
            }
        }
        if (i < 0 || i >= n || i + n >= nextn) {
            int sc;
            // If the whole sweep is finished (every element migrated), publish the new array as table,
            // clear nextTable, and set sizeCtl to the threshold for the next resize
            if (finishing) {
                // The resize is done, so drop the nextTable reference
                nextTable = null;
                // table now points to the newly created array
                table = nextTab;
                // Compute the next resize threshold: (n << 1) - (n >>> 1) = 2n - n/2 = 1.5n, i.e., 0.75 times the new capacity 2n
                sizeCtl = (n << 1) - (n >>> 1);
                return;
            }
            // The current thread has finished its share: decrement the resizer count (low 16 bits of sizeCtl)
            if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                // If this was not the last resizing thread, just return
                if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                    return;
                // Set finishing to true
                finishing = advance = true;
                // i reset to n
                // This will traverse the bucket array again to see if the migration is complete
                i = n; // recheck before commit
            }
        }
        else if ((f = tabAt(tab, i)) == null)
            // If there is no data in the bucket, put it directly into the ForwardingNode node to mark that the location has been migrated
            advance = casTabAt(tab, i, null, fwd);
        // If the hash value of the first element in the bucket is MOVED, it indicates that it is a ForwardingNode node, that is, the bucket has been migrated
        else if ((fh = f.hash) == MOVED)
            advance = true; // already processed
        else {
            // Lock the bucket and migrate the elements
            synchronized (f) {
                // Again, judge whether the current node is still the original node
                // To prevent other threads from migrating elements first
                if (tabAt(tab, i) == f) {
                    // Divide a linked list into two linked lists
                    // The rule is based on the hash of each element in the bucket and the bucket size n
                    // Those equal to 0 go into the low list (ln); those not equal to 0 go into the high list (hn)
                    // The low linked list is migrated to the new array, and the relative position remains unchanged
                    // The high-order linked list is migrated to the new array, which is exactly the position + n in the old array
                    // This is also the reason why the capacity is doubled during expansion
                    Node<K,V> ln, hn;
                    if (fh >= 0) {
                        // The hash value of the first element is greater than or equal to 0
                        // It indicates that the elements in the bucket are stored in the form of linked list
                        // This is similar to HashMap's migration, except for the extra lastRun optimization
                        // lastRun marks the start of the longest tail of the list whose nodes all map
                        // to the same new bucket, so that whole tail can be reused without copying
                        // e.g. if hash & n for the nodes in the bucket is 0, 0, 4, 0, 0, 0 in order,
                        // the last three nodes share the value 0 and must all land in the same bucket;
                        // lastRun then points at the first node of that three-node tail
                        int runBit = fh & n;
                        Node<K,V> lastRun = f;
                        for (Node<K,V> p = f.next; p != null; p = p.next) {
                            int b = p.hash & n;
                            if (b != runBit) {
                                runBit = b;
                                lastRun = p;
                            }
                        }
                        // Let's see if the following elements belong to high-level linked list or low-level linked list
                        if (runBit == 0) {
                            ln = lastRun;
                            hn = null;
                        }
                        else {
                            hn = lastRun;
                            ln = null;
                        }
                        // Traverse the list up to lastRun: nodes with (hash & n) == 0 are prepended to
                        // the low list, the others to the high list
                        for (Node<K,V> p = f; p != lastRun; p = p.next) {
                            int ph = p.hash; K pk = p.key; V pv = p.val;
                            if ((ph & n) == 0)
                                ln = new Node<K,V>(ph, pk, pv, ln);
                            else
                                hn = new Node<K,V>(ph, pk, pv, hn);
                        }
                        // The relative position of the elements placed in the low linked list remains unchanged
                        setTabAt(nextTab, i, ln);
                        // The position of the element in the high-order linked list is the position + n in the old array
                        setTabAt(nextTab, i + n, hn);
                        // Mark current bucket migrated
                        setTabAt(tab, i, fwd);
                        // If advance is true, return to the above execution --i operation
                        advance = true;
                    }
                    else if (f instanceof TreeBin) {
                        // If the first element is a TreeBin object, it is a tree
                        // Split into two trees
                        // It is also divided according to whether hash & n is 0
                        // The hash & n with 0 is placed in the low tree
                        // Hash & n not 0 is placed in the high-level tree
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> lo = null, loTail = null;
                        TreeNode<K,V> hi = null, hiTail = null;
                        int lc = 0, hc = 0;
                        // Traverse the whole tree and divide it into two trees according to whether hash & n is 0
                        for (Node<K,V> e = t.first; e != null; e = e.next) {
                            int h = e.hash;
                            TreeNode<K,V> p = new TreeNode<K,V>
                                (h, e.key, e.val, null, null);
                            if ((h & n) == 0) {
                                if ((p.prev = loTail) == null)
                                    lo = p;
                                else
                                    loTail.next = p;
                                loTail = p;
                                ++lc;
                            }
                            else {
                                if ((p.prev = hiTail) == null)
                                    hi = p;
                                else
                                    hiTail.next = p;
                                hiTail = p;
                                ++hc;
                            }
                        }
                        // If a split tree has at most UNTREEIFY_THRESHOLD (6) elements, degrade it back into a linked list
                        ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                            (hc != 0) ? new TreeBin<K,V>(lo) : t;
                        hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                            (lc != 0) ? new TreeBin<K,V>(hi) : t;
                        // The relative position of the elements in the low tree remains unchanged
                        setTabAt(nextTab, i, ln);
                        // The position of the element in the high-order tree is the original position + n
                        setTabAt(nextTab, i + n, hn);
                        // Mark that the bucket has been migrated
                        setTabAt(tab, i, fwd);
                        // If advance is true, return to the above execution --i operation
                        advance = true;
                    }
                }
            }
        }
    }
}

Brief summary:

  1. The size of the new bucket array is twice that of the old bucket array

  2. Migration starts from the last bucket and works backwards

  3. After a bucket has been migrated, a ForwardingNode is placed into it to mark that bucket's migration as complete

  4. When migrating elements, divide the elements in the bucket into two linked lists or two trees according to whether hash & n is equal to 0

  5. The relative position of the low linked list (tree) stored in the new array remains unchanged

  6. The position of the high-order linked list (tree) stored in the new array is the original position + n

  7. When migrating elements, the current bucket will be locked, that is, the idea of segmented lock.
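Points 4-6 above can be checked with plain arithmetic: the single extra bit h & n decides whether a node stays at index i in the new array or moves to i + n (the sample hash values are arbitrary):

```java
public class SplitDemo {
    public static void main(String[] args) {
        int n = 16;                      // old table size; the new table has 2 * n buckets
        int[] hashes = {5, 21, 37, 53};  // all land in bucket 5 of the old table: h & (n - 1) == 5
        for (int h : hashes) {
            int oldIdx = h & (n - 1);
            // After doubling, one extra bit decides the new index: h & n
            int newIdx = ((h & n) == 0) ? oldIdx : oldIdx + n;
            // Sanity check: this matches indexing directly into the doubled table
            System.out.println(h + " -> old " + oldIdx + ", new " + newIdx
                               + " (direct: " + (h & (2 * n - 1)) + ")");
        }
    }
}
```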

helpTransfer(Node<K,V>[] tab, Node<K,V> f)

Assists with a resize. When a thread adding an element finds that a resize is in progress and the bucket it needs has already been migrated, it helps migrate the remaining buckets.

final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
    Node<K,V>[] nextTab; int sc;
    // If the bucket array is not null and the node type stored in the location in the table is ForwardingNode and nextTable is not empty
    // This indicates that the current bucket has been migrated before helping to migrate the elements of other buckets
    // During capacity expansion, the first element of the old bucket will be set as ForwardingNode, and its nextTable will point to the new bucket array
    if (tab != null && (f instanceof ForwardingNode) &&
        (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
        // Recalculate the expansion version number
        int rs = resizeStamp(tab.length);
        // sizeCtl < 0 means a resize is in progress
        while (nextTab == nextTable && table == tab &&
               (sc = sizeCtl) < 0) {
            if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                sc == rs + MAX_RESIZERS || transferIndex <= 0)
                break;
            // Number of threads for capacity expansion plus 1
            if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1)) {
                // The current thread helps migrate elements
                transfer(tab, nextTab);
                break;
            }
        }
        return nextTab;
    }
    return table;
}

remove(Object key) method

remove(Object key)

public V remove(Object key) {
    // Call the replaceNode(key, null, null) method
    return replaceNode(key, null, null);
}
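A quick check of remove() through the public API; the cv parameter of replaceNode (shown below) is also what backs the conditional remove(key, value) and replace forms:

```java
import java.util.concurrent.ConcurrentHashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);

        System.out.println(map.remove("a"));        // 1: the old value is returned
        System.out.println(map.remove("missing"));  // null: nothing to remove

        // Conditional forms, backed by replaceNode's cv (expected value) parameter
        map.put("b", 1);
        System.out.println(map.remove("b", 2));     // false: value 2 does not match
        System.out.println(map.remove("b", 1));     // true: matched and removed
        System.out.println(map.replace("b", 9));    // null: key absent, nothing replaced
    }
}
```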

replaceNode(Object key, V value, Object cv)

Like insertion, removal first locates the bucket containing the element, then locks the whole bucket (the segmented-lock idea) before operating on it.

final V replaceNode(Object key, V value, Object cv) {
    // Compute hash
    int hash = spread(key.hashCode());
    // spin
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // If the bucket where the target is located does not exist, jump out of the loop and return null
        if (tab == null || (n = tab.length) == 0 ||
            (f = tabAt(tab, i = (n - 1) & hash)) == null)
            break;
        else if ((fh = f.hash) == MOVED)
            // If capacity expansion is in progress, assist in capacity expansion
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            // validated marks whether the bucket was actually examined; if not (the first node changed), the loop retries
            boolean validated = false;
            synchronized (f) {
                // Verify again whether the first element of the current bucket has been modified
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        // fh >= 0 indicates the bucket holds a linked list
                        validated = true;
                        // Traverse the linked list to find the target node
                        for (Node<K,V> e = f, pred = null;;) {
                            K ek;
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                // Target node found
                                V ev = e.val;
                                // Check whether the value of the target node is equal to cv
                                if (cv == null || cv == ev ||
                                    (ev != null && cv.equals(ev))) {
                                    oldVal = ev;
                                    if (value != null)
                                        // If value is not empty, replace the old value
                                        e.val = value;
                                    else if (pred != null)
                                        // If the front node is not empty, delete the current node
                                        pred.next = e.next;
                                    else
                                        // If the predecessor is null, the target is the first
                                        // node in the bucket: delete it by making its successor
                                        // the new first node
                                        setTabAt(tab, i, e.next);
                                }
                                break;
                            }
                            pred = e;
                            // After traversing to the end of the linked list, the element has not been found. Jump out of the loop
                            if ((e = e.next) == null)
                                break;
                        }
                    }
                    else if (f instanceof TreeBin) {
                        // If it is a tree node
                        validated = true;
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> r, p;
                        // Traversing the tree found the target node
                        if ((r = t.root) != null &&
                            (p = r.findTreeNode(hash, key, null)) != null) {
                            V pv = p.val;
                            // Check whether the value of the target node is equal to cv
                            if (cv == null || cv == pv ||
                                (pv != null && cv.equals(pv))) {
                                oldVal = pv;
                                if (value != null)
                                    // If value is not empty, replace the old value
                                    p.val = value;
                                else if (t.removeTreeNode(p))
                                    // If value is empty, the element is deleted
                                    // If the number of elements in the deleted tree is small, it will degenerate into a linked list
                                    // t.removeTreeNode(p) this method returns true, indicating that the number of elements in the tree is small after the node is deleted
                                    setTabAt(tab, i, untreeify(t.first));
                            }
                        }
                    }
                }
            }
            // If the bucket was actually processed, the return value depends on whether the element was found
            if (validated) {
                // If an element is found, its old value is returned
                if (oldVal != null) {
                    // If the value to be replaced is empty, the number of elements is reduced by 1
                    if (value == null)
                        addCount(-1L, -1);
                    return oldVal;
                }
                break;
            }
        }
    }
    // Element not found returned null
    return null;
}
  1. Compute hash
  2. If the bucket does not exist, the target element is not found, and null is returned
  3. If a resize is in progress, the current thread first helps migrate elements, then retries the deletion
  4. If it is stored in the form of a linked list, traverse the entire linked list to find elements, and then delete them
  5. If it is stored in the form of a tree, traverse the tree to find the elements, and then delete them
  6. If it is stored in the form of a tree and the number of elements in the tree is small after deleting elements, it will degenerate into a linked list
  7. If the element is indeed deleted, the number of elements in the entire hash table is reduced by 1 and the old value is returned
  8. If the element is not deleted, null is returned
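The public removal/replacement API all funnels into replaceNode(key, value, cv): remove(k) passes (null, null), remove(k, v) passes (null, v), and replace(k, v) passes (v, null), per the JDK 8 source. A short usage sketch (the demo class name is mine):

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of how the public API maps onto replaceNode's (value, cv) parameters
public class ReplaceNodeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // remove(key): cv == null, so the node is removed unconditionally
        Integer old = map.remove("a");
        System.out.println("removed a -> " + old);       // 1

        // remove(key, value): cv != null, removed only if the current value matches
        boolean removed = map.remove("b", 99);
        System.out.println("remove(b, 99) -> " + removed); // false, value is 2

        // replace(key, value): value != null, so the old value is overwritten in place
        Integer prev = map.replace("b", 20);
        System.out.println("replace(b, 20) -> " + prev + ", b is now " + map.get("b"));
    }
}
```

Note how the cv ("compare value") parameter gives the conditional variants for free: the same synchronized traversal serves both the unconditional and the compare-and-remove/replace paths.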

get(Object key) method

Get an element. The lookup path depends on the type of the first node in the bucket where the target key is located; the key point is the overridden find() method of each Node subclass.

public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    // Compute hash
    int h = spread(key.hashCode());
    // If the bucket in which the element is located exists and there are elements in it
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        // If the first element is the element you are looking for, return it directly
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;
        }
        else if (eh < 0)
            // If the hash is less than 0, it indicates that it is a tree or expanding capacity
            // Find is used to find elements. The search method of find has different implementation methods according to different subclasses of Node
            return (p = e.find(h, key)) != null ? p.val : null;
        // Traverse the entire linked list to find elements
        while ((e = e.next) != null) {
            if (e.hash == h &&
                ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}
  1. Calculate the hash and locate the bucket where the element is located
  2. If the first element in the bucket is the element to be found, it is returned directly
  3. If it is a tree or an element being migrated, call the find() method of the respective Node subclass to find the element
  4. If it is a linked list, traverse the whole linked list to find elements
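The lock-free read path above also explains why ConcurrentHashMap rejects null keys and values: get() needs null to mean "not found" unambiguously. A minimal usage sketch (the demo class name is mine):

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the read path's observable behavior
public class GetDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("x", 1);
        System.out.println(map.get("x"));       // 1
        System.out.println(map.get("missing")); // null: absent keys return null
        // Null keys are rejected: spread(key.hashCode()) dereferences the key,
        // so get(null) throws NullPointerException
        try {
            map.get(null);
        } catch (NullPointerException e) {
            System.out.println("null keys are not allowed");
        }
    }
}
```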

size() method

public int size() {
    // Call sumCount() to calculate the number of elements
    long n = sumCount();
    return ((n < 0L) ? 0 :
            (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
            (int)n);
}

sumCount() method

The element count is stored in baseCount plus a CounterCell array that spreads contention away from the single hot baseCount field. To obtain the total count, add baseCount and the values of all CounterCell entries.

final long sumCount() {
    // Calculates the sum of all numbers in the baseCount and CounterCell arrays
    CounterCell[] as = counterCells; CounterCell a;
    long sum = baseCount;
    if (as != null) {
        for (int i = 0; i < as.length; ++i) {
            if ((a = as[i]) != null)
                sum += a.value;
        }
    }
    return sum;
}
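Two observations worth demonstrating: size() clamps sumCount() into int range, while mappingCount() returns the raw long and is the recommended call for very large maps; and the baseCount + CounterCell scheme is the same striped-counting idea exposed publicly as java.util.concurrent.atomic.LongAdder. A sketch (the demo class name is mine):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// size() vs mappingCount(), and the LongAdder analogy to sumCount()
public class SizeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 1000; i++) map.put(i, i);
        System.out.println(map.size());         // 1000, clamped to int range
        System.out.println(map.mappingCount()); // 1000, as a long

        // LongAdder.sum() works like sumCount(): add the base value
        // plus every contention-striped cell
        LongAdder counter = new LongAdder();
        counter.add(5);
        counter.add(-1);
        System.out.println(counter.sum()); // 4
    }
}
```

Like sumCount(), neither size() nor LongAdder.sum() takes a lock, so under concurrent updates the result is an estimate, not an atomic snapshot.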

7. Summary

  • ConcurrentHashMap stores elements in the form of array + linked list + red black tree

  • ConcurrentHashMap uses CAS + spin + synchronized to maintain correctness under concurrency

  • Compared with HashMap, ConcurrentHashMap uses sizeCtl to control initialization and the resize threshold:

    • -1: Currently, a thread is initializing the table array

    • -N (less than -1): the upper 16 bits hold the resize stamp (the version number of the current expansion), and the lower 16 bits hold 1 + the number of threads currently participating in the expansion

    • =0: when the no-argument constructor is used to create a ConcurrentHashMap, sizeCtl is not assigned and keeps its default value 0, so the table capacity on first initialization is the default 16

    • >0:

      • If the table is not initialized, it indicates the capacity of the table during initialization
      • If the table has been initialized, it indicates the threshold that the table should reach in the next capacity expansion
  • The query operation takes no lock, so ConcurrentHashMap is weakly consistent rather than strongly consistent (reads may not see in-flight updates)
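One visible consequence of weak consistency: iteration never throws ConcurrentModificationException, and it may or may not reflect modifications made while iterating. A sketch (the demo class name is mine):

```java
import java.util.concurrent.ConcurrentHashMap;

// Weakly consistent iteration: structural modification mid-loop is legal
public class WeakConsistencyDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        for (String k : map.keySet()) {
            // A plain HashMap would throw ConcurrentModificationException here
            map.remove("c");
        }
        System.out.println("no exception; size=" + map.size()); // size=2
    }
}
```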


Added by Skara on Sat, 22 Jan 2022 22:12:06 +0200