HashMap source code explanation

Default size of HashMap

Knowledge point: a prime number that is not close to a power of two is often considered a better choice for the size of a hash table.

 * The default initial capacity - MUST be a power of two.
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

// Hashtable, by contrast, defaults to a prime capacity of 11:
public Hashtable() {
    this(11, 0.75f);
}

Why is HashMap's default capacity 16, and why MUST it be a power of two? Why doesn't it pick a "good" prime value like Hashtable does?
1. In the JDK 1.7 source code, indexFor() maps a hash to a bucket index. It performs the job of a modulo operation, but a bitwise AND is much faster than the % operator, which is why the capacity must be a power of two: length - 1 is then all 1s in binary, so h & (length - 1) yields the same result as h % length.

static int indexFor(int h, int length) {
    return h & (length - 1);
}

2. A power-of-two capacity also simplifies relocating entries after a resize. Doubling the capacity adds exactly one bit to the index mask, so for each entry you only need to check that one extra bit of its hash: if it is 0, the entry stays at its old index; if it is 1, the new index is the old index plus the old array length.
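Both points can be checked with a small standalone sketch. The method names here (indexFor mirrors the JDK 1.7 helper; newIndexAfterResize is purely illustrative, not a JDK method):

```java
public class PowerOfTwoIndexing {
    // Same trick as the JDK 1.7 indexFor(): only valid when length is a power of two.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    // After doubling from oldCap to 2 * oldCap, an entry either stays at its
    // old index or moves to oldIndex + oldCap, depending on one hash bit.
    static int newIndexAfterResize(int h, int oldCap) {
        int oldIndex = h & (oldCap - 1);
        return (h & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int h = 21;                                     // binary 10101
        System.out.println(indexFor(h, 16));            // 5  (same as 21 % 16)
        System.out.println(newIndexAfterResize(h, 16)); // 21 (bit 16 is set: 5 + 16)
        System.out.println(indexFor(h, 32));            // 21 (matches the shortcut above)
    }
}
```

The last two lines agree, which is the whole point: after a resize, the full re-hash and the one-bit check land the entry in the same slot.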

Some interviewers ask why the load factor is 0.75. The short answer: it is a trade-off chosen to balance space usage against lookup performance. If the interviewer keeps pressing on why exactly 0.75, the full justification involves the Poisson distribution, so ask them to prove it to you!!
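For the curious: the bin-size probabilities quoted from the source comments further down are just the Poisson distribution with parameter 0.5 (the expected bin size the JDK comments assume at the default load factor). A quick sketch to reproduce them:

```java
public class PoissonBins {
    // P(k) = e^(-lambda) * lambda^k / k!  with lambda = 0.5,
    // as stated in the java.util.HashMap source comments.
    static double poisson(int k) {
        double lambda = 0.5;
        double factorial = 1.0;
        for (int i = 2; i <= k; i++)
            factorial *= i;
        return Math.exp(-lambda) * Math.pow(lambda, k) / factorial;
    }

    public static void main(String[] args) {
        for (int k = 0; k <= 8; k++)
            System.out.printf("%d: %.8f%n", k, poisson(k));
    }
}
```

Running this prints the same table the source comments contain, ending with roughly 0.00000006 for a bin of 8 nodes.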


static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

The expression (h = key.hashCode()) ^ (h >>> 16) exists to reduce hash collisions.

h >>> 16 extracts the high 16 bits of h. Why XOR key.hashCode() with h >>> 16?
Because the bucket index is computed as (n - 1) & hash, only the low bits of the hash participate unless the table has at least 2^16 slots, so the high 16 bits would normally never influence the index. XORing them into the low 16 bits lets the high bits affect the result, making the index distribution more random.
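The effect is easy to demonstrate: two keys whose hash codes differ only in the high bits would collide in a small table without the spreading step, but land in different buckets with it. The hash method below is copied from the JDK 8 source; the 16-slot table is the default size:

```java
public class HashSpread {
    // Copy of java.util.HashMap.hash() from the JDK 8 source.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;          // default table size; index mask is n - 1 = 0b1111
        int a = 0x10000;     // a and b differ only above bit 16
        int b = 0x20000;
        // Without spreading, both keys index to slot 0:
        System.out.println((a & (n - 1)) + " " + (b & (n - 1)));       // 0 0
        // With spreading, the high bits reach the index:
        System.out.println((hash(a) & (n - 1)) + " " + (hash(b) & (n - 1))); // 1 2
    }
}
```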

put() in the JDK 1.8 source code
It is not particularly difficult to work out on your own; annotate it comment by comment and it becomes easy to follow.

  public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
  }

  final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                 boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        // If the array is null or has length 0, call resize() to create it
        if ((tab = table) == null || (n = tab.length) == 0)
            n = (tab = resize()).length;
        // (n - 1) & hash is the addressing operation; if the slot is null, insert directly
        if ((p = tab[i = (n - 1) & hash]) == null)
            tab[i] = newNode(hash, key, value, null);
        else {
            // Otherwise the slot is already occupied: a collision
            Node<K,V> e; K k;
            // Check whether the first node's key equals the given key
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                // Record the existing node directly
                e = p;
            // If the bin is already a red-black tree, insert via the tree
            else if (p instanceof TreeNode)
                // putTreeVal() performs the red-black tree insertion
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                // Otherwise walk the linked list
                for (int binCount = 0; ; ++binCount) {
                    // If there is no next node, append a new one at the tail and stop
                    if ((e = p.next) == null) {
                        p.next = newNode(hash, key, value, null);
                        // If the list length reaches TREEIFY_THRESHOLD (8), convert the bin
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            // Turn the bin into a red-black tree
                            treeifyBin(tab, hash);
                        break;
                    }
                    // If the current node's key equals the given key, stop
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    // Move the pointer down the list
                    p = e;
                }
            }
            // If e is non-null, the key already exists
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                // onlyIfAbsent: false means replace the existing value, true means keep it
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                // Post-access callback (used by LinkedHashMap)
                afterNodeAccess(e);
                return oldValue;
            }
        }
        // Record the structural modification, then expand the capacity if needed
        ++modCount;
        if (++size > threshold)
            resize();
        // Post-insertion callback (used by LinkedHashMap)
        afterNodeInsertion(evict);
        return null;
  }
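The return-value and onlyIfAbsent behavior walked through above can be observed directly from the public API: put() always replaces and returns the previous value (or null if the key was absent), while putIfAbsent() calls putVal with onlyIfAbsent = true and keeps the existing value:

```java
import java.util.HashMap;

public class PutSemantics {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1));         // null: no previous mapping
        System.out.println(map.put("a", 2));         // 1: old value returned, now replaced
        System.out.println(map.putIfAbsent("a", 3)); // 2: existing value kept
        System.out.println(map.get("a"));            // 2
    }
}
```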

Pay attention to the detail: a bin is converted to a red-black tree when the linked-list length reaches 8 (TREEIFY_THRESHOLD). The threshold is the result of statistical analysis and doesn't need to be studied too deeply: long linked lists make lookups slow, and under the default load factor the chance of a bin ever holding 8 nodes is vanishingly small (about 6 in 100 million).
The following bin-size probabilities are given in the source code comments.

     * 0:    0.60653066
     * 1:    0.30326533
     * 2:    0.07581633
     * 3:    0.01263606
     * 4:    0.00157952
     * 5:    0.00015795
     * 6:    0.00001316
     * 7:    0.00000094
     * 8:    0.00000006

Added by TobesC on Fri, 04 Mar 2022 21:36:32 +0200