LruCache in Detail and in Practice

Before reading this article, it's worth getting familiar with LinkedHashMap and LruCache. Start with these two articles:

Working principle and implementation of Java LinkedHashMap
Android efficiently loads large and multi map solutions, effectively avoiding the program OOM - CSDN blog

It's best to use it first and then analyze it!

Now let's dive into the source code (assuming you already have some familiarity with LinkedHashMap and LruCache).
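Before the walkthrough, a quick self-contained demo of the access-order mode of LinkedHashMap that LruCache is built on: with `accessOrder = true`, every `get()` moves the entry to the most-recently-used end of the iteration order, so the least recently used entry always comes first. (The class and method names here are just for illustration.)

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AccessOrderDemo {
    // Builds a map in access order, touches "a", and returns the key order.
    static List<String> keysAfterAccess() {
        // Same constructor arguments LruCache uses: accessOrder = true means
        // iteration runs from least- to most-recently used
        Map<String, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // accessing "a" moves it to the most-recently-used end
        return new ArrayList<>(map.keySet());
    }

    public static void main(String[] args) {
        System.out.println(keysAfterAccess()); // prints [b, c, a]
    }
}
```

This reordering on access is the entire LRU mechanism; everything LruCache adds on top is size accounting, eviction, and thread safety.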

Source code analysis

public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    //The cache is sized in units, not necessarily elements: sizeOf() returns 1 by default,
    //so size is the entry count, but an override may return, e.g., a bitmap's byte count
    private int size;    //Current total size of the cache, in the units defined by sizeOf()
    private int maxSize; //Maximum size, set via the constructor or resize()

    private int putCount;      //Number of put() calls
    private int createCount;   //Number of values created by create()
    private int evictionCount; //Number of entries evicted to make space
    private int hitCount;      //Number of get() calls that found a cached value
    private int missCount;     //Number of get() calls that missed

    /**
     * @param maxSize maximum size of the cache, in the units returned by sizeOf()
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        //The cache is backed by a LinkedHashMap in access order (accessOrder = true):
        //iteration runs from least- to most-recently accessed, which is exactly LRU order
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    /**
     * Resets the cache's maximum size, then calls trimToSize() to shrink the cache if needed
     */
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }

        synchronized (this) {
            this.maxSize = maxSize;
        }
        //Ensure the current size is <= the new maximum
        trimToSize(maxSize);
    }

    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code #create}. If a value was returned, it is moved to the
     * head of the queue. This returns null if a value is not cached and cannot
     * be created.
     */
     //If a value for the key exists in the cache (or can be created by create()), it is
     //returned and moved to the most-recently-used end of the list. The Javadoc says
     //"head of the queue", but in LinkedHashMap's access order the most recently used
     //entry sits at the tail of the doubly linked list; either way, it is the end that
     //trimToSize() evicts last. Don't get hung up on the wording.
     //null is returned in two cases: the value is not cached, and create() could not create one.
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;//the value currently stored in the map for this key, if any
        synchronized (this) {
        //The lock keeps the map and the hit/miss counters consistent under concurrent access
            mapValue = map.get(key);
            if (mapValue != null) {  //cache hit
                hitCount++;
                return mapValue;                        //---Note 1---
            }
            //mapValue == null means a cache miss. Why can null be treated as "absent"?
            //LinkedHashMap itself allows null values, but LruCache's put() rejects null
            //keys and values, so a null here always means the key is not cached.
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */
         //Try to create a value via create(key). This may take a long time, so it runs
         //outside the lock; if it returns null, get() returns null. If it succeeds, the
         //created value is put into the map below (unless another thread won the race).

        //Why isn't create() called under the lock? Holding the lock across a slow
        //create() would block every other cache operation. The cost is that several
        //threads may create values for the same key concurrently; that conflict is
        //resolved below by keeping the first value and discarding the rest.
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

         //Lock again so the map and the counters are updated atomically
        synchronized (this) {
            createCount++; //one more value created via create()
            //mapValue: any value another thread inserted meanwhile; createdValue: ours
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                //A non-null mapValue means another thread inserted a value for this
                //key while our create() ran outside the lock (an ordinary cache hit
                //would already have returned at Note 1 above). Keep that earlier
                //value and undo our put.
                map.put(key, mapValue);
            } else {
                 //No conflict: our created value stays in the map,
                 //so account for its size in the cache total
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            //entryRemoved(evicted, key, oldValue, newValue) - note the argument order:
            //the discarded createdValue is passed as oldValue and the winning mapValue
            //as newValue, so a subclass can release the value that lost the race
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            //Keep size <= maxSize; trimToSize() is explained in detail below
            trimToSize(maxSize);
            return createdValue;
        }
    }

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
     //"Head of the queue" again means the most-recently-used end of LinkedHashMap's
     //doubly linked list (its tail in access order) - the end evicted last.

     //Returns the value previously stored for this key, if any
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            //add the new value's size; if a value is replaced, its size is subtracted below
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }
         //Keep size <= maxSize
        trimToSize(maxSize);
        return previous;
    }

    /**
     * Remove the eldest entries until the total of remaining entries is at or
     * below the requested size.
     *
     * @param maxSize the maximum size of the cache before returning. May be -1
     *            to evict even 0-sized elements.
     */
     //Removes the least-recently-accessed entries until size <= maxSize.
     //Note this actually deletes entries from the map, and the loop stops as
     //soon as size <= maxSize. Passing -1 (as evictAll() does) empties the cache.
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    //Stop once size <= maxSize
                    break;
                }

                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }
                //toEvict is non-null here: evict the eldest (least recently used) entry
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);//subtract the evicted entry's size
                evictionCount++;//one more eviction
            }
             //Notify subclasses outside the lock; entryRemoved() is discussed below
            entryRemoved(true, key, value, null);
        }
    }

    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
     //Removes the entry for the key if one exists. A null return simply means the
     //key was not cached: LinkedHashMap itself can store null values, but LruCache's
     //put() rejects null keys and values, so the cache never holds them.
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            //Notify subclasses outside the lock
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
     //In short: this is called whenever an entry is removed or evicted. The cache does
     //not release the value's resources itself; if cleanup is needed (e.g., recycling a
     //Bitmap), override this method. For a usage example, see:
     // http://blog.csdn.net/jxxfzgy/article/details/44885623
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units.  The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
     //The default returns 1, so size is the entry count. In practice you will almost
     //always override this, e.g., to return a bitmap's byte count
    protected int sizeOf(K key, V value) {
        return 1;
    }

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }

    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}
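Stripped of statistics and thread safety, the whole class boils down to an access-ordered LinkedHashMap plus a trim loop over the eldest entry. Here's a minimal count-based sketch of the same idea, not the Android class itself, but a stand-in (`MiniLruCache` is a hypothetical name) built on LinkedHashMap's `removeEldestEntry` hook:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal count-based LRU cache mirroring LruCache's core idea:
// an access-ordered LinkedHashMap that drops its eldest entry when full.
public class MiniLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public MiniLruCache(int maxEntries) {
        super(0, 0.75f, true); // accessOrder = true, like LruCache's constructor
        if (maxEntries <= 0) throw new IllegalArgumentException("maxEntries <= 0");
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Plays the role of trimToSize(): evict while over capacity
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        MiniLruCache<String, Integer> cache = new MiniLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // refresh "a", so "b" becomes the eldest entry
        cache.put("c", 3); // over capacity: evicts "b"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

The real LruCache does not use `removeEldestEntry`; it trims explicitly via `map.eldest()` in `trimToSize()`, which lets it support weighted sizes and notify subclasses through `entryRemoved()`.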

In practice

Having read the source-code analysis, it's time to practice. To close, see:
Android memory optimization explained by LruCache - CSDN blog
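Most real uses override `sizeOf()` so the cache is bounded by memory rather than entry count. To keep the idea runnable outside Android, here is a hand-rolled sketch (class name `WeightedLru` and the character-length weighting are illustrative assumptions) that mirrors `size`, `sizeOf()`, and the `trimToSize()` loop, weighting each entry by its value's length:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// A weighted LRU sketch: as with overriding sizeOf(), each entry costs its
// value's length in characters rather than counting as 1.
public class WeightedLru {
    private final LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
    private int size;          // current total weight, like LruCache.size
    private final int maxSize; // capacity in weight units

    public WeightedLru(int maxSize) { this.maxSize = maxSize; }

    int sizeOf(String value) { return value.length(); } // the "sizeOf() override"

    public void put(String key, String value) {
        size += sizeOf(value);
        String previous = map.put(key, value);
        if (previous != null) size -= sizeOf(previous); // replaced: undo old weight
        trimToSize();
    }

    public String get(String key) { return map.get(key); }

    // Mirrors trimToSize(): evict eldest entries until size <= maxSize
    private void trimToSize() {
        Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
        while (size > maxSize && it.hasNext()) {
            Map.Entry<String, String> eldest = it.next();
            it.remove();
            size -= sizeOf(eldest.getValue());
        }
    }

    public int size() { return size; }

    public static void main(String[] args) {
        WeightedLru cache = new WeightedLru(10); // 10 characters of capacity
        cache.put("a", "12345"); // size = 5
        cache.put("b", "12345"); // size = 10
        cache.put("c", "1234");  // size = 14 > 10: evicts "a", size = 9
        System.out.println(cache.get("a")); // prints null
        System.out.println(cache.size());   // prints 9
    }
}
```

On Android the same shape is typically `new LruCache<String, Bitmap>(maxBytes)` with `sizeOf()` returning the bitmap's byte count, as the linked article demonstrates.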

After analyzing LruCache thoroughly, these follow-up articles are worth reading:
1. Android efficiently loads large and multi map solutions, effectively avoiding the program OOM - CSDN blog
2. Android DiskLruCache full resolution, the best solution for hard disk cache - CSDN blog
3. Android photo wall application implementation, no matter how many pictures are not afraid to crash - CSDN blog


Added by Rolando_Garro on Thu, 02 Apr 2020 17:44:02 +0300