Memory leak analysis tools: how LeakCanary works

Memory leaks are an important topic in Android performance optimization, and LeakCanary is a common tool for finding and analyzing memory problems. This article analyzes how LeakCanary works; I hope it helps you. Author: Freeman Gordon. Original: https://juejin.cn/user/3368559355374285. Republished with the author's permission.

1. What is a memory leak

A memory leak occurs when a program fails to release memory it has allocated from the system, so the memory stays occupied and can no longer be used by the program. On Android it usually means that an object is still not collected after its own lifecycle has ended. Leaks include:

  • Java heap leaks
  • native pointer leaks
  • fd (file descriptor / handle) leaks

Leaks easily make the app process's memory (or handle count) climb until it crashes with an OOM or a "Too many open files" error. These crash sites are usually just the last straw that breaks the camel's back, not the root cause; to fix the problem you generally need to dump the memory or the open handles and inspect them.
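To make the Java-heap case concrete, here is a minimal, self-contained sketch (hypothetical class names) of a classic leak: a static reference that keeps a destroyed Activity reachable from a GC root.

import android.app.Activity
import android.os.Bundle

// LeakyActivity can never be collected after onDestroy(), because the companion object
// (a GC root) still holds a strong reference to it.
class LeakyActivity : Activity() {

    companion object {
        // Lives as long as the process does
        var lastInstance: LeakyActivity? = null
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // The Activity now outlives its own lifecycle: even after it is destroyed,
        // this static reference keeps it (and its whole view hierarchy) in memory.
        lastInstance = this
    }
}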

2. Approaches to detecting memory leaks

1. ByteDance Liko

When an OOM occurs or memory usage peaks, Liko dumps an HPROF file without the user noticing. When the app goes to the background and memory conditions allow analysis, the HPROF is trimmed and uploaded, and an online MAT analyzes the file and generates links and reports. It can report memory pressure caused by large objects or by many frequently created small objects, helping to prevent OOMs.

2. Kwai KOOM

KOOM uses the kernel's copy-on-write (COW) mechanism: before each heap dump it suspends the virtual machine, forks a child process to perform the dump, and the parent process resumes the VM immediately after the fork succeeds. For the parent process the whole pause takes only a few milliseconds. The heap image is analyzed locally, in a separate single-threaded process during idle time, and is deleted after analysis.

https://github.com/KwaiAppTeam/KOOM

3. LeakCanary customization

Customize LeakCanary and report the leak trace to your own business server.

3. LeakCanary overview

LeakCanary automatically watches destroyed activities and fragments, triggers a heap dump, runs Shark to analyze it, and then displays the result.

The LeakCanary icon is a bird: a canary. In the early days, canaries were taken into coal mines to detect gas because of their sensitivity to harmful gases, hence the name. LeakCanary consists of five parts:

  • Leak watching, heap dumping and result display at the application layer
  • Shark, a heap analysis library implemented in Kotlin, similar to the MAT tool
  • The memory analysis service
  • The leak display UI
  • The leak database

4. Adding LeakCanary to a project

Adding the new version of LeakCanary is very simple and only requires a Gradle dependency:

debugImplementation 'com.squareup.leakcanary:leakcanary-android:2.7'

or

debugImplementation 'com.squareup.leakcanary:leakcanary-android-process:2.7'

What the two have in common:

  • Both use AppWatcherInstaller$MainProcess (because the main process's memory is what needs to be dumped) and rely on a ContentProvider to initialize automatically when the process starts
  • Object watching and the heap dump both happen in the main process
  • You can set the leak_canary_watcher_auto_install resource in XML to customize the initialization timing (a manual-install sketch follows the list of differences below)

The differences between the two:

  • With leakcanary-android, the memory analysis service HeapAnalyzerService runs on a background thread of the main process
  • With leakcanary-android-process, HeapAnalyzerService runs in a separate process named :leakcanary
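If automatic installation is disabled through that resource, AppWatcher has to be installed manually. A minimal sketch, assuming a hypothetical DebugApp class registered in the debug manifest (the resource override lives in the debug source set):

import android.app.Application
import java.util.concurrent.TimeUnit
import leakcanary.AppWatcher

// res/values/leak_canary.xml in the debug source set:
// <resources>
//   <bool name="leak_canary_watcher_auto_install">false</bool>
// </resources>

class DebugApp : Application() {
  override fun onCreate() {
    super.onCreate()
    // Install at a time of your choosing; 5s is LeakCanary's default retained delay.
    AppWatcher.manualInstall(
      application = this,
      retainedDelayMillis = TimeUnit.SECONDS.toMillis(5)
    )
  }
}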

5. When LeakCanary starts

In earlier versions, LeakCanary had to be initialized in the Application's onCreate. To reduce integration cost, the new version folds the initialization into the library's own AppWatcherInstaller. The principle is that a ContentProvider's onCreate runs earlier than the Application's onCreate (but later than the Application's attachBaseContext):

ActivityThread.java

private void handleBindApplication(AppBindData data) {
    // ... code omitted
    try {
        // If the app is being launched for full backup or restore, bring it up in
        // a restricted environment with the base application class.
        app = data.info.makeApplication(data.restrictedBackupMode, null); // Application#attachBaseContext()
        // ... code omitted
        // don't bring up providers in restricted mode; they may depend on the
        // app's custom Application class
        if (!data.restrictedBackupMode) {
            if (!ArrayUtils.isEmpty(data.providers)) {
                installContentProviders(app, data.providers); // Create ContentProviders
            }
        }

        // ... code omitted
        try {
            mInstrumentation.callApplicationOnCreate(app); // Application#onCreate()
        }
        // ... code omitted
}

6. LeakCanary initialization

AppWatcherInstaller.onCreate triggers LeakCanary's initialization:

  /**
   * Initialize LeakCanary when the ContentProvider is created
   */
  override fun onCreate(): Boolean {
    val application = context!!.applicationContext as Application
    AppWatcher.manualInstall(application) // initialization
    return true
  }

AppWatcher.manualInstall sets a default detection delay and the default set of watched object types:

  @JvmOverloads
  fun manualInstall(
    application: Application,
    retainedDelayMillis: Long = TimeUnit.SECONDS.toMillis(5), // start checking 5s after an object is added to watchedObjects
    watchersToInstall: List<InstallableWatcher> = appDefaultWatchers(application) // the default watchers to install
  ) {
    // ... Omit code

    // The LeakCanary core component, responsible for detecting leaks and triggering heap dumps; the default implementation is InternalLeakCanary.kt
    LeakCanaryDelegate.loadLeakCanary(application)

    // Install the default watchers
    watchersToInstall.forEach {
      it.install()
    }
  }

The objects watched by default are:

  • Activity
  • Fragment (both the fragment and its view)
  • ViewModel
  • RootView (Window)
  • Service

  fun appDefaultWatchers(
    application: Application,
    reachabilityWatcher: ReachabilityWatcher = objectWatcher
  ): List<InstallableWatcher> {
    return listOf(
      ActivityWatcher(application, reachabilityWatcher), // Activity
      FragmentAndViewModelWatcher(application, reachabilityWatcher), // Fragment and ViewModel
      RootViewWatcher(reachabilityWatcher), // RootView (Window)
      ServiceWatcher(reachabilityWatcher) // Service
    )
  }

Each of these types is added to watchedObjects (a map holding weak references to the objects being watched, defined in ObjectWatcher.kt) at a specific moment, where it waits to be checked. This is done mainly through component lifecycle callbacks and a few hook points.
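Beyond these defaults, the same ObjectWatcher can be pointed at any object with a known end of life. A small sketch, using a hypothetical SessionManager whose shutdown() marks that moment:

import leakcanary.AppWatcher

// Hypothetical class: once shutdown() has been called, instances should become unreachable.
class SessionManager {
  fun shutdown() {
    // ... unregister listeners, close resources ...
    AppWatcher.objectWatcher.expectWeaklyReachable(
      this, "SessionManager received shutdown() call"
    )
  }
}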

6.1 Activity detection timing

An ActivityLifecycleCallbacks callback is registered, and the Activity is added to watchedObjects in onActivityDestroyed to wait for the check:

ActivityWatcher.kt

  private val lifecycleCallbacks =
    object : Application.ActivityLifecycleCallbacks by noOpDelegate() {
      override fun onActivityDestroyed(activity: Activity) {
        reachabilityWatcher.expectWeaklyReachable(
          activity, "${activity::class.java.name} received Activity#onDestroy() callback"
        )
      }
    }

6.2 Fragment (AndroidX) detection timing

A FragmentLifecycleCallbacks callback is registered; the fragment's View is added to watchedObjects in onFragmentViewDestroyed and the Fragment itself in onFragmentDestroyed:

AndroidXFragmentDestroyWatcher.kt

    override fun onFragmentViewDestroyed(
      fm: FragmentManager,
      fragment: Fragment
    ) {
      val view = fragment.view
      if (view != null) {
        reachabilityWatcher.expectWeaklyReachable(
          view, "${fragment::class.java.name} received Fragment#onDestroyView() callback " +
          "(references to its views should be cleared to prevent leaks)"
        )
      }
    }

    override fun onFragmentDestroyed(
      fm: FragmentManager,
      fragment: Fragment
    ) {
      reachabilityWatcher.expectWeaklyReachable(
        fragment, "${fragment::class.java.name} received Fragment#onDestroy() callback"
      )
    }
  }

6.3 ViewModel detection timing

The ViewModel detection is quite clever. During the Fragment's onCreate, LeakCanary adds a ViewModel of its own to the Fragment. Because this ViewModel follows its host's lifecycle, its onCleared() runs when the host is cleared; at that point LeakCanary reflectively grabs all of the host's ViewModels and adds each of them to watchedObjects.

AndroidXFragmentDestroyWatcher.kt

    override fun onFragmentCreated(
      fm: FragmentManager,
      fragment: Fragment,
      savedInstanceState: Bundle?
    ) {
      // Add a ViewModel for the current Fragment when the Fragment executes onCreate
      ViewModelClearedWatcher.install(fragment, reachabilityWatcher)
    }



ViewModelClearedWatcher.kt
  fun install(
      storeOwner: ViewModelStoreOwner,
      reachabilityWatcher: ReachabilityWatcher
    ) {
      val provider = ViewModelProvider(storeOwner, object : Factory {
        @Suppress("UNCHECKED_CAST")
        override fun <T : ViewModel?> create(modelClass: Class<T>): T =
          ViewModelClearedWatcher(storeOwner, reachabilityWatcher) as T
      })
      provider.get(ViewModelClearedWatcher::class.java) // Add to storeOwner
    }

    // Reflectively grab all ViewModels of the current host
    viewModelMap = try {
      val mMapField = ViewModelStore::class.java.getDeclaredField("mMap")
      mMapField.isAccessible = true
      @Suppress("UNCHECKED_CAST")
      mMapField[storeOwner.viewModelStore] as Map<String, ViewModel>
    } catch (ignored: Exception) {
      null
    }

  // When this ViewModel is cleared, add all of the host's ViewModels to the watch list
  override fun onCleared() {
    viewModelMap?.values?.forEach { viewModel ->
      reachabilityWatcher.expectWeaklyReachable(
        viewModel, "${viewModel::class.java.name} received ViewModel#onCleared() callback"
      )
    }
  }

6.4 Service detection timing

Service detection is similar to Activity: the Service object is added to watchedObjects during onDestroy. However, since the framework does not expose lifecycle callbacks for services, LeakCanary obtains the service lifecycle through hooks:

ServiceWatcher.kt

  override fun install() {
    checkMainThread()
    check(uninstallActivityThreadHandlerCallback == null) {
      "ServiceWatcher already installed"
    }
    check(uninstallActivityManager == null) {
      "ServiceWatcher already installed"
    }
    try {
      // Hook the mCallback of ActivityThread.mH
      swapActivityThreadHandlerCallback { mCallback ->
        uninstallActivityThreadHandlerCallback = {
          swapActivityThreadHandlerCallback {
            mCallback
          }
        }
        // Proxy object
        Handler.Callback { msg ->
          // https://github.com/square/leakcanary/issues/2114
          // On some Motorola devices (Moto E5 and G6), the msg.obj returns an ActivityClientRecord
          // instead of an IBinder. This crashes on a ClassCastException. Adding a type check
          // here to prevent the crash.
          if (msg.obj !is IBinder) {
            return@Callback false
          }

          // Intercept the STOP_SERVICE message: a pre-step to grab the Service object that is about to be destroyed
          if (msg.what == STOP_SERVICE) {
            val key = msg.obj as IBinder
            activityThreadServices[key]?.let {
              onServicePreDestroy(key, it)
            }
          }
          // Execute original logic
          mCallback?.handleMessage(msg) ?: false
        }
      }

      // Hook the ActivityManager object
      swapActivityManager { activityManagerInterface, activityManagerInstance ->
        uninstallActivityManager = {
          swapActivityManager { _, _ ->
            activityManagerInstance
          }
        }
        // Dynamic proxy object
        Proxy.newProxyInstance(
          activityManagerInterface.classLoader, arrayOf(activityManagerInterface)
        ) { _, method, args ->
          // This is when the service is really destroyed; the Service object is not available here, hence the earlier pre-step onServicePreDestroy
          if (METHOD_SERVICE_DONE_EXECUTING == method.name) {
            val token = args!![0] as IBinder
            if (servicesToBeDestroyed.containsKey(token)) {
              // The service was wrapped in a weak reference earlier; schedule the retention check (runs after 5s)
              onServiceDestroyed(token)
            }
          }
          // Execute original logic
          try {
            if (args == null) {
              method.invoke(activityManagerInstance)
            } else {
              method.invoke(activityManagerInstance, *args)
            }
          } catch (invocationException: InvocationTargetException) {
            throw invocationException.targetException
          }
        }
      }
    } catch (ignored: Throwable) {
      SharkLog.d(ignored) { "Could not watch destroyed services" }
    }
  }

Look again at the onServiceDestroyed method

  private fun onServiceDestroyed(token: IBinder) {
    // Look up the service recorded in the pre-step by its token
    servicesToBeDestroyed.remove(token)?.also { serviceWeakReference ->
      serviceWeakReference.get()?.let { service ->
        // Add the service object to watchedObjects
        reachabilityWatcher.expectWeaklyReachable(
          service, "${service::class.java.name} received Service#onDestroy() callback"
        )
      }
    }
  }

The timing for watching each of the default object types has been described above. After being watched, an object is wrapped in a weak reference associated with a reference queue. The principle is:

  • if the object held by the weak reference has no other references, then after a GC the reference is added to the queue

For example, Activity A is wrapped in a weak reference weakA, and weakA is added to watchedObjects. A GC is triggered after 5s. If weakA shows up in the reference queue, A can be collected and weakA is removed from watchedObjects. Conversely, if weakA never shows up in the queue, A is still referenced by other objects; it is judged to be a memory leak, and the heap dump and analysis process is triggered.
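The queueing behavior itself is plain JVM machinery. A minimal sketch, with no LeakCanary involved, showing a weak reference turning up in its ReferenceQueue once the referent has been collected:

import java.lang.ref.ReferenceQueue
import java.lang.ref.WeakReference

fun main() {
  val queue = ReferenceQueue<Any>()
  var payload: Any? = Any()
  val weak = WeakReference<Any>(payload, queue)

  payload = null      // drop the only strong reference
  System.gc()         // request a GC (best effort; a real app cannot rely on exact timing)
  Thread.sleep(100)   // give the VM a moment to enqueue the reference

  // If the referent was collected, the reference appears in the queue;
  // if it never appears, LeakCanary would treat the object as retained.
  println(if (queue.poll() === weak) "collected" else "possibly retained")
}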

7. LeakCanary heap dump and analysis process

As described above, target objects are added to watchedObjects at specific moments; once an object is judged to have leaked, a heap dump is started. This process is relatively complex. Let's take an Activity leak as an example (the original article illustrates the flow with a UML diagram).

First, recall when the Activity object is watched:

ActivityWatcher.kt

private val lifecycleCallbacks =
    object : Application.ActivityLifecycleCallbacks by noOpDelegate() {
      override fun onActivityDestroyed(activity: Activity) {
        // Add the Activity to watchedObjects in onDestroy
        reachabilityWatcher.expectWeaklyReachable(
          activity, "${activity::class.java.name} received Activity#onDestroy() callback"
        )
      }
    }

Tip: noOpDelegate in the code above combines Kotlin interface delegation with a Java dynamic proxy, so you only implement the interface methods you care about while all other methods are handled by a no-op delegate, which keeps the code pleasantly short.
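Roughly, the helper can be implemented like this; treat it as a sketch of the idea rather than LeakCanary's exact source. A reified function hands every interface method to a do-nothing InvocationHandler:

import java.lang.reflect.InvocationHandler
import java.lang.reflect.Proxy

inline fun <reified T : Any> noOpDelegate(): T {
  val javaClass = T::class.java
  // Every method invoked on the proxy does nothing and returns null.
  val noOpHandler = InvocationHandler { _, _, _ -> null }
  return Proxy.newProxyInstance(
    javaClass.classLoader, arrayOf(javaClass), noOpHandler
  ) as T
}

// Usage: implement only the callback you care about and delegate the rest.
// object : Application.ActivityLifecycleCallbacks by noOpDelegate() { ... }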

The reachabilityWatcher here is actually an ObjectWatcher. Next, look at its expectWeaklyReachable method:

  @Synchronized override fun expectWeaklyReachable(
    watchedObject: Any,
    description: String
  ) {
    if (!isEnabled()) {
      return
    }
    // Clean up the objects that have been recycled
    removeWeaklyReachableObjects()
    val key = UUID.randomUUID()
      .toString()
    val watchUptimeMillis = clock.uptimeMillis()
    // Wrap in a weak reference; if there are no other references, the referent is collected during GC and the reference is enqueued on the queue
    val reference =
      KeyedWeakReference(watchedObject, key, description, watchUptimeMillis, queue)
    SharkLog.d {
      "Watching " +
        (if (watchedObject is Class<*>) watchedObject.toString() else "instance of ${watchedObject.javaClass.name}") +
        (if (description.isNotEmpty()) " ($description)" else "") +
        " with key $key"
    }

    // Add to the watched map; objects still in it after a GC are judged to be retained
    watchedObjects[key] = reference
    // Post a check that runs after the retained delay (5s by default)
    checkRetainedExecutor.execute {
      moveToRetained(key)
    }
  }
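The checkRetainedExecutor used above is essentially an Executor that defers every Runnable by the retained delay on the main thread. A minimal sketch of that idea (the names and the 5s value are illustrative, matching the default delay mentioned earlier):

import android.os.Handler
import android.os.Looper
import java.util.concurrent.Executor
import java.util.concurrent.TimeUnit

val mainHandler = Handler(Looper.getMainLooper())
val retainedDelayMillis = TimeUnit.SECONDS.toMillis(5)

// Every task handed to this executor runs on the main thread after the configured delay,
// which is how the "check 5s after watching" behavior comes about.
val checkRetainedExecutor = Executor { command ->
  mainHandler.postDelayed(command, retainedDelayMillis)
}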

Now look at the implementation of removeWeaklyReachableObjects. This method is called in several places to promptly clear the records of objects that have already been collected:

  private fun removeWeaklyReachableObjects() {
    // WeakReferences are enqueued as soon as the object to which they point to becomes weakly
    // reachable. This is before finalization or garbage collection has actually happened.
    var ref: KeyedWeakReference?
    do {
      /**
       * If the object held by a weak reference is collected, the reference is added to the associated queue.
       * In other words, when an object has been successfully collected, its reference will appear in the queue.
       */
      ref = queue.poll() as KeyedWeakReference?
      if (ref != null) {
        // The object has been collected, so remove its record from watchedObjects
        watchedObjects.remove(ref.key)
      }
    } while (ref != null)
  }

After this cleanup, LeakCanary wraps the Activity in a weak reference associated with a UUID-generated key and the reference queue, puts the key-to-reference pair into watchedObjects, and checkRetainedExecutor (sketched above) posts a runnable to the main thread, delayed by default by the 5s set during initialization. Now let's look at moveToRetained(key):

  @Synchronized private fun moveToRetained(key: String) {
    removeWeaklyReachableObjects()
    val retainedRef = watchedObjects[key]
    if (retainedRef != null) {
      retainedRef.retainedUptimeMillis = clock.uptimeMillis()
      // onObjectRetainedListeners holds the InternalLeakCanary object registered during initialization
      onObjectRetainedListeners.forEach { it.onObjectRetained() }
    }
  }

So first, let's go back to the initialization logic handled by InternalLeakCanary:

InternalLeakCanary.kt

  override fun invoke(application: Application) {
    // Store the incoming Application object
    _application = application

    checkRunningInDebuggableBuild()

    // Register this object as a retained-object listener on the ObjectWatcher
    AppWatcher.objectWatcher.addOnObjectRetainedListener(this)

    // Heap dumper
    val heapDumper = AndroidHeapDumper(application, createLeakDirectoryProvider(application))

    // Triggers a GC before retained objects are re-checked (a sketch follows this block)
    val gcTrigger = GcTrigger.Default

    val configProvider = { LeakCanary.config }

    val handlerThread = HandlerThread(LEAK_CANARY_THREAD_NAME)
    handlerThread.start()
    val backgroundHandler = Handler(handlerThread.looper)

    // Decides whether the heap-dump conditions are met
    heapDumpTrigger = HeapDumpTrigger(
      application, backgroundHandler, AppWatcher.objectWatcher, gcTrigger, heapDumper,
      configProvider
    )
    // Track whether the app is visible; foreground and background use different thresholds
    application.registerVisibilityListener { applicationVisible ->
      this.applicationVisible = applicationVisible
      heapDumpTrigger.onApplicationVisibilityChanged(applicationVisible)
    }
    registerResumedActivityListener(application)

    // Add a LeakCanary shortcut on the launcher
    addDynamicShortcut(application)

    // We post so that the log happens after Application.onCreate()
    mainHandler.post {
      // https://github.com/square/leakcanary/issues/1981
      // We post to a background handler because HeapDumpControl.iCanHasHeap() checks a shared pref
      // which blocks until loaded and that creates a StrictMode violation.
      backgroundHandler.post {
        SharkLog.d {
          when (val iCanHasHeap = HeapDumpControl.iCanHasHeap()) {
            is Yup -> application.getString(R.string.leak_canary_heap_dump_enabled_text)
            is Nope -> application.getString(
              R.string.leak_canary_heap_dump_disabled_text, iCanHasHeap.reason()
            )
          }
        }
      }
    }
  }
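The GcTrigger.Default used above boils down to "ask the VM to collect, give the reference queue a moment, then run finalizers". A sketch of that shape (not LeakCanary's exact source; the 100 ms pause is an assumption of this sketch):

object SimpleGcTrigger {
  fun runGc() {
    // Hint the VM to collect unreachable objects.
    Runtime.getRuntime().gc()
    try {
      // Give weak references time to be enqueued on their ReferenceQueue.
      Thread.sleep(100)
    } catch (e: InterruptedException) {
      throw AssertionError(e)
    }
    // Process objects that are pending finalization.
    System.runFinalization()
  }
}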

Back in onObjectRetained, the call eventually reaches HeapDumpTrigger's scheduleRetainedObjectCheck():

HeapDumpTrigger.kt

  fun scheduleRetainedObjectCheck(
    delayMillis: Long = 0L
  ) {
    val checkCurrentlyScheduledAt = checkScheduledAt
    if (checkCurrentlyScheduledAt > 0) { // a check is already scheduled; avoid scheduling duplicates
      return
    }
    // Record the current detection time
    checkScheduledAt = SystemClock.uptimeMillis() + delayMillis
    backgroundHandler.postDelayed({ // runs on a background thread
      checkScheduledAt = 0
      // Detect retained objects
      checkRetainedObjects()
    }, delayMillis)
  }

By recording this timestamp, frequent repeated checks are avoided, and the runnable is posted to a background thread:

HeapDumpTrigger.kt

  private fun checkRetainedObjects() {
    // Can a heap dump be triggered right now?
    val iCanHasHeap = HeapDumpControl.iCanHasHeap()

    val config = configProvider()

    if (iCanHasHeap is Nope) {
      if (iCanHasHeap is NotifyingNope) { // show a notification; tapping it lets the user force a heap dump
        // ... Omit code
      }
      return
    }

    // Gets the number of objects that are still retained
    var retainedReferenceCount = objectWatcher.retainedObjectCount

    if (retainedReferenceCount > 0) {
      gcTrigger.runGc() // Trigger GC
      retainedReferenceCount = objectWatcher.retainedObjectCount // re-count the objects that still have not been collected
    }

    /**
     *  Decide whether to start a heap dump based on the retained count.
     *  To minimize impact, foreground and background use different thresholds: at least 5 retained objects
     *  in the foreground by default, and at least 1 in the background (once the app has been in the
     *  background longer than the watch period)
     */
    if (checkRetainedCount(retainedReferenceCount, config.retainedVisibleThreshold)) return

    val now = SystemClock.uptimeMillis()
    val elapsedSinceLastDumpMillis = now - lastHeapDumpUptimeMillis
    if (elapsedSinceLastDumpMillis < WAIT_BETWEEN_HEAP_DUMPS_MILLIS) { // don't dump the heap again within one minute
      onRetainInstanceListener.onEvent(DumpHappenedRecently)
      showRetainedCountNotification(
        objectCount = retainedReferenceCount,
        contentText = application.getString(R.string.leak_canary_notification_retained_dump_wait)
      )
      scheduleRetainedObjectCheck(
        delayMillis = WAIT_BETWEEN_HEAP_DUMPS_MILLIS - elapsedSinceLastDumpMillis
      )
      return
    }

    dismissRetainedCountNotification()
    val visibility = if (applicationVisible) "visible" else "not visible"
    dumpHeap( // Trigger dump heap
      retainedReferenceCount = retainedReferenceCount,
      retry = true,
      reason = "$retainedReferenceCount retained objects, app is $visibility"
    )
  }

In other words, after the GC the number of entries still in watchedObjects is compared against a threshold: at least 5 in the foreground by default, at least 1 in the background. When the threshold is met, the heap dump is triggered.
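That foreground threshold is the retainedVisibleThreshold seen in the code above, and it is configurable. A sketch, using a hypothetical debug Application class, that lowers it so a single retained object already triggers a dump while the app is visible:

import android.app.Application
import leakcanary.LeakCanary

class ThresholdDebugApp : Application() {
  override fun onCreate() {
    super.onCreate()
    // Default is 5 retained objects while the app is visible; lower it for more aggressive dumping.
    LeakCanary.config = LeakCanary.config.copy(
      retainedVisibleThreshold = 1
    )
  }
}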

HeapDumpTrigger.kt

  private fun dumpHeap(
    retainedReferenceCount: Int,
    retry: Boolean,
    reason: String
  ) {
    saveResourceIdNamesToMemory()
    val heapDumpUptimeMillis = SystemClock.uptimeMillis()
    KeyedWeakReference.heapDumpUptimeMillis = heapDumpUptimeMillis
    // dump heap result
    when (val heapDumpResult = heapDumper.dumpHeap()) {
      is NoHeapDump -> { // fail
        // ... Omit code
      }
      is HeapDump -> { // success
        lastDisplayedRetainedObjectCount = 0
        lastHeapDumpUptimeMillis = SystemClock.uptimeMillis()
        objectWatcher.clearObjectsWatchedBefore(heapDumpUptimeMillis)
        // Start an IntentService to analyze the hprof file on a background thread
        HeapAnalyzerService.runAnalysis(
          context = application,
          heapDumpFile = heapDumpResult.file,
          heapDumpDurationMillis = heapDumpResult.durationMillis,
          heapDumpReason = reason
        )
      }
    }
  }

Finally, the dumped hprof file is handed to HeapAnalyzerService, which uses Shark to analyze the memory file. I'm not very familiar with Shark, so I won't go deeper into it here. Each step of the analysis has a corresponding callback:

OnAnalysisProgressListener.kt

  enum class Step {
    PARSING_HEAP_DUMP,
    EXTRACTING_METADATA,
    FINDING_RETAINED_OBJECTS,
    FINDING_PATHS_TO_RETAINED_OBJECTS,
    FINDING_DOMINATORS,
    INSPECTING_OBJECTS,
    COMPUTING_NATIVE_RETAINED_SIZE,
    COMPUTING_RETAINED_SIZE,
    BUILDING_LEAK_TRACES, // Build leak path
    REPORTING_HEAP_ANALYSIS
  }

It is worth mentioning that LeakCanary exposes a callback for the analysis result, so users can implement the interface and report the result to their own business server:

HeapAnalyzerService.kt#onHandleIntentInForeground()

    /**
     * A custom onHeapAnalyzedListener receives the HeapAnalysis report here;
     * LeakTraceWrapper.wrap(heapAnalysis.toString(), 120) can be used to format the trace
     */
    config.onHeapAnalyzedListener.onHeapAnalyzed(fullHeapAnalysis)

The officially recommended method is

class LeakUploader : OnHeapAnalyzedListener {

  // LeakCanary's default listener: shows the notification and saves the result (also used to avoid duplicate reports)
  val defaultListener = DefaultOnHeapAnalyzedListener.create()

  override fun onHeapAnalyzed(heapAnalysis: HeapAnalysis) {
    TODO("Upload heap analysis to server")

    // Delegate to default behavior (notification and saving result) 
    defaultListener.onHeapAnalyzed(heapAnalysis)
  }
}

class DebugExampleApplication : ExampleApplication() {

  override fun onCreate() {
    super.onCreate()
    // Use the custom OnHeapAnalyzedListener to handle your own reporting, then delegate to the original logic
    LeakCanary.config = LeakCanary.config.copy(
        onHeapAnalyzedListener = LeakUploader()
    )
  }
}

Finally, here is the official LeakCanary documentation on fixing a leak, which explains how to track down the source of a leak: https://square.github.io/leakcanary/fundamentals-fixing-a-memory-leak/

And lastly, an excerpt of LeakCanary's advice on using weak references to fix memory leaks; use weak references carefully and fix the problem at the source of the leak:

Memory leaks cannot be fixed by replacing strong references with weak references. It's a common solution when attempting to quickly address memory issues, however it never works. The bugs that were causing references to be kept longer than necessary are still there. On top of that, it creates more bugs as some objects will now be garbage collected sooner than they should. It also makes the code much harder to maintain.
