Implementing Face++ face detection with Kotlin Flow

The following steps can be used for face detection:

  1. Load the model file and initialize the interface
  2. Obtain authorization
  3. Call face detection to obtain key points
  4. Notify the UI to draw the key points
  5. Release interface resources

Steps 3 and 4 are repeated every time a new camera frame arrives, so we can abstract them as a data stream. Once Face++ detects a face, it continuously produces face coordinate data, so the whole detection process is a producer of face coordinates, which maps naturally onto the concept of a Flow.

Face++ processes camera frames to extract the face key points, so the camera is the data provider for the Face++ face detection API. The following figure shows the process of data flow:
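The producer side of this data flow can be sketched in code: the camera callback pushes frames into the channel that the detect flow later reads from. Note that `Frame`, `frameChannel`, and `onPreviewFrame` below are illustrative stand-ins for the SDK's `FacePPImage` and the camera preview callback, not the real APIs:

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.runBlocking

// Stand-in for the SDK's FacePPImage (hypothetical fields).
class Frame(val data: ByteArray, val width: Int, val height: Int)

// CONFLATED keeps only the newest frame, so a slow detector
// never backs up the camera.
val frameChannel = Channel<Frame>(Channel.CONFLATED)

// Called from the camera preview callback; trySend never suspends,
// so the camera thread is never blocked.
fun onPreviewFrame(data: ByteArray, width: Int, height: Int) {
    frameChannel.trySend(Frame(data, width, height))
}

fun main() = runBlocking {
    onPreviewFrame(ByteArray(4), 2, 2)
    val frame = frameChannel.receive()
    println(frame.width * frame.height) // prints 4
}
```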

Seen this way, implementing the process with Flow is a natural fit. Let's build the detect flow step by step.

The first step is to create a flow through the flow builder:

private val imageChannel = Channel<FacePPImage>()

flow {
    while (currentCoroutineContext().isActive) {
        emit(imageChannel.receive())
    }
}

Here a Channel receives and forwards the image data. Before the detect flow can process images, Face++ still needs some initialization work, which is done in onStart.
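As an aside, the same channel-backed producer can be written without the manual isActive loop by using receiveAsFlow() from kotlinx.coroutines, which handles cancellation through the Flow machinery. A minimal sketch:

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.flow.receiveAsFlow
import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// Collects everything a producer pushes through a channel-backed flow.
suspend fun collectFrames(count: Int): List<Int> = coroutineScope {
    val channel = Channel<Int>()
    // receiveAsFlow() replaces the manual isActive/receive loop.
    val detectFlow = channel.receiveAsFlow()
    launch {
        repeat(count) { channel.send(it) }
        channel.close() // completes the flow
    }
    detectFlow.toList()
}

fun main() = runBlocking {
    println(collectFrames(3)) // prints [0, 1, 2]
}
```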

flow.onStart {
    var ret = -1
    context.assets.open("megviifacepp_model").use { ios ->
        modelBuffer = ByteArray(ios.available())
        ios.read(modelBuffer)
        FaceppApi.getInstance().setLogLevel(4)
        ret = FaceppApi.getInstance().initHandle(modelBuffer)
    }
    if (ret != 0) {
        Log.d("dragon_debug", " onStart open failed!")
        throw RuntimeException("init")
    }
    if (requestTakeLicense && modelBuffer != null) {
        Log.d("dragon_debug", " onStart takeLicense")
        ret = takeLicense(context, modelBuffer!!)
    }
    if (ret != 0) {
        Log.d("dragon_debug", " onStart takeLicense failed!")
        throw RuntimeException("takeLicense")
    }
    ret = FaceDetectApi.getInstance().initFaceDetect()
    DLmkDetectApi.getInstance().initDLmkDetect()
    if (ret != 0) {
        if (requestTakeLicense) {
            Log.d("dragon_debug", " onStart initFaceDetect error")
            throw RuntimeException("error")
        }
        requestTakeLicense = true
        Log.d("dragon_debug", " onStart initFaceDetect retry exception")
        throw RuntimeException("initFace")
    }
    val config = FaceDetectApi.getInstance().faceppConfig
    config.face_confidence_filter = 0.6f
    config.detectionMode = FaceDetectApi.FaceppConfig.DETECTION_MODE_TRACKING
    FaceDetectApi.getInstance().faceppConfig = config
}

The Face++ initialization consists of loading the model, authorizing the detection interface, initializing the face detection interface, and a few other steps.

Interface authorization in Face++ is special: authorization only needs to be performed when the existing license has expired. The expired-license case is therefore handled together with Flow's retryWhen.

flow.retryWhen { cause, attempt ->
    Log.d("dragon_debug", " retryWhen $cause attempt $attempt")
    if (attempt > 1) {
        false
    } else {
        (cause as? RuntimeException)?.message?.equals("initFace") ?: false
    }
}

We first try to initialize the Face++ detection interface without taking a license. If initFaceDetect fails (typically because the license has expired), onStart sets requestTakeLicense and throws RuntimeException("initFace"). retryWhen catches exactly this exception and triggers a retry; when onStart runs again, it obtains the interface authorization before initializing.
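This handshake can be reduced to a runnable sketch: the first onStart fails with "initFace" and sets the license flag, retryWhen permits exactly one retry for that message, and the second attempt succeeds. All names here are illustrative, not the SDK's:

```kotlin
import kotlinx.coroutines.flow.collect
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.flow.onStart
import kotlinx.coroutines.flow.retryWhen
import kotlinx.coroutines.runBlocking

// Simulates the init -> fail -> retry -> licensed handshake.
suspend fun detectWithRetry(): List<String> {
    var requestTakeLicense = false
    val log = mutableListOf<String>()
    flowOf("frame")
        .onStart {
            if (!requestTakeLicense) {
                // Simulates initFaceDetect() failing on an expired license.
                requestTakeLicense = true
                log += "init failed"
                throw RuntimeException("initFace")
            }
            log += "licensed, init ok"
        }
        .retryWhen { cause, attempt ->
            // Retry once, and only for the "initFace" failure.
            attempt < 1 && cause.message == "initFace"
        }
        .collect { log += "detect $it" }
    return log
}

fun main() = runBlocking {
    println(detectWithRetry())
    // prints [init failed, licensed, init ok, detect frame]
}
```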

Once initialization and authorization succeed, we can use the Face++ face detection interface. Here, map converts each image into face coordinate data, and the conversion is performed by the Face++ face detection interface.

flow.map { image ->
    val faces = FaceDetectApi.getInstance().detectFace(image)
    faces.forEach { face ->
        FaceDetectApi.getInstance().getLandmark(face, FaceDetectApi.LMK_84, true)
    }
    block.invoke(faces)
    faces
}

The resulting face coordinate data is passed to the drawing code through the block callback.

The Face++ interface resources are released in onCompletion:

flow.onCompletion {
    Log.d("dragon_debug", " onCompletion ")
    FaceppApi.getInstance().ReleaseHandle()
    DLmkDetectApi.getInstance().releaseDlmDetect()
}
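Putting the operators together, the whole pipeline can be sketched with stand-in strings in place of the real SDK calls. Closing the image channel completes the flow, which is what drives onCompletion and the release; the comments mark where each Face++ call would go:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Sketch of the detect pipeline with stand-ins for the SDK calls.
suspend fun runPipeline(frames: List<String>): List<String> = coroutineScope {
    val events = mutableListOf<String>()
    val images = Channel<String>()

    val job = images.receiveAsFlow()
        .onStart { events += "init" }         // load model, take license
        .map { img -> "faces($img)" }         // detectFace + getLandmark
        .onEach { events += it }              // notify drawing via block
        .onCompletion { events += "release" } // ReleaseHandle()
        .flowOn(Dispatchers.Default)
        .launchIn(this)

    frames.forEach { images.send(it) }
    images.close() // completes the flow, triggering onCompletion
    job.join()
    events
}

fun main() = runBlocking {
    println(runPipeline(listOf("img1", "img2")))
    // prints [init, faces(img1), faces(img2), release]
}
```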

Complete code: https://github.com/mjlong1231...

Original address: https://blog.csdn.net/mjlong1...

Keywords: Android kotlin

Added by kaser on Tue, 15 Feb 2022 15:04:40 +0200