Bring a static model to life with one click: how to integrate the motion capture capability

How does a static model come alive?

HUAWEI 3D Modeling Kit provides a motion capture capability. Combining human body detection, model acceleration and compression, and a deep-learning-based monocular pose estimation algorithm, it can capture the 3D information of 24 skeleton key points of the human body using nothing more than the RGB camera of an ordinary phone. This makes it easy to animate a static model and bring it to life.

Implementing this function is easy. Let's walk through the integration steps!

Application scenario

The service is widely used in 3D content production, especially in games, film and television, healthcare, and other industries: for example, character driving and animation video production in UGC games, real-time driving of virtual streamers, and rehabilitation guidance in the medical industry.

Integration code

1. Development preparation

For detailed preparation steps, please refer to the official website of Huawei developer Alliance:

Configure AppGallery Connect development preparation Android 3D modeling service (huawei.com)

2. Project integration

Before starting API development, configure AppGallery Connect as described in section 3.3.1 of the official documentation. Also make sure your project has configured the Maven repository address of the HMS Core SDK and has completed the HMS Core SDK integration described in section 3.3.2.

2.1 Create a motion capture engine.
// Custom parameter configuration.
Modeling3dMotionCaptureEngineSetting setting = new Modeling3dMotionCaptureEngineSetting.Factory()
    // Set the detection mode.
    // Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON_QUATERNION: output the quaternions of the skeleton points for the detected human pose.
    // Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON: output the coordinates of the skeleton points for the detected human pose.
    .setAnalyzeType(Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON_QUATERNION
        | Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON)
    .create();
Modeling3dMotionCaptureEngine engine = Modeling3dMotionCaptureEngineFactory.getInstance().getMotionCaptureEngine(setting);
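In TYPE_3DSKELETON_QUATERNION mode, the engine describes each bone's orientation as a quaternion. To drive a 3D model's joints, a renderer typically needs that rotation as a matrix. The following is a minimal, self-contained sketch of the standard quaternion-to-rotation-matrix conversion; the class and method names are ours, not part of the SDK, and the (w, x, y, z) component order is an assumption about how you unpack the skeleton data.

```java
public final class BoneQuaternion {
    // Converts a unit quaternion (w, x, y, z) into a 3x3 rotation matrix
    // that can be applied to a model joint.
    public static double[][] toRotationMatrix(double w, double x, double y, double z) {
        // Normalize first to guard against drift in the incoming values.
        double n = Math.sqrt(w * w + x * x + y * y + z * z);
        w /= n; x /= n; y /= n; z /= n;
        return new double[][] {
            {1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)},
            {2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)},
            {2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)}
        };
    }
}
```

The identity quaternion (1, 0, 0, 0) yields the identity matrix, which is a quick sanity check when wiring the skeleton output to your model.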

Modeling3dFrame encapsulates the video frame or still image data from the camera, along with the related data processing logic.

You can process the video stream yourself and convert each video frame into a Modeling3dFrame object for detection; NV21 is the supported video frame format.

You can also create a Modeling3dFrame object from an android.graphics.Bitmap for the engine to detect still images. Supported image formats: JPG/JPEG/PNG.

// Create a Modeling3dFrame from a bitmap.
Modeling3dFrame bitmapFrame = Modeling3dFrame.fromBitmap(bitmap);

// Create a Modeling3dFrame from a video frame.
Modeling3dFrame.Property property = new Modeling3dFrame.Property.Creator()
    // Set the video frame format (NV21).
    .setFormatType(ImageFormat.NV21)
    // Set the video frame width.
    .setWidth(width)
    // Set the video frame height.
    .setHeight(height)
    // Set the rotation quadrant of the video frame.
    .setQuadrant(quadrant)
    // Set the sequence number of the video frame.
    .setItemIdentity(frameIndex)
    .create();
Modeling3dFrame frame = Modeling3dFrame.fromByteBuffer(byteBuffer, property);
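The ByteBuffer passed to fromByteBuffer must match the NV21 layout: a full-resolution luma (Y) plane of width × height bytes followed by an interleaved VU chroma plane at quarter resolution, for width × height × 3/2 bytes in total. A small plain-Java sketch (the helper class is ours, not from the SDK):

```java
import java.nio.ByteBuffer;

public final class Nv21Buffers {
    // Returns the byte count of one NV21 frame: Y plane + interleaved VU plane.
    public static int nv21Size(int width, int height) {
        return width * height * 3 / 2;
    }

    // Allocates a direct buffer sized for one NV21 frame of the given dimensions.
    public static ByteBuffer allocateFrame(int width, int height) {
        return ByteBuffer.allocateDirect(nv21Size(width, height));
    }
}
```

For a 1280×720 preview frame, for example, that is 1,382,400 bytes; passing a buffer of any other size for those dimensions would hand the engine malformed frame data.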
2.2 Call the synchronous or asynchronous method for motion capture detection.

Sample code for the asyncAnalyseFrame asynchronous method:

Task<List<Modeling3dMotionCaptureSkeleton>> task = engine.asyncAnalyseFrame(frame); 
task.addOnSuccessListener(new OnSuccessListener<List<Modeling3dMotionCaptureSkeleton>>() { 
    @Override 
    public void onSuccess(List<Modeling3dMotionCaptureSkeleton> results) { 
        // Detection succeeded. 
    } 
}).addOnFailureListener(new OnFailureListener() { 
    @Override 
    public void onFailure(Exception e) { 
        // Detection failed. 
    } 
});
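Since asyncAnalyseFrame returns immediately, a live camera preview can produce frames faster than the engine analyses them. A common pattern is to drop incoming frames while one detection is still in flight. The following single-slot gate is a sketch of ours, not an SDK feature:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public final class DetectionGate {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    // Returns true if the caller may submit this frame; false means drop it.
    public boolean tryAcquire() {
        return busy.compareAndSet(false, true);
    }

    // Call from both the success and failure listeners to accept frames again.
    public void release() {
        busy.set(false);
    }
}
```

In the camera callback you would call tryAcquire() before engine.asyncAnalyseFrame(frame), and call release() in both onSuccess and onFailure so a failed detection does not stall the pipeline.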

Sample code for the analyseFrame synchronous method:

SparseArray<Modeling3dMotionCaptureSkeleton> sparseArray = engine.analyseFrame(frame);
for (int i = 0; i < sparseArray.size(); i++) {
    // Process the detection results.
}
2.3 After detection is complete, stop the engine and release the detection resources.
try { 
    if (engine != null) { 
        engine.stop(); 
    } 
} catch (IOException e) { 
    // Exception handling. 
}

Demo

If you have any questions during integration, you can submit a ticket online and someone will answer it for you.

Learn more >>

Visit the HUAWEI Developers official website
Obtain the development guidance documents
Huawei Mobile Services open-source repositories: GitHub, Gitee

Follow us to get the latest HMS Core technical information first.

Keywords: Java Android Kotlin

Added by NoFear on Thu, 10 Mar 2022 09:45:34 +0200