WebRTC series - image capture

I. Principle introduction

In WebRTC's peer-to-peer audio and video calls, every video stream, whether local or remote, originates at a camera and ends up on the screen (different platforms need different view controls for this). Along the way, each frame of data passes through a VideoRenderer, which hands it to the renderer that draws it on the view.
VideoRenderer defines the following callback interface:

public static interface Callbacks {
    void renderFrame(org.webrtc.VideoRenderer.I420Frame i420Frame);
}

This callback fires every time a new frame of data arrives, which is where we can intercept the frame.
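
For context, this is roughly how such a callback gets attached in the classic org.webrtc API used throughout this post; remoteVideoTrack is an assumption standing in for whatever VideoTrack your PeerConnection setup provides, not part of the original article:

// Minimal sketch: register a Callbacks implementation so every decoded frame
// is delivered to renderFrame() before anything is drawn.
VideoRenderer.Callbacks frameTap = new VideoRenderer.Callbacks() {
    @Override
    public void renderFrame(VideoRenderer.I420Frame frame) {
        // Inspect or copy the frame here, then hand it back to WebRTC.
        VideoRenderer.renderFrameDone(frame);
    }
};
// remoteVideoTrack (assumed) comes from your PeerConnection's remote stream.
remoteVideoTrack.addRenderer(new VideoRenderer(frameTap));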

II. Code implementation

// Copies the full contents of src into dst and rewinds dst for reading.
// Note: put() advances src's position to its limit, which is why section III
// recommends backing the frame up before converting it.
private static void copyPlane(ByteBuffer src, ByteBuffer dst) {
    src.position(0).limit(src.capacity());
    dst.put(src);
    dst.position(0).limit(dst.capacity());
}

// Repacks a WebRTC I420 frame into an android.graphics.YuvImage.
// Only ImageFormat.YV12 and ImageFormat.NV21 are supported; anything else returns null.
public static android.graphics.YuvImage ConvertTo(org.webrtc.VideoRenderer.I420Frame src, int imageFormat) {
    // Full-resolution Y plane plus two half-height chroma planes.
    byte[] bytes = new byte[src.yuvStrides[0] * src.height +
            src.yuvStrides[1] * src.height / 2 +
            src.yuvStrides[2] * src.height / 2];
    int[] strides = new int[3];
    switch (imageFormat) {
        default:
            return null;
        case android.graphics.ImageFormat.YV12: {
            // YV12 stores Y, then V, then U, so the source U (index 1) and
            // V (index 2) planes are written in swapped order.
            ByteBuffer tmp = ByteBuffer.wrap(bytes, 0, src.yuvStrides[0] * src.height);
            copyPlane(src.yuvPlanes[0], tmp);
            tmp = ByteBuffer.wrap(bytes, src.yuvStrides[0] * src.height, src.yuvStrides[2] * src.height / 2);
            copyPlane(src.yuvPlanes[2], tmp);
            tmp = ByteBuffer.wrap(bytes, src.yuvStrides[0] * src.height + src.yuvStrides[2] * src.height / 2, src.yuvStrides[1] * src.height / 2);
            copyPlane(src.yuvPlanes[1], tmp);
            strides[0] = src.yuvStrides[0];
            strides[1] = src.yuvStrides[2];
            strides[2] = src.yuvStrides[1];
            return new YuvImage(bytes, imageFormat, src.width, src.height, strides);
        }

        case android.graphics.ImageFormat.NV21: {
            // NV21 needs tightly packed planes: stride must equal width for Y
            // and width / 2 for each chroma plane.
            if (src.yuvStrides[0] != src.width)
                return null;
            if (src.yuvStrides[1] != src.width / 2)
                return null;
            if (src.yuvStrides[2] != src.width / 2)
                return null;

            // The Y plane is copied verbatim to the start of the output buffer.
            ByteBuffer tmp = ByteBuffer.wrap(bytes, 0, src.width * src.height);
            copyPlane(src.yuvPlanes[0], tmp);

            // Chroma is interleaved after the Y plane; NV21 stores V first, then U.
            byte[] tmparray = new byte[src.width / 2 * src.height / 2];
            tmp = ByteBuffer.wrap(tmparray, 0, src.width / 2 * src.height / 2);

            // V samples go to the even offsets of the interleaved VU plane...
            copyPlane(src.yuvPlanes[2], tmp);
            for (int row = 0; row < src.height / 2; row++) {
                for (int col = 0; col < src.width / 2; col++) {
                    bytes[src.width * src.height + row * src.width + col * 2] = tmparray[row * src.width / 2 + col];
                }
            }
            // ...and U samples go to the odd offsets.
            copyPlane(src.yuvPlanes[1], tmp);
            for (int row = 0; row < src.height / 2; row++) {
                for (int col = 0; col < src.width / 2; col++) {
                    bytes[src.width * src.height + row * src.width + col * 2 + 1] = tmparray[row * src.width / 2 + col];
                }
            }
            return new YuvImage(bytes, imageFormat, src.width, src.height, null);
        }
    }
}

// Sits between the video track and the real renderer so every frame can be
// inspected (and captured) before it is drawn.
private static class ProxyRenderer implements VideoRenderer.Callbacks {
    private VideoRenderer.Callbacks target;

    @Override
    synchronized public void renderFrame(VideoRenderer.I420Frame frame) {
        if (target == null) {
            Logging.d(TAG, "Dropping frame in proxy because target is null.");
            VideoRenderer.renderFrameDone(frame);
            return;
        }

        // Log the frame's metadata
        Logging.d(TAG, "height = " + frame.height
                + " width = " + frame.width
                + " rotationDegree = " + frame.rotationDegree
                + " textureId = " + frame.textureId
                + " rotatedHeight = " + frame.rotatedHeight()
                + " rotatedWidth = " + frame.rotatedWidth());

        // Save data: repack the frame as NV21 and compress it to a JPEG file.
        // Only buffer-based frames carry yuvPlanes; texture frames are skipped here,
        // and ConvertTo() returns null for unsupported strides.
        android.graphics.YuvImage yuvImage =
                frame.yuvPlanes == null ? null : ConvertTo(frame, ImageFormat.NV21);
        if (yuvImage != null) {
            java.io.File newFile = new File("/storage/emulated/0/1/webrtc_1");
            try (FileOutputStream fileOutputStream = new FileOutputStream(newFile)) {
                yuvImage.compressToJpeg(
                        new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), 100, fileOutputStream);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        target.renderFrame(frame);
    }

    synchronized public void setTarget(VideoRenderer.Callbacks target) {
        this.target = target;
    }
}

The main purpose of the code above is to save a WebRTC video frame to local storage as a JPEG. Note that, as written, renderFrame() rewrites the same file on every callback, so the file always holds the most recent frame; a sketch for saving only the first frame follows.
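
If the goal really is just the first image, one way to capture only the first frame and keep the JPEG compression off the rendering thread is sketched below. The captured flag, ioExecutor, captureFirstFrame() helper and output path are illustrative assumptions, not part of the original code; captureFirstFrame(frame) would be called from renderFrame() before target.renderFrame(frame):

// Sketch: convert on the render thread (the frame's buffers are only valid there),
// but compress and write the copied pixels on a background thread.
private final java.util.concurrent.atomic.AtomicBoolean captured =
        new java.util.concurrent.atomic.AtomicBoolean(false);
private final java.util.concurrent.ExecutorService ioExecutor =
        java.util.concurrent.Executors.newSingleThreadExecutor();

private void captureFirstFrame(VideoRenderer.I420Frame frame) {
    // Only buffer-based frames carry yuvPlanes that ConvertTo() can read.
    if (frame.yuvPlanes == null || !captured.compareAndSet(false, true)) {
        return;
    }
    final android.graphics.YuvImage yuvImage = ConvertTo(frame, ImageFormat.NV21);
    if (yuvImage == null) {
        captured.set(false); // conversion failed; try again on the next frame
        return;
    }
    ioExecutor.execute(new Runnable() {
        @Override
        public void run() {
            java.io.File file = new java.io.File("/storage/emulated/0/1/webrtc_first.jpg"); // hypothetical path
            try (java.io.FileOutputStream out = new java.io.FileOutputStream(file)) {
                yuvImage.compressToJpeg(
                        new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), 100, out);
            } catch (java.io.IOException e) {
                e.printStackTrace();
            }
        }
    });
}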

III. Precautions

Taking the screenshot changes the frame data's state (copyPlane() leaves the source buffers' positions at their limits), so the renderer may no longer be able to parse the frame.
Solution: create a backup of the frame before converting it:

VideoRenderer.I420Frame frame2 = new VideoRenderer.I420Frame(frame.width, frame.height, frame.rotationDegree, frame.yuvStrides, frame.yuvPlanes, 0);
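
Note that this constructor (at least in the classic org.webrtc API) only stores references to the same yuvPlanes buffers. If the backup needs its own pixel data, so the conversion cannot disturb the buffers handed to the renderer, the planes can be deep-copied first; the copyPlanes() helper below is an illustrative sketch, not part of the WebRTC API:

// Sketch: deep-copy the plane buffers so the snapshot code cannot disturb
// the buffers the renderer is about to read. Illustrative helper, not WebRTC API.
private static ByteBuffer[] copyPlanes(ByteBuffer[] planes) {
    ByteBuffer[] copies = new ByteBuffer[planes.length];
    for (int i = 0; i < planes.length; i++) {
        ByteBuffer src = planes[i].duplicate();          // shares data, independent position/limit
        src.position(0).limit(src.capacity());
        ByteBuffer copy = ByteBuffer.allocateDirect(src.capacity());
        copy.put(src);
        copy.position(0).limit(copy.capacity());
        copies[i] = copy;
    }
    return copies;
}

// Back the frame up with its own buffers, then convert frame2 instead of frame.
VideoRenderer.I420Frame frame2 = new VideoRenderer.I420Frame(
        frame.width, frame.height, frame.rotationDegree,
        frame.yuvStrides, copyPlanes(frame.yuvPlanes), 0);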
