Camera preview with WebRTC

In the previous article on WebRTC server construction, we set up the server environment required by WebRTC, which mainly consists of three servers:
the room server, the signaling server, and the TURN (NAT traversal) server.

In the following articles we will learn, step by step, how to implement audio and video calls with WebRTC. Today we will learn how to preview camera data using WebRTC.

One note up front: most of the practice in the coming articles is based on the official prebuilt WebRTC library, so almost all of the code is Java or Kotlin. JNI-level code will not be involved for now, which keeps the barrier to entry low.


Importing the dependency library

First, add the WebRTC dependency to the Android Studio project:

 implementation 'org.webrtc:google-webrtc:1.0.+'

Dynamic permissions

First of all, the CAMERA permission is definitely required. If audio data is also needed, the RECORD_AUDIO permission is required as well.

Android developers should be no strangers to requesting permissions at runtime, and there are many related open source libraries on GitHub, so I won't cover that here.
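For completeness, here is a minimal sketch of requesting both permissions without any third-party library, using the AndroidX Activity Result API (this assumes AndroidX Activity 1.2+ and that the permissions are also declared in AndroidManifest.xml; the class name is just for illustration):

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class PermissionDemoActivity : AppCompatActivity() {

    // Register the launcher once; the callback receives a map of permission -> granted
    private val permissionLauncher = registerForActivityResult(
        ActivityResultContracts.RequestMultiplePermissions()
    ) { results ->
        if (results.values.all { it }) {
            // All permissions granted: safe to open the camera / microphone
        }
    }

    override fun onStart() {
        super.onStart()
        // Ask for both permissions before starting any capture
        permissionLauncher.launch(
            arrayOf(Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO)
        )
    }
}
```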

Preview camera

As a complete peer-to-peer communication solution, WebRTC fully encapsulates the capture and preview of camera data. Developers can call the relevant APIs directly,
without writing OpenGL texture rendering or other low-level logic themselves.

If you are interested in how to render camera data with OpenGL, see the author's previous article: Preview CameraX camera data with OpenGL.

Using WebRTC to preview camera data mainly involves the following steps:

1. Create EglBase and SurfaceViewRenderer

An important function of EglBase is to provide the EGL rendering context and EGL version compatibility.

SurfaceViewRenderer is a rendering view that inherits from SurfaceView and provides OpenGL rendering of image data.

// Create EglBase
rootEglBase = EglBase.create()
camera_preview.init(rootEglBase?.eglBaseContext, null)
// Hardware acceleration
camera_preview.setEnableHardwareScaler(true)
camera_preview.setScalingType(RendererCommon.ScalingType.SCALE_ASPECT_FILL)

2. Create VideoCapturer

VideoCapturer provides the camera data and controls parameters such as resolution and frame rate.

Camera1 and Camera2 are handled uniformly through the CameraEnumerator interface:

   // Create VideoCapturer
    private fun createVideoCapture(): VideoCapturer? {
        return if (Camera2Enumerator.isSupported(this)) {
            createCameraCapture(Camera2Enumerator(this))
        } else {
            createCameraCapture(Camera1Enumerator(true))
        }
    }


   // The actual VideoCapturer creation, preferring the front-facing camera
    private fun createCameraCapture(enumerator: CameraEnumerator): VideoCapturer? {
        val deviceNames = enumerator.deviceNames
        // First, try to find front facing camera
        for (deviceName in deviceNames) {
            if (enumerator.isFrontFacing(deviceName)) {

                val videoCapture: VideoCapturer? = enumerator.createCapturer(deviceName, null)
                if (videoCapture != null) {
                    return videoCapture
                }
            }
        }

        for (deviceName in deviceNames) {
            if (!enumerator.isFrontFacing(deviceName)) {
                val videoCapture: VideoCapturer? = enumerator.createCapturer(deviceName, null)
                if (videoCapture != null) {
                    return videoCapture
                }
            }
        }
        return null
    }

After the VideoCapturer is created, it must be initialized with a SurfaceTextureHelper; otherwise an uninitialized exception is thrown when the preview starts:

  // initialization
        mSurfaceTextureHelper =
            SurfaceTextureHelper.create("CaptureThread", rootEglBase?.eglBaseContext)
        // Create VideoSource
        val videoSource = mPeerConnectionFactory!!.createVideoSource(false)
        mVideoCapture?.initialize(
            mSurfaceTextureHelper,
            applicationContext,
            videoSource.capturerObserver
        )


    /**
     * Create PeerConnectionFactory
     */
    private fun createPeerConnectionFactory(context: Context?): PeerConnectionFactory? {
        val encoderFactory: VideoEncoderFactory
        val decoderFactory: VideoDecoderFactory
        encoderFactory = DefaultVideoEncoderFactory(
            rootEglBase?.eglBaseContext,
            false /* enableIntelVp8Encoder */,
            true
        )
        decoderFactory = DefaultVideoDecoderFactory(rootEglBase?.eglBaseContext)
        PeerConnectionFactory.initialize(
            PeerConnectionFactory.InitializationOptions.builder(context)
                .setEnableInternalTracer(true)
                .createInitializationOptions()
        )
        val builder = PeerConnectionFactory.builder()
            .setVideoEncoderFactory(encoderFactory)
            .setVideoDecoderFactory(decoderFactory)
        builder.setOptions(null)
        return builder.createPeerConnectionFactory()
    }

3. Create VideoTrack

VideoTrack is a video track, analogous to an AudioTrack for audio. Its job is to take the video data produced by the VideoCapturer through the VideoSource and deliver it to the SurfaceViewRenderer for rendering and display.

val VIDEO_TRACK_ID = "1" //"ARDAMSv0"
// Create VideoTrack
 mVideoTrack = mPeerConnectionFactory!!.createVideoTrack(VIDEO_TRACK_ID,
            videoSource
        )
        mVideoTrack?.setEnabled(true)

      // Bind render View
        mVideoTrack?.addSink(camera_preview)

4. Start capturing with the VideoCapturer

The preview can be started in the appropriate Activity lifecycle callback:

   override fun onResume() {
        super.onResume()
        // Turn on camera preview
        mVideoCapture?.startCapture(
            VIDEO_RESOLUTION_WIDTH,
            VIDEO_RESOLUTION_HEIGHT,
            VIDEO_FPS
        )
    }
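The snippet above only starts the capturer; in a real app the capture and the WebRTC objects should also be stopped and released when the Activity goes away. A minimal cleanup sketch (assuming the same member names as in this article; the exact ordering is a judgment call, roughly the reverse of creation):

```kotlin
override fun onPause() {
    super.onPause()
    // Stop delivering frames while the Activity is not visible
    mVideoCapture?.stopCapture()
}

override fun onDestroy() {
    super.onDestroy()
    // Release native resources, roughly in reverse order of creation
    mVideoCapture?.dispose()
    mSurfaceTextureHelper?.dispose()
    mVideoTrack?.dispose()
    camera_preview.release()
    mPeerConnectionFactory?.dispose()
    rootEglBase?.release()
}
```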

Complete code

CapturePreviewActivity.kt:

package com.fly.webrtcandroid

import android.content.Context
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import org.webrtc.*

/**
 * Camera preview
 */
class CapturePreviewActivity : AppCompatActivity() {

    val VIDEO_TRACK_ID = "1" //"ARDAMSv0"

    private val VIDEO_RESOLUTION_WIDTH = 1280
    private val VIDEO_RESOLUTION_HEIGHT = 720
    private val VIDEO_FPS = 30


    // Global EGL rendering context
    private var rootEglBase: EglBase? = null
    private var mVideoTrack: VideoTrack? = null

    private var mPeerConnectionFactory: PeerConnectionFactory? = null

    //Texture rendering
    private var mSurfaceTextureHelper: SurfaceTextureHelper? = null

    private var mVideoCapture: VideoCapturer? = null

    private val camera_preview by lazy {
        findViewById<SurfaceViewRenderer>(R.id.camera_preview)
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_capture_preview)

        rootEglBase = EglBase.create()

        camera_preview.init(rootEglBase?.eglBaseContext, null)

        // Render on top of other views (media overlay)
        camera_preview.setZOrderMediaOverlay(true)
        // Hardware acceleration
        camera_preview.setEnableHardwareScaler(true)
        camera_preview.setScalingType(RendererCommon.ScalingType.SCALE_ASPECT_FILL)

        mPeerConnectionFactory = createPeerConnectionFactory(this)

        mVideoCapture = createVideoCapture()

        // initialization
        mSurfaceTextureHelper =
            SurfaceTextureHelper.create("CaptureThread", rootEglBase?.eglBaseContext)
        // Create VideoSource
        val videoSource = mPeerConnectionFactory!!.createVideoSource(false)
        mVideoCapture?.initialize(
            mSurfaceTextureHelper,
            applicationContext,
            videoSource.capturerObserver
        )

        mVideoTrack = mPeerConnectionFactory!!.createVideoTrack(VIDEO_TRACK_ID,
            videoSource
        )
        mVideoTrack?.setEnabled(true)
        mVideoTrack?.addSink(camera_preview)

    }

    /**
     * Create PeerConnectionFactory
     */
    private fun createPeerConnectionFactory(context: Context?): PeerConnectionFactory? {
        val encoderFactory: VideoEncoderFactory
        val decoderFactory: VideoDecoderFactory
        encoderFactory = DefaultVideoEncoderFactory(
            rootEglBase?.eglBaseContext,
            false /* enableIntelVp8Encoder */,
            true
        )
        decoderFactory = DefaultVideoDecoderFactory(rootEglBase?.eglBaseContext)
        PeerConnectionFactory.initialize(
            PeerConnectionFactory.InitializationOptions.builder(context)
                .setEnableInternalTracer(true)
                .createInitializationOptions()
        )
        val builder = PeerConnectionFactory.builder()
            .setVideoEncoderFactory(encoderFactory)
            .setVideoDecoderFactory(decoderFactory)
        builder.setOptions(null)
        return builder.createPeerConnectionFactory()
    }

    private fun createVideoCapture(): VideoCapturer? {
        return if (Camera2Enumerator.isSupported(this)) {
            createCameraCapture(Camera2Enumerator(this))
        } else {
            createCameraCapture(Camera1Enumerator(true))
        }
    }

    private fun createCameraCapture(enumerator: CameraEnumerator): VideoCapturer? {
        val deviceNames = enumerator.deviceNames
        // First, try to find front facing camera
        for (deviceName in deviceNames) {
            if (enumerator.isFrontFacing(deviceName)) {

                val videoCapture: VideoCapturer? = enumerator.createCapturer(deviceName, null)
                if (videoCapture != null) {
                    return videoCapture
                }
            }
        }

        for (deviceName in deviceNames) {
            if (!enumerator.isFrontFacing(deviceName)) {
                val videoCapture: VideoCapturer? = enumerator.createCapturer(deviceName, null)
                if (videoCapture != null) {
                    return videoCapture
                }
            }
        }
        return null
    }

    override fun onResume() {
        super.onResume()
        // Turn on camera preview
        mVideoCapture?.startCapture(
            VIDEO_RESOLUTION_WIDTH,
            VIDEO_RESOLUTION_HEIGHT,
            VIDEO_FPS
        )
    }

}

Layout file activity_capture_preview.xml:

<?xml version="1.0" encoding="utf-8"?>
<org.webrtc.SurfaceViewRenderer xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/camera_preview"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".CapturePreviewActivity">

</org.webrtc.SurfaceViewRenderer>

Follow me and let's make progress together. There is more to life than coding!

Keywords: webrtc

Added by Altair on Sat, 22 Jan 2022 10:56:40 +0200