WebRTC gets pictures from the camera and sends them to canvas

Previously, using the capabilities of WebRTC (see Open camera via browser), we opened the camera from the browser and displayed the preview in a video element.
Next, we capture a frame from the video and display it on the page.


First, prepare the interface and place the controls. The key part of the markup follows.

<video playsinline autoplay></video>
<button id="showVideo">Turn on the camera</button>
<button id="takeSnapshot">Take snapshot</button>
<button id="clearList">Clear record</button>
<canvas id="mainCanvas"></canvas>
<div id="list" style="display: grid; grid-template-columns: repeat(auto-fill, 100px);
    column-gap: 20px; row-gap: 20px;"></div>
  • The video element previews the camera stream
  • The three buttons open the camera, take a snapshot, and clear the records, respectively
  • The canvas displays the most recently captured picture
  • The div below holds the accumulated snapshot records; it is laid out with grid so multiple pictures display nicely

As usual, adapter-latest.js needs to be included:

<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>


Now implement the functionality.

Open the camera and preview

As when opening the camera before, use the getUserMedia method to get the video stream and hand it to the video element for playback.

const video = document.querySelector('video');
const constraints = {
  audio: false,
  video: true
};
// ....

function openCamera(e) {
  navigator.mediaDevices.getUserMedia(constraints)
    .then(gotStream)
    .catch(onError);
}

function gotStream(stream) {
  window.stream = stream;
  video.srcObject = stream;
}

function onError(error) {
  console.log('navigator.MediaDevices.getUserMedia error: ', error.message, error.name);
}
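The constraints above simply request any video track. For reference, getUserMedia also accepts per-track constraint objects; a minimal sketch (the resolution values here are our own illustrative choice, not from the article):

```javascript
// Richer constraints than audio:false/video:true above.
// "ideal" values are hints the browser tries to honor, falling back
// to the closest supported resolution if the camera cannot match them.
const constraints = {
  audio: false,
  video: {
    width: { ideal: 1280 },
    height: { ideal: 720 }
  }
};
// Passed to navigator.mediaDevices.getUserMedia(constraints) as before.
```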

Capture picture

Get the canvas from the page and give it an initial size (this is optional).

const mCanvas = window.canvas = document.querySelector('#mainCanvas');
mCanvas.width = 480;
mCanvas.height = 360;

// Take a snapshot
mCanvas.width = video.videoWidth;
mCanvas.height = video.videoHeight;
mCanvas.getContext('2d').drawImage(video, 0, 0, mCanvas.width, mCanvas.height);

When a capture is triggered, use getContext('2d') to obtain the CanvasRenderingContext2D object, then call its drawImage method to draw the current video frame onto the canvas.

Besides drawing to this one canvas, we can also create a new canvas on every capture (button click) and display the snapshots like an album.

const list = document.querySelector('#list'); // Holds the snapshot items

  // Add one snapshot item
  const divItem = document.createElement("div");
  divItem.style.display = "block";
  divItem.width = 100;
  divItem.height = divItem.width * video.videoHeight / video.videoWidth; // Keep the video's aspect ratio
  divItem.style.width = divItem.width + "px";
  divItem.style.height = divItem.height + "px";
  console.log("div item size: ", divItem.width, divItem.height);

  const c1 = document.createElement("canvas");
  c1.width = divItem.width;
  c1.height = divItem.height;
  c1.getContext('2d').drawImage(video, 0, 0, mCanvas.width, mCanvas.height, 0, 0, c1.width, c1.height);

  divItem.appendChild(c1);
  list.appendChild(divItem);


Each item is a div wrapping a canvas. First create divItem with document.createElement("div"),
then compute its size from the video's width and height and apply it via style.
document.createElement("canvas") creates c1, whose width and height are set to those of divItem; the picture is then drawn into it.
In this drawImage call, the first four rectangle parameters select the source region of the video, and the last four give the destination region on the new canvas.
The finished item is appended to our prepared list div.
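The thumbnail sizing above boils down to scaling a source size to a fixed width while preserving the aspect ratio. As a standalone sketch (the helper name is ours; we also round to whole pixels, which the original snippet does not):

```javascript
// Scale a source size to a target width, preserving the aspect ratio.
// Illustrative helper, not part of the original snippet.
function scaleToWidth(srcWidth, srcHeight, targetWidth) {
  return {
    width: targetWidth,
    height: Math.round(targetWidth * srcHeight / srcWidth)
  };
}

// e.g. a 640x480 video scaled to a 100px-wide thumbnail
console.log(scaleToWidth(640, 480, 100)); // { width: 100, height: 75 }
```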

Clear record

Clear the child items of the list div by fetching and removing them in a loop.

var child = list.lastElementChild;
while (child) {
    list.removeChild(child);
    child = list.lastElementChild;
}


To summarize: turn on the camera and display the video, draw video frames onto a canvas, and create multiple canvases to build a snapshot history.
The core is the canvas drawImage method. Pay attention to the parameters passed when drawing, as they can specify the drawing region.
In other words, drawing only a part of the video frame is also feasible.
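As a sketch of drawing only part of the frame, the source rectangle for a centered square crop can be computed like this (the helper name and the usage comment are our own, not from the article):

```javascript
// Compute a centered square crop rectangle for a given source size.
// Illustrative helper; the result supplies the first four rectangle
// arguments of the nine-argument drawImage form.
function centerSquareCrop(srcWidth, srcHeight) {
  const size = Math.min(srcWidth, srcHeight);
  return {
    sx: (srcWidth - size) / 2,
    sy: (srcHeight - size) / 2,
    sWidth: size,
    sHeight: size
  };
}

// For a 640x480 video this yields a 480x480 region starting at x = 80:
const crop = centerSquareCrop(640, 480);
console.log(crop); // { sx: 80, sy: 0, sWidth: 480, sHeight: 480 }
// In the browser it would be used as:
// ctx.drawImage(video, crop.sx, crop.sy, crop.sWidth, crop.sHeight, 0, 0, 100, 100);
```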

Key methods used in the example

  • getUserMedia
  • getContext
  • drawImage
  • createElement



Keywords: Web Development

Added by JUMC_Webmaster on Thu, 25 Nov 2021 04:00:02 +0200