In this tutorial, we will discuss all the steps required to take a picture or snapshot using the getUserMedia API from WebRTC and the standard video element. The API prompts the user for permission to access the connected cameras (and microphones, if audio is requested). Also note that getUserMedia is only available in a secure context, i.e. on localhost or on a website or web application served over HTTPS.
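As a quick sketch, both requirements can be checked up front before wiring up any UI. The helper name below is ours, not part of any API:

```javascript
// Hypothetical helper: returns true only when getUserMedia can be called.
// getUserMedia is exposed only in secure contexts (HTTPS or localhost).
function canUseGetUserMedia() {
    return typeof window !== "undefined" &&
        window.isSecureContext === true &&
        typeof navigator !== "undefined" &&
        navigator.mediaDevices !== undefined &&
        typeof navigator.mediaDevices.getUserMedia === "function";
}
```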
Step 1: Getting Ready
This section shows the basic HTML, CSS, and JS required to use the API to take a picture or snapshot.
CSS
The following is the basic CSS required for this tutorial.
.button-group, .play-area { border: 1px solid grey; padding: 1em 1%; margin-bottom: 1em; }
.button { padding: 0.5em; margin-right: 1em; }
.play-area-sub { width: 47%; padding: 1em 1%; display: inline-block; text-align: center; }
#capture { display: none; }
#snapshot { display: inline-block; width: 320px; height: 240px; }
HTML
This section shows the basic HTML required for this tutorial. It divides the page into a container for the buttons and another container holding the video and canvas elements, as shown below.
<!-- The buttons to control the stream -->
<div class="button-group">
    <button id="btn-start" type="button" class="button">Start Streaming</button>
    <button id="btn-stop" type="button" class="button">Stop Streaming</button>
    <button id="btn-capture" type="button" class="button">Capture Image</button>
</div>
<!-- Video Element & Canvas -->
<div class="play-area">
    <div class="play-area-sub">
        <h3>The Stream</h3>
        <video id="stream" width="320" height="240"></video>
    </div>
    <div class="play-area-sub">
        <h3>The Capture</h3>
        <canvas id="capture" width="320" height="240"></canvas>
        <div id="snapshot"></div>
    </div>
</div>
JS
We will start with the basic JS and create variables for all the buttons and for the video and canvas elements.
// The buttons to start & stop the stream and to capture the image
var btnStart = document.getElementById( "btn-start" );
var btnStop = document.getElementById( "btn-stop" );
var btnCapture = document.getElementById( "btn-capture" );
// The stream & capture
var stream = document.getElementById( "stream" );
var capture = document.getElementById( "capture" );
var snapshot = document.getElementById( "snapshot" );
Step 2: Start Streaming
In this section, we will update the JS and write code to detect the browser feature and to start and stop the video stream.
The code below declares the stream variable and attaches listeners to the start and stop buttons.
// The video stream
var cameraStream = null;
// Attach listeners
btnStart.addEventListener( "click", startStreaming );
btnStop.addEventListener( "click", stopStreaming );
The startStreaming function checks for getUserMedia support, requests the camera stream, and plays it in the video element.
// Start Streaming
function startStreaming() {
    var mediaSupport = 'mediaDevices' in navigator;

    if( !mediaSupport ) {
        alert( 'Your browser does not support media devices.' );
        return;
    }

    // Request the camera only if we are not already streaming
    if( null == cameraStream ) {
        navigator.mediaDevices.getUserMedia( { video: true } )
            .then( function( mediaStream ) {
                cameraStream = mediaStream;
                stream.srcObject = mediaStream;
                stream.play();
            })
            .catch( function( err ) {
                console.log( "Unable to access camera: " + err );
            });
    }
}
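For reference, the same flow can also be written with async/await. This is a sketch under the same assumptions as the tutorial code (the stream video element and the cameraStream variable are in scope); the function name startCameraAsync is ours:

```javascript
// Hypothetical async/await variant of startStreaming; assumes the same
// `stream` (video element) and `cameraStream` variables as above.
async function startCameraAsync() {
    if( !( 'mediaDevices' in navigator ) ) {
        alert( 'Your browser does not support media devices.' );
        return;
    }

    if( cameraStream !== null ) {
        return; // already streaming
    }

    try {
        cameraStream = await navigator.mediaDevices.getUserMedia( { video: true } );
        stream.srcObject = cameraStream;
        await stream.play();
    } catch( err ) {
        console.log( "Unable to access camera: " + err );
    }
}
```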
The stopStreaming function stops the camera tracks and resets the video element.
// Stop Streaming
function stopStreaming() {
    if( null != cameraStream ) {
        // Stop every track on the stream (there may be more than one)
        cameraStream.getTracks().forEach( function( track ) {
            track.stop();
        });

        stream.load();
        cameraStream = null;
    }
}
This is how we can start and stop streaming from the device camera directly into the video element.
Step 3: Capture Snapshot
In this step, we will capture a frame from the video element and render it on the canvas element. The canvas is then used to export the frame as an image, as shown below.
btnCapture.addEventListener( "click", captureSnapshot );

function captureSnapshot() {
    if( null != cameraStream ) {
        var ctx = capture.getContext( '2d' );
        var img = new Image();

        ctx.drawImage( stream, 0, 0, capture.width, capture.height );

        img.src = capture.toDataURL( "image/png" );
        img.width = 240;

        snapshot.innerHTML = '';
        snapshot.appendChild( img );
    }
}
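As an aside, if the goal is only to upload the frame rather than display it, canvas.toBlob() can produce a Blob directly and skip the base64 data-URI round-trip. This is a sketch assuming the same capture canvas; the function name captureAsBlob is ours:

```javascript
// Hypothetical variant: export the captured frame as a Blob directly.
// Assumes the same `capture` canvas as above; the callback receives
// a Blob (or null if the canvas could not be encoded).
function captureAsBlob( onBlob ) {
    capture.toBlob( function( blob ) {
        onBlob( blob );
    }, "image/png" );
}
```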
Step 4: Upload
To upload the frame captured on the canvas element, we first need to convert its data URI to a Blob, as shown below.
function dataURItoBlob( dataURI ) {
    var byteString = atob( dataURI.split( ',' )[ 1 ] );
    var mimeString = dataURI.split( ',' )[ 0 ].split( ':' )[ 1 ].split( ';' )[ 0 ];
    var buffer = new ArrayBuffer( byteString.length );
    var data = new DataView( buffer );

    for( var i = 0; i < byteString.length; i++ ) {
        data.setUint8( i, byteString.charCodeAt( i ) );
    }

    return new Blob( [ buffer ], { type: mimeString } );
}
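As a quick sanity check, the conversion can be exercised with a tiny hand-made data URI. The function is repeated here so the snippet is self-contained; "aGVsbG8=" is the base64 encoding of the five bytes of "hello":

```javascript
// Same conversion as dataURItoBlob above, exercised on a known input.
function dataURItoBlob( dataURI ) {
    var byteString = atob( dataURI.split( ',' )[ 1 ] );
    var mimeString = dataURI.split( ',' )[ 0 ].split( ':' )[ 1 ].split( ';' )[ 0 ];
    var buffer = new ArrayBuffer( byteString.length );
    var data = new DataView( buffer );

    for( var i = 0; i < byteString.length; i++ ) {
        data.setUint8( i, byteString.charCodeAt( i ) );
    }

    return new Blob( [ buffer ], { type: mimeString } );
}

var blob = dataURItoBlob( "data:text/plain;base64,aGVsbG8=" );
console.log( blob.type ); // "text/plain"
console.log( blob.size ); // 5
```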
We can then upload the captured image to the server using an AJAX request, as shown below.
var request = new XMLHttpRequest();
request.open( "POST", "/upload/url", true );

var data = new FormData();
var dataURI = snapshot.firstChild.getAttribute( "src" );
var imageData = dataURItoBlob( dataURI );

data.append( "image", imageData, "myimage" );
request.send( data );
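For newer code, the same upload can also be written with the fetch API. This is a sketch: the "/upload/url" endpoint is the placeholder from above, not a real route, and the function names are ours:

```javascript
// Hypothetical fetch-based variant of the XMLHttpRequest upload above.
// Builds the multipart body from the image Blob.
function buildUploadBody( imageBlob ) {
    var data = new FormData();
    data.append( "image", imageBlob, "myimage" );
    return data;
}

// Returns a Promise that resolves with the server response.
// "/upload/url" is a placeholder endpoint.
function uploadSnapshot( imageBlob ) {
    return fetch( "/upload/url", {
        method: "POST",
        body: buildUploadBody( imageBlob )
    } );
}
```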
These are the basic steps required to capture an image from the system device and upload the captured image to the server using AJAX.
A live example, without the file uploader, can be found on CodePen.