
Detailed explanation of WebRTC new features of HTML5

黄舟 (Original)
2017-03-30 11:27

1. Overview

WebRTC stands for "Web Real-Time Communication". It lets browsers capture and exchange video, audio, and data in real time.

WebRTC is divided into three APIs.

  • MediaStream (also known as getUserMedia)

  • RTCPeerConnection

  • RTCDataChannel

getUserMedia is mainly used to obtain video and audio information, and the latter two APIs are used for data exchange between browsers.

2. getUserMedia

2.1 Introduction

First, check whether the browser supports the getUserMedia method.

navigator.getUserMedia || 
    (navigator.getUserMedia = navigator.mozGetUserMedia ||  navigator.webkitGetUserMedia || navigator.msGetUserMedia);

if (navigator.getUserMedia) {
    //do something
} else {
    console.log('your browser not support getUserMedia');
}

Chrome 21, Opera 18, and Firefox 17 support this method; IE currently does not. The msGetUserMedia in the code above is there only for future compatibility.

The getUserMedia method accepts three parameters.

getUserMedia(streams, success, error);

The meaning is as follows:

  • streams: an object specifying which media to capture (for example, { video: true, audio: true })

  • success: a callback invoked with the media stream when capture succeeds

  • error: a callback invoked with an Error object when capture fails

Usage is as follows:

navigator.getUserMedia({
    video: true,
    audio: true
}, onSuccess, onError);

The above code is used to obtain real-time information from the camera and microphone.
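The first argument need not contain only booleans; constraints can also request specific capture parameters. A hedged sketch using the newer constraint syntax from the Media Capture spec (older implementations used a mandatory/optional form instead):

```javascript
// Request audio plus video at a preferred resolution.
// The "ideal" form asks the browser to get as close as it
// can to these dimensions rather than fail outright.
var constraints = {
    audio: true,
    video: {
        width:  { ideal: 1280 },
        height: { ideal: 720 }
    }
};

// With the callback API described above:
// navigator.getUserMedia(constraints, onSuccess, onError);
```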

If a web page calls getUserMedia, the browser asks the user whether to allow access. If the user refuses, the onError callback is invoked.

When an error occurs, the callback receives an Error object whose code property takes one of the following values:

  • PERMISSION_DENIED: The user refused to provide information.

  • NOT_SUPPORTED_ERROR: The browser does not support the specified media type.

  • MANDATORY_UNSATISFIED_ERROR: No media stream was received for the specified media type.
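A minimal error handler built on these codes might look like the following sketch (the message strings are illustrative, not part of the API):

```javascript
// Map each documented error code to a readable message.
function errorMessage(code) {
    switch (code) {
        case 'PERMISSION_DENIED':
            return 'The user refused to provide access.';
        case 'NOT_SUPPORTED_ERROR':
            return 'The browser does not support the requested media type.';
        case 'MANDATORY_UNSATISFIED_ERROR':
            return 'No media stream of the requested type was received.';
        default:
            return 'Unknown getUserMedia error: ' + code;
    }
}

// Error callback passed as the third argument to getUserMedia.
function onError(error) {
    console.log(errorMessage(error.code));
}
```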

2.2 Displaying camera images

To display images captured by the user's camera on a web page, you need to first place a video element on the web page. The image is displayed in this element.

<video id="webcam"></video>

Then, use code to get this element.

function onSuccess(stream) {
    var video = document.getElementById('webcam');
    // more code
}

Finally, bind the element's src attribute to the data stream, and the image captured by the camera will be displayed.

function onSuccess(stream) {
    var video = document.getElementById('webcam');
    if (window.URL) {
        video.src = window.URL.createObjectURL(stream);
    } else {
        video.src = stream;
    }
    video.autoplay = true; // or video.play();
}

A typical use of this technique is letting users take a photo of themselves with the camera.
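Taking such a photo amounts to drawing the current video frame onto a canvas. A minimal sketch, assuming the video element with id "webcam" from above (the scaling helper and the 640-pixel limit are illustrative, not from the original article):

```javascript
// Compute output dimensions that fit within maxWidth while
// preserving the video's aspect ratio (pure helper, no DOM needed).
function snapshotSize(videoWidth, videoHeight, maxWidth) {
    var scale = Math.min(1, maxWidth / videoWidth);
    return {
        width: Math.round(videoWidth * scale),
        height: Math.round(videoHeight * scale)
    };
}

// Draw the current frame of the video element onto a canvas
// and read it back as a PNG data URL (browser-only part).
function takeSnapshot() {
    var video = document.getElementById('webcam');
    var size = snapshotSize(video.videoWidth, video.videoHeight, 640);
    var canvas = document.createElement('canvas');
    canvas.width = size.width;
    canvas.height = size.height;
    canvas.getContext('2d').drawImage(video, 0, 0, size.width, size.height);
    return canvas.toDataURL('image/png');
}
```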

2.3 Capturing microphone sound

Capturing sound through the browser is relatively complicated and requires the use of Web Audio API.

function onSuccess(stream) {
    // Create an audio context.
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var context = new AudioContext();
    // Feed the media stream into the context.
    var audioInput = context.createMediaStreamSource(stream);
    // Set up a gain node to control the volume.
    var volume = context.createGain();
    audioInput.connect(volume);
    // Buffers that accumulate the recorded samples.
    var leftchannel = [];
    var rightchannel = [];
    var recordingLength = 0;
    var bufferSize = 2048;
    // Create the processing node; the second and third arguments
    // mean two input channels and two output channels (stereo).
    // (createScriptProcessor was formerly called createJavaScriptNode.)
    var recorder = context.createScriptProcessor(bufferSize, 2, 2);
    // The recording callback: copy the left and right channel
    // samples into their respective buffers.
    recorder.onaudioprocess = function (e) {
        console.log('recording');
        var left = e.inputBuffer.getChannelData(0);
        var right = e.inputBuffer.getChannelData(1);
        // Clone the samples.
        leftchannel.push(new Float32Array(left));
        rightchannel.push(new Float32Array(right));
        recordingLength += bufferSize;
    };
    // Connect the gain node to the processing node; in other words,
    // the gain node sits between input and output.
    volume.connect(recorder);
    // Connect the processing node to the destination, which can be
    // the speakers or an audio file.
    recorder.connect(context.destination);
}
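To turn the cached left/right buffers into a playable file, the chunks must eventually be merged and the two channels interleaved. A sketch of that step in plain JavaScript (the function names are illustrative, not from the original article):

```javascript
// Merge an array of Float32Array chunks into one flat buffer.
function mergeBuffers(chunks, totalLength) {
    var result = new Float32Array(totalLength);
    var offset = 0;
    for (var i = 0; i < chunks.length; i++) {
        result.set(chunks[i], offset);
        offset += chunks[i].length;
    }
    return result;
}

// Interleave left and right samples as L0 R0 L1 R1 ...,
// which is the layout a stereo WAV file expects.
function interleave(left, right) {
    var result = new Float32Array(left.length + right.length);
    var index = 0;
    for (var i = 0; i < left.length; i++) {
        result[index++] = left[i];
        result[index++] = right[i];
    }
    return result;
}
```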

3. Real-time data exchange

Of WebRTC's other two APIs, RTCPeerConnection handles point-to-point connections between browsers, and RTCDataChannel handles point-to-point data communication.

RTCPeerConnection carries a browser prefix: webkitRTCPeerConnection in Chrome and mozRTCPeerConnection in Firefox. Google maintains a library, adapter.js, that abstracts away these browser differences.

var dataChannelOptions = {
    ordered: false,          // do not guarantee order
    maxRetransmitTime: 3000  // in milliseconds
};

var peerConnection = new RTCPeerConnection();

// Establish your peer connection using your signaling channel here
var dataChannel = peerConnection.createDataChannel("myLabel", dataChannelOptions);

dataChannel.onerror = function (error) {
  console.log("Data Channel Error:", error);
};

dataChannel.onmessage = function (event) {
  console.log("Got Data Channel Message:", event.data);
};

dataChannel.onopen = function () {
  dataChannel.send("Hello World!");
};

dataChannel.onclose = function () {
  console.log("The Data Channel is Closed");
};

4. Reference links

[1] Andi Smith, Get Started with WebRTC

[2] Thibault Imbert, From microphone to .WAV with: getUserMedia and Web Audio

[3] Ian Devlin, Using the getUserMedia API with the HTML5 video and canvas elements

[4] Eric Bidelman, Capturing Audio & Video in HTML5

[5] Sam Dutton, Getting Started with WebRTC

[6] Dan Ristic, WebRTC data channels

[7] Ruanyf, WebRTC
