Detailed explanation of WebRTC new features of HTML5
1. Overview

WebRTC is the abbreviation of "Web Real Time Communication". It is mainly used to allow browsers to obtain and exchange video, audio, and data in real time.
WebRTC is divided into three APIs: getUserMedia, which captures audio and video streams, and RTCPeerConnection and RTCDataChannel, which are covered in section 3 below.

2. getUserMedia

2.1 Overview

First, check whether the browser supports the getUserMedia method:

navigator.getUserMedia || (navigator.getUserMedia =
  navigator.mozGetUserMedia ||
  navigator.webkitGetUserMedia ||
  navigator.msGetUserMedia);

if (navigator.getUserMedia) {
  // do something
} else {
  console.log('your browser not support getUserMedia');
}

Chrome 21, Opera 18 and Firefox 17 support this method; IE currently does not, and msGetUserMedia in the code above is included only for future compatibility. The getUserMedia method accepts three parameters.
getUserMedia(streams, success, error);

The parameters have the following meanings:

streams: an object indicating which multimedia devices to capture
success: a callback function, called when the multimedia devices are obtained successfully
error: a callback function, called when obtaining the multimedia devices fails
For example:

navigator.getUserMedia({ video: true, audio: true }, onSuccess, onError);

The code above requests real-time streams from the camera and the microphone. When a web page calls getUserMedia, the browser asks the user whether to allow access to these devices; if the user refuses, the callback function onError is called. When an error occurs, that callback receives an Error object whose code property indicates the cause of the failure (for example, PERMISSION_DENIED when the user denies access).
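For illustration, a minimal onError callback could simply report that code (the log message here is just an example):

function onError(error) {
  // called when the user refuses access or the devices cannot be opened
  console.log('getUserMedia failed, error code: ' + error.code);
}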
2.2 Capturing the camera image

To display the camera image on the page, first place a video element in the HTML:

<video id="webcam"></video>

Then, get this element in code:

function onSuccess(stream) {
  var video = document.getElementById('webcam');
  // more code
}

Finally, bind the src attribute of this element to the data stream, and the image captured by the camera can be displayed:
function onSuccess(stream) {
  var video = document.getElementById('webcam');
  if (window.URL) {
    video.src = window.URL.createObjectURL(stream);
  } else {
    video.src = stream;
  }
  video.autoplay = true; // or video.play();
}

One of the main uses of this technique is to let users take photos of themselves with the camera, as sketched below.
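A minimal, hypothetical snapshot function, assuming the video element above and a canvas element with id "photo" added to the page (both ids are illustrative):

// draw the current video frame onto a canvas and read it back as an image
function takePhoto() {
  var video = document.getElementById('webcam');
  var canvas = document.getElementById('photo');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
  return canvas.toDataURL('image/png'); // base64-encoded PNG snapshot
}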
2.3 Capturing microphone sound

Capturing sound through the browser is relatively complicated and requires the Web Audio API.

function onSuccess(stream) {
  // create an audio context
  var audioContext = window.AudioContext || window.webkitAudioContext;
  var context = new audioContext();

  // feed the sound stream into the audio context
  var audioInput = context.createMediaStreamSource(stream);

  // set up a volume (gain) node
  var volume = context.createGain();
  audioInput.connect(volume);

  // buffers used to cache the recorded sound
  var bufferSize = 2048;
  var leftchannel = [];
  var rightchannel = [];
  var recordingLength = 0;

  // create the processing node; the second and third arguments of
  // createScriptProcessor (formerly createJavaScriptNode) mean that both
  // input and output are two-channel (stereo)
  var recorder = context.createScriptProcessor(bufferSize, 2, 2);

  // recording callback: clone the samples of the left and right channels
  // and push them into the caches
  recorder.onaudioprocess = function (e) {
    console.log('recording');
    var left = e.inputBuffer.getChannelData(0);
    var right = e.inputBuffer.getChannelData(1);
    leftchannel.push(new Float32Array(left));
    rightchannel.push(new Float32Array(right));
    recordingLength += bufferSize;
  };

  // connect the volume node to the processing node; in other words, the
  // volume node is the middle link between the input and the output
  volume.connect(recorder);

  // connect the processing node to the destination, which can be the
  // speakers or an audio file
  recorder.connect(context.destination);
}
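When the recording stops, the cached chunks have to be merged into one continuous buffer before they can be encoded, for example as a WAV file. A minimal sketch of that merge step, reusing the leftchannel, rightchannel and recordingLength variables from the code above:

// flatten an array of Float32Array chunks into one continuous buffer
function mergeBuffers(channelBuffers, totalLength) {
  var result = new Float32Array(totalLength);
  var offset = 0;
  for (var i = 0; i < channelBuffers.length; i++) {
    result.set(channelBuffers[i], offset);
    offset += channelBuffers[i].length;
  }
  return result;
}

// e.g. var leftSamples = mergeBuffers(leftchannel, recordingLength);
//      var rightSamples = mergeBuffers(rightchannel, recordingLength);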
3. Real-time data exchange

The other two WebRTC APIs, RTCPeerConnection and RTCDataChannel, are used for point-to-point connections between browsers and for point-to-point data communication, respectively. RTCPeerConnection carries a browser prefix: it is webkitRTCPeerConnection in Chrome and mozRTCPeerConnection in Firefox. Google maintains a library, adapter.js, to abstract away the differences between browsers.

var dataChannelOptions = {
  ordered: false,          // do not guarantee order
  maxRetransmitTime: 3000  // in milliseconds
};

var peerConnection = new RTCPeerConnection();

// Establish your peer connection using your signaling channel here
var dataChannel = peerConnection.createDataChannel("myLabel", dataChannelOptions);

dataChannel.onerror = function (error) {
  console.log("Data Channel Error:", error);
};

dataChannel.onmessage = function (event) {
  console.log("Got Data Channel Message:", event.data);
};

dataChannel.onopen = function () {
  dataChannel.send("Hello World!");
};

dataChannel.onclose = function () {
  console.log("The Data Channel is Closed");
};
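The comment about the signaling channel glosses over the offer/answer exchange that actually connects the two peers. The following is only a rough sketch; sendToRemotePeer and onRemoteAnswer are hypothetical helpers standing in for whatever signaling transport (WebSocket, XHR, and so on) the application uses:

var pc = new RTCPeerConnection();

// 1. the caller creates an offer and sends it to the other peer
pc.createOffer(function (offer) {
  pc.setLocalDescription(offer);
  sendToRemotePeer({ type: 'offer', sdp: offer }); // hypothetical signaling helper
}, function (err) {
  console.log('createOffer failed:', err);
});

// 2. ICE candidates discovered locally also travel over the signaling channel
pc.onicecandidate = function (event) {
  if (event.candidate) {
    sendToRemotePeer({ type: 'candidate', candidate: event.candidate });
  }
};

// 3. when the remote peer's answer comes back, install it
function onRemoteAnswer(answer) {
  pc.setRemoteDescription(new RTCSessionDescription(answer));
}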