Constrainable properties – What are they and where are they used?
A WebRTC MediaStreamTrack has properties and capabilities that define it; good examples are width, height and frameRate. An application can specify allowed ranges for a track object's properties and, using the WebRTC APIs, read back the values the browser actually set.
Think of a media source you get access to using getUserMedia(). How it is presented to the user or how it is configured when sent to the remote user is defined using these constrainable properties.
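As a sketch of how these constrainable properties look in application code, here is a constraints object in the range syntax the spec work has been converging on (how each browser interprets the keys, and "ideal" vs. "min"/"max" in particular, is exactly what the standardization effort described below is tightening up; older implementations used a different mandatory/optional form):

```javascript
// Ask the browser for a camera track whose width, height and frameRate
// fall within the given ranges. "ideal" is a preference; "min"/"max"
// are hard bounds the browser should honor.
const constraints = {
  audio: true,
  video: {
    width:  { min: 320, ideal: 1280, max: 1920 },
    height: { min: 240, ideal: 720,  max: 1080 },
    frameRate: { ideal: 30, max: 60 }
  }
};

// In a browser you would pass this to getUserMedia(), e.g.:
// navigator.mediaDevices.getUserMedia(constraints)
//   .then((stream) => { /* attach the stream to a <video> element */ });
```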
There are two important developments in the standards work that we would like to mention here.
More precise definitions
The syntax and allowed values were never precisely defined, which means their interpretation varies between WebRTC implementations. There is work underway at the W3C right now to define these browser interfaces (WebIDL) in a clearer and more precise way, so that applications get a consistent way to configure captured media that works across browsers. This will naturally improve compatibility between browsers.
Status at W3C: Will appear in the Editor’s draft in January
Impact on my application: More compatibility between browsers going forward, but this might require some changes in your application, as interfaces will be added and may even change in some of the browsers.
IANA registration
Currently these constrainable properties (the capabilities we talked about) are defined in the Media Capture and Streams specification document, and there is a request to add them to an IANA registry as well.
What is the IANA registry? IANA is the organization responsible for maintaining registries such as IP address allocation, domain name allocation, media types and many other symbols and numbers related to the internet.
Some W3C participants don’t see a need for them to be registered in IANA and would like the existing references to an IANA registry to be removed. If this happens, these references will simply go away from the spec.
Status at W3C: There are significant disagreements about this, so it may not happen.
Impact on my application: No real impact, it only relates to who has change control over these properties. See this as an FYI so you will not be surprised if they disappear from the spec one day.
getUserMedia sources
getUserMedia() is one of the primary APIs of WebRTC, used since its very early days to demonstrate capturing your camera, displaying the result on a web page and playing with that media a bit.
Originally, the media sources getUserMedia() could capture from included not only microphones and cameras but also files. It was decided that files would be removed from the list of possible sources. This doesn't mean that the browser may not allow the user to access a file; rather, that access needs to happen through other means, not through getUserMedia().
Another interesting media source is the screen. Work is taking place right now on adding the screen, or specific application windows, to the media source list; there are some privacy issues related to that. The screen sharing some of you may use in WebRTC calls today is implemented via non-standard browser extensions at the moment.
Status at W3C: Files as a source type have been removed from the Editors’ draft of the Media Capture and Streams specification.
Impact on my application: If you are accessing files from your WebRTC application with getUserMedia(), expect this to break, since it was never officially standardized and will not be. You had better have an alternative implementation ready before that happens.
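One possible fallback, sketched here under the assumption that the browser supports captureStream() on media elements (a capability whose availability varies across browsers), is to play the file through an audio or video element and capture a MediaStream from that element instead:

```javascript
// Hypothetical fallback for "file as a media source": instead of asking
// getUserMedia() for a file, play it through a media element and capture
// a MediaStream from the element. The file would typically come from an
// <input type="file"> element.
function streamFromFile(file, mediaElement) {
  mediaElement.src = URL.createObjectURL(file);
  mediaElement.play();
  if (typeof mediaElement.captureStream !== 'function') {
    throw new Error('captureStream() not supported in this browser');
  }
  // A MediaStream you can then add to a PeerConnection.
  return mediaElement.captureStream();
}
```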
Low-level media control and information
This topic is continuously discussed at WebRTC conferences: how do I get more control over the media capabilities, and how do I change them during the call?
People have been asking for more information and control over how media is sent. This essentially means changing bandwidth, resolution, frame rate, codec and other media parameters important for allowing the application control over media and media quality.
Very little of this has been possible until now; information was mainly available via the stats API. What controls should be possible was never really agreed upon, and different browsers offered different levels of control, making the whole area something of a moving target.
In practice, low-level control has been done by applications through changes to the SDP. This means that in-call changes require creating new SDP and sending it to the other side; SIP people can think of this as something like a re-INVITE.
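To illustrate the SDP-munging approach, here is a sketch that caps video bandwidth by editing the SDP before it is signaled. The `b=AS` (Application-Specific maximum, in kbps) line is standard SDP, but how faithfully each browser honors it varies, which is part of why people want proper control APIs instead:

```javascript
// Cap the bandwidth of the video m= section of an SDP blob by inserting
// a b=AS line right after that section's c= line. The application then
// has to re-signal the modified SDP to the other side.
function capVideoBandwidth(sdp, kbps) {
  const out = [];
  let inVideo = false;
  for (const line of sdp.split('\r\n')) {
    if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
    if (inVideo && line.startsWith('b=AS:')) continue; // drop any existing cap
    out.push(line);
    if (inVideo && line.startsWith('c=')) out.push('b=AS:' + kbps);
  }
  return out.join('\r\n');
}
```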
Coming your way are the RTCRtpSender/RTCRtpReceiver objects
RTCRtpSender and RTCRtpReceiver are objects that originated in ORTC and have been proposed for addition to WebRTC 1.0. These new objects will carry both informational properties and methods for media control.
These objects are going into the WebRTC 1.0 specification in January as basic placeholders. What exactly will be included in them is still under discussion. Google already has some implementation of this, but it is still not clear when it will come into a release.
The addition of RTCRtpSender/RTCRtpReceiver to 1.0, together with the ORTC community group (non-standards track) now committing to remain compatible with WebRTC 1.0, is a good sign that WebRTC and ORTC will eventually merge. Good news for all those worried about interoperability and compatibility between the two.
Status at W3C: Basic RTCRtpSender/RTCRtpReceiver objects will be in the Editors’ draft of the WebRTC specification in January.
Impact on my application: If you have a need to know more about the media sent and received and you want better control over media properties for optimization and quality reasons, this will be the answer to these needs.
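Since the contents of these objects are still under discussion, the following is only a sketch of what per-sender control might look like, assuming an ORTC-style shape where a sender exposes its current encoding parameters and lets the application update them without an SDP round trip. The mock object stands in for the browser-provided sender; none of the names here are final:

```javascript
// Hypothetical per-sender bitrate control, ORTC-style. getParameters()/
// setParameters() and the encodings array are assumptions about where
// the API may land, not a finalized interface.
function capSenderBitrate(sender, maxBitrateBps) {
  const params = sender.getParameters();
  for (const encoding of params.encodings) {
    encoding.maxBitrate = maxBitrateBps; // cap each encoding's bitrate
  }
  sender.setParameters(params);          // apply without re-signaling SDP
}

// Mock sender so the sketch is self-contained outside a browser:
const mockSender = {
  _params: { encodings: [{ maxBitrate: undefined }] },
  getParameters() { return this._params; },
  setParameters(p) { this._params = p; }
};
capSenderBitrate(mockSender, 500000);
```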
Control over DTLS-SRTP keys
Developers wanted more control over when DTLS-SRTP keys are being exchanged and reestablished. There has been a proposal at W3C for how this could work but there are disagreements about the specific approach.
Once agreed upon and implemented, this could be an important feature for server-to-server communication, assuming it allows a DTLS-SRTP connection to be reused for multiple calls.
Status at W3C: Under discussion.
Impact on my application: While this can be important to client-side implementations as well, the main benefit as we see it will be to server implementations. Given the overhead of opening a DTLS-SRTP connection and the number of such connections required on a server, more control and possible reuse will allow for server-side optimization.
WebRTC statistics
From the beginning of WebRTC, statistics have been provided through the RTCPeerConnection.getStats() method, but the statistics provided were rather basic, and people wanted more. There was a proposal to move all of the WebRTC statistics, along with the getStats() method definition, out of the WebRTC spec and into a dedicated document. The motivation was to let the WebRTC spec progress without waiting for the statistics to be finalized. A concern raised was that moving the getStats() method out would leave no definition of which stats must be supported in WebRTC implementations. We personally think a reasonable compromise would be to include the minimal must-implement stats in the WebRTC spec and keep all the others as an extension.
Status at W3C: Under discussion
Impact on my application: No immediate impact, but if you are using WebRTC statistics to provide session information to your users or for the internal operation of your application, you had better stay tuned for updates on this, monitoring which new statistics are added and which are eventually defined as mandatory. The mandatory set will determine which statistics your application can assume to be available regardless of the environment in which it runs.
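Whatever set of statistics ends up mandatory, a common pattern is to compute rates from two successive getStats() snapshots. Here is a sketch operating on plain objects shaped like the cumulative byte counters most implementations already report; the exact field names are an assumption, and your browser's stats dictionary may differ:

```javascript
// Compute outgoing bitrate (bits per second) from two stats snapshots.
// Each snapshot is assumed to carry a cumulative bytesSent counter and
// a timestamp in milliseconds.
function outgoingBitrate(prev, curr) {
  const deltaBytes = curr.bytesSent - prev.bytesSent;
  const deltaSeconds = (curr.timestamp - prev.timestamp) / 1000;
  if (deltaSeconds <= 0) throw new Error('snapshots out of order');
  return (deltaBytes * 8) / deltaSeconds;
}

// In a browser you would fill the snapshots from getStats() results
// polled at an interval, then feed the rate to your session dashboard.
```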
DTMF support
Currently, DTMF is inserted into audio tracks using an object created via RTCPeerConnection.createDTMFSender(). This function is associated with a PeerConnection, but a PeerConnection may include several media tracks (audio and video) as well as data channels.
As described above, the RTCRtpSender/RTCRtpReceiver objects will be added to WebRTC 1.0. Since an RTCRtpSender is uniquely associated with a single media track sent over a single PeerConnection, and since all media information and operations will be handled through this object, there is a desire to move the DTMF sending function to it as well.
Status at W3C: Under discussion
Impact on my application: If you are using DTMF in your application, be sure to follow up on this, as this change, if made, will impact your application's compatibility with browsers.
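If the move happens, applications will likely need to cope with both API shapes for a while. Here is a defensive sketch; the per-sender `dtmf` property is our guess at where the API may land, and only createDTMFSender() reflects what browsers expose today:

```javascript
// Send DTMF tones, preferring a per-sender DTMF interface if present
// (a hypothetical future shape), and falling back to the current
// RTCPeerConnection.createDTMFSender() API otherwise.
function sendTones(pc, sender, tones) {
  if (sender && sender.dtmf && typeof sender.dtmf.insertDTMF === 'function') {
    sender.dtmf.insertDTMF(tones, 100, 70);  // tone duration / gap in ms
  } else if (pc && typeof pc.createDTMFSender === 'function') {
    pc.createDTMFSender(sender.track).insertDTMF(tones, 100, 70);
  } else {
    throw new Error('DTMF not supported');
  }
}
```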