What comes instead of SDP?
ORTC is still a wildcard to many.
What most people know about ORTC is:
- It comes from Microsoft (and Hookflash)
- It doesn’t use SDP
- It is referred to as WebRTC 1.1
There have been many write-ups about it: recently by Sinch, back in 2014 by Tsahi Levent-Levi, and several posts on WebRTCHacks.
At the WebRTC conference in Paris this past December, this was one of our training topics. The one thing many attendees were looking to understand, and didn't find a clear answer to, was: so ORTC doesn't use SDP. What replaces it? How do I tell the remote party what media I support?
The answer is surprisingly simple. This post comes to solve this mystery.
Object RTC is all about… objects
The O in ORTC stands for Object.
In many of the posts Dan Burnett and I have written on WebRTCStandards about media, we talk about various new objects such as RTCRtpSender and RTCRtpReceiver. We talk about them in the context of the extra control and capabilities they offer the developer. Take a quick look at those posts and you will see us talking about simulcast, early media, transceivers, video preference of motion vs. sharpness, codec priority reordering and many other capabilities.
There is one thing we kind of took for granted and didn't mention: this new, fine-grained control is related to ORTC, as it is provided through those objects.
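To make this concrete, here is a minimal sketch of that object-level control using the RTCRtpSender that WebRTC 1.0 borrowed from ORTC. The specific fields (maxBitrate and friends) depend on browser support, so treat it as an illustration rather than production code.

```ts
// Sketch: capping the bitrate of an outgoing video track through the
// RTCRtpSender object, no SDP munging involved. Field support varies by
// browser; this assumes a WebRTC 1.0 style RTCPeerConnection.
async function capVideoBitrate(pc: RTCPeerConnection, maxBitrate: number) {
  const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  if (!sender) return;

  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}];
  }
  params.encodings[0].maxBitrate = maxBitrate; // bits per second
  await sender.setParameters(params);          // applied directly on the object
}
```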
First takeaway for you from this post is: WebRTC is gradually adding ORTC functionalities into it. It is not the full ORTC story but just some parts of it.
SDP vs. ORTC
This is probably the more accurate question to ask, rather than WebRTC vs. ORTC, since ORTC will be part of a future version of the WebRTC standard (let's get WebRTC 1.0 really nailed down first).
WebRTC applications are doing the following things with SDP today (a minimal code sketch follows the list):
- Get the SDP from the browser (through WebRTC API)
- Send the SDP in some magic way to the remote party
- The remote party gets the SDP from the originating party and hands it to the browser (again via the WebRTC API)
- Then the remote party gets the SDP answer from the browser and, by similar magic, signals it back to the originating party, which in turn hands it over to the browser
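Here is the caller's side of that flow with the standard WebRTC 1.0 API. The signaling object is a placeholder for whatever "magic" channel the application uses; WebRTC does not define it.

```ts
// Caller side of the SDP dance sketched in the list above.
// `signaling` is a placeholder for the app's own channel (WebSocket,
// SIP over WebSocket, anything); WebRTC does not define it.
async function sendOffer(pc: RTCPeerConnection,
                         signaling: { send(msg: string): void }) {
  const offer = await pc.createOffer();       // get the SDP from the browser
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify(offer));      // ship it by whatever magic we have
}

async function onAnswer(pc: RTCPeerConnection, msg: string) {
  await pc.setRemoteDescription(JSON.parse(msg)); // hand the answer back to the browser
}
```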
In ORTC, on the other hand, there are objects and APIs.
The application doesn't get SDP from the browser or hand SDP to it; instead, it sets and gets various media parameters on those objects. These parameters replace SDP.
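For flavor, here is a hedged sketch of what that looks like with ORTC-style objects, roughly following the ORTC draft as implemented in Edge. The constructors are not part of the standard TypeScript DOM typings, and their exact signatures have shifted between drafts, so they are pulled off window here and should be read as an approximation.

```ts
// ORTC style: no SDP, just objects plus parameter/capability dictionaries.
// The ORTC constructors exist only in ORTC-capable browsers (Edge at the
// time of writing) and are not in the standard TS typings, hence the cast.
const w = window as any;

function buildSendSide(track: MediaStreamTrack) {
  const gatherer = new w.RTCIceGatherer({ gatherPolicy: 'all', iceServers: [] });
  const ice = new w.RTCIceTransport(gatherer);
  const dtls = new w.RTCDtlsTransport(ice);      // real code also supplies certificates

  // ORTC's RTCRtpSender is constructed directly from a track and a transport.
  const sender = new w.RTCRtpSender(track, dtls);

  // Capabilities describe what the engine can do; parameters tell it what to do.
  const caps = w.RTCRtpSender.getCapabilities('audio');
  sender.send({ codecs: caps.codecs, encodings: [{ active: true }] });

  // Before media actually flows, ice.start()/dtls.start() would be called
  // with the remote party's parameters (see the next section).
  return { gatherer, ice, dtls, sender };
}
```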
Second takeaway for you from this post is: ORTC is all about setting and getting media parameters.
Panic… How do I know the media parameters of the remote party?
We all know that WebRTC doesn't define signaling. In the case of SDP, it is sent between the parties in any way the application chooses: over secure WebSockets (SIP or proprietary), smoke signals, or whatever else the application picks.
The common thing in this case is that SDP is exchanged between the parties. This makes life easier for gateway (GW) developers: they get something they know, SDP, massage it as needed to fit their SIP/VoIP/whatever world, and are done with it.
In the case of ORTC, there is nothing defined that is sent between the parties.
What are the options for sending media information in ORTC?
- The application may build its own objects, or use the structures already defined in ORTC, and send them between the parties, as was done with SDP (see the sketch after this list)
- A server may dictate media settings to both parties by pushing this information to them and having each set the parameters on its browser
- An application that lives in a well-controlled environment and network may have these settings predefined and exchange no media information between the parties
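As an illustration of the first option, here is one way an application could serialize its local ORTC parameters and capabilities as JSON and push them over a WebSocket it already has. The message shape is invented for this sketch; ORTC deliberately does not define one.

```ts
// Invented message format: ORTC does not mandate any wire format, so this
// JSON blob simply plays the role SDP used to play. The getLocalParameters()
// and getCapabilities() calls follow the ORTC draft as implemented in Edge.
const w = window as any;

function publishLocalParameters(socket: WebSocket,
                                iceGatherer: any,
                                dtlsTransport: any) {
  const message = {
    type: 'ortc-parameters',
    ice: iceGatherer.getLocalParameters(),     // usernameFragment, password
    dtls: dtlsTransport.getLocalParameters(),  // fingerprints, role
    audio: w.RTCRtpSender.getCapabilities('audio'),
    video: w.RTCRtpSender.getCapabilities('video'),
  };
  socket.send(JSON.stringify(message));
}
```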
Third takeaway for you from this post is: What media parameters are sent, and how they are sent between the parties, is up to the application to decide.
The impact of ORTC on interoperability
I talked about this when Microsoft announced their support for ORTC in Edge.
It was also a concern raised in an audience survey I conducted at the last WebRTC conference.
Interoperability is twofold: APIs and what is sent over the wire.
API – There are adapters/shims being worked on to solve this. You can find a few on GitHub, but it looks like there is still work to be done here.
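The rough shape of what such a shim does is simple enough to sketch: detect which API family the browser exposes and branch. Real adapters, adapter.js for example, go much further and build an RTCPeerConnection-like object on top of the ORTC pieces.

```ts
// Feature detection only; the hard part (mapping SDP onto ORTC objects and
// back) is what the shims on GitHub actually spend their code on.
function detectApiFamily(): 'webrtc' | 'ortc' | 'none' {
  const w = window as any;
  if (typeof w.RTCPeerConnection !== 'undefined') {
    return 'webrtc';   // Chrome, Firefox: SDP-based API
  }
  if (typeof w.RTCIceGatherer !== 'undefined') {
    return 'ortc';     // Edge: object-based API
  }
  return 'none';
}
```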
What is sent over the wire – This has to do with the media descriptors (SDP or ORTC parameters) and with the media itself, i.e. the codecs.
The media descriptors part is solved at the application level. It is a world of islands anyway, and if you want to connect through a GW, then just as signaling needs to be taken care of, what is carried in that signaling will need to be taken care of case by case.
The codec front is mainly up to the browsers. Voice is pretty much solved. Video is the tricky part, as Edge implements its own Microsoft variation of H.264, but they are working on interoperable H.264. Since Firefox already supports both VP8 and H.264 and Google Chrome has H.264 support in the works, this issue will eventually be solved. Hopefully (and in compliance with the standard) we will have both VP8 and H.264 (and VP9) in all browsers.
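A quick, hedged way to check where a given browser stands on the codec front is to ask its RTP engine directly. The static getCapabilities() call exists both on WebRTC 1.0's RTCRtpSender and on ORTC's; older ORTC drafts named the codec field name where WebRTC 1.0 uses mimeType, so the sketch below checks both.

```ts
// List the video codecs the local RTP engine offers, e.g. to decide whether
// the two ends share H.264 or VP8 before routing a call through a GW.
function listVideoCodecs(): string[] {
  const sender = (window as any).RTCRtpSender;
  if (!sender || typeof sender.getCapabilities !== 'function') return [];
  const caps = sender.getCapabilities('video');
  return caps.codecs.map(
    (c: { mimeType?: string; name?: string }) => c.mimeType || c.name || 'unknown'
  );
}
```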
Fourth takeaway for you from this post is: Interoperability is not out of the box but you can manage that in your application.
Microsoft already demonstrated such interoperability.
Conclusion
ORTC is not a wildcard anymore. It was originally pushed by Hookflash and Microsoft, but today Google is part of this initiative, and eventually it will find its way into the standard.
4 takeaways:
- WebRTC is gradually adding ORTC functionalities into it. It is not the full ORTC story but just some parts of it
- ORTC is all about setting and getting media parameters
- What media parameters are sent and how they are sent between the parties is up to the application to decide
- Interoperability is not out of the box but you can manage that in your application
Further reading for the more tech-savvy:
Interoperability walkthrough by Philipp Hancke