WebRTC in iOS 11 and Safari 11, is it mission accomplished?
Yesterday at WWDC 2017 it finally happened: Apple announced WebRTC in Safari 11 as part of the iOS 11 release. That was expected; I personally hoped it would happen earlier, but better late than never.
Before I join the party of tweets and posts about this important news, let's do an inventory check to see how, and if, this move by Apple is going to serve the general public and the WebRTC community.
In a previous post I spoke about Apple's WebRTC signals and what we should hope for.
Let's look into the details provided by Apple: what exactly was announced and what information was published. As is customary with Apple, not much.
On the Safari 11.0 webpage we see the following information:
Highlights of Safari 11.0
- Web conferencing. Implements peer-to-peer conferencing with the WebRTC standard.
WebKit
Surprisingly, nothing on WebRTC here.
Web Developers
In the Web Developers section, under Media we see:
- New in Safari 11.0 – Support for real-time communication using WebRTC.
- New in Safari 11.0 – Camera and microphone access.
- Added support for the Media Capture API.
- Websites can access camera and microphone streams from a user's device (user permission is required); a short code sketch of this follows below.
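To make the camera and microphone item concrete, here is a minimal sketch of what requesting access with the Media Capture API looks like from a web page. The constraint values and the video-element handling are illustrative choices of mine, not something Apple has published.

```typescript
// Minimal sketch: request camera and microphone access and attach the
// resulting stream to a <video> element. Constraint values are illustrative.
async function startCapture(videoElement: HTMLVideoElement): Promise<MediaStream> {
  const constraints: MediaStreamConstraints = {
    audio: true,
    video: { width: { ideal: 640 }, height: { ideal: 480 } },
  };
  // The browser prompts the user for permission at this point.
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  videoElement.srcObject = stream;
  await videoElement.play();
  return stream;
}
```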
Apple’s WebRTC inventory checklist
Support for WebRTC by Apple is required to allow websites to use WebRTC technology when users reach them with Safari. Support is required across all Apple devices.
It is also required in order to make life easy for App developers who want to include WebRTC features in their applications.
We would also expect to see support for WebRTC in an interoperable way, both in terms of APIs and codecs. It would be great to see the media side added in an efficient way, meaning: have mercy on battery and CPU.
WebRTC in Safari
As it looks, this is a done deal. This means it will be possible to use WebRTC features on websites when browsing them with Safari.
Will it work out of the box, or will it require API and codec adaptations? That's another question; we'll get to it below.
Another thing to look at in the details Apple provides is that WebRTC is mentioned both in the Safari highlights ("Implements peer-to-peer conferencing with the WebRTC standard") and in the developers section that talks about support for media APIs.
Does the mention in the highlights section imply more than just support for the APIs? Not clear.
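One way to answer the "out of the box" question at runtime is plain feature detection. The sketch below checks for the standard entry points and, defensively, for a webkit-prefixed RTCPeerConnection; the prefixed name is an assumption on my side, not something Apple has confirmed.

```typescript
// Sketch: report which WebRTC entry points the browser exposes.
// The webkit-prefixed fallback is a defensive assumption, not a confirmed Safari name.
function detectWebRTCSupport(): { peerConnection: boolean; getUserMedia: boolean } {
  const w = window as any;
  const peerConnection =
    typeof w.RTCPeerConnection === 'function' ||
    typeof w.webkitRTCPeerConnection === 'function';
  const getUserMedia =
    !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
  return { peerConnection, getUserMedia };
}

console.log(detectWebRTCSupport());
```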
WebRTC in WebKit and WebView
Having WebRTC in Safari is great for occasional users: you get to a website and click to contact the owner by voice/video. The browser use case is also relevant where users tend not to install an App, or where the App doesn't follow AppStore terms (maybe because of the industry type, such as gambling, or for other reasons).
Still, most of our smartphone usage is within Apps, so WebRTC needs to be available for App developers as well.
This is where support for WebRTC in WebKit and WebView comes into play.
On Android, WebRTC has been part of WebView for a long time and is updated automatically as the browser itself is updated.
What will be Apple’s take on this? Still an open question.
Codecs
The expectation would be to have at least G.711 and Opus on the voice side and VP8, VP9 and H.264 on the video side.
We will need to see what Apple includes for us in their WebRTC bucket.
According to @saghul, it is H.264 only for now (thanks to Fippo for drawing my attention to this).
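If H.264 really is the only video codec, services that default to VP8 will want to check what the browser is willing to offer. A rough way to do that, sketched below, is to create a throwaway RTCPeerConnection, generate an offer and look at the rtpmap lines in the SDP; treat it as an illustration rather than a robust detection method.

```typescript
// Sketch: list the video codecs a browser offers by inspecting a locally
// generated SDP offer. Video codecs use a 90000 Hz clock, so the filter
// below also catches helpers like rtx/red/ulpfec. Illustrative only.
async function offeredVideoCodecs(): Promise<string[]> {
  const pc = new RTCPeerConnection();
  try {
    pc.addTransceiver('video', { direction: 'recvonly' });
    const offer = await pc.createOffer();
    const codecs = new Set<string>();
    for (const match of (offer.sdp ?? '').matchAll(/a=rtpmap:\d+ ([A-Za-z0-9-]+)\/90000/g)) {
      codecs.add(match[1]);
    }
    return [...codecs];
  } finally {
    pc.close();
  }
}

offeredVideoCodecs().then((codecs) => console.log('Video codecs offered:', codecs));
```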
Talking about codecs, it will be interesting to see if HW acceleration is included and who has access to it: Apple only, or App developers as well?
Mobile SoCs (Systems on Chip) typically have H.264 HW acceleration; a limited number of them also include HW acceleration for the VP8 codec. This is important for reducing battery consumption and the load placed on the CPU.
As someone involved in SFU technology, I will be interested to see whether HW acceleration is also available for decoding multiple streams.
APIs
On the APIs front there are a few things to look at:
ORTC or WebRTC 1.0 – When Microsoft first came out with WebRTC in Edge, they decided to go for ORTC. Later on they took a step back and added support for WebRTC 1.0. It will be interesting to see what Apple has decided to do.
API compatibility – Google recently announced they will close the standards compliance gaps in their WebRTC implementation. These gaps are the main cause of interoperability issues between Chrome and Firefox, a topic we covered on our monthly WebRTCStandards webinar yesterday with the help of Jan-Ivar from Mozilla. What Apple's take will be on the API side, and on SDP Plan B vs. Unified Plan, is yet to be clarified. As it looks right now, it is Plan B, but we should hope this is simply an interoperability-with-Chrome consideration and that it will change as Chrome makes its way to Unified Plan (see the detection sketch below).
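A rough way to see which SDP semantics a browser uses is to add two video transceivers to a throwaway peer connection and count the m=video sections in the resulting offer: Unified Plan produces one section per track, while Plan B folds everything into a single section. The sketch below is a heuristic of mine, not an official detection method; older Plan B implementations may not even expose addTransceiver, which is a hint in itself.

```typescript
// Sketch: infer Plan B vs. Unified Plan by counting m=video sections
// after adding two video transceivers. Heuristic, illustrative only.
async function detectSdpSemantics(): Promise<'unified-plan' | 'plan-b'> {
  const pc = new RTCPeerConnection();
  try {
    pc.addTransceiver('video', { direction: 'recvonly' });
    pc.addTransceiver('video', { direction: 'recvonly' });
    const offer = await pc.createOffer();
    const videoSections = (offer.sdp ?? '').match(/^m=video/gm) ?? [];
    // Unified Plan: one m=video section per transceiver; Plan B: a single section.
    return videoSections.length >= 2 ? 'unified-plan' : 'plan-b';
  } finally {
    pc.close();
  }
}

detectSdpSemantics().then((semantics) => console.log('SDP semantics:', semantics));
```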
Conclusion
The official adoption of WebRTC by Apple is great news. As always, the devil is in the details. Before starting the party, let's see Apple's approach to WebRTC: first and foremost, whether they keep it out of their walled garden, and second, how they handle areas related to interoperability.
Alan Percy says
A poorly understood issue is hardware acceleration for video. Without it, any application based on WebRTC video will get a reputation as a battery-draining leech.
Amir Zmora says
The reality is that it is not always possible. Issues include codec support in the HW accelerator and access to HW acceleration from applications for all streams.
When possible, it is of course desirable.
Chet Berry says
Will simulcast transmission of the video capture be supported for the video codecs?
Amir Zmora says
Hi Chet,
There is no simple answer to this, as it requires specific support per codec in WebRTC to generate the proper SDP.
Additionally, when using simulcast, a few streams (typically 3) are generated. Not all platforms will allow HW-accelerated processing of multiple streams.
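For reference, this is roughly how a sender can request three simulcast layers with the rid-based sendEncodings API in browsers that support it; whether each layer actually gets a hardware encoder is exactly the open question above. The rid names and bitrates are illustrative.

```typescript
// Sketch: request three rid-based simulcast layers on a video sender.
// Whether the browser/hardware actually produces all layers is implementation-dependent.
async function publishWithSimulcast(pc: RTCPeerConnection, stream: MediaStream): Promise<void> {
  const [videoTrack] = stream.getVideoTracks();
  pc.addTransceiver(videoTrack, {
    direction: 'sendonly',
    streams: [stream],
    sendEncodings: [
      { rid: 'h', maxBitrate: 1200000 },                          // full resolution
      { rid: 'm', maxBitrate: 600000, scaleResolutionDownBy: 2 }, // half resolution
      { rid: 'l', maxBitrate: 250000, scaleResolutionDownBy: 4 }, // quarter resolution
    ],
  });
  // The SFU learns about the layers from the rid entries in the SDP offer.
  await pc.setLocalDescription(await pc.createOffer());
}
```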
Amir
Saska says
Hi guys. I'm struggling with one thing: is it possible to access the microphone in WKWebView on iOS? I have read multiple Stack Overflow posts and they all say that it is not possible. Is there any workaround? Thank you for your answer.