What’s the best route for building your WebRTC service?
The consideration that first comes to mind for many when debating between self-hosting a WebRTC service and using a managed, hosted communications API platform (CPaaS) is cost. But cost can always be negotiated until both sides reach a reasonable win-win point. The real dilemma goes far beyond that.
As I’m about to touch on this topic as a guest on the next Virtual Coffee webinar by Tsahi Levent-Levi, I want to delve further into it, hence this blog post.
While this post refers to WebRTC on premise vs. managed in general, I focus specifically on the WebRTC SFU and media-related elements, their interfaces and complementary components.
What does technology ownership mean?
Does it require self-developing the full stack of all solution elements? NO
If open source or commercially licensed components are used, does it mean you don’t own the technology? NO
If a 3rd party service is used for some specific capability, does that mean you don’t own the technology? NO
If the solution is based on a CPaaS, does that necessarily mean you don’t own the technology? Not always.
The answer to technology ownership is not black or white; there are many gray areas between these two extremes. A service may include many elements of which communication is just a small part. In such a case, using CPaaS might make a lot of sense and still not mean you don’t own the technology.
Developing the complete stack of all solution elements usually doesn’t make sense; it requires too much work and expertise in too many areas.
The option of using some 3rd party elements, either open source or commercially licensed, makes a lot of sense.
Using open source doesn’t necessarily mean greater control over the technology, sometimes it is the other way around.
The fact that you use an open source component doesn’t mean you control what goes into its official releases, or that you have a grasp of all the code it comprises.
Using a commercial license actually gives you more leverage over the vendor to implement features you require, and it naturally allows you to get support from those who developed the platform when things break.
The option of using some elements hosted by a 3rd party might make sense in some cases. For example, I typically recommend that companies consume NAT traversal as a service from companies such as Twilio. I view it as a commodity on one hand, but on the other, a capability that requires global deployment and careful configuration to work well.
Chances are that the value you will gain by building your own NAT traversal network and hosting it yourself will be negative. You need a truly large scale service or very special requirements to justify self-hosting this, and even then, I wouldn’t do it as a first priority. You can always start with a managed NAT traversal service and switch later down the road.
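This hybrid approach is easy to implement because NAT traversal details live entirely in the peer connection configuration. As a minimal sketch, a client can merge TURN credentials obtained from a managed service into its ICE server list; swapping to self-hosted servers later means only changing that list. The service endpoint, field names and hostnames below are illustrative assumptions, not any specific vendor's API.

```javascript
// Build an RTCPeerConnection configuration from managed TURN credentials.
// The shape of `managedIceServers` is an assumption; managed NAT traversal
// services typically return a similar list of TURN URLs plus short-lived
// username/credential pairs from a REST endpoint.
function buildRtcConfig(managedIceServers) {
  return {
    iceServers: [
      // Public STUN is often enough for simple NATs...
      { urls: "stun:stun.l.google.com:19302" },
      // ...while relayed TURN candidates come from the managed service.
      ...managedIceServers,
    ],
    iceTransportPolicy: "all",
  };
}

// Example: credentials as a managed service might return them (hypothetical).
const managed = [
  {
    urls: "turn:global.turn.example.com:3478",
    username: "u1",
    credential: "s3cret",
  },
];

const config = buildRtcConfig(managed);
// In the browser you would then do: new RTCPeerConnection(config);
```

Because the application only ever sees `iceServers`, migrating from the managed service to your own TURN deployment later is a configuration change, not a code change.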
What level of control do you have over the technology?
As a general rule of thumb, the more you move towards managed and CPaaS options, the less control and ownership you have over the technology.
Let’s talk flexibility & agility
CPaaS typically requires a large number of customers in order to sustain the business; it is a cookie-cutter type of model. When licensing the technology, on the other hand, the customer base is typically smaller and customers get more flexibility due to the nature of the solution. Going for a commercial license gives the licensee more leverage with the licensing company for influencing the roadmap and gaining more flexibility.
Technology ownership and company valuation
Company valuation may be derived from business parameters such as revenue, profit, user base and usage growth, or, in the case of smaller and early stage companies, the technology and the people behind it.
Using CPaaS reduces the technology/product-related part of that value, while having more control over the technology and how it is used increases it, even if parts of it are licensed. During my days as VP of Product for Radvision’s technology business unit, we had tens of cases or more in which we approved reassignment of software licenses to acquiring companies when our customers got acquired. Our interest was clear: we maintained the relationship with our customers and even managed to grow the business, as the acquiring company also increased deployment scale. For our customers, being able to maintain usage rights to the licensed software increased their valuation and allowed the acquisition to take place.
In acquisition cases where CPaaS is used, chances are the CPaaS will need to be replaced with an on premise, self-hosted option after the acquisition, something the acquirer will take into consideration and that will impact company valuation.
Controlling the deployment
When using CPaaS you can’t really control where servers are located, how load is distributed between them, or the location from which users are being served.
Many are better off this way, especially if they lack the expertise to manage and optimize such a deployment.
For applications that are mission critical, or ones that have special requirements, going the cookie-cutter route of CPaaS doesn’t make sense. They need to control the servers, launch instances in various locations and dynamically optimize the deployment for their needs.
They want to control their own destiny: to offer their customers an SLA, commit to limited-downtime metrics and plan their geographical distribution to improve customer experience. Not having control over the deployment limits these abilities.
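As a small sketch of the kind of optimization self-hosting enables: when you control your own server fleet, you can route each user to the region that currently serves them best, for example by lowest measured round-trip time. The region names and latency figures below are purely illustrative.

```javascript
// Pick the serving region with the lowest measured round-trip time.
// `measuredRttMs` maps region name -> RTT in milliseconds, e.g. as probed
// by the client against a small endpoint in each self-managed region.
function pickRegion(measuredRttMs) {
  let best = null;
  for (const [region, rtt] of Object.entries(measuredRttMs)) {
    if (best === null || rtt < measuredRttMs[best]) {
      best = region;
    }
  }
  return best;
}

// Illustrative probe results for a user in Europe:
const region = pickRegion({ "eu-west": 38, "us-east": 95, "ap-south": 180 });
// region === "eu-west"
```

With a CPaaS, this routing decision is made for you by the provider; with self-managed servers it is yours to tune, including adding or draining regions on demand.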
Support and future proof
CPaaS comes with support services offered by the provider. The concern is more about how future-proof such an option is: with so many small CPaaS providers out there, consolidation and service discontinuations are going to continue.
Being locked in to such a service can be risky on rainy days.
On the licensed technology side there are open source solutions and others offering a commercial license, such as SwitchRTC. When going for open source, there are a few things to look at from the support and future-proofing perspectives:
- Does the open source project have a business model, or is it just waiting to get picked up by some company, as has already happened in the market?
- Who offers support? Is it the developers of the platform or an external company?
- Would you be able to maintain the open source yourself in case things break or the project is taken off the market?
While a commercial license is not immune to the acquisition scenario, having a business model makes it more future-proof and ensures support is provided by the developers of the platform rather than by an external company offering services on top of it.
On premise, public cloud and anything in-between
Once you have opted for licensing the technology and self-managing the deployment, a new debate kicks in: should you host the service in a public cloud, or perhaps in your own private cloud?
The decision to opt for one option over the other is based mainly on 3 criteria:
- What geographic distribution is required, and can you achieve it through public clouds?
- Is the SLA you provide such that you can’t even count on the uptime offered by public clouds? While downtime of public clouds is pretty rare, it does happen once in a while, and when it does, it happens big time.
- Are there any regulatory constraints or customer requirements that prohibit you from using public clouds?
There is another option that combines public cloud with private or true on premise deployment. There are cases where a large part of the users joining sessions are based in one specific location, such as a large enterprise or organization, while others join the sessions from remote locations. In such cases it might make sense to combine the two approaches: place the service in the public cloud while deploying servers in specific customer locations.
At SwitchRTC we have encountered exactly this case, where an enterprise wanted to reduce the outgoing and incoming traffic when consuming WebRTC SFU-enabled services. Using the server and session cascading capabilities in SwitchRTC, we are able to split the traffic of sessions and thereby minimize the traffic between the enterprise and the internet. This is done by serving users joining from the enterprise local network from a local SwitchRTC SFU media server that is defined as secondary to the main SwitchRTC SFU media server.
This architecture is beneficial for many-to-many video conferencing scenarios and, to an even greater extent, for few-to-many broadcasting scenarios.
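The cascading setup described above boils down to a routing decision at join time: users coming from the enterprise network are attached to the local (secondary) SFU, and everyone else to the main SFU in the public cloud; only the inter-SFU media then crosses the enterprise uplink. The subnet prefix and server hostnames below are illustrative assumptions, not SwitchRTC's actual API.

```javascript
// Route a joining user to the local secondary SFU when they come from the
// enterprise network, otherwise to the main SFU in the public cloud.
// With cascading, each stream crosses the enterprise uplink once (between
// the two SFUs) instead of once per remote participant.
const ENTERPRISE_PREFIX = "10.20."; // illustrative enterprise subnet

function selectSfu(clientIp) {
  return clientIp.startsWith(ENTERPRISE_PREFIX)
    ? "sfu-local.enterprise.example" // secondary, on premise
    : "sfu-main.cloud.example";      // primary, public cloud
}

selectSfu("10.20.5.7");   // user on the LAN -> local SFU
selectSfu("203.0.113.9"); // remote user     -> main cloud SFU
```

In a real deployment the "is this user local?" check would more likely use DNS, a corporate identity signal or a geo-IP lookup rather than a raw prefix match, but the routing principle is the same.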
Regulatory and security considerations
Security is a concern for many enterprises and service providers. WebRTC is probably the most secure VoIP technology, as its traffic is always encrypted, but when deploying a service there is more to it than just the VoIP traffic. There is private user information, session history, session recordings and other application-specific information.
It is a growing trend for countries to regulate services offered in their territory and mandate various restrictions, including local hosting. A good example of such a regulatory initiative is the EU GDPR, which is going to be enforced starting May 25, 2018. You can find a detailed beginner’s guide to GDPR here.
Being able to offer the service locally in each country by deploying servers in local data centers is sometimes the difference between being able to offer the service in that country or not.
Using licensed technology will clearly put you on the safe side of this.
There is no single right answer to the debate between on premise and managed, and each of these options has sub-options to consider and decide on. The devil is in the details, and in many cases a hybrid solution is required: using some functions (such as NAT traversal) from 3rd party hosted services while licensing (open source or commercial) other components of the solution.