A Challenge: WebRTC Screen Sharing v2

[Image: screen share compositing example]

For a year or more, tools like Google’s Hangouts have supported sharing a host computer’s screen with the viewing audience. This was rightfully heralded as “a very good thing indeed.” However, its current incarnation is considerably less than ideal and seems to be stalled. I’d like to lay out a challenge to see if anyone is interested in taking this to the next level, which is something that we’ve tried to do with a few VUC calls earlier this year.

Here’s the fundamental problem: people use screen sharing to give demos of software and to share documents, including presentations a la PowerPoint, Keynote, etc. Currently, Hangouts, Jitsi Video Bridge and the like show either the screen share or the camera, but not both. In the case of slide presentations there can be very little activity in view as the presenter speaks to the points shown on the current slide. This makes for less than compelling visuals.

This reality is a little counterintuitive, as most of us also know that an array of talking heads can be equally uncompelling. When a formal presentation is involved, the most genuinely useful visual is a combination of the slides and the presenter.

Hey, if it works for streaming big Apple product rollouts it has to be a good strategy, right?

[Image: Apple launch event, 2014]

In the case of WebRTC-based services, that implies a more evolved approach to the screen sharing browser extension. It’s not enough to just capture the screen. Capture the camera as well, do some rudimentary compositing, then send the resulting stream onward to the media handling service and ultimately the audience.

This is admittedly more complex than just asking for the media stream from the desktop or the camera, but I think it would be a very valuable capability. I note also that commercial services, including WebEx and GoToMeeting, don’t mix the camera video stream and the desktop share, yet they are already able to show both independently. The Apple example (above) proves that simple, 2D compositing is all that is required.
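For illustration, here’s a minimal sketch of what such in-browser compositing might look like, assuming today’s `getDisplayMedia` and canvas `captureStream` APIs (when this was written, screen capture still required a browser extension). The function names, the inset geometry, and the 1280×720 canvas are my own illustrative choices, not an established API.

```javascript
// Sketch: composite a screen share and a camera feed into one MediaStream.
// overlayRect() and startComposite() are illustrative names, not a real API.

// Compute a picture-in-picture inset for the camera: a rectangle occupying
// `scale` of the canvas width, anchored bottom-right with a small margin.
// Assumes a 16:9 camera feed.
function overlayRect(canvasWidth, canvasHeight, scale = 0.25, margin = 16) {
  const w = Math.round(canvasWidth * scale);
  const h = Math.round((w * 9) / 16);
  return { x: canvasWidth - w - margin, y: canvasHeight - h - margin, w, h };
}

// Attach a stream to an off-screen <video> element so it can be drawn.
function attachVideo(stream) {
  const v = document.createElement('video');
  v.srcObject = stream;
  v.muted = true;
  v.play();
  return v;
}

// Capture both sources, draw them onto a canvas, and return a single
// composited MediaStream suitable for an RTCPeerConnection.
async function startComposite() {
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const camera = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const canvas = document.createElement('canvas');
  canvas.width = 1280;
  canvas.height = 720;
  const ctx = canvas.getContext('2d');

  const screenVideo = attachVideo(screen);
  const cameraVideo = attachVideo(camera);

  function draw() {
    // Screen share fills the frame; camera is drawn as an inset over it.
    ctx.drawImage(screenVideo, 0, 0, canvas.width, canvas.height);
    const r = overlayRect(canvas.width, canvas.height);
    ctx.drawImage(cameraVideo, r.x, r.y, r.w, r.h);
    requestAnimationFrame(draw);
  }
  draw();

  // 30 fps composited video, plus the original microphone audio.
  const out = canvas.captureStream(30);
  camera.getAudioTracks().forEach((t) => out.addTrack(t));
  return out;
}
```

The resulting stream can be handed to `RTCPeerConnection.addTrack()` like any camera stream, so the media service downstream needs no changes at all.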

In those cases where a VUC guest has been willing and able to provide their slides in advance of the call, we have leveraged the compositing capability of Wirecast to display the presenter and slides over a background, streaming the resulting output to a YouTube Live event.

Consider the example of VUC513 with Kerry Garrison of Teliax. He had some slides presenting the capabilities of their new IVY customer service platform, but there was a lot of talking around the points on each slide. Just looking at the slides would have been pretty boring. Just looking at him…well, ditto. Seeing Kerry alongside the slides was a much better visual for the audience.

[Image: screen share compositing example]

In that particular case, the PowerPoint slides were actually executed on a physically separate computer connected to my Wirecast system via HDMI. That arrangement allowed us to see the nifty animated transitions that Kerry had created in the slide deck. It also split the load of the PowerPoint and streaming activity between two computers, which is a good thing.

As a further example, consider VUC506, and in particular the part featuring Tim Panton. Tim had passed us a PDF containing some slides, which we incorporated in a similar fashion. Since the slides were light on text but long on underlying technical detail, presenting Tim alongside the slides was dramatically better than the slides alone.

[Image: VUC506, Tim Panton with slides]

A major downside to this approach is that it puts control of advancing through the presentation in my hands, when it would be better controlled by the presenter.

Staging this kind of online presentation takes considerable resources. There are a number of things to set up and coordinate. I have also spent a considerable sum on the hardware and software tools that make it possible. All of this is a burden that most other people would not take on.

In truth, a very motivated person could get this sort of thing done using the free version of Wirecast that YouTube gives away, although that version is constrained in what it can do. Even so, that only removes some of the cost, leaving the entire knowledge burden intact.

My challenge to the WebRTC community and its backers is straightforward: make this sort of thing possible from within the browser for anyone sufficiently skilled to create their own PowerPoint/Keynote/Google Docs presentation.

In so doing, let’s get past the “isn’t-it-great-I-can-share-my-screen-but-now-I’m-hidden” stage to something more like, “please allow me to illuminate these points for you!” And since you can still see my face, you’ll be able to know when I’m joking, even if the slides are deadly dull.

  • tsahil

    I think sending two videos should be the solution.
    The receiving end, be it Jitsi or Hangouts, should do the compositing, understanding which video stream is the presentation and treating it accordingly.

    • mjgraves

      I don’t disagree, but I can see different use cases that might gain from a solution that is either entirely client-side or server-involved.

      Sending two streams increases the bandwidth requirements rather dramatically. It also increases CPU overhead, with attendant implications for battery life on mobile devices.

      Giving the audience the ability to select what they see, or potentially how they see it, is a really nice idea.

      If someone does a nice job of this sort of thing it can be spun into other applications. It will start to look like a software based distributed video production tool set.
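The two-stream approach tsahil suggests maps nicely onto later additions to the WebRTC API. As a hedged sketch (the `contentHint` track property post-dates this discussion, and names like `sendBoth` and `pc` are my own), a sender could tag the screen track so the far end knows which stream is the presentation:

```javascript
// Sketch: send screen and camera as separate tracks, tagging the screen
// track via MediaStreamTrack.contentHint so a receiver can treat it
// differently. sendBoth() and findPresentationTrack() are illustrative names.

// Pure helper: given track-like objects, pick the one tagged as the
// presentation. 'detail' and 'text' hints suit slides and screen content;
// 'motion' suits a camera feed.
function findPresentationTrack(tracks) {
  return tracks.find((t) => t.contentHint === 'detail' || t.contentHint === 'text') || null;
}

// Capture both sources and add them to an RTCPeerConnection as two
// independent video tracks, leaving compositing to the receiving end.
async function sendBoth(pc) {
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const camera = await navigator.mediaDevices.getUserMedia({ video: true });

  const screenTrack = screen.getVideoTracks()[0];
  screenTrack.contentHint = 'detail'; // favor resolution over frame rate

  const cameraTrack = camera.getVideoTracks()[0];
  cameraTrack.contentHint = 'motion'; // favor frame rate over resolution

  pc.addTrack(screenTrack, screen);
  pc.addTrack(cameraTrack, camera);
}
```

A receiver could then run `findPresentationTrack()` over the incoming tracks and lay the result out full-frame, with the camera as an inset, giving the audience-selectable views mjgraves describes.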