Last week marked the second Kranky Geek Conference in San Francisco. This time the entire event was streamed live, and they’ve put both the recordings and slides online quickly. Kudos to those who managed to get that organized. Great work!
The lineup of presenters was first-rate, with solid representation from WebRTC leadership at Google, Microsoft, and Mozilla. Of course, Emil Ivov was there talking about SFUs for Atlassian. Tim Panton built an app live on stage. Clearly, it’s worth watching the recordings if you could not be in attendance.
I was especially interested in something Nils Ohlmeier of Mozilla mentioned. He described rendering a video stream locally to a canvas, then using that canvas as a video source for the outbound stream. He further described using this capability to create an ad hoc MCU, with the browser compositing multiple video streams into one outbound stream.
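To make the idea concrete, here is a minimal sketch of that canvas-as-source approach, assuming a browser that supports `canvas.captureStream()` (Firefox did at the time). The function names, grid layout, and frame rate here are my own illustration, not anything Nils showed:

```javascript
// Grid geometry for tiling n streams onto one canvas. This is a pure
// helper (no browser APIs) so the layout math is easy to reason about.
function gridLayout(n, width, height) {
  const cols = Math.ceil(Math.sqrt(n));
  const rows = Math.ceil(n / cols);
  const cellW = Math.floor(width / cols);
  const cellH = Math.floor(height / rows);
  const cells = [];
  for (let i = 0; i < n; i++) {
    cells.push({
      x: (i % cols) * cellW,
      y: Math.floor(i / cols) * cellH,
      w: cellW,
      h: cellH,
    });
  }
  return cells;
}

// Browser-only part: draw each incoming <video> element into the canvas
// on every animation frame, then hand the canvas itself back as a
// MediaStream -- the "ad hoc MCU" in a few lines.
function compositeToStream(videoElements, canvas, frameRate) {
  const ctx = canvas.getContext('2d');
  function draw() {
    const cells = gridLayout(videoElements.length, canvas.width, canvas.height);
    videoElements.forEach((video, i) => {
      const c = cells[i];
      ctx.drawImage(video, c.x, c.y, c.w, c.h);
    });
    requestAnimationFrame(draw);
  }
  draw();
  // captureStream() turns the canvas into a live MediaStream that can be
  // attached to a peer connection like any camera track.
  return canvas.captureStream(frameRate);
}
```

The returned stream can then be sent over a peer connection, so every participant receives one composited feed instead of n separate ones.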
Oh, the things that this could do! The most obvious use that comes to mind is compositing the camera stream along with the screen share, just like I was asking for back in December 2014.
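That camera-over-screen-share case is a small variation on the same trick: paint the screen capture full-frame and the camera as a picture-in-picture overlay. The corner placement, size fraction, and margin below are my own arbitrary choices for the sketch:

```javascript
// Picture-in-picture rectangle: place the camera in the bottom-right
// corner at a given fraction of the canvas size. Pure helper, so the
// geometry is testable outside the browser.
function pipRect(width, height, fraction = 0.25, margin = 16) {
  const w = Math.floor(width * fraction);
  const h = Math.floor(height * fraction);
  return { x: width - w - margin, y: height - h - margin, w, h };
}

// Browser-only glue: the screen share fills the canvas, the camera
// sits on top, and the whole picture goes out as one stream.
function composeScreenAndCamera(screenVideo, cameraVideo, canvas) {
  const ctx = canvas.getContext('2d');
  function draw() {
    ctx.drawImage(screenVideo, 0, 0, canvas.width, canvas.height);
    const r = pipRect(canvas.width, canvas.height);
    ctx.drawImage(cameraVideo, r.x, r.y, r.w, r.h);
    requestAnimationFrame(draw);
  }
  draw();
  return canvas.captureStream(30); // one combined outbound stream
}
```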
That’s just a starting point. I can imagine folks like Vidpresso implementing a small video production switcher right in the browser. It’s exciting stuff. Very exciting indeed.