BLOWING UP HTML5 VIDEO AND MAPPING IT INTO 3D SPACE

I’ve been doing a bit of experimenting with the Canvas and Video tags in HTML5 lately and found some cool features hiding in plain sight. One of those is the Canvas.drawImage() API call. Here is the description from the draft spec.

3.10 Images

To draw images onto the canvas, the drawImage method can be used. This method can be invoked with three different sets of arguments:
  • drawImage(image, dx, dy)
  • drawImage(image, dx, dy, dw, dh)
  • drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh)
Each of those three can take either an HTMLImageElement, an HTMLCanvasElement, or an HTMLVideoElement for the image argument.

The API lets you take the contents of specific HTML elements and draw them into a canvas, and the third element type in that list is just begging to be abused. Copying video into a canvas element opens up the ability to manipulate or process video frames at runtime. Two concepts instantly came to mind that seemed like fun to figure out; here they are below.
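Before getting to the demos, the basic copy loop is worth sketching out. This is a minimal sketch, not the demos' actual source; the element ids, dimensions, and the 33ms timer are made up for illustration. It just repeatedly paints the current video frame into a canvas with drawImage().

  var video = document.getElementById('sourceVideo');   // assumed <video> element
  var canvas = document.getElementById('copyCanvas');   // assumed <canvas> element
  var ctx = canvas.getContext('2d');

  function drawFrame() {
    if (!video.paused && !video.ended) {
      // Pull the current video frame into the canvas, scaled to the canvas size
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    }
    setTimeout(drawFrame, 33); // roughly 30fps
  }

  video.addEventListener('play', drawFrame, false);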

Blowing apart fragments of video

Click around the video frame to blow up that part of the video; the exploded pieces will continue to play the video inside them. After a while they retract back to their original places. One feature I didn’t have time to figure out was adding depth to the explosion, so that pieces closest to ground zero fly up into the air as they sail outward. With full shadow effects this could look really cool.
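The demo’s actual source does more bookkeeping than this, but the core idea is roughly the sketch below: reuse the video/canvas/ctx from the snippet above as an offscreen buffer, add a second visible canvas, cut the buffer into a grid of tiles, kick each tile away from the click point, and redraw every tile at its displaced position each frame. All names and constants here are made up for illustration, and the canvas dimensions are assumed to divide evenly by the tile size.

  var outCanvas = document.getElementById('displayCanvas'); // assumed second, visible <canvas>
  var outCtx = outCanvas.getContext('2d');
  var TILE = 32;                                             // arbitrary tile size in pixels
  var tiles = [];

  // Build the grid of tiles, each remembering its home position
  for (var y = 0; y < outCanvas.height; y += TILE) {
    for (var x = 0; x < outCanvas.width; x += TILE) {
      tiles.push({ homeX: x, homeY: y, offsetX: 0, offsetY: 0, vx: 0, vy: 0 });
    }
  }

  outCanvas.addEventListener('click', function (e) {
    tiles.forEach(function (t) {
      // Push each tile away from the click point, harder the closer it is
      var dx = (t.homeX + TILE / 2) - e.offsetX;
      var dy = (t.homeY + TILE / 2) - e.offsetY;
      var dist = Math.sqrt(dx * dx + dy * dy) || 1;
      var force = Math.max(0, 200 - dist) / dist;
      t.vx += dx * force * 0.1;
      t.vy += dy * force * 0.1;
    });
  }, false);

  function render() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // Canvas 1: fresh video frame
    outCtx.clearRect(0, 0, outCanvas.width, outCanvas.height);
    tiles.forEach(function (t) {
      t.offsetX += t.vx;   t.offsetY += t.vy;  // fly outward...
      t.vx *= 0.9;         t.vy *= 0.9;        // ...losing speed...
      t.offsetX *= 0.97;   t.offsetY *= 0.97;  // ...and drifting back home
      // Copy this tile of Canvas 1 onto Canvas 2 at its displaced position
      outCtx.drawImage(canvas, t.homeX, t.homeY, TILE, TILE,
                       t.homeX + t.offsetX, t.homeY + t.offsetY, TILE, TILE);
    });
    setTimeout(render, 33);
  }
  render();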

3D Video

This demo in particular runs really well inside WebKit-based browsers, but not so much in Firefox. Firefox doesn’t appear to have any hardware acceleration for Ogg decoding, so I had to drop the video size in half to get acceptable framerates. Even so, Firefox chokes pretty badly on my MacBook Pro.

*Update* – I’ve changed the Ogg video to be 640 x 360, prepare to see Firefox weep

Lessons learned

There are a couple of hints I picked up along the way that are good to know if you want to play around with drawing video. First, you need a bit of hackish code to get this working efficiently, and it flows like this:

[Video playing] -> [Draw Video onto Canvas 1] -> [Draw fragments of Canvas 1 onto Canvas 2]

Don’t ask me why, but copying pixel data out of a video tag is expensive, so expensive that drawing it into a temporary canvas and then drawing pieces of that temp canvas onto a final canvas is faster than referencing the video tag repeatedly within the same loop. That’s why you’ll see two canvases in the source code for the demos. I’m sure there’s a technical reason for this duplication process, but it’s a lazy reason.
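Reusing the video/canvas/outCanvas setup from the sketches above, and an illustrative fragments array of source/destination rectangles, the pipeline boils down to something like this:

  // Step 1, once per frame: one expensive read from the <video> element into Canvas 1
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

  // Step 2, many times per frame: cheap copies out of the buffer canvas onto Canvas 2
  fragments.forEach(function (f) {
    outCtx.drawImage(canvas,
                     f.sx, f.sy, f.w, f.h,   // source rect in Canvas 1
                     f.dx, f.dy, f.w, f.h);  // destination rect in Canvas 2
  });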

Secondly, don’t try copying individual pixels around. You can still see the remnants of my first code attempt inside the explosion demo with getPixel() and setPixel(). This turned out to be horribly slow and completely unnecessary; Canvas.drawImage() plus matrix transforms on the canvas space is far more efficient than handcrafted pixel pushing. On the other hand, pixel manipulation allows you to do things like runtime chroma keying, so get ready for a new wave of “Clippy”-style videos with full transparency popping up over websites to help you out.
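If you do want per-pixel access, getImageData() and putImageData() are the supported route. A rough chroma-key sketch, assuming a video shot against a mostly green background and thresholds picked out of thin air, might look like this:

  function chromaKeyFrame(srcCtx, dstCtx, width, height) {
    var frame = srcCtx.getImageData(0, 0, width, height);
    var data = frame.data; // flat RGBA byte array
    for (var i = 0; i < data.length; i += 4) {
      var r = data[i], g = data[i + 1], b = data[i + 2];
      // Knock out pixels that are clearly green (thresholds are arbitrary)
      if (g > 100 && r < 100 && b < 100) {
        data[i + 3] = 0; // alpha = 0, pixel becomes transparent
      }
    }
    dstCtx.putImageData(frame, 0, 0);
  }

  // e.g. once per frame, after drawing the video into the buffer canvas:
  // chromaKeyFrame(ctx, outCtx, canvas.width, canvas.height);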

Lastly, I’m learning very quickly that not all browsers are created equal when it comes to performance; heavy video and image manipulation is a crapshoot. Safari and Chrome work well with H.264, Firefox slogs along with Ogg Theora, and Opera is somewhere in the middle.