
Decent Video

This project is an experiment in compositing user-provided content and syncing it with live-action video footage in the browser. It was used on a viral marketing site for Mad Decent's 2014 holiday album, an Elf Yourself-inspired parody.

The Ember app in this repo is a stripped-down version of the site that launched with Mad Decent, with four of the videos from the original campaign included for demonstration purposes. The app runs in mobile browsers with decent playback performance. The site is not currently responsive, however, as parts of the process rely on precise pixel sizes; I'm working on improving that.

I also plan to continue exploring how <canvas> and in-browser JavaScript can be used to combine user-provided content and video to create interactive (and hopefully, in the future, more immersive) experiences.

Demo: http://convan.me/decent-video/

How this works

  1. User selects a headshot and crops/rotates their photo within a head-shaped mask
  2. An image Blob is generated in-browser and then uploaded to Cloudinary (so I can generate a shareable URL; see the sketch after this list)
  3. User can watch a video that superimposes their headshot on top of a Mad Decent Twerkin' dancer
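
Step 2 can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: the Cloudinary cloud name and upload preset are placeholder assumptions, and the real app may upload differently.

```js
// Minimal sketch: turn the cropped <canvas> into a Blob and upload it to
// Cloudinary's unsigned upload endpoint. YOUR_CLOUD_NAME and YOUR_UPLOAD_PRESET
// are placeholders, not values from this repo.
function uploadHeadshot(canvas) {
  return new Promise(function (resolve, reject) {
    canvas.toBlob(function (blob) {
      var form = new FormData();
      form.append('file', blob);
      form.append('upload_preset', 'YOUR_UPLOAD_PRESET');

      fetch('https://api.cloudinary.com/v1_1/YOUR_CLOUD_NAME/image/upload', {
        method: 'POST',
        body: form
      })
        .then(function (res) { return res.json(); })
        .then(function (json) { resolve(json.secure_url); }) // the shareable URL
        .catch(reject);
    }, 'image/png');
  });
}
```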

Decent Player Component

The crux of this project is the decent-player component, which combines pre-calculated head-tracking data, a PNG image sequence (more on that later), an audio track, and the user's headshot to create a simple video player. It is a somewhat complex process, but it can be broken down into a few parts:

1. JSON Head Tracking Data

There is a frames.json file for each scene in the public/scenes/ folder. Each frames.json file contains an array of frame objects that specify the scale, position and rotation for our "head" image we will be overlaying on each frame.
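
To make the structure concrete, here is a hypothetical frame object. The field names and values are assumptions for illustration; the actual schema in public/scenes/ may differ.

```js
// One entry from a frames.json array (hypothetical shape).
var exampleFrame = {
  x: 312.5,       // horizontal position of the head anchor, in source-video pixels
  y: 148.0,       // vertical position of the head anchor
  scale: 0.82,    // scale factor applied to the headshot image
  rotation: -4.3  // rotation in degrees
};
```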

These files were generated by copying and pasting tracking data out of After Effects, then formatting it by hand.

2. PNG Image Sequence

Each scene is exported as a PNG sequence; each PNG frame is run through TinyPNG, and then everything is bundled up inside a single uncompressed zip file per scene. The browser downloads the zip file and extracts it using zip.js (gildas-lormeau/zip.js), which gives us a Blob reader that we can use to read out data-URIs for our image frames, in order. The decent-player component renders these image frames into the <canvas> element.
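
A rough sketch of the extraction step, using zip.js's callback-based API from gildas-lormeau/zip.js; the exact usage in the repo may differ.

```js
// Read every PNG entry out of the downloaded zip Blob as a data-URI.
function extractFrames(zipBlob, onFrame, onError) {
  zip.createReader(new zip.BlobReader(zipBlob), function (reader) {
    reader.getEntries(function (entries) {
      // Sort by filename so frames come out in sequence order.
      entries.sort(function (a, b) { return a.filename.localeCompare(b.filename); });
      entries.forEach(function (entry, index) {
        entry.getData(new zip.Data64URIWriter('image/png'), function (dataUri) {
          onFrame(index, dataUri); // hand each frame's data-URI to the player
        });
      });
    });
  }, onError);
}
```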

Why an image sequence? Because the "other" method of using a <video> element and drawing each frame from the <video> onto the canvas results in mixed performance on mobile, and there is no way to guarantee frame-accurate animations: you cannot determine what frame number a <video> is on, you can only "guess" by running a clock and doing the math. PNGs also allow for transparency, so we can easily draw the video frames on top of our headshot, which results in a cleaner cut-out.
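To illustrate the frame-accuracy point, here is a minimal sketch of a playback loop that derives the frame index from an <audio> element's clock. The frame rate is an assumed value, and the real component's loop is likely more involved.

```js
var FPS = 24; // assumed frame rate of the exported PNG sequence

// Draw whichever frame the audio clock says we should be on.
function play(ctx, frameImages, audio) {
  function tick() {
    var index = Math.min(Math.floor(audio.currentTime * FPS), frameImages.length - 1);
    ctx.drawImage(frameImages[index], 0, 0);
    if (!audio.ended) requestAnimationFrame(tick);
  }
  audio.play();
  requestAnimationFrame(tick);
}
```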

Why a zip file? Why uncompressed? Fewer network requests. Decompressing a compressed zip file is very taxing on a browser and would take too much time, and since the PNGs have already been run through TinyPNG, zip compression didn't result in a meaningfully smaller file.

3. Superimposing the Overlay Image

Superimposing the overlay image is now rather simple: we draw it onto the <canvas>, using the tracking data from the JSON file for positioning. We also need to make sure the positioning data is properly transformed to account for the current scale of the <canvas> relative to the scale the JSON positioning data is based on.

Also, technically I am underlaying the image: it is drawn onto the <canvas> before the video frame, so we don't have to worry about cleanly masking out the headshot image.
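
Putting the pieces together, here is a minimal sketch of the draw step, assuming the hypothetical frames.json shape shown earlier; SOURCE_WIDTH (the pixel width the tracking data was authored against) is an assumption, not a value taken from the repo.

```js
var SOURCE_WIDTH = 640; // assumed width the tracking data is based on

// Draw the headshot first (the "underlay"), then the transparent PNG
// frame on top of it, rescaling the tracking data to the canvas size.
function drawFrame(ctx, frameImage, headshot, frame) {
  var canvas = ctx.canvas;
  var ratio = canvas.width / SOURCE_WIDTH;

  ctx.clearRect(0, 0, canvas.width, canvas.height);

  ctx.save();
  ctx.translate(frame.x * ratio, frame.y * ratio);
  ctx.rotate(frame.rotation * Math.PI / 180);
  ctx.scale(frame.scale * ratio, frame.scale * ratio);
  ctx.drawImage(headshot, -headshot.width / 2, -headshot.height / 2);
  ctx.restore();

  // The PNG frame is transparent where the dancer's face belongs,
  // so drawing it last cleanly masks the headshot.
  ctx.drawImage(frameImage, 0, 0, canvas.width, canvas.height);
}
```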

Other unorganized notes

  1. This project needs tests
  2. The project that launched with Mad Decent's XMAS campaign also included a server-side Node.js process that generated on-demand image previews for social sharing, as well as H.264 MP4 files of each user's custom video for download. This process ran in an on-demand worker queue on Heroku (costing only roughly $0.0005 per video!) and used most of the same code that is currently in the decent-player component. I plan on eventually releasing portions of that code (however, it's a complete mess right now).

Prerequisites

You will need the following things properly installed on your computer.

  • Git
  • Node.js (with NPM)
  • Bower
  • Ember CLI

Installation

  • git clone <repository-url> this repository
  • change into the new directory
  • npm install
  • bower install

Running / Development

  • ember server
  • Visit the app at http://localhost:4200

Code Generators

Make use of the many generators for code; try ember help generate for more details.

Running Tests

  • ember test
  • ember test --server

Building

  • ember build (development)
  • ember build --environment production (production)

Deploying

Specify what it takes to deploy your app.

Further Reading / Useful Links

  • ember.js: http://emberjs.com/
  • ember-cli: http://www.ember-cli.com/
