I thought that enough time had passed since the Diesel Days to Live project launched that I really should get round to writing up some notes and thoughts about the thinking behind it.
The brief from the client was to create an online film that gave the impression of time ‘glitching’ or fracturing, to tie in with a new campaign for Diesel Watches. We started with a week-long sprint at Pulse Films’ offices, with Pulse Director Anthony Dickenson shooting the watches with a Canon 5D and a motion-control rig, whilst James Bridle and I experimented with ways to make the films interactive and playful.
Building a “filmstrip”
This bit is fairly straightforward. In HTML we create a long horizontal “filmstrip” <div> element that holds all of the image “frames”. That filmstrip div is placed inside another div, sized to a single frame, with overflow:hidden set. Then, to jump from one frame to another, you move the filmstrip left or right by the amount needed to bring the desired frame into view, which creates the animation. Something like… left offset = frame number * -frame width …so if each frame was 640px wide, frame0000 would be at 0 * -640px, frame0001 at 1 * -640px, frame0002 at 2 * -640px and so on. This is very similar to CSS sprites, however in our case we were easily going to have over 200 frames at 1024px wide per filmstrip; sticking them all together would make a “sprite” over 200,000 pixels wide.
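The offset arithmetic above can be sketched as a tiny helper. This is just an illustration, not the project’s actual code; `FRAME_WIDTH` and `frameOffset` are names I’ve made up for it:

```javascript
// Minimal sketch of the filmstrip offset maths described above.
const FRAME_WIDTH = 640; // px, matching the example in the text

function frameOffset(frameNumber, frameWidth = FRAME_WIDTH) {
  // Slide the strip left by one frame-width per frame number.
  return frameNumber * -frameWidth;
}

// Applying it would look something like:
//   filmstrip.style.left = frameOffset(n) + 'px';
```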
Solving the problem of loading in a lot of frames.
One thing we needed to be very careful about was how we went about loading frames in. The Diesel project was going to have several scenes one after another, some with up to 280 frames (around 28MB) each, and we wanted the user to be able to enter and experience each scene as soon as possible. We approached this in a few ways.
1. Non-sequential frame loading.
The simplest scene we had was one where the user moved the mouse left and right across the screen to “scrub” through the filmstrip. If we loaded the frames in sequentially, then by the time we’d loaded 140 frames (14MB worth of images) we’d still only have the frames needed for the first half of the scene. So we did a simple trick of loading in only every 32nd frame, then every 16th, 8th, 4th, 2nd and finally all the missing frames. By doing this we found that a scene was perfectly “playable” by the time every 4th frame had loaded, and sometimes still OK at every 8th. Suddenly we could get away with starting a scene with only 70 frames (7MB); we’d cut the load time down to 25% of the original. The rest of the frames would continue to download while the user was in the scene.
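The coarse-to-fine pattern above is easy to express as a function that spits out the frame numbers in download order. A sketch under assumed names (`loadOrder` is mine, not the project’s):

```javascript
// Sketch of the every-32nd, 16th, 8th, 4th, 2nd, then every-frame
// loading order described above.
function loadOrder(frameCount) {
  const order = [];
  const seen = new Set();
  for (const step of [32, 16, 8, 4, 2, 1]) {
    for (let i = 0; i < frameCount; i += step) {
      if (!seen.has(i)) {
        seen.add(i);
        order.push(i);
      }
    }
  }
  return order;
}
```

Because each pass skips frames already queued, every frame appears exactly once, with the coarsest passes first.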
2. Key framing.
However we wanted a little more smarts going on. In some of the scenes there were certain key moments, a close-up view of a watch, an event (candles becoming lit or extinguished in one scene) and so on, that we really wanted to be loaded before the user entered the scene. So in each scene’s definition we specified an array of keyframes which needed to be loaded before we fell back into our 32, 16, 8, 4, 2, 1 loading pattern.
3. Compressing images.
The key framing gave us another idea, along with the transitions that I’ll cover in a moment. For most of the time the scene was going to be moving, either glitching around or running a sequence of frames from one point to another. As many of our frames were only going to be on screen for a split second, or sandwiched between other frames, we could knock the compression of those frames right down… …another thing we had in our favour was that we were attempting to simulate video to a degree, and people are kind of used to seeing compression artefacts on YouTube and other buffering video. Which meant we could get away with jpeg compression artefacts all over some of the fast-moving frames; it wasn’t as though we had a gallery of photos that had to stand up to inspection on their own. We just needed to not compress the product shots or key frames too much. This allowed us to apply differently tuned compression to the images and shave a huge amount off our final image sizes. The three steps above were enough to make the idea of loading in 280 separate frames not quite so scary, and into the realms of do-able.
We also wanted the 3D glitching effect. With a GIF you can get the 3D effect a couple of ways: either use a proper 3D camera that has 2, 3 or 4 lenses, or use a single camera and shoot several frames as you pan to the side. In our tests we found that in places where the camera was moving sideways, or looking into views with extreme depth (like down the stairwell at the office), we could get the 3D effect by jumping back and forwards around the current frame. Anything that involved panning worked well. As well as adding some “glitching” code to our filmstrip engine, we had to add a small bit of code to check whether a frame had loaded yet, jumping to the closest valid frame if it hadn’t.
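That “closest valid frame” check might look something like the sketch below, assuming the engine keeps a set of frame numbers whose images have arrived (the function name and shape are my own, not the project’s):

```javascript
// Sketch: given a target frame, return it if loaded, otherwise the
// nearest frame (by index) that has loaded, or null if nothing has.
function closestLoadedFrame(target, loaded, frameCount) {
  if (loaded.has(target)) return target;
  for (let d = 1; d < frameCount; d++) {
    if (target - d >= 0 && loaded.has(target - d)) return target - d;
    if (target + d < frameCount && loaded.has(target + d)) return target + d;
  }
  return null; // nothing loaded yet
}
```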
Enter the scenes, and an even smarter filmstrip.
We were nearly at the point where we could wrap everything up and focus on just making the code work. The almost-final thing was that we had not just one scene but seven (the final project had just over 2,000 frames in it), and we needed a way to get from one scene to the next. To make the move from one scene to the next as smooth as possible we also prioritised “transition frames”. A scene’s description would contain the keyframes, plus the frames for a transition in and a transition out. The engine would load in the keyframes, then all the frames for a transition in (we wanted the user to have a good experience starting a scene), and every 4th frame for a transition out, to make sure a move from the current scene to the next would actually exist before we allowed the user into that scene. The engine would also attempt to load in the frames for the next scene while the user was still interacting with the current one. Which meant that if we could keep the user playing with the current scene, we could sneakily load in the frames for the next one. Sometimes a user wouldn’t hang round long enough for us to even get started on loading the next scene’s frames, meaning we had to create inter-scene pre-loaders. We’d immediately stop loading frames for the current scene, start the next scene’s frames loading and play the transition-out frames, of which we knew we’d have at least every 4th. Then we’d hold the user at the mini pre-loader while sucking down the next scene.
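A scene definition along those lines, and the priority pass it implies, could be sketched like this. The shape and field names here are illustrative guesses, not the project’s actual schema:

```javascript
// Hypothetical scene definition: keyframes plus transition ranges.
const scene = {
  name: 'example',
  frameCount: 280,
  keyframes: [0, 45, 120, 200],            // load before entry
  transitionIn: { start: 0, end: 15 },     // loaded in full before entry
  transitionOut: { start: 264, end: 279 }, // every 4th frame pre-loaded
};

// Priority order sketch: keyframes, the whole transition-in,
// then every 4th transition-out frame.
function priorityFrames(scene) {
  const frames = [...scene.keyframes];
  for (let i = scene.transitionIn.start; i <= scene.transitionIn.end; i++) {
    frames.push(i);
  }
  for (let i = scene.transitionOut.start; i <= scene.transitionOut.end; i += 4) {
    frames.push(i);
  }
  return [...new Set(frames)]; // dedupe, keeping priority order
}
```

Everything not in this priority list would then fall back to the 32, 16, 8, 4, 2, 1 pattern.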
There are a couple more tricks we threw in to try and make the experience faster.
1. Estimating connection speed.
Because we knew the average size of a frame and scene, right from the start we recorded the average download speed of a frame, and therefore the time remaining for the current scene and the estimated time for the next. If we detected that loading the scenes might take a while, we could tell the engine to only load in every other frame, i.e. not to do the final pass of loading frames, before moving on to the next scene’s frames. This way, on slow connections, a 240-frame scene could become a 120-frame scene (give or take a few for keyframes and transitions). We also had plans for an extreme fallback, which was to have a single high-definition frame for each scene, loaded right at the start, whose download time we also measured. If the user appeared to be on a very slow connection then we would just show the single frames with the questions over the top. Their experience would be one of going through a gallery of images, answering questions along the way. We ended up not having time to implement that feature, but will probably add it to future versions of the engine.
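A running speed estimate like the one described could be kept with a small accumulator. This is a sketch under assumed names (`makeSpeedEstimator` and its methods are mine):

```javascript
// Sketch: accumulate per-frame download sizes and times, and derive
// an average rate plus a time-remaining estimate from them.
function makeSpeedEstimator() {
  let bytes = 0;
  let ms = 0;
  return {
    // Call after each frame finishes downloading.
    record(frameBytes, frameMs) {
      bytes += frameBytes;
      ms += frameMs;
    },
    // Average download rate so far, in bytes per second.
    bytesPerSecond() {
      return ms > 0 ? (bytes / ms) * 1000 : 0;
    },
    // Estimated seconds to fetch the remaining bytes at that rate.
    secondsRemaining(remainingBytes) {
      const bps = this.bytesPerSecond();
      return bps > 0 ? remainingBytes / bps : Infinity;
    },
  };
}
```

The engine could then compare `secondsRemaining` for the current and next scenes against a threshold and drop the final every-frame pass when things look slow.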
2. Minimum frames needed tweaking.
Each scene had a suggested minimum percentage of frames that needed to be loaded before we’d allow the user into the scene, which we could also tweak. Some scenes we felt we could allow the user into with only 16% of frames loaded, but others needed at least 75% for a good experience.
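The gate itself is a one-liner; a sketch with an assumed name:

```javascript
// Sketch: has enough of the scene arrived to let the user in?
function canEnterScene(loadedCount, totalFrames, minPercent) {
  return loadedCount / totalFrames >= minPercent / 100;
}
```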
The wrap up.
Needless to say this was a fun and interesting project with various challenges. And I haven’t even got into how we managed the assets as we got frames in from the shoots, first the quick rushes, then un-colour-corrected frames and so on. We had to devise a system for identifying keyframes and managing compression for that too; fortunately it didn’t have to be too pretty. Of course, having an awesome team to pull it all together is what makes all the crazy theories work, so huge thanks to the Storythings team on this project – Natalia Buckley, Pete Fairhurst, Dean Vipond, Rob & Al at Green Shoots Design, and Adrian Bigland (iOS app).