September 23, 2020

Camo - Building a Virtual Cinematography Toolkit in Unity

With Camo, the plan wasn't really to make a film, at least not initially. Edward and I wanted to see how we could integrate Unity and real-time tools into our workflow to speed up animated video production. If you haven't seen the film already, you can check it out below:

Most of my prior experience was in live-action filmmaking, and Edward's in 2D animation, so our goal was to see if we could find a process--or the signs of one--that let us leverage our existing skillsets and creative approaches. The traditional 3D animation pipeline is particularly involved: from modeling to rigging to offline rendering, the idea of going through all those steps for a short experiment like this felt prohibitive and almost incompatible with the way I enjoy making films. Fortunately, for this project, Edward used Quill, a VR illustration and animation tool whose visual style blends 3D and 2D, which allowed us to skip most of the traditional pipeline.

From a cinematography standpoint, I wanted the flexibility to create handheld shots in 3D environments. Although you can achieve a similar look by adding noise or a fake camera-shake effect to the captured footage, I wasn't going for that result. The goal was to film and capture images in 3D the same way I do in live action: moving around in physical space and having that motion mapped into virtual space.

To achieve this, you would need a sensor and a tracked object and then a way to feed that data into whatever 3D software you plan to use. In an ideal world, I would have used HTC Vive Trackers for this, but I didn't have access to them at the time. However, I did have access to an Oculus Rift S, which already had sensors built into the headset for tracking the controllers. As such, I would have to wear the headset to film, and my range of motion would be limited to the headset's tracking area. Still, that was better than nothing.

Base footage of the 3D animation in Quill. The colours were flat so we could handle lighting in Unity.

Once we imported Edward's animations and assets into the Unity scene, we had a clearer sense of what we would need to make this work. We set out to make a camera system, calling it 'Nolan', which, in hindsight, made no sense, as Christopher Nolan isn't even a cinematographer. The idea was that we would eventually build a fully featured package of virtual camera utilities that we could drop into any project. For now, however, we just wanted a basic version: a simple camera that followed one of the controllers and allowed us to change the field of view to maximize our space.

We integrated SteamVR's Unity Plugin to handle input from the headset and controllers, and created a cube GameObject to represent a controller in the scene, so that wherever you moved your hand, the cube would follow. The next step was creating a Camera GameObject and making it a child of the cube, so that our hand movement also controlled the camera's position in space. This was all well and good, but we needed a way to preview what the camera was capturing.
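Here's a minimal sketch of that setup (the class and field names are illustrative, not our actual code); it assumes a Transform already driven by SteamVR's tracking, such as an object with the plugin's pose component on it:

```csharp
using UnityEngine;

// Mirrors a tracked controller's pose onto the cube that represents it.
// In our scene the Camera was simply a child of this cube, so it
// inherited the pose automatically.
public class ControllerFollower : MonoBehaviour
{
    // Transform driven by SteamVR's tracking (e.g. an object with a
    // SteamVR_Behaviour_Pose component on it).
    public Transform trackedController;

    void Update()
    {
        // Copy the controller's world pose onto the cube every frame.
        transform.SetPositionAndRotation(
            trackedController.position,
            trackedController.rotation);
    }
}
```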

Looking and walking around the set in VR.

In theory, this should've been as easy as creating a Render Texture (a type of Texture that is updated at run time), mapping the Camera's output to that texture, and then applying it to the surface of a plane or cube using a material. In fact, it was that easy, but we had made a slight mistake when we set up the project that prevented this from working.
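A rough sketch of that monitor setup, assuming a quad (or cube face) with its own material acting as the preview screen; the resolution and field names are illustrative:

```csharp
using UnityEngine;

// Renders the virtual film camera into a RenderTexture and displays it
// on the small monitor above the cube.
public class CameraMonitor : MonoBehaviour
{
    public Camera filmCamera;     // the camera riding on the controller
    public Renderer monitorQuad;  // the preview screen's renderer

    void Start()
    {
        // Create a texture the camera renders into every frame.
        var rt = new RenderTexture(1920, 1080, 24);
        filmCamera.targetTexture = rt;

        // Show that texture on the monitor's material.
        monitorQuad.material.mainTexture = rt;
    }
}
```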

We wanted to take advantage of Unity's new (at the time) High Definition Render Pipeline (HDRP) to push the real-time visuals even further. We were aware that HDRP was in beta, but figured, "Pfft, it's fine, the Unity folks probably have to put that beta tag there for legal reasons," and used it anyway. What we didn't know was that the particular version we used had an obscure bug that prevented Render Textures from working in VR. We eventually figured this out and found a fix, thanks to one other person who had posted about it on Unity's forums, but by then we had already lost a day of production.

After commenting out an offending line or two of code to fix the bug, we were back in business. We added the screen above the cube, and even the ability to zoom in and out using the analog stick on the controller. Things were looking good, but the camera was shaky, so we wrote a script to smooth out the motion by interpolating the camera's position and rotation in LateUpdate. Finally, we realized our limited physical space was restricting our ability to properly capture footage, so we added smooth locomotion, letting us use the analog stick on the non-camera controller to move around the virtual set.
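A sketch of the smoothing idea, with the camera chasing the tracked cube's pose in LateUpdate rather than being rigidly parented to it; the damping value is illustrative, not what we actually shipped:

```csharp
using UnityEngine;

// Smoothly follows a target pose to damp out handheld jitter.
// Runs in LateUpdate so it happens after the tracked pose is updated.
public class SmoothFollow : MonoBehaviour
{
    public Transform target;      // the tracked cube

    [Range(1f, 20f)]
    public float smoothing = 8f;  // higher = snappier, lower = floatier

    void LateUpdate()
    {
        // Frame-rate-independent exponential smoothing toward the target.
        float t = 1f - Mathf.Exp(-smoothing * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, target.position, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, target.rotation, t);
    }
}
```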

Using the camera to record the opening shot.

Overall, I would call this experiment a success: we filmed the entire video using this method, and it felt like a natural extension of the way I already make films. After we released the project, Unity put out a virtual camera mobile app that has all the functionality we had planned to implement in Nolan. We tested it on the clip below, and it worked well without the need for a VR headset.

In the future, our virtual production pipeline will likely include a mix of this mobile app, tracked objects, and mocap. I'm excited to see where it goes.