The Making of 'Gravity' - A Case Study Report for the Virtual Production Committee

on Tuesday, December 31, 2013
2013 has been a busy year - and it is almost a year to the day since my last post!


I would be very remiss not to post about the making of the film Gravity, noted for its groundbreaking techniques, particularly with regard to cinematography.

I am constantly learning about new production processes as a member of the Virtual Production Committee.  Chaired by David Morin and co-chaired by John Scheele, this group is a joint technology subcommittee of the following organizations:
            The Art Directors Guild (ADG)
            The Visual Effects Society (VES)
            The Previsualization Society
            The Producers Guild of America (PGA)

Here is my report for VPC Meeting No. 8, held on October 3rd, 2013 at Warner Bros. Studios:

CASE STUDY: The Making of “Gravity”

Chris DeFaria, Executive VP of Digital Production and Animation at WB and our host for the evening, told of how, four years ago when he was introduced to Gravity, he suggested to director Alfonso Cuaron a compositing technique developed by Alex Bixnell, VFX Supervisor on Little Man, in which the head of the character was shot independently of the body and the two were comped together in a low-tech way. For Gravity, only the faces would be real and the rest of the movie animated.

Alfonso had already created a 7½-minute pitchvis for Universal with previs artist Vincent Aupetit, formerly of Framestore. The project was deemed cost-prohibitive by Universal, but this pitchvis and DeFaria's confidence in coming up with an affordable solution got the production greenlit at Warner Brothers.  DeFaria credited William Sargent of Framestore for backing the pitch from the beginning, and brought Framestore on board to lead the team(s).

He presented the 16-minute-long opening shot of the film, which gave the sense of the movie, and added that the concentric graphic (from 5D; see below) shown by David Morin during his opening remarks is very relevant. For Gravity, all phases of production (previs, postvis, editorial, even the script) were concurrent, intersecting and informing each other, unlike any film he had ever worked on.
Production Design Mandala 
David next introduced Chris Watts, VFX Supervisor, also brought on by Chris DeFaria, as a consultant to work with Framestore in exploring techniques to achieve total realism for the zero gravity shots.

Watts showed a short clip from the voluminous NASA footage provided by WB for reference. He noted several key challenges. One was to get the lighting right: there is no atmospheric perspective in outer space, and photographic tricks such as scale do not work. Another was to figure out a way to free the actors from cables and uncomfortable rigging so they could focus on acting. Actors on cables are very difficult to shoot, a problem compounded by the length of the shots the director was looking for.

Alfonso initially wanted the capability to re-light and re-photograph the actors in post, which would require doing 3D performance capture, so they tested a technique using the Light Stage developed by Paul Debevec at ICT.
ICT Light Stage
The results were very good, but the shoot was problematic. They were able to shoot only 5 minutes a day, creating a lot of footage that took 4 hours to download. In addition, Alfonso was not comfortable shooting in a dome of blinking lights. Alfonso decided he must commit to shooting one angle and lighting one way, meaning he would need to previs the entire movie.

Watts knew their approach would be to shoot some form of stationary person with a moving camera, but he was not a fan of motion control, since it is not an easy process for zero-gravity performance.  Then Chris DeFaria forwarded him an article in Wired Magazine about Bot & Dolly (formerly Autofuss), a San Francisco-based company that was repurposing automotive assembly robots to shoot movies.

Watts contacted Bot & Dolly Creative Director Jeff Linnell, who sent him a clip of the robots in action, proving they had the technology needed to get the movie made.  He went with Chivo, the DP for Gravity, to San Francisco for a tour and demo.  Watts told how he gave the B&D team a Maya file of the previs before lunch and, to his surprise, found the B&D crew ready to shoot a test by the time they came back.

For the test an actress stood in front of green screen on a plexi block and shifted in mid-air, grabbing onto invisible weightless objects. The camera, lights and reflectors, all mounted on robots, moved in sync, orbiting the actress as she moved and turned.  The objects she was reaching for would be CG and comped later in post, but she was live-comped on set directly into the previs Maya file.  Seeing the live-comp convinced Chivo, Alfonso and Chris DeFaria that this was the way to go.

Jeff Linnell, Creative Director at B&D, spoke next about the tools they created for Gravity. The B&D team spent a year doing software development for the motion control rig. Linnell emphasized the democratization of their system: it was not practical to have proprietary software that only his team could program.  They developed Move, software that sits on top of Maya, so anyone who can operate Maya can operate the robot. Move enables a robot in the real world to be moved from Maya, and conversely, the robot can be moved in the real world and that movement recorded back into Maya.
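To make that workflow a little more concrete, here is a minimal, hypothetical sketch of what "drive the robot from Maya, record the robot back into Maya" could look like in Maya's Python scripting. This is not Bot & Dolly's actual Move API: the RobotLink class and its send_pose/read_pose methods are placeholders for whatever transport the real system used, and the script assumes a previs camera named shotCam inside a running Maya session.

```python
# Hypothetical sketch of a Maya <-> robot round trip (not the actual Move software).
# maya.cmds is Maya's real scripting module; RobotLink is a made-up placeholder.
import maya.cmds as cmds

class RobotLink(object):
    """Stand-in for the real robot controller connection."""
    def send_pose(self, frame, translate, rotate):
        print("frame %d -> robot  t=%s  r=%s" % (frame, translate, rotate))

    def read_pose(self, frame):
        # A real system would query the robot's encoders here.
        return (0.0, 150.0, 300.0), (0.0, -90.0, 0.0)

def maya_to_robot(link, camera="shotCam", start=1, end=240):
    """Step through the previs animation and stream each camera pose to the robot."""
    for frame in range(start, end + 1):
        cmds.currentTime(frame, edit=True)
        translate = cmds.getAttr(camera + ".translate")[0]
        rotate = cmds.getAttr(camera + ".rotate")[0]
        link.send_pose(frame, translate, rotate)

def robot_to_maya(link, camera="shotCam", start=1, end=240):
    """Record the robot's measured motion back onto the Maya camera as keyframes."""
    for frame in range(start, end + 1):
        translate, rotate = link.read_pose(frame)
        for axis, value in zip("XYZ", translate):
            cmds.setKeyframe(camera, attribute="translate" + axis, time=frame, value=value)
        for axis, value in zip("XYZ", rotate):
            cmds.setKeyframe(camera, attribute="rotate" + axis, time=frame, value=value)
```

The point of the round trip is the one Linnell made: the robot becomes just another animated object in the scene, so anyone who can key a Maya camera can program the rig.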

This standardized tool optimized collaboration by allowing the team to operate the robot from their iPads.  Framestore was able to animate all the camerawork for the movie in advance, while the B&D team validated the physics on set. Every shot planned in previs was achieved. The toolset they created allowed a degree of flexibility and creative control that could satisfy the demands of even the most particular director or cinematographer.

The robot on Gravity was a 6-axis industrial arm capable of lifting 300 lbs. and moving at 4 m/s.  It was mounted on a 7th-axis track and had a pan-tilt-roll head attached to it, for 10 axes in total. Additionally, the actors were in a gimbal rig that was also on a pan-tilt-roll axis, for a total of 13 axes on set.  Precision was paramount: repeatability was 0.05 mm at full speed. Every shot was a synchronized programming of the movement of actors, camera and lights.

LED Light Box with Bot & Dolly Camera Rig
During shooting, using an iPad, the director had creative control of time through motion speed. The lighting was slaved to the robot camera: when the camera was slowed down, everything else slowed down with it, keeping the lighting and the cues for all other departments in complete sync.  Any and all changes could be repeated, and the timings sent to post.
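As a rough illustration of that idea (my own sketch, not the actual on-set software), everything can hang off a single master clock with a speed multiplier: when the director scales the speed, the camera move, the lights and every department's cues stretch by the same factor.

```python
# Minimal sketch (assumed design, not the Bot & Dolly system): one master clock
# with a speed multiplier drives every slaved cue track, so slowing the camera
# automatically slows lights and all other department cues in sync.
class MasterClock:
    def __init__(self, speed=1.0):
        self.speed = speed          # 1.0 = real time, 0.5 = half speed
        self.shot_time = 0.0        # seconds of "shot time" elapsed

    def tick(self, wall_dt):
        self.shot_time += wall_dt * self.speed
        return self.shot_time

class CueTrack:
    """Any slaved device: robot camera, lights, gimbal, playback, etc."""
    def __init__(self, name, cues):
        self.name, self.cues = name, sorted(cues)
        self._fired = set()

    def update(self, shot_time):
        for cue in self.cues:
            if cue <= shot_time and cue not in self._fired:
                self._fired.add(cue)
                print("%6.2fs  %-8s cue fired" % (cue, self.name))

clock = MasterClock(speed=0.5)                      # director slows the shot to half speed
tracks = [CueTrack("camera", [0.0, 2.0, 4.0]),
          CueTrack("lights", [1.0, 3.0]),
          CueTrack("gimbal", [2.5])]

for _ in range(200):                                # simulate 200 wall-clock frames at 24 fps
    t = clock.tick(1.0 / 24.0)
    for track in tracks:
        track.update(t)
```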

David then introduced Chris Edwards, CEO of Third Floor Inc., the company responsible for the entire previs of Gravity. Edwards noted, “Gravity is an auteur film created by a director who envisioned the script in three weeks by writing it down, and he wanted to complement the story with very innovative, almost revolutionary camera work.” Alfonso wanted extremely long shots, some running upwards of 12 minutes.  Most films are over 2000 shots; Gravity was 192 shots total. Within the first 30 minutes of the movie there are only 3 seamless cuts. And he wanted a high degree of accuracy regarding the geography of the earth and its orbit.

Edwards showed the original pitchvis (one continuous shot) that Alfonso had created for Universal five years earlier with Vincent Aupetit, now Previs Supervisor at Third Floor's new facility in London.

Third Floor worked another 11 months in collaboration with Alfonso and Framestore to create the previs Maya files that Chris Watts later took to Bot & Dolly.  Edwards described how Alfonso developed a close working relationship with each previs artist, using physical models and a lipstick camera to explore shots and encourage collaboration. These meetings got into so much detail that they were video recorded as a way to keep everyone involved on track with his vision.

All 192 shots were first conceived in previs as a hand-animated creative pass to get a clear idea of all the elements in a scene. For many of the shots they also did a simulated pass in MotionBuilder.  They took advantage of MotionBuilder's physics engine and, by changing some of the code, allowed the animator to fire the jet packs with an Xbox controller and control the tethering.
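The MotionBuilder modifications themselves weren't shown, but the underlying idea is simple enough to sketch. The toy loop below is plain Python standing in for that physics pass (not MotionBuilder's actual API); controller_pressed is a placeholder for reading the Xbox controller. It applies a thruster impulse on a button press and clamps the astronaut to a tether length.

```python
# Toy illustration: controller-driven thruster plus a simple tether constraint
# in a zero-gravity simulation loop. All names and values here are made up.
import math

def controller_pressed(frame):
    """Placeholder for real gamepad input: fire the thrusters once per second."""
    return frame % 24 == 0

def simulate(frames=240, dt=1.0 / 24.0, tether_length=5.0):
    pos = [0.0, 0.0, 0.0]          # astronaut position (meters)
    vel = [0.0, 0.0, 0.0]          # in zero g, velocity simply persists
    anchor = [0.0, 0.0, 0.0]       # point the tether is attached to
    thrust = [0.4, 0.1, 0.0]       # impulse applied when the pack fires (m/s)

    for frame in range(frames):
        if controller_pressed(frame):
            vel = [v + t for v, t in zip(vel, thrust)]      # jet-pack impulse

        pos = [p + v * dt for p, v in zip(pos, vel)]        # free drift, no gravity

        # Tether: if the line goes taut, pull the position back onto the sphere
        # of radius tether_length and remove the outward velocity component.
        offset = [p - a for p, a in zip(pos, anchor)]
        dist = math.sqrt(sum(o * o for o in offset))
        if dist > tether_length:
            scale = tether_length / dist
            pos = [a + o * scale for a, o in zip(anchor, offset)]
            normal = [o / dist for o in offset]
            outward = sum(v * n for v, n in zip(vel, normal))
            if outward > 0:
                vel = [v - outward * n for v, n in zip(vel, normal)]

        if frame % 24 == 0:
            print("t=%4.1fs  pos=(%.2f, %.2f, %.2f)" % (frame * dt, *pos))

simulate()
```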

The Art Department, led by Production Designer Andy Nicholson, generated 3D set models for use in previs.  They worked without direct assistance from NASA, yet were able to be very faithful to the research found on the internet. In addition to 3D set designs, concept art, key frames, and 3D set dressing, they provided a wealth of image- and video-based detail early on that aided the previs, including textures, the movement of fabrics, and studies of weightlessness.

Edwards noted the advance in virtual lighting. The previs files were passed over to Framestore where DP Chivo hand-lit every shot, which drove the lighting decisions on set.  He acknowledged how Framestore pushed the limits of their resources with great dedication to make the film possible.

Chris Lawrence, Framestore CG Supervisor on Gravity, Skyped in from London. He worked on the project the whole way through to delivery. Echoing Chris DeFaria and Chris Edwards, Lawrence credited the success of this production in part to William Sargent's investment in the project and to his fostering of creative collaboration between all departments from within Framestore.

He noted how having the Art Department working in house during the previs phase afforded them the opportunity to integrate fully designed assets for a more complete vision very early in the production.  And because all departments were digital and could access data storage at Framestore, the iterative nature of the process was enhanced. 

He mentioned the special effects harness rigs they developed for Sandra Bullock to aid her performance of weightlessness.

He explained how VFX Supervisor Tim Webber combined three shoot methods: traditional camera and crane with traditional lighting; motion-control camera with the LED box; and motion-control wire rig with motion-control camera.  These three methods were easily blended together because of the problem solving done in previs. The work they did with Chivo to figure out the lighting in previs also paid off in the end, when it came time to reassemble and comp the CG bodies with the real faces.

Lawrence presented a short video that elaborated on the LightBox and Tim Webber's idea of projecting a virtual environment on the LED screens surrounding the actors. They provided an animated version of the movie as seen from the actor's point of view. This visual information was low-res, but enough to give the actors a feeling of where they were in space, helping them to do their job. At the same time it was a dynamic lighting source that gave subtleties of color, atmosphere and reflection for every moment of every shot in the LightBox, making the film feel that much more real.
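As a sketch of the general principle (not Framestore's pipeline), the LED panels effectively act as a set of low-resolution area lights: each panel contributes its color to the actor weighted by how directly it faces the skin, which is why even coarse tiles were enough to carry the color and bounce of the environment. The panel colors and directions below are invented for illustration.

```python
# Illustrative image-based-lighting sketch: treat a low-res environment as a few
# colored area lights and accumulate their cosine-weighted contribution.
import numpy as np

def light_from_environment(env_rgb, directions, normal):
    """env_rgb: (N, 3) panel colors; directions: (N, 3) unit vectors from the
    subject toward each panel; normal: (3,) surface normal of the face/suit."""
    normal = normal / np.linalg.norm(normal)
    cosines = np.clip(directions @ normal, 0.0, None)   # only panels facing the surface
    weights = cosines / max(cosines.sum(), 1e-8)
    return (env_rgb * weights[:, None]).sum(axis=0)     # resulting RGB tint

# A toy 'environment': warm Earth glow below, black space above, a lit module to one side.
directions = np.array([[0.0, -1.0, 0.0],
                       [0.0,  1.0, 0.0],
                       [1.0,  0.0, 0.0]])
env_rgb = np.array([[1.0, 0.6, 0.3],
                    [0.02, 0.02, 0.05],
                    [0.8, 0.8, 0.9]])

print(light_from_environment(env_rgb, directions, normal=np.array([0.0, -1.0, 0.2])))
```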

Q&A

What was the resolution of the 6 panels in the LightBox?
Each panel was made up of several 5” square tiles at 64 x 64 pixels each.
It was low-res, but a lot of effort went into the color calibration of the panels.

Was there any digital facial performance or digital doubles?
Yes, we worked with a company called Mova that specializes in capturing facial geometry. We had a rig with genesis cameras to augment with textures that could be projected onto the geometry.

Can you talk about safety features of the robots?
We had numerous layers of safety protocols on set that started in previs.
We ran the robots on the set without a human at variable speed, and then did the same with a stunt double.  Health and Safety had to sign off on every shot, and there was an on-set protocol for that.  We also had safety operators in place; if anything was struck, they could stop the robots immediately.

What was the hierarchy of the management structure for this film?
It was a standard VFX production management structure, but larger.
The Art Dept. was right there with no physical barrier, which helped with the short deadlines.

How did Alfonso's use of physical models and the lipstick cam translate to the previs?
It was a visual translation used as reference footage for the artist. The models were used throughout the whole process and were pretty battered by the end.

How were you dealing with scale in space?
Everything was to correct scale, in coherent 3D space. We tracked the NASA photos and built to them. Chris DeFaria could not distinguish the CG asset from the original photo reference.

When did you start working with the Art Department?
It was from around the time that previs got underway until about 3 months after the shoot. We were able to block out the world early on with low-res schematics.
As we got a little further along, the Art Dept. provided very accurate models, which Alfonso preferred.

What kinds of motion capture systems did you use?
Framestore purchased a Vicon stage system specifically for Gravity.
Motion capture was used to previs some portions of the film.
We also carefully tracked the motion of the helmet with a motion capture camera attached to the LightBox. This was key to matching the subtleties of Sandra's head movement to the whole-body performance later in animation. Almost all of the animation is hand key-framed, because it was difficult to motion capture zero-gravity movement.

How long did you shoot on stage?
The live action shoot was about 3 months.

What was the resolution of the final render? 2K or 4K?
It was 2K stereo. We invested a lot in rendering. We converted from Pixar's RenderMan to Arnold. Chivo loved simulated bounce light, and we had a lot of heavy models that added to the render time.

What was the full budget of the production?
Around 80 million dollars, maybe more with the reshoots. A few moments at the beginning of the film were needed a year after the first shoot wrapped.

How did you integrate editorial into the process?
The Editorial Dept. was also based at Framestore and integrated very tightly with previs. Editorial for very long shots was convoluted, so the film was broken down into story beats instead of shots; numbering the actions gave us a way of managing it. The editorial process was very complicated in both pre- and postproduction.

Was there an edit of the previs that really was the movie?
Yes, with sound and voiceover. When we got Sandra's voiceover, the movie became real without a single shot.

How was the 3D conversion done?
The 3D conversion of the faces was done from a flat left eye. Everything else in the frame was rendered in stereo.

How long was Chivo involved in the previs phase?
Chivo is a key collaborator of Alfonso's and brought a lot to the table. He was not there all the time, but was at Framestore for about 3 months. He came back during postproduction for some full-CG scenes that were not pre-lit and gave feedback on that and on other work we did after the shoot.

How was it to have the Director and DP with you in house for that long a time?
Did it change the way you worked with the client?
The client relationship was sometimes a little frustrating, but overall it was a very positive experience and we were able to do something we would not have been able to do without that direct collaboration.

How much involvement did Sandra have in previs?
We mostly worked with Alfonso; if Sandra had any notes, they came through him.

Were there sets built, or were they all digital?
Yes, the 2 smaller capsules were fully built, and there were a few proxy sets as well.

end of report.

A great in-depth discussion can be found at  fxguide http://www.fxguide.com/featured/gravity/

Happy New Year!

JC
