LONDON — It’s everyone’s worst nightmare, or at least it should be — a zombie pandemic! And Brad Pitt, playing a former UN employee, seems to be the only one who can stop it. He travels the world and takes on the undead in order to save humanity.
As you can imagine, there are tons of effects, digital cities, vehicles, helicopters, planes and thousands of motion-captured zombies hell-bent on world domination.
Cinesite London (www.cinesite.com) provided about 440 shots for the film, many of them in the first half, which is set in Philly, New Jersey, Korea and on a military vessel.
MPC (www.moving-picture.com), also in London, handled about 450 visual effects shots, including the zombie crowd work, Jerusalem, the zombie pyramid, zombies being shot climbing over a bus, zombies attacking helicopters, the plane being destroyed with zombies flying out, the Wales sequence and epilogue. Jessica Norman was VFX supervisor for MPC. For more on their work for the film, please see our Website.
For this particular piece, we did a Q&A roundtable with a few Cinesite artists. Enjoy…
POST: You captured the movement of the zombie characters with mocap? Can you describe the process?
ANTHONY ZWARTOUW: (CG Supervisor) “Led by the film’s animation supervisor, Andy Jones, the production did three mocap sessions, each two days long. Audiomotion, in Oxford, was the company used, and their 160-camera Vicon system enabled multiple performers to be recorded at once.
“During preproduction, Cinesite animation lead Peter Clayton and the rest of the team also created dozens of test animations to help the director, Marc Forster, visualize how the zombies should move. Hundreds of different movements were captured, from walks and runs to complicated vignettes featuring multiple characters for specific action in specific shots. The zombie mocap was then augmented with keyframe animation to create the arms-back, stooped-forward run that the director was looking for.
“Although mocap was used extensively for crowd scenes, a lot of pure keyframe animation was used for hero digi-double shots for actions which were impossible to achieve with mocap because they were too dangerous. This includes the zombie take-downs, inspired by Israeli attack dogs latching onto their prey. The head-first, arms-back attack was impossible to achieve for real without injury to the stuntmen.”
POST: An Israeli attack dog!?
MATT JOHNSON: (Cinesite VFX Supervisor) “The Israeli attack dog reference brought up a lot of interesting issues. However tough you are, if you launch at somebody leading with your teeth, you will always try to cushion your fall with your arms; in this movie, the zombies don’t do that. This meant that, in many cases, even the bravest stuntman could not accurately convey the force of attack that we wanted.
“Fantastic work was done by a team of experimental dancers who were able to move and contort their limbs to an almost terrifying extent. Their actions were motion captured and combined with bespoke animation to create the unique movement characteristics of the zombies.”
POST: What about crowd shots?
ZWARTOUW: “Our crowd system needed to accommodate both distant crowd shots and those where zombies are much closer. So we created a tool that enabled the Massive TDs to export portions of the simulated crowd as animation rigs for import into Maya, where the animation could be tweaked, the geometry upgraded to a higher resolution and, if need be, high-resolution cloth and hair simulations used.”
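The split Zwartouw describes, where crowd agents that end up close to camera are promoted to full animation rigs while distant ones stay as baked crowd geometry, can be sketched generically. Everything below (agent names, the distance cutoff, the single-threshold rule) is illustrative, not Cinesite’s actual tool:

```python
import math

# Hypothetical sketch: flag crowd agents near the camera for export as
# hero animation rigs; leave the rest as baked background crowd geometry.
PROMOTE_DISTANCE = 15.0  # metres from camera; assumed cutoff

def promote_agents(agents, camera_pos, cutoff=PROMOTE_DISTANCE):
    """Split crowd agents into hero (export as rig) and background groups.

    agents: dict of agent_id -> (x, y, z) world position
    camera_pos: (x, y, z) camera position
    Returns (hero_ids, background_ids), each sorted.
    """
    hero, background = [], []
    for agent_id, pos in agents.items():
        dist = math.dist(pos, camera_pos)
        (hero if dist <= cutoff else background).append(agent_id)
    return sorted(hero), sorted(background)

agents = {
    "zombie_001": (2.0, 0.0, 5.0),    # near camera -> hero rig
    "zombie_042": (40.0, 0.0, 80.0),  # far away -> stays in the crowd
    "zombie_107": (8.0, 0.0, 10.0),   # near camera -> hero rig
}
hero, bg = promote_agents(agents, camera_pos=(0.0, 1.7, 0.0))
print(hero)  # ['zombie_001', 'zombie_107']
print(bg)    # ['zombie_042']
```

In production the selection would more likely be driven per shot by the camera frustum and screen coverage than by a single distance threshold, but the promote-and-refine idea is the same.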
POST: What about the city environments? How much was real and how much was CG?
THOMAS DYG: (Environments Supervisor) “The most elaborate of the environments we created was the New Jersey rooftop sequence, which was shot entirely on a greenscreen stage. The rooftop itself and the helicopter were the only set builds. Our VFX supervisor Matt Johnson and VFX photographer Aviv Yaron went to New York and New Jersey to shoot reference stills of various buildings from street level, as well as panoramas from various rooftops.
“From this photography we created simple geometry, onto which we projected the stills. This was used to make up all the foreground and mid-ground buildings. For the background we used two sets of panoramas. The closest part of the background was Manhattan and the more distant background was a panorama of New Jersey. Manhattan was mixed in because it has a higher density of tall buildings, which visually looked more interesting from the top of our roof.
“Panoramas were created in PTGui. Simple building geometry was started in SketchUp and refined in Maya. All the elements were brought into Nuke, where the buildings were laid out and textured, the panoramas set up, and the sun, flares and other atmospheric effects added. A number of mattes and layers, together with a template script, were passed on to our compositors. This template provided some control over the layers and made it easy for compositors to add extra smoke plumes etc.”
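At its core, the camera-projection technique Dyg describes maps each point on the simple building geometry back through a camera to look up a pixel in the reference photograph. A minimal pinhole-projection sketch of that idea, with invented values and no relation to Cinesite’s actual Nuke node graph:

```python
# Generic camera-mapping math: project a world-space point on the
# geometry into normalized (u, v) coordinates of the reference photo.
# The camera sits at cam_pos looking down -Z; focal/width/height are in
# the same (arbitrary) units, here modelled on a 36x24 film back.

def project_to_uv(point, cam_pos, focal, width, height):
    """Return (u, v) photo coordinates for a world point, or None if the
    point is behind the projection camera."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z >= 0:
        return None  # behind the camera: nothing to project
    # Perspective divide onto the image plane at distance `focal`
    sx = focal * x / -z
    sy = focal * y / -z
    # Map image-plane coords to 0..1 UVs (photo centre at 0.5, 0.5)
    return (0.5 + sx / width, 0.5 + sy / height)

# A vertex on a building facade, 10 units in front of the camera
uv = project_to_uv((2.0, 2.0, -10.0), cam_pos=(0.0, 0.0, 0.0),
                   focal=50.0, width=36.0, height=24.0)
print(uv)  # approximately (0.778, 0.917)
```

In Nuke this lookup is what a projection setup (camera plus projected texture on simple geometry) does per pixel; the point of doing it there is that layout, texturing and tweaking all stay interactive.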
POST: What about the Philly shots?
DYG: “For the Philadelphia street shots nearer the beginning of the film, the technique was somewhat similar, except that it was shot on location in Glasgow. Many of the buildings were topped up from the second floor upwards with well-known Philadelphia buildings. Again, these were created as relatively simple geometry, with photographs projected onto them. Nuke was used to lay out the placement of the buildings and integrate them into the plate. A number of passes were rendered out to support the compositing stage.”
POST: You created many digital vehicles. Did you build all the models?
JOEL BODIN: (Lead Lighter) “Maya and Mudbox were used to create a wide variety of models from scratch. The long list of assets we created includes an entire military camp, 20 different types of car, the Benjamin Franklin Bridge, several buildings for 2.5D projection, several different types of ship for a flotilla sequence, an RV camper van, the cab of a garbage truck and various helicopters.”
JOHNSON: “The Philadelphia sequence at the start of the film uses several CG assets. In many shots we added cars, helicopters and other required vehicles as well as CG zombies and people. We needed to create a convincingly gridlocked and chaotic environment.”
POST: What about the smoke, fire, hair and water sims? Were they written in-house?
ZWARTOUW: “The FX we created for WWZ included boat wakes, rain and puddles, smashing glass, bullet hits, blood splatter, smoke, atmospherics and digi-doubles with cloth and hair simulation. For the different types of effects we used both Houdini and Maya, and the team of FX TDs was led by Jan Berner.
“For cloth we used Maya nCloth. We built on top of this with in-house tools which helped us to manage the hundreds of different cloth combinations for dressing the characters and to control the behavior of the cloth. Our own CSWedge tool allowed us to simulate and manage hundreds of cloth simulations.
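A “wedge” tool of the kind mentioned above typically expands a small table of simulation parameters into one job per combination, so large batches of sims can be launched and tracked. This is a generic, hypothetical sketch of that idea; the parameter names are invented and this is not the actual CSWedge:

```python
from itertools import product

# Hypothetical wedge: expand parameter ranges into one named job per
# combination of values, ready to hand to a sim farm or batch runner.

def wedge(base_name, **param_ranges):
    """Yield (job_name, params) for every combination of parameter values.

    Parameters are sorted by name so job naming is deterministic.
    """
    keys = sorted(param_ranges)
    for values in product(*(param_ranges[k] for k in keys)):
        params = dict(zip(keys, values))
        suffix = "_".join(f"{k}{v}" for k, v in params.items())
        yield f"{base_name}_{suffix}", params

jobs = list(wedge("zombie_cloth",
                  stretch=[0.8, 1.0],
                  bend=[0.1, 0.5],
                  damping=[0.02]))
print(len(jobs))   # 4 combinations (2 stretch x 2 bend x 1 damping)
print(jobs[0][0])  # zombie_cloth_bend0.1_damping0.02_stretch0.8
```

Scaling the same expansion to hundreds of sims is then just a matter of widening the ranges; the hard part in production is the caching and review of the results, not the enumeration.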
“For shots where the camera was too close for the Massive set-up to work, a generic cloth simulation set-up was used. We filmed video reference of all types of different clothing. Even so, we got a lot of detail using our own simulation tools, and our CsSculpt tool allowed us to add even more detail and address specific direction from the client within a really short turnaround.
“For hair, we built our own pipeline around the Yeti fur plug-in, with the simulation driven by Maya’s nHair. Again, we developed a wide range of tools, which gave us a faster turnaround and higher level of control. Our CsCache tool was used to cache out and modify the simulations at different stages.
“Our set-up was much like the cloth simulation, where hero characters and crowd had their own individual grooms, sculpted to impressive detail by lead groom TD Tarkan Sarim. The hair team tackled a multitude of shots, including a full frame close-up of simulated long hair.”
POST: Can you point to what you consider the most challenging scene?
DYG: “Personally, I think the Jersey rooftop was the most exciting and challenging scene. Large parts of this environment are entirely CG, but it is all based on real-world photography. The original photography obviously helped immensely in achieving the final, photoreal look. From that standpoint, it’s much easier to begin art directing the environment to fit the brief and thus the film.
“I find that a lot of energy can be used simply in making environments look real, taking away valuable time from making them look great. Using our system, we were able to just jump straight in and start tweaking and playing around with the scene. This work on World War Z was the first time we used this technique to such an extent. Its success has led us to define a better process for how we create environments in the future.”
POST: Can you talk about using LIDAR scans?
BODIN: “LIDAR scans and surveys are an essential part of building CG sets, and they were used for various sets and locations as well as for the people on the streets in the Philadelphia scenes. They were used to help us place crowds within shots.”
POST: You also called on Nuke a lot?
DYG: “We now use Nuke heavily for most of our environment work. Of course it’s always used in compositing, but we have used it for environments as well.
“Sometimes our environments are created using only Photoshop and Nuke, projecting onto simple geometry or cards. Other times more complex renders and render passes are created through Modo, Cinema 4D or Maya or more elaborate textures/matte paintings are painted in Mari. Environment models are increasingly built using photogrammetry.
“But common to all environments is that they go through Nuke before being passed on to compositors. This is to avoid spending compositing time on a shot-by-shot basis reassembling layers and render passes into a finished environment. Instead, an environment artist is in control of the environment on a scene-by-scene basis, making it easier and faster to iterate on a shot.”