Do you need to combine two totally different performances of the same actor — shot at different speeds and frame rates, playing back in slow motion and in realtime — into the same scene? Or, are you trying to blow just half the head off a live-action actor without using a greenscreen or cutting away during a single dolly shot?
If you’re a visual effects pro, these types of challenges are commonplace, and directors rely on you for your input and problem-solving expertise.
This month, we check in with a handful of studios that have been presented with unique visual effects challenges for both television and film projects. Here’s what they had to say about working with directors, troubleshooting effects that haven’t yet been attempted, and creating solutions that work within an established workflow.
UNFORGETTABLE
Lesley Robson-Foster is a long-time independent visual effects supervisor who last winter worked on the Garry Marshall romantic comedy New Year’s Eve, and more recently has been on-set VFX supervisor for the new CBS series Unforgettable.
Produced by Sony Television Studios in association with CBS Television Studios, Unforgettable stars Poppy Montgomery as Carrie Wells, a former police detective with a rare memory condition that prevents her from forgetting anything, good or bad.
Robson-Foster was brought on board by Sony. “I like to be hired by production rather than the post house if at all possible,” she notes. “I can make a decision about how to shoot something so that it is right for production and the show, rather than what software there is at a particular post house.”
The most notable effect that appears often in Unforgettable involves Carrie Wells’ ability to revisit her past, particularly crime scenes. “She can go back and visit her own memories, and the writers have chosen to demonstrate that by having her in the frame twice,” Robson-Foster explains. The two performances are described as her “past self” and her “traveler self.”
“Every week we have a big motion control rig,” she explains. “I hire the right rig for the gag, and it depends on what the director wants — either a MoSys system or a Techno dolly, or whatever I decide is right for the shot and we shoot the layers I need accordingly.”
The show shoots with Arri Alexa cameras, and Robson-Foster’s time on the Garry Marshall film allowed her to experiment with that camera, and others, while capturing background plates.
“Alexa is a game changer,” she says. “It doesn’t have such a noisy blue and green channel as the other cameras have. Up until now, although the digital cameras were pretty fine, it was hard to get great keys off of them without perfect exposure. The Alexa seems a little more forgiving; I am really happy they are using it.”
When we caught up with Robson-Foster, the show was shooting Episode 8 of what is expected to be a 13-episode season. Each episode is shot in eight days: four on stages at Silvercup Studios in Long Island City, NY, and four on location, mostly in Queens, where the storylines take place.
There isn’t a predetermined number of flashback sequences per episode, but Robson-Foster says they have been averaging at least two. “It’s not new having the same person in the frame twice — it’s been done a lot — but the new thing is doing it on an episodic schedule for both production and post.”
One challenge comes from the change in directors from episode to episode, many of whom aren’t effects experts. But they don’t need to be, reports Robson-Foster. “That’s why they hire a person like me.”
Encore in Hollywood (www.encorevideo.com) has been handling the show’s visual effects since the pilot. One unique aspect of this signature VFX sequence is the use of different frame rates, which are then combined in the scene. The “past self” performance is shot at 60fps, while the “traveler self” is shot at 24fps.
“The past is in slow motion, and the memory visitor is in realtime, so we shoot at 60fps for the first move when everyone is in the scene. We do the math and make the rig do the same move, but now we shoot at 24fps, so the difference is two-and-a-half times as slow.” A switcher allows Robson-Foster to see both shots in a semi-transparent fashion.
In the episode being shot at press time, the episode’s director decided that there was no need for a slow-motion effect, and instead the sequence is being shot at 48fps. “Unless you have got something in the background to prove that it’s slow motion, like people walking, there’s not really any point,” Robson-Foster explains of the decision. “He’s chosen to do them all at 48fps, and his plan is to go quick instead of slow.”
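For readers who want to follow the math, the slow-motion factor is simply the capture frame rate divided by the playback frame rate. The short sketch below is illustrative only, not part of the show’s actual pipeline; it works through the 60fps and 48fps cases described above.

```python
# Minimal sketch of the frame-rate arithmetic described above
# (illustrative only, not the show's actual conform process).

def slow_motion_factor(capture_fps: float, playback_fps: float) -> float:
    """How many times slower the footage plays back than real time."""
    return capture_fps / playback_fps

# The "past self" layer: shot at 60fps, conformed to the 24fps timeline.
print(slow_motion_factor(60, 24))   # 2.5 -> two-and-a-half times as slow

# The alternate approach chosen for the later episode: 48fps capture.
print(slow_motion_factor(48, 24))   # 2.0 -> twice as slow, a subtler effect
```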
In addition to the signature effects for the show, Robson-Foster also shoots inserts with the second unit and photographs crime scenes. “I have a little station in the back of my Prius to do all sorts of stuff,” she notes. “I shoot inserts with the 5D, close-ups of hands and bullets and blood and guts.”
She even used an HD Flipcam to shoot footage representing video of a crime that was captured by a security camera. “I have a bag full of cameras. We decide what would be good for authenticity purposes, so we shot on an HD Flip, and that appears in the show.”
BREAKING BAD
VFX supervisor Bill Powloski has a background in cinematography. He’s worked as a motion control operator for IMAX, has created miniatures for movies, and has contributed to animated projects, including The Simpsons. More recently, he’s been working in television, on shows such as the now-cancelled Pushing Daisies and, currently, AMC’s Breaking Bad.
Powloski got involved in the visual effects for Breaking Bad during the second season, when he was called on to create an exploding, severed head. Breaking Bad centers on a chemistry teacher in New Mexico who lives with his wife and teenage son. The lead character’s cancer diagnosis has given him a new outlook on life and a desire to secure his family’s financial future via the dangerous world of drugs. The series explores how an ordinary man transforms into a drug kingpin, and brutal scenes, like the one mentioned above, underscore that violence. The show is produced by Sony Pictures Television and airs on AMC. At press time, Season 4’s finale had already aired. Sixteen episodes are planned for the next production block.
“I had worked with Diane Mercer, the co-producer of the show, before and she brought me on,” recalls Powloski, who heads up Velocity Visuals (www.velocityfx.com) in Los Angeles. The show is shot on 35mm film and the visual effects for the series vary from episode to episode. Powloski estimates that 90 percent of the effects he works on are invisible to the viewer.
“We are either doing matte paintings to create an environment or doing something invisible, or painting out something that was in production,” he explains. But, each season affords him the opportunity to create a number of big visual effects shots. Last year involved the lead character Walter White (Bryan Cranston) running down two people with his car. “They wanted it to look photoreal and wanted to do it without any stunts and without cutting away,” Powloski recalls. “We ended up using digital stunt men for that sequence.”
The Season 4 finale, titled “Face Off,” featured another ambitious effect — this one particularly gruesome. Powloski recalls meeting with co-producer Mercer and series creator Vince Gilligan back in January. “Vince came into the office and offhandedly said, ‘I want to do a shot where someone’s face gets blown off, but the person is still alive for a few seconds.’ He wouldn’t give me any other information or say who it was going to be — just that it’s coming up and that [he wants] to do it in a single shot.”
The scene would involve a practical explosion, a camera on a dolly, a real actor wearing prosthetics, and digital effects added in post. “We probably had a couple of months to figure out how to do it and to do it in the signature Breaking Bad style — they usually don’t cut around the visual effects. They just try to do things straightforward. They use the effect to tell the story, but it isn’t about the effect.”
KNB (www.knbefxgroup.com) created a mold of the actor’s head and sculpted what it might look like if half of it was missing. “They built a maquette that was fully painted for Vince to approve, and once he was happy with it they started creating a make-up appliance for the actor. When we shot the scene in June, we had dots on it used for CG tracking purposes so we could replace part of the head with a CG copy. What that allowed us to do was have the same design that the makeup effects people had put together. Most makeup is — by its very nature — additive. You are putting something on the actor’s face. What Vince wanted to do was remove mass.”
The maquette was scanned in 3D. “So once we got the footage back for postproduction, we could determine how far into the skull we were going to go and how we wanted the skin layers to look, and know that it was going to match perfectly with the makeup. Hopefully, when someone sees the final product, they don’t realize where the seams are.”
Velocity Visuals used NewTek LightWave to create the CG model of the actor’s head. Secondary animation was applied to the inside mass, allowing the jaw to move as well as the optic nerve that was once connected to an eyeball. “[It gives] it more life than something that would just be a mask.”
Using a dolly provided a stable shot that allowed for coverage of the actor and background tracking.
“Most of Breaking Bad has a hand-held feel to it, so after we finished the shot, we put a hand-held look on top of (cinematographer) Michael Slovis’s dolly shot, so they would match with the surrounding shots.”
Velocity uses Nuke for compositing.
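The general idea behind a procedural hand-held look is to layer smoothed random 2D offsets onto a stable plate so the motion drifts organically instead of buzzing frame to frame. The sketch below is a minimal illustration of that idea only; the function and parameters are hypothetical, and this is not Velocity Visuals’ actual Nuke setup.

```python
# Illustrative sketch: fake a hand-held feel by generating smoothed random
# 2D offsets that a compositor could apply as a per-frame transform.
# Hypothetical example; not the show's actual compositing script.
import random

def handheld_offsets(num_frames: int, amplitude: float = 6.0,
                     smoothing: float = 0.9, seed: int = 1):
    """Return per-frame (x, y) translation offsets in pixels.

    An exponential moving average over random jitter keeps the motion
    drifting and organic rather than jumping wildly each frame.
    Scale `amplitude` to taste.
    """
    random.seed(seed)
    x = y = 0.0
    offsets = []
    for _ in range(num_frames):
        x = smoothing * x + (1.0 - smoothing) * random.uniform(-amplitude, amplitude)
        y = smoothing * y + (1.0 - smoothing) * random.uniform(-amplitude, amplitude)
        offsets.append((x, y))
    return offsets

# Each (x, y) pair would drive a 2D transform on the plate, one sample per frame.
for frame, (dx, dy) in enumerate(handheld_offsets(5), start=1):
    print(f"frame {frame}: offset ({dx:+.2f}, {dy:+.2f}) px")
```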
BREAKING DAWN — PART 1
Berkeley, CA’s Tippett Studio (www.tippett.com) has a long history in feature film work, including visual effects for the Twilight saga. According to visual effects supervisor Eric Leven, the studio first got involved with the series by creating the photoreal wolf pack that appeared in the second film, New Moon, and has focused on the creatures ever since. Tippett recently completed effects shots featuring the wolves for the latest installment, Breaking Dawn — Part 1, which hits theaters this month. Breaking Dawn — Part 2 is scheduled for release in 2012 and will feature the studio’s handiwork as well.
“Every movie has had a different production team, a different director, a different visual effects supervisor, so there is always a different attack to it,” says Leven of Tippett’s Twilight work. Breaking Dawn – Part 1 features one scene where the wolves are having a conversation with one another — an effect that had yet to be attempted. Another scene involves a large fight between the wolves and the Cullen family of vampires.
“There’s just a lot more action on the screen,” he notes, when comparing it to their past work. Leven describes the Breaking Dawn project as “the absolute best possible scenario,” because director Bill Condon and VFX supervisor John Bruno allowed Tippett to bring their expertise to the table.
“We started with the script, and started drawing our own storyboards, and went through rounds of that with John and Bill,” says Leven. “They would tweak it here and there, and then we’d work on animatics, and set it up so that by the time we were ready to shoot, we knew exactly what we were shooting and were able to do it really quickly and cost effectively.”
Leven likens it to the studio’s past work with Steven Spielberg on Jurassic Park, whereby previsualizing everything prior to the shoot allowed for an incredibly efficient production process. “I find that what we did on this movie — in terms of having the facility draw the storyboards, do the animatics, and drive the sequences — is becoming increasingly rare, and we think it’s for the worse,” says Leven. “The more you know when you go to the set, and the more the facility that is actually doing the work can contribute ahead of time, the better the product is and the easier it’s going to be for everybody. We find that on a lot of the movies we work on, that’s not the way it works.”
The studio’s animatics tend to be rough. “You don’t need to see cloud textures or textures on the rocks or the bits of grass,” notes Leven. “For our purposes, that’s not what the animatics are for. The animatics are going to determine how the scene plays, and most importantly where the cameras are going to be on that day, so you know how many set-ups you have and how long it’s going to take to shoot. We tend to err on the side of not looking very polished, but it’s serving a very specific purpose.”
For the final wolf pack, Tippett Studio was able to draw on assets from past films. The wolves were modeled using Maya, ZBrush and MudBox, and were animated in Maya. The studio’s proprietary tools were used to create the characters’ fur, and RenderMan was used for shading.
BATTLESHIP
Santa Monica’s Halon Entertainment (www.halon.com), which specializes in previs and post-vis, is currently working on the Peter Berg-directed feature film Battleship. Inspired loosely by the classic game, and adding an alien invasion to the mix, the film stars Liam Neeson, Taylor Kitsch, Brooklyn Decker and Rihanna, and will be released in May of 2012.
Justin Denton, a previsualization/VFX supervisor with Halon, explains how previs can help streamline production, while post-vis can aid in storytelling: “I’ve been on this project for two years now,” he explains. “I started out working on previs long before it was shot, and now we are responsible for doing action scenes with the film and working heavily with the director, the DP and the stunt coordinator to make sure everything that we were doing would be shootable.”
The Halon team began previs on the film approximately nine months before principal photography began. “At that point, we were put in charge of helping [director] Peter Berg figure out the action scenes,” Denton explains. “You read one paragraph and that might be a five-minute action scene. So you have to help convey what that action is.”
Surprisingly, much of the battleship imagery was shot practically, so by previsualizing the scenes, the production team was then able to have a good understanding of what needed to be captured come the day of the shoot. “A lot of the work I did with previs was to make sure the way that we shot it with our CG cameras was a way that the DP could shoot it in live action as well. If I put a camera up really high, it was a place that a helicopter could get to. Or if it was a shot they weren’t sure was possible, we had to make sure it was something we felt comfortable we could do as a full visual effects shot.”
Halon has a library of previs assets that it has been developing over the past eight years, but each job requires custom elements. “We did have some assets, but they were nowhere near the quality level we needed,” notes Denton. “Even though this is a movie with actors and characters in it, these ships are in a lot of ways actors in the film. Because of that we knew they were going to be heavily spotlighted, even at the previs phase, so we went ahead and really detailed them out.”
Today’s previs imagery can be quite detailed, and Denton feels the trend is to go beyond a simple rough animation or composite. “It seems like the trend has been to create more detail and try to give it a polished level and feel. That’s what we did on Battleship. You are sitting at a table with creatives, and they respond very well to visuals that are appealing. Even though you can do it in a rough way, the more polished you can make it look in the time that you have, the better the reaction will be to it and the more excited everyone will be to work on it.”
Halon also provides strong schematic overheads during previs for what Denton calls the “tech vis pass.”
“What it does is tell the crew how high the camera was, its distance to a giant ship out in the ocean, and what the speed of the camera travel was. We can give all of that information to the helicopter pilot and DP and crew so they can see if they can match what you’ve done. That’s definitely something that is not discussed as much, and it is definitely one of the more valuable parts of previs and post-vis alike.”
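As a rough illustration of the kind of numbers a tech-vis pass reports, the sketch below derives camera height, distance to a subject, and average travel speed from a previs camera path. The data layout, function name, and toy values are hypothetical assumptions for this example; they do not reflect Halon’s actual tools or export format.

```python
# Hedged sketch of tech-vis style reporting: given a previs camera path,
# compute the height range, distance-to-subject range, and average speed
# that would be handed to a helicopter pilot and DP. Hypothetical example.
import math

def techvis_report(camera_positions, target, fps=24.0):
    """camera_positions: list of (x, y, z) in meters, one per frame (z = height).
    target: (x, y, z) of the subject, e.g. a ship at the waterline."""
    heights = [p[2] for p in camera_positions]
    distances = [math.dist(p, target) for p in camera_positions]
    # Total distance travelled between consecutive frames, converted to m/s.
    travelled = sum(math.dist(a, b)
                    for a, b in zip(camera_positions, camera_positions[1:]))
    duration = (len(camera_positions) - 1) / fps
    return {
        "height_m": (min(heights), max(heights)),
        "distance_to_target_m": (min(distances), max(distances)),
        "avg_speed_m_per_s": travelled / duration if duration else 0.0,
    }

# Toy example: a camera rising from 30m to 50m while closing on a ship at the origin.
path = [(200 - 1.0 * f, 0.0, 30 + 0.5 * f) for f in range(41)]  # 41 frames
print(techvis_report(path, target=(0.0, 0.0, 0.0)))
```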
Post-vis takes place after principal photography has been shot and can help editors in the storytelling process by providing elements that aid pacing and action well before final visual effects elements are available. “Post-vis ends up being the first-pass version of visual effects, but the purpose of it is more to help the editors and the director make sure they picked the right plates for the story to be told properly,” Denton explains. “Some visual effects shots take months. With us, a single visual effects shot will take half the day. The idea is that we can churn through a lot of this and hand it to the editor and they can see if it’s working. Then production will hand it off to a visual effects studio.” In this case, that studio is ILM.
Halon is an Autodesk Maya house, which it uses for building assets. The studio relies on Adobe After Effects for compositing and performs camera or plate tracking in SynthEyes. The studio works at HD resolution for previs and at full 2K resolution for post-vis.
“What we will do is hand ILM our Maya files and the QuickTime that we created from those Maya files so they have the actual assets as a starting point.”
For editorial, Halon provided Peter Berg’s team with DNxHD files for their Avid workflow.