
SIGGRAPH: Noisy Boy, You're My Hero

Tuesday at Siggraph started out as a normal day for me,
cup of coffee and a slow drive on the freeway.  This is
always my time to think about the plan for the day and
figure out how I'm going to accomplish the effects du jour
that the current project requires at work.  Unfortunately,
my autopilot mode took me down the wrong freeway, heading
into work.  It wasn't until I was halfway there that I
realized I was supposed to be driving to the convention
center, so after some mild cursing, I made an "immediate
u-turn when possible" and finally arrived at the proper
destination.

After chatting with a few friends, I decided to see the
production session on the making of Real Steel with some
of my old partners in crime from my days at Digital
Domain.  Upon entering the room, my attention was
immediately drawn to the full scale model next to the
presentation panel of Noisy Boy, the purple and yellow
Asian robot from the movie, built by the wizards at Legacy
FX.  I recalled seeing the full size Adam puppet at Legacy
a while back when I was there for a project we were
working on at Brickyard, and as always, the various
practical rigs never cease to amaze.  Unfortunately, a few
too many people were more interested in taking a picture
with the prop than sitting down for the presentation, but
this was quickly overcome with some tempered humor by the
moderator.  The panel was chaired by veteran Michael Fink,
and featured Eric Nash, Ron Ames, John Rosengrant, Dan
Taylor, and Swen Gillberg.  I had the pleasure of working
with Swen for a few years on Stealth, and it was nice to
see him up there (bunny suit not included...if you're
reading this Swen, we will never forget!).

The presentation was well paced and pretty standard fare
for a making of.  DD had the benefit of starting very
early on in the project, and they were able to get
everyone in production on board from the get go, which
provided for a higher level of collaboration than is the
norm on large fx films like this.  They spoke about their
use of the Simulcam system for the robot-on-robot fight
sequences (allowing realtime in-camera overlay of
pre-rendered animation, generally motion capture or previs
quality, so the operator can frame live camera moves while
viewing otherwise non-existent cg actors, negating the need
for pretending or the old tennis-ball-on-a-stick trick).  This virtual
production technique received a lot of media coverage
during Avatar, though I recall using a similar system both
on Beowulf and Open Season back at Imageworks.  They also
went into the robot design process, various motion capture
tracking techniques (volumes when possible, optical when
necessary), and their image based lighting methodology.
Unlike some of the other studios I discussed yesterday,
DD shoots their HDRs identically to how I do mine (3
angles, 7 exposures each).  They too took their stitched
environment balls and, using Nuke, projected that imagery
onto a reconstructed 3d set model, and then, using VRay as
their renderer, mapped those textures back onto that
geometry to use for their lighting/reflection.  There are a
number of ways to accomplish this, some using third party
software, others using techniques in comp packages like
Nuke, and yet others using spherical projection directly
in Maya.  As opposed to using a straight environment ball,
this method provides much more accurate placement of
lighting in relationship to the surrounding environment
(such as in front of windows and other light sources) and
gives significantly better results.  One small tidbit of
interest that Eric brought up (that I thought I would
share here) is the magic formula used for determining the
gravity scale factor when shooting miniatures (or
oversized items as in this case).  The simple formula is:

Gravity Scale Factor = Square Root of (Size of Performer
divided by Size of Character).  To better explain this,
when you shoot something that is smaller than reality (for
instance a 1/10 scale airplane or spaceship), its natural
movement due to gravity will seem far too fast, because its
mass is far less than reality, but gravity itself in the
real world where this is filmed does not change.  To
compensate for this, you apply this formula, which tells
you how much to speed up or slow down the film to get a
more realistic rate of speed (though other things need to
be taken into account because, as I mentioned, the real
gravity it was filmed in is still the same regardless).
As applied to CG, this means that with a slower rate, some
motions will still need to be sped up during animation,
such as (in the case of Real Steel) punches.  If you're
wondering why this was an issue in this film, given that
the models were built to their proper nine-foot scale, the
reason is that motion capture was used as a starting point
for the animators, and that motion was still captured from
live-action humans, who are generally not nine feet tall,
regardless of whether they are standing on stilts for
proper positioning and eyelines.  When the presentation
ended, everyone once again flocked to Noisy Boy, so that
was my cue to get out of dodge.

After lunch, my next stop was the course on Character
Rigging and Creature Wrangling in Game, Feature Animation,
and Visual Effects Production.  As I've done my fair share
of the latter two, I was mostly interested in the game
portion, since I've never really worked on that type of
project.  I took a few tidbits from this,
including some which seem obvious but take on a different
meaning when actually presented to you.  The first was in
relation to the notion of "real time game rendering",
which again does exactly what it says.  What isn't obvious
about this is that the game system is actually not only
performing the rendering, but running animation on a joint
skeleton and using various logic operations to drive
these, all of which are being solved first before being
augmented with additional effects such as dynamics, and at
THAT point finally being rendered with a realtime shader.
On a standard 30 frame per second refresh game, this
means that each frame must be fully computed and rendered
in 33 milliseconds (16 milliseconds for a 60 frame per
second refresh) in order to update for fluid playback.
This of course is impressive in its own right, especially
when you consider the system is constantly taking user
input and running all of this through an AI (artificial
intelligence) engine, and optionally a physics solver with
cloth as well.  The presenter covered various schemes for
culling this data for speed optimization, such as skeletal
LODs (the same idea as a geometry level of detail, but
instead stopping certain joints from solving/animating
based on their distance to camera) for finger and facial
joint reduction, animation sampling reductions (running
keyframes on 2s, 3s, etc.), and lowered update rates of
the animation depending not only on distance, but on
importance of character and placement.  And of course, the
reminder that unlike in film and commercial production,
where most offscreen elements can be mostly ignored (aside
from shadow casters and ray reflectors), in a game the
entire environment and all characters must be built out
completely, since in many cases the user playing the game
decides what will be visible.  It was definitely an
interesting talk, and to be honest some of these
optimization tricks can definitely be applied to non-games
specific work, especially in the land of crowd work.
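To make those culling ideas a bit more concrete, here is a minimal Python sketch of the kind of logic involved.  The thresholds, joint names, and function shapes are all invented for illustration; only the ideas (frame budget, skeletal LODs, sampling on 2s/3s) come from the talk:

```python
def frame_budget_ms(fps):
    """Per-frame compute budget: animation, logic, dynamics, and
    rendering must all fit inside this window."""
    return 1000.0 / fps

def active_joints(joints, detail_joints, distance, far_threshold=20.0):
    """Skeletal LOD: beyond the threshold, stop solving detail
    joints (fingers, face) and keep only the core skeleton."""
    if distance > far_threshold:
        return [j for j in joints if j not in detail_joints]
    return list(joints)

def animation_step(distance):
    """Sampling reduction: solve keyframes on 1s up close,
    on 2s at mid range, and on 3s far from camera."""
    if distance < 10.0:
        return 1
    if distance < 30.0:
        return 2
    return 3
```

Plugging in the refresh rates from the talk, the budget works out to roughly 33 milliseconds per frame at 30 fps and roughly 16 milliseconds at 60 fps, with everything else (AI, physics, cloth) competing for that same window.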

At this point, I took a break to peruse the expo floor
which opened today.  Honestly, I was a bit uninspired by
what I saw, as there didn't appear to be any new or
groundbreaking technology present.  Instead, there was
just more of the same thing from many shows past,
including the requisite motion capture booths, rapid
prototyping machines, new graphics cards, and seemingly
more show floor bookstores.  I also took this opportunity
to check out the emerging technologies area before heading
to the last talk for my day.

I arrived a few minutes after the 25th Anniversary Rhythm
and Hues presentation began, and I must say that what
followed was a well paced, fun talk by at least ten people
about the history of the company and various achievements
and techniques employed there.  Moderated by the legendary
Bill Kroyer, they talked about the 1999 merger with VIFX
as well as their multinational expansion from LA into
Vancouver, Mumbai, Hyderabad, and Kuala Lumpur, bringing
their total workforce to over 1,400 worldwide employees.
I particularly enjoyed some of the older footage they
showed from the early days, and of course seeing some of
their recent work from Snow White and the Huntsman was a
nice contrast.  It was great to see Markus Kurtz
presenting, now Vice President of Production Technology,
whom I had the pleasure of working with back on Stealth as
well.

Well, that about covers today's fun.  As always, if you
have any thoughts on some of the topics I brought up,
please don't hesitate to drop me a comment or an email.
And now for some sleep...goodnight all!

David Blumenfeld is with Brickyard VFX. Check out their Website at: www.brickyardvfx.com.

Posted By David Blumenfeld on August 08, 2012 06:15 am