SIGGRAPH: sunday, Sunday, SUNDAY!
Posted by David Blumenfeld on July 26, 2010
I arrived at the show today at 1 pm, and was lucky enough to get a parking spot right in front of the door to the West Hall. While the show was by no means packed, it was definitely busier than I had expected for a Sunday. Of course, being the first day, and with the expo hall still being prepped for Tuesday, there wasn't a ton to do. I found media registration easily enough and grabbed my badge along with a pocket guide. Wifi at the event was working fine, and the iPhone app was helpful for checking my schedule.
After browsing the halls for a few minutes to see what was posted up, I made my way to today's course, "Physically Based Shading Models In Film and Game Production." I felt this course would have some practical application for my facility as we move forward with further development. I'll spend the remainder of tonight's blog discussing some things I personally got out of the course, as well as some ways these ideas can apply to those of you also involved in commercial visual effects production. Everything here applies to feature film production, and increasingly to high-end game production as well, but I have never worked in games, so I won't attempt to speak intelligently about that; as for films, most studios working on projects of that size have multiple dedicated departments responsible for developing custom in-house solutions for this. In commercial production, development time and resources are usually quite limited (unless the company is part of a feature effects facility), so the work is mostly done with off-the-shelf software plus only slight customizations. When custom development is practical, it is either small in scale or so long term that it ends up broader in scope and facility oriented rather than production specific.
Before diving in, I'll give a bit of background on our current shading pipeline and setup. Being a Maya/Renderman Studio facility, our general workflow is based around a raytraced global illumination shading model using the provided Renderman Delux shader, which lets us add components as needed to get the desired look without relying on custom shaders. On set, we'll capture HDR images using a Canon 1D Mark IV with an 8mm fisheye lens mounted to a roundabout with click stops set at 120 degrees (three sets of photos, to guarantee enough image overlap for stitching). For each angle, we'll shoot a bracket of seven exposures in raw format, usually around 1-1.5 stops apart (more on this in a bit, based on today's talk). Whenever possible, we shoot an 18 percent gray ball (which I need to replace after a recent breakage), and ideally a chrome ball (though this doesn't always happen). Back at the shop, we'll merge the brackets into single radiance files in Photoshop CS, and then stitch the three spherical images into a single lat-long map using Realviz Stitcher. This yields a roughly 8k image (slightly smaller due to the camera's resolution) in floating point radiance format.
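For anyone who'd rather script the bracket-merge step than do it in Photoshop, the same idea can be sketched in a few lines of Python with OpenCV. Treat this as a rough sketch rather than our exact pipeline; the file names and exposure times below are placeholders, not our actual capture metadata.

```python
# Rough sketch of merging one exposure bracket into a Radiance HDR file.
# File names and exposure times are placeholders, not real capture metadata.
import cv2
import numpy as np

# Hypothetical 7-frame bracket for one of the three angles, ~1.5 stops apart.
files = ["angle0_exp%d.tif" % i for i in range(7)]
times = np.array([1/1000.0, 1/350.0, 1/125.0, 1/45.0, 1/15.0, 1/6.0, 0.5],
                 dtype=np.float32)

images = [cv2.imread(f) for f in files]  # 8-bit frames from the raw conversion

# Recover the camera response curve, then merge into a linear radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

cv2.imwrite("angle0.hdr", hdr)  # Radiance (.hdr) floating point output
```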
We recently acquired a Canon 5D at the facility, which will allow us to shoot larger resolution images, but I haven't played with its exposure bracketing yet and am not sure if there are any limitations with it. From here, we'll take our image back into Photoshop, paint out the mount base, and perform any cleanup that seems necessary. Finally, we'll save out a flopped version of the image, since the environment ball in Renderman Studio inside Maya has a flopped coordinate system; without this step the image reads backwards. At this point, if we have images of our gray ball, we'll set that up in one of our scenes with the unwarped plates (working with reverse gamma corrected images at gamma 0.565, since we render with a viewing lookup of 1.77) to obtain a lighting match using the environment and a single spot for shadow casting. From there, we'll typically create a second environment light so we can easily separate the specular contribution of one from the diffuse contribution of the other, and then add additional studio-style environment lights as necessary for rim lighting and so on. While this process usually gives us pretty nice results in a short amount of time, it has a number of drawbacks, some of which I'll discuss below as I recap the presentation.
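If you want to script the flop and the plate degamma instead of doing them by hand, here's a minimal sketch assuming OpenCV and NumPy. The file names are hypothetical, and note that 0.565 is simply 1/1.77, so "reverse gamma at 0.565" amounts to raising the plate to the 1.77 power (the exact exponent direction depends on your tool's gamma convention).

```python
# Minimal sketch of the flop and the plate degamma steps. File names are
# hypothetical; 0.565 is just 1/1.77, the inverse of the viewing lookup.
import cv2
import numpy as np

# Flop (mirror horizontally) the stitched lat-long map for the environment ball.
latlong = cv2.imread("env_latlong.hdr", cv2.IMREAD_UNCHANGED)
cv2.imwrite("env_latlong_flopped.hdr", np.ascontiguousarray(latlong[:, ::-1, :]))

# Reverse gamma correct an 8-bit unwarped plate for the lighting scene.
plate = cv2.imread("plate_unwarped.tif").astype(np.float32) / 255.0
plate_linear = np.power(plate, 1.77)   # i.e. applying "gamma 0.565"
cv2.imwrite("plate_degamma.hdr", plate_linear)
```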
Now for the course overview. The presenters (in order of presentation) were Naty Hoffman from Activision (games), Yoshiharu Gotanda from tri-Ace (games), Ben Snow from ILM and Weta (film), and Adam Martinez from Imageworks (film). Naty led off the talk with an overview of how surface shading is calculated, along with a brief recap of the BRDF (Bidirectional Reflectance Distribution Function). He spoke about the notion of an optically flat surface (where the perturbations in a surface are smaller than the wavelength of the visible light interacting with it, such as at the atomic level), as well as microfaceting (where small surface imperfections play a part in the directional scattering of light, and in how a given ray is shadowed or masked by adjacent imperfections). He also discussed importance sampling, whereby different methods can be used to speed up the raytrace calculations by concentrating samples in the areas of the scene where the lighting makes a large contribution to the calculated pixel result, while culling less important areas. Finally, he touched on the importance of not only rendering at the correct gamma, but painting your textures with the same compensation, something we currently do at our own facility.
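For context, this is the standard reflectance integral the BRDF plugs into (my notation, not lifted from the course slides); importance sampling is essentially about spending your samples where this integrand is largest:

```latex
L_o(\mathbf{v}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, L_i(\mathbf{l}) \, (\mathbf{n} \cdot \mathbf{l}) \, d\omega_{\mathbf{l}}
```

Here f is the BRDF, L_i is the incoming radiance from direction l, n is the surface normal, and L_o is the radiance leaving toward the view direction v.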
Yoshiharu spoke about some specific situations at his studio, and how switching from their previous ad hoc shading model (under which their lighters were basically compensating for shading inconsistencies by hand) to a custom physically based shading model written in-house improved the overall look of textured elements and the lighting response, specifically on current console platforms such as the PlayStation 3.
Next up was Ben, with a discussion of how ILM used to light and shade their scenes, and how they have been doing it since Iron Man (his discussion also covered Terminator Salvation and Iron Man 2) using a physically based shading model. There were a number of ideas and tips I took away from his talk, which I will share here. Sadly, he had to leave before the Q&A session at the end, as I would've liked to ask him a question or two. Some interesting aspects of building physically based shading models involve importance sampling and calculating with normalized values (especially the specular contribution in relation to surface roughness); see the sketch below.

Of particular interest was his continued use of a chrome ball on set. While we are in the habit of shooting an 18 percent gray ball (a known-quantity object to match against in our lighting setups), we don't shoot a chrome ball, since we obtain our HDRs through fisheye multiple-exposure bracket photography. When we create our lighting setup, we build a chrome ball in the digital scene, but have nothing to match it to. Shooting a real chrome ball would give us that match object, and though it seems quite obvious, this is something I intend to start doing from here on out for that reason alone.

Another interesting tidbit he shared had to do with the brushed metal on the Iron Man suit. In their look development phase, they originally painted displacement maps of the brushed metal streaks, as we do when we want to recreate that look. This invariably produces sparkling artifacts, which requires turning the sampling up to a high level, and the maps have to be set up at different scales for each shot. Instead, they did away with the painted maps and laid out the UVs for each surface running in the direction of the brushing. They then set up their shader to accept separate roughness values (if I recall correctly) in U and V, creating a brushed effect that produces no sampling artifacts and works correctly at any distance.

The next interesting item was that, while they shoot their HDRs much the same way we shoot ours, they capture their exposures 3 stops apart, while we tend to use 1-1.5. Larger gaps in exposure will certainly cover more of the range between the deepest shadows and the light sources, but I tend to run into stitching problems at that spacing, so I definitely need to do some more experimentation to see whether 3 stops gives better results. He also demonstrated their use of a standardized environment for lookdev: the same plate and a well controlled environment light, used for the basic lookdev of every object. It seems to me this is only really workable with a physically based shading model, since non-physically based setups tend to work fine in one environment but produce substandard results in another. I am hopeful that if we switch over to a physically based model, we can implement something along these lines as well, as it seems a much more time-efficient way to work.
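To make the "normalized specular" idea concrete, here is a minimal sketch of one commonly cited normalization for a Blinn-Phong lobe. This is purely illustrative and assumes nothing about ILM's actual shader; the point is just that the normalization factor keeps the lobe's total energy roughly constant as the exponent (roughness) changes, so a lighter can sharpen or soften a highlight without the material dimming or blowing out.

```python
# Illustrative only: one commonly cited normalization for a Blinn-Phong
# specular lobe, not ILM's actual shader. The (exponent + 8) / (8 * pi)
# factor keeps total reflected energy roughly constant across roughness.
import math

def normalized_blinn_phong(n_dot_h, n_dot_l, exponent, spec_intensity):
    """Return the specular contribution for one light sample."""
    norm = (exponent + 8.0) / (8.0 * math.pi)
    return spec_intensity * norm * (max(n_dot_h, 0.0) ** exponent) * max(n_dot_l, 0.0)

# A sharp highlight and a broad one reflect comparable total energy,
# so adjusting roughness doesn't change the material's overall brightness.
print(normalized_blinn_phong(0.98, 0.7, 400.0, 1.0))
print(normalized_blinn_phong(0.98, 0.7, 10.0, 1.0))
```

For the brushed-metal trick, an anisotropic variant (Ashikhmin-Shirley, for example) uses two separate exponents, one along the U tangent direction and one along V, which is in the same spirit as the shader behavior Ben described.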
Finally, Adam presented the advances Sony Imageworks has made in implementing a physically based raytracing model in their Arnold renderer, and how they have used it in conjunction with area lighting, texture-mapped geometry as light sources (Ben also demonstrated a few sets with HDR-mapped geometry acting as set lighting), and ensuring that all lights have decay preset on them.
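On the decay point: with physically based lights, intensity falls off with the square of the distance rather than reaching everything at full strength. Here is a tiny sketch of that falloff, using a hypothetical helper rather than Arnold's actual API:

```python
# Hypothetical helper illustrating inverse-square (quadratic) light decay;
# this is not Arnold's API, just the falloff physically based lights obey.
def point_light_contribution(intensity, distance, n_dot_l):
    return intensity * max(n_dot_l, 0.0) / max(distance * distance, 1e-6)

# Doubling the distance quarters the contribution.
print(point_light_contribution(100.0, 2.0, 1.0))  # 25.0
print(point_light_contribution(100.0, 4.0, 1.0))  # 6.25
```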
Overall, the notion of a real-world, physically based shading model is a fantastic development. Being able to work in a manner that allows not only quicker, easier lookdev, but also material behavior that works correctly in multiple lighting situations is incredibly appealing. Of course, there is a natural tradeoff with this sort of approach when it comes to render times. During the Q&A session, Adam was asked about the render time for one of the images he discussed, which turned out to be roughly 14 hours. While render times like that may be acceptable (even if not entirely desirable) at a large facility, smaller facilities like mine simply cannot deliver shots at those kinds of times; in fact, for our size and required turnaround, render times exceeding one hour are generally unacceptable except in rare cases. Moving forward, we will be looking into either new custom development in our rendering pipeline, or adding other renderers that offer these features off the shelf, to see if we can get a better workflow with the results we are looking for. Taking this course definitely opened my eyes to some fantastic advancements in this realm, and I intend to dig up more information on it during the rest of the show. I am also curious to see how the new GPU rendering (using some of the newer graphics cards with CUDA acceleration) might be able to help us along these lines as well.
In conclusion, it was a nice first day of the show, and hopefully you'll also find this information useful for your own facility. If you have the opportunity to check some of this out during the rest of the conference, it's definitely worth your while, and if not, I would highly recommend looking it up on the web to learn how this type of shading can improve the final look of your images.
For tomorrow, I’ll start the day off with a presentation about Avatar, followed by an illustration class using the new painting tools in Photoshop CS5 (we’re planning to upgrade from CS3 shortly). After lunch, I’ll attend a demo of a LIDAR scanning session, see a session on the Making Of Avatar, hear a panel discussion with Ed Catmull about the early days of CG, and possibly check out the Electronic Theater if I’m not too worn out by that time. Check back tomorrow for a summary of the day’s events. I’ll try to keep that post a bit less technical! And now to bed for a bit of recharge before 5am rolls around.