Outlook: Depth mapping is the next evolution in color grading
Ian Vertovec
Issue: November/December 2024

Over the past year, the advancements in machine learning have been impressive. Auto-tracking, auto-roto and AI-powered facial recognition tools have streamlined workflows and enhanced color-grading capabilities. For example, Baselight can now automatically detect and track faces, allowing precise corrections, such as removing blemishes or wrinkles, that can be applied consistently across multiple shots. However, the most groundbreaking advancement that we’ve seen in a decade or more may be depth mapping. 
 

Photo: "The Old Man"

The power of depth mapping 
 
Depth maps are used to separate the foreground and background of a scene so that different color enhancements can be applied to each. In a depth map, the distance of objects from the camera is encoded as grayscale: white pixels indicate objects closer to the camera, while black pixels represent objects farther away.
 
Depth maps can help cinematographers achieve more realistic lighting effects with their colorist by allowing us to work with the scene in a truly 3D way. Currently, we simulate depth in 2D, but depth maps provide a genuine z-axis, enabling selective color corrections and effects based on distance from the camera. This precision eliminates the need for manual masking, letting us automatically select and enhance specific areas based on their distance from the lens. For example, we can enhance a subject's face by darkening the background and brightening the foreground, giving us much more realistic results.
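As a rough illustration of the idea (not the API of Baselight or any particular grading system), a depth-weighted exposure adjustment can be sketched in a few lines of NumPy. The gain values here are arbitrary, chosen only to show the brighten-near/darken-far blend described above:

```python
import numpy as np

def depth_weighted_grade(image, depth, near_gain=1.15, far_gain=0.85):
    """Blend two exposure adjustments using a grayscale depth map.

    image: float RGB array in [0, 1], shape (H, W, 3)
    depth: float array in [0, 1], shape (H, W); 1.0 = closest to camera
    near_gain / far_gain: illustrative exposure multipliers (assumed values)
    """
    w = depth[..., None]  # per-pixel blend weight, broadcast across RGB
    graded = image * (w * near_gain + (1.0 - w) * far_gain)
    return np.clip(graded, 0.0, 1.0)
```

In practice the depth weight would usually be shaped first, for example with a smoothstep around a chosen distance, rather than used raw, so the transition between foreground and background stays invisible.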
 

Photo: "The Old Man"

The future of AI and color grading 
 
I don’t see a future where AI takes over the creative work that happens in the color grade. Navigating the nuances of a day exterior with varying cloud cover, for example, will continue to require a human’s expertise and creativity. AI’s contributions are going to be in the ligaments and bones rather than the skin: giving us a 3D model of the scene in which the cinematographer and colorist can then play.
 
As AI continues to progress, we can expect more sophisticated depth-mapping techniques, real-time processing for smoother blends from frame to frame, and improved accuracy. From there, it’s possible that a movie could be shot entirely in deep focus, and then, in post, the cinematographer could work with their colorist to manipulate the depth of field and add bokeh, even varying the effect by keyframing within a single shot. The creative possibilities are going to be exciting to explore.
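The deep-focus idea above can also be sketched. A real defocus render would vary the kernel size (and bokeh shape) per pixel; this simplified version, written only for illustration, blends a sharp frame with a single blurred frame, weighted by each pixel's distance from a chosen focal plane. The function names and parameters are my own, not from any grading tool:

```python
import numpy as np

def box_blur(image, radius):
    """Naive box blur via shifted sums (fine for a sketch, slow on real frames).

    Edges wrap around because np.roll is circular; a production filter
    would pad instead.
    """
    acc = np.zeros_like(image)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            count += 1
    return acc / count

def fake_depth_of_field(image, depth, focus=0.5, max_radius=4):
    """Blend sharp and blurred frames by distance from a focal plane.

    image: float RGB array in [0, 1], shape (H, W, 3)
    depth: float array in [0, 1], shape (H, W); 1.0 = nearest to camera
    focus: the depth value that stays sharp (keyframeable per shot)
    """
    blurred = box_blur(image, max_radius)
    defocus = np.clip(np.abs(depth - focus) * 2.0, 0.0, 1.0)[..., None]
    return image * (1.0 - defocus) + blurred * defocus
```

Animating `focus` over a shot's duration would give the keyframed rack-focus effect described above.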
 
Ian Vertovec is a Supervising Colorist at Light Iron, the creative services division of Panavision (www.panavision.com).