Krishna Prasad joined FutureWorks (https://www.futureworks.in) as its CTO in 2022. An industry veteran, Prasad has conducted Motion Picture Association of America (MPAA) audits of six major studios, and he joined FutureWorks to share his expertise, expanding the company's infrastructure and R&D efforts.
Can you tell us about your day-to-day role as CTO of FutureWorks and what your responsibilities are?
"As the chief technology officer (CTO) at FutureWorks, my day-to-day role is multifaceted, involving a range of responsibilities aimed at ensuring the optimal functioning of our technological infrastructure and addressing the unique challenges faced by our company. In addition to the traditional CTO responsibilities, such as efficiency mapping, system upgrades and training for both floor and tech staff, my role extends to donning various hats, including those of chief information officer (CIO) and chief security officer (CSO). At times, it even requires embracing the role of an artist to creatively tackle the issues confronted by our creative professionals."
What emerging technologies in VFX and post production do you believe will define the industry in 2025?
“I think the key developments that will define the industry lie in technologies that are already advancing. These include GenAI, Universal Scene Description (USD), GPU advancements and, of course, virtual production.
“Virtual production: Virtual production (VP) is set to play a transformative role in improving planning and visualization and in optimizing overall costs. VP enables large production houses to experiment with multiple creative looks for their projects. To achieve this, some production houses may adopt parallel VP workflows across two or more VFX studios, allowing for deeper insights during the pre-production content creation process.
“AI in storyboarding: Directors and cinematographers are collaborating more closely than ever with pre-production teams, fostering a dynamic studio environment. For this reason, it’s essential for VFX studios to establish robust AI-driven pipelines at the storyboard stage. This approach will encourage early project tie-ins between production houses and VFX/post production setups, streamlining workflows and enhancing creative outcomes.
“USD adoption: Universal Scene Description (USD) is poised to be a game-changer for VFX studios, especially those managing high-speed, multi-project environments. Its adoption will greatly enhance efficiency and standardization across workflows [see the sketch following this answer].
“GPU computing for hyper-realism: GPU advancements are expected to bring hyper-realism closer to reality by 2025. This will depend on how much progress application developers have made in GPU integration over the past two years to enable more efficient computing.
“Addressing change resistance: On a lighter note, addressing ‘change resistance’ within teams is crucial for future growth. In larger studios (500-plus employees), even a handful of resistant individuals can slow progress significantly. For example, having just five such individuals could result in operations functioning as though it’s 2005 instead of 2025. Studios should prioritize fostering adaptability and openness to change as a key focus area.”
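To make the USD point above concrete, here is a minimal sketch of layered scene authoring using the open-source pxr Python bindings (available, for example, via the usd-core package). The shot names, prim paths and department split are invented for illustration and are not drawn from FutureWorks’ pipeline.

```python
# Minimal USD layering sketch: a layout step authors a base scene, and a
# lighting step overrides it in a separate layer without touching the
# original file. File names and prim paths are illustrative only.
from pxr import Usd, UsdGeom, Sdf

# Layout: author the base scene and save it to its own layer.
layout_stage = Usd.Stage.CreateNew("shot010_layout.usda")
UsdGeom.Xform.Define(layout_stage, "/World")
hero = UsdGeom.Sphere.Define(layout_stage, "/World/hero_prop")
hero.GetRadiusAttr().Set(1.0)
layout_stage.GetRootLayer().Save()

# Lighting: create a new layer that sublayers the layout file and
# records only its own overrides.
lighting_layer = Sdf.Layer.CreateNew("shot010_lighting.usda")
lighting_layer.subLayerPaths.append("shot010_layout.usda")
lighting_stage = Usd.Stage.Open(lighting_layer)

# Override the prop's radius; the opinion lands in the lighting layer,
# and the layout layer stays untouched.
override = UsdGeom.Sphere(lighting_stage.GetPrimAtPath("/World/hero_prop"))
override.GetRadiusAttr().Set(2.0)
lighting_layer.Save()
```

The layering model is where the standardization benefit comes from: every department reads and writes the same interchange format, and downstream changes compose non-destructively over upstream work.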
How are advancements in AI and machine learning reshaping workflows in post production and VFX?
“AI and machine learning are currently impacting several key areas of the VFX and post production pipeline, including:
“Storyboard and pre-visualization: Generative AI tools are being used to establish the look and feel of projects during the early stages, providing creative teams with a strong visual foundation.
“Shot and scene execution: Applications like Nuke are integrating AI and ML to accelerate the execution of shots and scenes, streamlining time-intensive processes.
“Performance and issue indicators: At FutureWorks, we are developing AI-driven ‘performance indicators’ and ‘issue indicators.’ These tools aim to identify potential inefficiencies and reduce downtime, allowing studios to enhance their overall efficiency and productivity [see the illustrative sketch following this answer].
“Despite these advancements, the widespread adoption of AI in mainstream production workflows remains a significant challenge. Many of the primary tools used in VFX and post production do not yet fully support AI and ML capabilities. Tool developers face the immense task of integrating AI into these tools over the next few years to create seamless, production-ready solutions.
“Additionally, deploying advanced generative AI within VFX and post production requires substantial infrastructure, such as large in-house or cloud-based GPU farms. These resources are not only expensive, but also scarce, posing further barriers to adoption at scale.
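Prasad does not detail how FutureWorks’ indicators are built, so the following is only a simplified, hypothetical sketch of the general idea: flag render tasks whose wall-clock times drift well beyond the historical baseline for their task type. The record fields, task types and threshold are assumptions made for illustration.

```python
# Hypothetical "issue indicator" sketch (not FutureWorks' actual tooling):
# flag render tasks whose wall-clock time drifts far past the median
# duration for their task type, so inefficiencies surface early.
from statistics import median

def issue_indicators(task_records, threshold=2.5):
    """Return (task_id, duration, baseline) for tasks whose duration
    exceeds `threshold` times the median for their task type."""
    # Group observed durations by task type (e.g. "comp", "fx_sim").
    durations_by_type = {}
    for rec in task_records:
        durations_by_type.setdefault(rec["task_type"], []).append(rec["duration_s"])

    baselines = {t: median(d) for t, d in durations_by_type.items()}

    flagged = []
    for rec in task_records:
        baseline = baselines[rec["task_type"]]
        if baseline > 0 and rec["duration_s"] > threshold * baseline:
            flagged.append((rec["task_id"], rec["duration_s"], baseline))
    return flagged

# Example usage with made-up farm log records.
records = [
    {"task_id": "comp_0041", "task_type": "comp", "duration_s": 620},
    {"task_id": "comp_0042", "task_type": "comp", "duration_s": 640},
    {"task_id": "comp_0043", "task_type": "comp", "duration_s": 2100},  # outlier
    {"task_id": "fx_0007", "task_type": "fx_sim", "duration_s": 5400},
]
for task_id, duration_s, baseline in issue_indicators(records):
    print(f"{task_id}: {duration_s}s vs. typical ~{baseline:.0f}s")
```

In a real pipeline the baseline would come from historical farm data and the signal would feed a dashboard or alerting system rather than stdout, but the shape of the idea is the same.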
With virtual production becoming more mainstream, what challenges and opportunities do you see for studios?
“The primary challenge lies in meeting the increasing demands for variations and modifications from clients — both internal and external. The flexibility of virtual production often results in an expanded scope of options, which can significantly increase computational load and space requirements.
“This heightened complexity may also create a cascading effect, requiring additional time from both clients and virtual production artists. These delays can stretch project timelines, especially if client availability becomes a bottleneck.
“As the saying goes, ‘Every challenge is an opportunity.’ These challenges also present unique advantages. By cultivating a team of highly skilled and communicative VP artists, studios can engage clients early in the production process. Early involvement fosters better alignment, strengthens collaboration and positions the studio as a trusted partner, ultimately benefiting the business.”
Finally, where do you see the industry in five years in terms of technological advancement?
“Given the scaling limitations of CPUs, I foresee a significant shift towards GPUs playing a more critical role in computing and processing tasks. This evolution should be supported by seamless integration of core applications with enhanced CPU and GPU power, aligned with operating systems like Linux and Windows. Currently, however, these components are not synchronized; each is developing at a different speed and in a different direction. It’s crucial for application developers and hardware teams to collaborate in a unified forum, addressing real-world studio challenges in a coordinated manner.
“While the companies behind products like Houdini and Katana, as well as Autodesk, have initiatives to ‘meet the customer,’ these processes are often slow. Instead, an open forum where customers can directly share feedback with architecture teams would accelerate progress. If such collaboration happens, the next five years could deliver remarkable advancements in scalability. Otherwise, AI development risks remaining focused on superficial tasks, rather than driving innovation in key areas of VFX and post production.
“It’s essential for OEMs and application developers to prioritize collecting relevant data over the next two to three years. This will enable groundbreaking innovations in the fourth and fifth years, something the tech market hasn’t seen much of since the introduction of advanced GUIs. I also anticipate broader access to GPU-based high-computing power for rendering within this timeframe as application porting improves.
“AI penetration into core applications could automate 25 to 40 percent of repetitive tasks, depending on the quality of VFX shoot planning and departmental workflows. This would reduce costs, shorten delivery timelines and enhance floor efficiency.
“Finally, we’ve observed that some applications remain single-threaded and read files from central storage five to ten times more slowly than from direct-attached storage on the same hardware and OS. With ongoing innovations, application efficiency in this area is expected to improve dramatically, enhancing overall computing performance in the next five years.”
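The storage gap described above is straightforward to quantify. Below is a small Python sketch that compares sequential read throughput for the same file on central storage and on direct-attached storage; the mount paths are placeholders, and a fair comparison needs cold caches or averaging over repeated runs.

```python
# Compare sequential read throughput for the same file on central storage
# (e.g. a NAS/SAN mount) and on direct-attached storage. Both paths below
# are placeholders. Note: the OS page cache can skew a repeated read, so
# run with cold caches or average several runs for a fair comparison.
import time

def read_throughput_mb_s(path, chunk_size=8 * 1024 * 1024):
    """Sequentially read `path` and return throughput in MB/s."""
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    candidates = [
        ("central storage", "/mnt/central/shot010_plate.exr"),  # placeholder path
        ("local disk", "/var/tmp/shot010_plate.exr"),           # placeholder path
    ]
    for label, path in candidates:
        print(f"{label}: {read_throughput_mb_s(path):.1f} MB/s")
```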