
GTC Conference: Advances in Virtual Production

By Mike McCarthy

I usually attend GTC in San Jose each spring, but that was interrupted this year by COVID-19. Instead, I watched a couple of sessions online. This fall, Nvidia normally would have held its GTC Europe, but it made that a global online event instead. As a global event, the sessions are scheduled at all hours, depending on where in the world the presenters or target audience are, hence the tagline, “Innovation never sleeps.” Fortunately, the sessions scheduled at 2am or 3am were recorded, so I could watch them at more convenient times. The only downside was not being able to ask questions.

Nvidia is building a supercomputer for the UK’s healthcare research using artificial intelligence.

While the “G” in GTC stands for GPU, much of the conference is focused on AI and supercomputing, with a strong emphasis on healthcare applications. Raytracing, the primary new feature of Nvidia’s recent hardware architectures, is graphics-focused, but it is limited to true 3D rendering applications and can’t easily be applied to other forms of image processing. Since I don’t do much 3D animation work, those topics are less relevant to me.

In-Camera VFX
The one graphics technology development that I am most interested in at the moment, and the focus of most of the sessions I attended, is virtual production, or more precisely, in-camera VFX. This first caught my attention at a previous GTC, in sessions about the workflow used on the show The Mandalorian. I was intrigued by it at the time, and my exploration of those workflow possibilities only increased when one of my primary employers expressed an interest in moving toward that type of production.

Filmmaker Hasraf “HaZ” Dulull using his iPad on a virtual production.

There were a number of sessions at this GTC that touched on virtual production and related topics. I learned about developments in Epic’s Unreal Engine, which seems to be the most popular starting point due to its image quality and performance. There were sessions that touched on applications that build on that foundation — to add the functionality that various productions need — and on the software-composable infrastructure that you can run those applications on.

I saw a session with Hasraf “HaZ” Dulull, a director who has made some shorter films in Unreal Engine. He is just getting started on a full-length feature film adaptation of the video game Mutant Year Zero, which is being created entirely in Unreal Engine as final-pixel renders. While the film is entirely animated, HaZ uses his iPad for both facial performance capture and virtual camera work.

One of my favorite sessions was a well-designed presentation by Matt Workman, a DP demonstrating his previz application Cine Tracer, which runs on Unreal Engine. He basically walked through the main steps of an entire virtual production workflow.

There are a number of complex components that have to come together seamlessly for in-camera VFX, each presenting its own challenges. First, you need a 3D world to operate in, possibly with pre-animated actions occurring in the background. Then you need a camera tracking system to sync your view with that 3D world, which is the basis of the simpler virtual production workflows.
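To make that tracking-sync step a little more concrete, here is a minimal Python sketch of the idea: pose data streams in from the tracking system and is applied to a virtual camera every frame. The packet layout, field names and port are hypothetical, invented purely for illustration; real stages use established tracking protocols or vendor SDKs feeding the engine.

    # Minimal sketch: keeping a virtual camera in sync with external tracking data.
    # The JSON packet layout and port below are hypothetical, for illustration only.
    import json
    import socket

    class VirtualCamera:
        def __init__(self):
            self.position = (0.0, 0.0, 0.0)   # meters, in stage space
            self.rotation = (0.0, 0.0, 0.0)   # pan, tilt, roll in degrees
            self.focal_length = 35.0          # millimeters

    def listen(port=9000):
        cam = VirtualCamera()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        while True:
            data, _ = sock.recvfrom(1024)
            pkt = json.loads(data)  # e.g. {"pos": [x,y,z], "rot": [p,t,r], "focal": 50}
            cam.position = tuple(pkt["pos"])
            cam.rotation = tuple(pkt["rot"])
            cam.focal_length = pkt["focal"]
            # Hand the updated camera to the render engine here.

The point is simply that every rendered frame is driven by where the physical camera actually is, which is what keeps the virtual view locked to the real one.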

To incorporate real-world elements, your virtual camera has to be synced with a physical camera so that you can record real objects or people and composite in the virtual background. Or, for true in-camera VFX, you have to display that background on an LED wall behind the subject. This requires powerful graphics systems to drive imagery on those displays, compensating for their locations and angles, and to render the 3D world onto them from the tracked camera’s perspective. Lastly, you have to be able to view and record the camera output, and presumably a clean background plate as well, to further refine the result in post.
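The LED wall rendering is the trickiest of those pieces, so here is a rough sketch of the geometry involved, purely as an illustration with made-up names: finding where the tracked camera’s view frustum lands on the wall, which is the region that has to be rendered from the camera’s perspective. (Unreal’s nDisplay handles this for you in a real setup.)

    # Rough sketch: project the tracked camera's frustum corners onto a flat LED
    # wall to find the region that must be rendered from the camera's perspective.
    import numpy as np

    def frustum_on_wall(cam_pos, cam_fwd, cam_up, hfov_deg, aspect,
                        wall_point, wall_normal):
        fwd = cam_fwd / np.linalg.norm(cam_fwd)
        up = cam_up / np.linalg.norm(cam_up)
        right = np.cross(fwd, up)
        tan_h = np.tan(np.radians(hfov_deg) / 2)   # half-width of the view
        tan_v = tan_h / aspect                     # half-height of the view
        corners = []
        for sx, sy in [(-1, 1), (1, 1), (1, -1), (-1, -1)]:
            ray = fwd + sx * tan_h * right + sy * tan_v * up
            t = np.dot(wall_point - cam_pos, wall_normal) / np.dot(ray, wall_normal)
            corners.append(cam_pos + t * ray)      # where that corner hits the wall
        return corners

    # Example: a camera 4m back from a wall at the origin, looking straight at it.
    corners = frustum_on_wall(np.array([0.0, 0.0, -4.0]), np.array([0.0, 0.0, 1.0]),
                              np.array([0.0, 1.0, 0.0]), 70.0, 16 / 9,
                              np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0]))

Everything outside that projected region only needs to be good enough to light and reflect onto the subject; the area inside it is what the camera actually photographs, so it gets the full-quality render.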

Each of these steps has a learning curve, leading to a very complex operation before it is all said and done. My big takeaway from all of my sessions at the conference is that I need to start familiarizing myself with Unreal Engine. Matt Workman’s Cine Tracer application on Steam might be a good way to start learning the fundamentals of those first few steps if you aren’t familiar with working in 3D.

Lenovo P620

Lenovo P620 & GPUs
Separately, a number of sessions touched on Lenovo’s upcoming P620 workstation, based on AMD’s Threadripper Pro architecture. That made sense, as it will be the only way in the immediate future to take advantage of the Ampere GPUs’ PCIe 4.0 bus speeds for higher-bandwidth communication with the host system. I am hoping to do a full review of one of those systems in the near future.
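For rough context on what that bus upgrade means (my own back-of-the-envelope numbers, not figures from the sessions), PCIe 4.0 doubles the per-lane transfer rate of PCIe 3.0, which works out to roughly twice the theoretical bandwidth of a x16 GPU slot:

    # Theoretical one-direction bandwidth of a x16 slot; PCIe 3.0 and 4.0 both
    # use 128b/130b encoding, so gen 4 simply doubles the per-lane rate.
    def pcie_x16_bandwidth_gb_s(gen, lanes=16):
        rate_gt_s = {3: 8.0, 4: 16.0}[gen]      # giga-transfers per second per lane
        return rate_gt_s * (128 / 130) * lanes / 8

    print(pcie_x16_bandwidth_gb_s(3))   # ~15.8 GB/s
    print(pcie_x16_bandwidth_gb_s(4))   # ~31.5 GB/s

In practice, sustained transfers land below those ceilings, but the extra headroom matters when you are pushing large frames between the host and the GPU.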

I also attended a session that focused on using GPUs to accelerate various graphics tasks for sports broadcasting, including stitching 360 video at 8K and creating live volumetric renders of athletes with camera arrays. As someone who rarely watches broadcast television, I have been surprised to see how far live graphics have come with various XR effects and AI-assisted on-screen notations and labels to cue viewers to certain details on the fly. The GPU power is certainly available; it has just taken a while for the software to be streamlined enough to use it effectively.
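As a toy illustration of one piece of that 360 stitching task, with dimensions and names of my own choosing, the core of building an equirectangular frame is mapping ray directions from the camera rig into pixel coordinates of the shared 360 image; real stitchers add lens calibration, reprojection and seam blending on top of this:

    # Toy example: map a 3D viewing direction to pixel coordinates in an
    # 8K (7680x3840) equirectangular frame.
    import math

    def ray_to_equirect(dx, dy, dz, width=7680, height=3840):
        lon = math.atan2(dx, dz)                   # yaw around the vertical axis
        lat = math.atan2(dy, math.hypot(dx, dz))   # elevation above the horizon
        x = (lon / math.pi + 1) / 2 * (width - 1)
        y = (0.5 - lat / math.pi) * (height - 1)
        return x, y

    print(ray_to_equirect(0, 0, 1))   # straight ahead -> center of the frame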


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

