By Sam Nicholson
Virtual production (VP) has captured the imagination of the film and television industry. The convergence of advanced LED technology, motion tracking and real-time GPU rendering offers a new way of bringing remote locations onto a soundstage.
You can see your results in real time. You can finally avoid the sensory deprivation of shooting on green, along with the time and post costs associated with compositing. Sounds great, right? Faster, better and perhaps cheaper, virtual production has enticed hundreds of studios around the world to jump in. There are, however, serious limitations to current virtual production volumes that must be overcome in order to achieve the full promise of VP.
No matter how large your volume is, there is an edge to the LED wall where the visual illusion simply stops. Hybrid virtual production techniques now allow us to blend both LED and greenscreen simultaneously with the use of Megapixel’s Helios processor or Brompton GhostFrame. Epic Games and Blackmagic are rapidly providing the tools to increase our creative options on-set, with fully integrated cameras, lighting and software that make it possible to create a photoreal blend of the virtual world and the real world in real time.
While virtual production offers many remarkable advantages over shooting on location, like moving the sun or stopping time, it is ultimately limited compared to a real location. Up angles, down angles and shooting off the wall are boundaries that anyone who has shot on an LED volume knows well. So how can we combine the dependability, repeatability and convenience of shooting on a soundstage with the flexibility and creative versatility of a location shoot?
The answer is not in what we see, but in how we see it. Professional digital cameras have advanced considerably in terms of resolution, dynamic range and color depth, yet they still capture a finite, flat slice of reality. Current motion picture images are a simple, two-dimensional interpretation of our three-dimensional world. When motion picture cameras can accurately sense and recreate not only the light and textures of a scene but also the dimensions of that space, depth matting will move virtual production to the next level. This will require a combination of multi-camera arrays, sophisticated edge detection and parallax algorithms, most likely assisted by real-time AI, to calculate depth from a cinematic perspective.
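For readers curious about the underlying math, here is a minimal sketch, in Python with made-up numbers rather than any real camera array, of the classic parallax relationship a two-camera rig relies on: the farther away an object is, the smaller its shift between the two views, so depth can be recovered from focal length, baseline and disparity.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic stereo relation: depth = (focal length * baseline) / disparity.

    disparity_px    : per-pixel horizontal shift between the two cameras (pixels)
    focal_length_px : lens focal length expressed in pixels
    baseline_m      : distance between the two camera centers (meters)
    """
    # Avoid division by zero where no parallax was detected.
    safe_disparity = np.where(disparity_px > 0, disparity_px, np.nan)
    return (focal_length_px * baseline_m) / safe_disparity

# Toy example: a stand-in disparity map for two cameras 12 cm apart.
disparity = np.random.uniform(1.0, 64.0, size=(1080, 1920))
depth_m = depth_from_disparity(disparity, focal_length_px=2200.0, baseline_m=0.12)
```

In a real system the disparity map itself is the hard part; producing it at cinema resolution and frame rate is where the edge detection and AI assistance described above come in.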
Once depth information becomes available, we will be able to augment reality in real time, eliminating the need for cumbersome LED screens and combining foreground and background live-action elements into a single composite image as we shoot. The current boundaries of LED walls will thus be eliminated, and unlimited camera movement, much like shooting on a real location, will become possible.
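As a purely illustrative sketch, assuming per-pixel depth is already available for both the live-action plate and the virtual background (which is exactly the hard problem described above), a depth matte composite simply keeps whichever element is closer to the camera at each pixel:

```python
import numpy as np

def depth_composite(fg_rgb, fg_depth, bg_rgb, bg_depth):
    """Per-pixel depth merge: whichever element is closer to camera wins.

    fg_rgb, bg_rgb     : (H, W, 3) float images
    fg_depth, bg_depth : (H, W) per-pixel distances from the camera (meters)
    """
    fg_wins = (fg_depth <= bg_depth)[..., np.newaxis]  # broadcast mask over RGB
    return np.where(fg_wins, fg_rgb, bg_rgb)

# Toy 1080p example with synthetic plates and flat depth values.
h, w = 1080, 1920
fg, bg = np.random.rand(h, w, 3), np.random.rand(h, w, 3)
fg_z = np.full((h, w), 2.0)    # actor roughly 2 m from camera
bg_z = np.full((h, w), 10.0)   # virtual set 10 m away
frame = depth_composite(fg, fg_z, bg, bg_z)
```

Unlike a color key, nothing here depends on the actor standing in front of green; the matte comes entirely from geometry.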
Another major benefit of real-time dimensional image capture will be the creation of dimensionalized content from the individual motion picture images. If each frame contains depth information, it is a relatively simple (although computationally heavy) calculation to extract 3D models from the original photographic data. Promising results in this direction are already available with lidar, photogrammetry and NeRF imaging. This will solve another considerable challenge facing virtual productions: creating economical, photoreal assets for on-set playback.
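To show how per-frame depth leads to geometry, the sketch below back-projects a depth map into a 3D point cloud using a standard pinhole camera model; the intrinsics are assumed values for illustration, not those of any particular camera.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth map into 3D points with a pinhole camera model.

    depth_m : (H, W) depth in meters
    fx, fy  : focal lengths in pixels
    cx, cy  : principal point in pixels
    Returns an (H*W, 3) array of X, Y, Z camera-space coordinates.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column/row indices
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

# Example: a synthetic 1080p depth frame back-projected with assumed intrinsics.
depth = np.random.uniform(1.0, 20.0, size=(1080, 1920))
points = depth_to_point_cloud(depth, fx=1800.0, fy=1800.0, cx=960.0, cy=540.0)
```

Meshing, texturing and cleanup remain substantial steps, which is where the heavy computation comes in, but the raw geometry falls directly out of the photography.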
In the next few years, virtual production will advance rapidly through many iterations that blend advanced camera technologies, computer horsepower and AI-assisted software. GhostFrame, high-speed frame interleaving and LED/greenscreen hybrid shooting are all developing quickly. The primary challenge ahead of us will be a creative one: How will we use this new, powerful technology to tell better stories around the world?
Captions: Images from the set of HBO Max’s Our Flag Means Death
Sam Nicholson, ASC, founder/CEO of Stargate Studios