Roundtable: Virtual Production

By Randi Altman

Virtual production has been around in some form for years, but what we’ve recently come to know as “virtual production on the LED volume” has evolved to offer more creative and technical options. These workflows are now more flexible and more attainable for a variety of budgets. But VP isn’t for every shot or every movie. As one person we spoke to said, “It’s not one-size-fits-all.”

Let’s find out more from those who make the gear and those who make the tools sing…

Impossible Objects’ Luc Delamare

This virtual production studio provides production, visual effects, animation, VAD (virtual art department) and visualization using real-time technology and LED volumes.

Which aspect of VP are you working in now versus what you’re planning for the near future?
We’re currently spread across a few areas of virtual production, including ICVFX (in-camera visual effects) and LED volume shoots, virtual art department, previsualization and final pixel CG animation straight out of Unreal Engine. In the past we’ve had success with hybrid VP and greenscreen workflows, and in the future, we’re hoping to merge our visual effects pipeline in the real-time world.

Are you using VP to save money? Work more efficiently? Both?
We find ourselves constantly asking which tool is right for the creative. Sometimes it’s ICVFX, letting us condense the post schedule and skip out on longer VFX workflows. Sometimes it’s fully in-engine visualization for automotive work, letting us operate an entire CG pipeline out of one software package. No matter what the work is, if we’re engaging with any of the virtual production tools at our disposal, it’s because it allows our teams greater creative efficiency, which tends to yield time and budget savings for us and our clients.

You mentioned using LED volumes or greenscreen. Can you elaborate?
At Impossible Objects, we are leaning more and more into LED volumes over greenscreen with hybrid VP workflows, although there’s always a time and a place for each. With ICVFX, the industry has discovered that the advantage of having key creatives, clients and artists all working together in a real-time environment is often reason enough to engage in the “virtual production” that the general public associates with the term.

How much of your live filming relies on real-time 3D versus prerendered 3D or video content?
We have an in-house virtual art department, so the majority of our projects are built in real-time 3D, which allows us to design and sculpt the creative as close to the source as possible. We pride ourselves on using these tools efficiently and not forcing them when they aren’t necessarily the best solution. We’re also familiar with plate photography being displayed for ICVFX, which has become increasingly common for automotive productions.

What is the hardest part of the VP workflow for you and your company?
When we’re able to show clients and agencies that we can essentially eliminate the need for heavy post work by engaging in ICVFX, it’s always difficult to convey that there is a balance to be had in what then becomes a more extensive preproduction period. Even though designing virtual environments has become a more efficient process, it is still important to allow for iterations and feedback in preproduction. With ICVFX, much of the work gets baked into the image captured during production. “Fix it in post” will be translated to “fix it in prep,” which, despite the appearance, is a difficult swap to make when you’re talking about a process that people are already familiar with.

What software packages are you using in your workflows?
Our virtual production and animation pipelines are rooted in Unreal Engine, and our visual effects are in SideFX Houdini and Foundry Nuke.

Puget Systems’ Kelly Shipman

Puget specializes in high-performance, custom-built computers and provides personal consulting and support.

What tools do you offer for VP workflows?
We offer a wide range of workstations and rackmount servers for a variety of virtual production workflows. Whether it is content creation systems for the VAD, multiple servers to power large LED volumes or anything in between, we have systems that will allow people to bring their visions to life.

We’ve also partnered with Vu in making their Vu One LED packages. These are an all-in-one product that comes with all the hardware and software needed to be up and running with ICVFX.

Are these workflows part of the higher end or available/affordable for all?
Our products are available in a variety of price points and can be customized to fit the end user’s goals and budget. Our consultants will work with you to ensure you get the best product for your needs.

What are you doing to make workflows align with standard practices?
While we don’t engineer new practices or workflows, we work closely with hardware vendors to be able to offer and validate uncommon parts needed for cutting-edge workflows, such as network cards for SMPTE ST 2110, sync cards, etc.

Also, as mentioned before, we’ve partnered with virtual production expert Vu Studios in bringing its Vu One to market. This is a plug-and-play product that allows users to get started filming on LED walls with ease. It supplies all the necessary hardware and software so that users don’t need to reach out to a dozen vendors and make sure everything is compatible.

What standards are missing or incomplete/not generally supported to allow for better integrations?
The biggest standard that is not completely supported yet is SMPTE ST 2110. While it is standard across much of the broadcast industry, it is not easily added to many ICVFX workflows. Much of this is due to hard-to-acquire hardware, such as Nvidia’s BlueField DPUs and ConnectX SmartNICs, as well as supported LED processors. The DPUs and NICs are low-volume products, and with the rise of GPU-based AI servers, there is even more demand for them.

How are you working with NeRF — or do you plan to — and what does that offer the industry?
We work hand in hand with our customers that are working with NeRFs to identify their specific workflows and perform testing to determine the best hardware for them. At this point a single, widespread industry standard workflow has not emerged. Our goal is to support our users as they investigate this workflow and to be ready as it becomes more standardized.

What haven’t we asked that’s important?
As virtual production has grown and matured, the need to address the specific workflows of different aspects is also growing. The hardware and software needs of ICVFX, VR, previz, mocap, greenscreen, asset creation, etc. are all different. VP touches on every aspect of filmmaking and is used outside of LED volumes.

There are also numerous misconceptions about getting started in this field. Many believe it takes a massive amount of specialized hardware to even begin, but there are many ways a person can learn the workflows with minimal investment while still achieving impressive results.

Pixotope’s David Dowling

Pixotope offers content owners and producers a sustainable end-to-end virtual production platform.

What tools do you offer for VP workflows?
Our primary goal is to streamline and simplify virtual production tools and workflows, allowing creators to focus on creating captivating and immersive visuals without the burden of technical challenges. We possess a unique expertise in this field: In the early stages, Unreal Engine lacked crucial broadcast capabilities, such as key and fill functionality, which we developed from the ground up and integrated into the engine.

It’s essential here to note that we’re not a plugin. Rather, we’ve crafted a virtual production ecosystem that harnesses the power of Unreal Engine while maintaining full control over our solution. The user experience we’ve designed atop Unreal Engine ensures that creators find it user-friendly, dependable and practical.

Pixotope offers a comprehensive, real-time virtual production platform that encompasses all essential components for augmented reality, extended reality and virtual sets. This includes comprehensive camera- and talent-tracking capabilities. We provide the graphics engine responsible for generating virtual elements in real-time virtual production, seamlessly integrating them with physical elements. Our camera-tracking technology enables the seamless blending of real camera movements with virtual camera movements, ensuring perfect alignment between the real and virtual worlds.

Are these workflows part of the higher end or available/affordable for all?
Our mission is to make virtual production accessible to all creators, both in terms of cost and technical complexity. Pixotope integrates with existing workflows and technology, simplifying implementation and adoption, as it is specifically designed to combine with partner technologies and external data sources. Our solutions have been developed with the aim of reducing operational costs through reliable, easy-to-use tools that require fewer people and less support to operate.

We also enable accessibility through the Pixotope Education Program, an industrywide initiative that addresses the current talent shortage while training the next generation of virtual production storytellers. This initiative provides students and educators with access to Pixotope software and tools to grow their skills and connects them with industry experts from the global Pixotope community. We work with the institutions to adapt the program to better support, train and inspire virtual production talent.

While the Pixotope Education Program gives students access to virtual production tools within educational facilities, we recognize that many students do not have the opportunity to develop and practice their skills outside of those studios or labs.

Enter Pixotope Pocket. Launched in conjunction with the Pixotope Education Program, the Pixotope Pocket mobile app provides aspiring virtual production creatives with what they need most: easy, unfettered access to augmented reality and virtual studio tools and workflows — outside of the classroom. Students only need their phones and a PC to be able to engage with virtual production tools and workflows and create powerful and immersive content wherever the inspiration strikes, even when it’s in their dorm rooms.

What are you doing to make workflows align with standard practices?
There is an argument to be made that virtual production workflows are simply too different from standard production workflows to align with them. With this in mind, it’s important to acknowledge that (in some cases) new workflows or elements are required to make virtual production “work” in media and entertainment. However, this also offers an opportunity for accessibility. Let’s take lighting design as an example. By providing an exceptional user experience and seamless integration, virtual production tools simply become another tool in the lighting designer’s toolkit. The same approach can be applied to the virtual environment as to the physical set. By focusing on making virtual production tools easy to understand and use, we enhance, rather than change or disrupt, existing workflows and practices.

What standards are missing or incomplete/not generally supported to allow for better integrations?
Something that both the SMPTE RIS OSVP initiative and others are working on is a standardized approach to carry and represent the data needed to align the physical camera output with the virtual world, alongside the video signal(s). This data includes elements such as lens data (focal length, aperture, focus) and camera position. A standardized approach for this would not only simplify integration but also potentially reduce the amount of hardware needed, especially on-set.
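
To make the idea concrete, the sketch below shows one hypothetical shape such a per-frame record could take, written in Python. The field names and units are illustrative assumptions only, not the SMPTE RIS OSVP working group’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class FrameTrackingSample:
    """Hypothetical per-frame camera/lens sample, timecode-locked to the video signal."""
    timecode: str            # e.g. "01:02:03:04", matched to the corresponding video frame
    # Camera pose in a stage-relative coordinate system
    position_m: tuple        # (x, y, z) in meters
    rotation_deg: tuple      # (pan, tilt, roll) in degrees
    # Lens state needed to match the virtual camera to the physical one
    focal_length_mm: float
    focus_distance_m: float
    aperture_t_stop: float
    sensor_width_mm: float   # needed to derive horizontal field of view from focal length

# One sample of the kind a tracking system might emit once per frame
sample = FrameTrackingSample(
    timecode="01:02:03:04",
    position_m=(1.20, 1.55, -3.40),
    rotation_deg=(12.5, -3.0, 0.1),
    focal_length_mm=35.0,
    focus_distance_m=2.8,
    aperture_t_stop=2.8,
    sensor_width_mm=36.0,
)
```

Carrying a stream of records like this alongside, or embedded in, the video signal is essentially what the standardization effort Dowling mentions would formalize.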

How are you working with NeRF — or do you plan to — and what does that offer the industry?
NeRFs’ ability to capture complex surface details from 2D images makes them particularly handy for mapping out real environments that we want to blend with virtual graphics. For example, this is useful in a studio where you want to use augmented reality or blend physical objects with virtual objects against an LED volume or greenscreen, or in outdoor environments where augmented reality is used.
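
For context, a NeRF is a learned radiance field that is rendered by integrating color and density along each camera ray, using the standard formulation from Mildenhall et al.:

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))\,ds\right)$$

Here σ is volume density and c is view-dependent color along the ray r(t) = o + td; that view dependence is what lets a NeRF reproduce the detailed, reflective surfaces Dowling describes, provided the capture covers the environment from enough angles.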

Pier59’s Jim Rider

Pier59 Studios is a NYC-based virtual production studio offering an LED volume featuring a main 65-foot curved LED screen and a 40-foot articulating ceiling.

How do you work in VP? What are some examples of your VP work?
Since launching our large LED volume in February of this year, Pier59 Studios has hosted a variety of productions for fashion and luxury brands, including Neiman Marcus, DKNY and Tom Ford. HBO’s Succession and several other innovative productions have also taken place on our virtual production stage. It is an amazingly flexible and capable production tool, allowing creative control over the location, the weather, the season and the time of day – all while reducing overall production costs in an ecologically conscious way.

Pier59 Studios’ LED volume also serves as a canvas for content during live events. Keynotes, product launches, panels and more are elevated by the integration of the LED wall, providing our guests with a completely immersive experience.

Do you find people are using VP to save money? Work more efficiently? Both?
Virtual production can be a highly effective solution for a range of production and creative styles. For example, if a project involves multiple locations and/or multiple sets, the benefits become obvious: Reduced travel, smaller physical sets and faster changes allow for more camera setups per day.

If a production is set in a location that is physically or economically inaccessible, whether that’s Mars or the trading floor of the New York Stock Exchange, a 3D representation of that environment can be ported to the LED wall. Overall, a faster pace on a set tailored to a production’s needs represents the financial and workflow benefits of virtual production.

You offer an LED volume, no greenscreen?
Our Stage C features a 65-foot by 18-foot LED volume with an articulated ceiling and four wild walls. Feeding content to our main wall and filming directly against it results in an in-camera shot with little or no need for post. We often call this “capturing final pixel.”

Using the LED wall as a greenscreen is certainly possible, but it has substantial drawbacks (green spill, etc.) and requires substantial post. Shooting greenscreen takes away one of the key benefits of virtual production: getting the shot in-camera. However, because of the robust color space of our volume, if a client were to use the wall as a greenscreen, it would be an easier and more precise key than a traditional greenscreen.

What is missing to make workflows more efficient and cost-effective?
Virtual production requires in-depth technical and creative planning. For example, failing to account for the time it takes to blend the virtual with the practical can bottleneck a production schedule. To ensure the technology and workflow are best leveraged, Pier59 Studios’ virtual production team acts as an extension of a client’s crew, providing expertise and consultation regarding virtual production techniques and practices.

What is the hardest part of the VP workflow for you and your company?
There is still tremendous technical innovation happening in the field of virtual production. But in the past few years, the key processes necessary to make it work have been sorted out. LED panels with tight pixel density, accurate camera-tracking systems and powerful game engines have matured to a point of high functionality and reliability.

The biggest challenge now is education and adoption. Many clients are not familiar with the process, which has many elements of the traditional production workflow but functions on a different, more front-loaded timeline and introduces technical challenges that new users may not anticipate. For that reason, Pier59 is hugely active in educating our potential clients and providing essential consultative work so that production crews are comfortable and confident using our technology.

Pier59 is also eager to introduce potential clients to virtual production by offering test shoots. We have found that once creative agencies step into our LED volume, they are openly excited about embracing it for future projects.

What haven’t we asked that’s important?
One of the main questions our clients ask is, “Where do we source our content from?” We can create custom 3D environments to suit a project, but that does require lead time (work that traditionally would have been done in post).

One of the technical advancements that will have a major impact on virtual production is generative AI to assist in creating environments. This is starting to happen now, but within a few years, we could very well be able to generate custom environments in a fraction of the time that they take to create now.

Assimilate’s Jeff Edson and Mazze Aderhold

Assimilate offers streamlined virtual production solutions for projection mapping, image-based lighting and live compositing with Live FX and Live FX Studio.

Jeff Edson

What tools do you offer for VP workflows?
Jeff Edson: Assimilate has developed Live FX and Live FX Studio for virtual production workflows. Live FX is mainly targeted at greenscreen studios, where it acts as the live keyer and compositor. It also features an extensive toolset for image-based lighting (IBL), which got a significant upgrade in version 9.7.

Live FX Studio targets LED wall-based virtual production scenarios, where its main job is projection mapping of 2D, 2.5D, 3D and 180/360 content onto LED volumes of any size, including virtual set extensions. Additionally, it can take care of IBL tasks and even be controlled from a lighting console simultaneously, all while playing out content to the LED volume.
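
Assimilate hasn’t published Live FX Studio’s internals, but planar projection mapping of this kind generally reduces to an off-axis (asymmetric) perspective projection from the tracked camera position onto each LED wall plane. Below is a minimal Python sketch of that math, following Kooima’s well-known generalized perspective projection and assuming a flat wall defined by three corners; a real volume would repeat this per wall section and feed the extents into the engine’s projection matrix.

```python
import numpy as np

def off_axis_frustum(pa, pb, pc, eye, near):
    """Frustum extents for a planar LED wall as seen from the tracked camera.

    pa, pb, pc: wall corners (lower-left, lower-right, upper-left) in world space.
    eye: tracked camera position. near: near-plane distance.
    Returns (left, right, bottom, top) at the near plane, usable in an
    asymmetric projection matrix, plus the wall's orthonormal basis.
    """
    pa, pb, pc, eye = (np.asarray(p, dtype=float) for p in (pa, pb, pc, eye))
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # wall "right" axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # wall "up" axis
    vn = np.cross(vr, vu)                      # wall normal, pointing toward the camera
    vn /= np.linalg.norm(vn)

    va, vb, vc = pa - eye, pb - eye, pc - eye  # camera-to-corner vectors
    d = -np.dot(va, vn)                        # camera-to-wall distance
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return (left, right, bottom, top), (vr, vu, vn)

# Example: a 10m x 3m wall lying in the z = 0 plane, camera 4m back and 1.6m up
extents, basis = off_axis_frustum(
    pa=(-5.0, 0.0, 0.0), pb=(5.0, 0.0, 0.0), pc=(-5.0, 3.0, 0.0),
    eye=(0.5, 1.6, 4.0), near=0.1)
```

Recomputing these extents every frame from the tracking data is what keeps the perspective on the wall correct as the physical camera moves.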

Are these workflows part of the higher-end or available/affordable for all?
Mazze Aderhold: It’s a good mix. We developed the more affordable Live FX specifically for indie studios that shoot predominantly on greenscreen. LED volumes by nature require a higher budget, which is why Live FX ships with more sophisticated functionality to address the media server requirements on these productions. That being said, Live FX Studio is still incredibly cost efficient — not only in terms of cost for the software, but also in its hardware requirements.

Mazze Aderhold

Where other solutions might require multi-node rendering clusters stuffed with high-end hardware, Live FX Studio can drive even the biggest LED volumes practically out of a single box. This keeps the cost of the required hardware, and the resulting power consumption, down by a lot. Because it is a fairly lightweight application that can be installed on essentially any Windows or macOS workstation, we offer a monthly subscription plan and annual licenses alongside perpetual license options. All of this makes Live FX very accessible and suitable for many studios.

What are you doing to make workflows align with standard practices?
Edson: One thing that helps is our history in film and television workflows. Assimilate has been developing Scratch for color grading, finishing and dailies transcoding for the past two decades.
Having done that, we know the language of both technical and creative on-set professionals — it’s a language that our UI speaks.

Live FX is not a programming environment but a creative tool for realizing complex virtual production scenarios quickly and easily. Our straightforward UI helps new users learn Live FX in a very short time and get great results fast.

Also, because of our background in post, Live FX lets users capture metadata on-set, from live SDI metadata through camera-tracking information down to user-input metadata like scene, take and annotations, and prep everything for post (VFX) down the line. This is an important part of the workflow that is often only an afterthought in other VP solutions.

What standards are missing or incomplete/not generally supported to allow for better integrations?
Aderhold: Standards are great — that’s why we have so many, as a wise man once said. Generally, it really depends on which standards are accepted most in our industry. USD (Universal Scene Description), an open format for 3D environments, is getting some traction these days, and SMPTE ST 2110 promises to greatly simplify image pipelines on-set, especially for LED volumes. Camera-tracking solutions are embracing the generic FreeD protocol more and more.

What is currently missing is a standardized format that allows for metadata — camera tracking metadata in particular — to travel from on-set into post.
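
For readers unfamiliar with FreeD, mentioned above: it is a small, fixed-size binary message sent once per frame over UDP or serial, which is part of why it has become a de facto on-set tracking interface even though it carries no lens metadata beyond raw zoom and focus encoder values. The sketch below decodes the commonly published 29-byte D1 pose packet; the layout and scaling factors are taken from publicly circulated descriptions of the protocol, so treat it as an illustration and verify against your tracking vendor’s documentation.

```python
def _s24(b: bytes) -> int:
    """Signed 24-bit big-endian integer."""
    v = (b[0] << 16) | (b[1] << 8) | b[2]
    return v - (1 << 24) if v & 0x800000 else v

def parse_freed_d1(pkt: bytes) -> dict:
    """Decode a FreeD type-D1 camera pose packet (29 bytes, layout as commonly published)."""
    assert len(pkt) == 29 and pkt[0] == 0xD1, "not a D1 packet"
    # Checksum: 0x40 minus the sum of the first 28 bytes, modulo 256
    assert (0x40 - sum(pkt[:28])) & 0xFF == pkt[28], "checksum mismatch"
    return {
        "camera_id": pkt[1],
        # Angles in degrees: signed 24-bit values with 15 fractional bits
        "pan_deg":  _s24(pkt[2:5])  / 32768.0,
        "tilt_deg": _s24(pkt[5:8])  / 32768.0,
        "roll_deg": _s24(pkt[8:11]) / 32768.0,
        # Positions in millimeters: signed 24-bit values with 6 fractional bits (1/64 mm)
        "x_mm": _s24(pkt[11:14]) / 64.0,
        "y_mm": _s24(pkt[14:17]) / 64.0,
        "z_mm": _s24(pkt[17:20]) / 64.0,
        # Zoom and focus are raw encoder values; mapping them to focal length or
        # focus distance is vendor- and lens-specific
        "zoom_raw":  int.from_bytes(pkt[20:23], "big"),
        "focus_raw": int.from_bytes(pkt[23:26], "big"),
    }
```

The gap Aderhold points to is exactly what a packet like this does not solve: once the take is over, there is no standardized container for carrying this per-frame data into post alongside the recorded media.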

How are you working with NeRF – or plan to – and what does that offer the industry?
NeRFs are a fantastic way to capture real-world environments. For Live FX, it mainly depends on how NeRFs are delivered. We love Notch Blocks and USD as carriers for 3D environments; with those, we’re ready to embrace NeRFs of any fashion.

What haven’t we asked that’s important?
The biggest difficulty for us as software developers is accommodating different workflows. Since virtual production is still very much a pioneering field, there are gazillions of different workflows and workflow requirements. Getting them all under one roof with a UI that streamlines things for all of them is truly a challenge.

That being said, the growing Live FX community has been great at providing feedback and insight, and we have been happy to accommodate their feature requests. That input has helped shape Live FX for the virtual production market and for the demands of today’s creative and technical challenges.

SociallyU’s Andre Dantzler

SociallyU helps clients communicate their message through recorded videos and live streams by creating content with photorealistic keys, advanced virtual production and real-time editing.

Which aspect of VP are you working with now, versus what you’re planning on in the near future?
Currently, we are shooting everything on our three-wall greenscreen volume using a combination of Unreal Engine virtual backgrounds and custom plates shot on-location with multiple cameras. Our workflow involves live compositing using eight Blackmagic Ultimatte 12 4K keyers and eight Blackmagic Studio Camera 6K Pros. Six cameras are tracked using two Vive Mars systems, and the remaining two are tracked with Axibo PT4 sliders. Everything is edited live on the ATEM Constellation 8K.

Looking ahead, we are very interested in emerging techniques like 3D Gaussian splatting, which we feel could revolutionize virtual plate photography. It would allow us to scan environments, bring them into Unreal Engine and then enhance them there. It will give us a lot of flexibility, with amazing rendering speed and a very photorealistic look.
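
For readers new to the technique: a 3D Gaussian splat scene is simply millions of oriented, semi-transparent Gaussians, and each pixel is shaded by alpha-compositing the splats that cover it in front-to-back depth order. The toy Python sketch below shows only that compositing step, with the projection of 3D Gaussians to 2D screen-space footprints left out for brevity.

```python
def composite_pixel(splats):
    """Front-to-back alpha compositing of the splats covering one pixel.

    `splats` is an iterable of (depth, alpha, rgb) tuples, where alpha is the
    Gaussian's opacity already weighted by its 2D footprint at this pixel.
    """
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0                        # fraction of light still passing through
    for _depth, alpha, rgb in sorted(splats):  # nearest splats first
        weight = transmittance * alpha
        color = [c + weight * ci for c, ci in zip(color, rgb)]
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:               # early exit once the pixel is effectively opaque
            break
    return color

# Example: a near reddish splat partially occluding a far bluish one
print(composite_pixel([(2.0, 0.6, (0.9, 0.2, 0.2)), (5.0, 0.8, (0.2, 0.2, 0.9))]))
```

Because this per-pixel evaluation is far cheaper than ray-marching a NeRF, splat scenes can reach the real-time frame rates an LED volume needs, which is what makes the technique attractive for virtual plates.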

Are you using VP to save money? Work more efficiently? Both?
Our primary motivation is efficiency. With virtual production, we can create environments digitally rather than building physical sets for each production, which wasn’t sustainable given how many productions we shoot each month. Efficiency lets us take on more projects and grow. It also unlocks creative possibilities, like easily placing talent in realistic virtual worlds.

While efficiency drives adoption, VP also reduces costs related to traditional production, like set construction and location shoots.

How much relies on real-time 3D versus prerendered content?
Currently, 40% of our work leverages Unreal Engine’s real-time 3D; the other 60% uses custom, on-location plates. We’ve invested in a solid real-time infrastructure, with a dedicated Unreal machine per camera.

We love the realism of actual plates, but we also love the incredible versatility of Unreal Engine. Again, we hope new technologies like 3D Gaussian splatting will help us merge the best of both worlds.

What is missing to make workflows more efficient?

  • Unreal sets that are photorealistic and optimized for real-time virtual production.
  • More automated tools, like AI-assisted set generation.
  • Faster GPUs.
  • Talent with both the technical and creative skills for virtual production.
  • Improved interoperability between the various software tools involved.

What software packages are you using?
Our core packages are Unreal Engine for primary real-time rendering; DaVinci Resolve for color grading plates and compositing; Photoshop to customize plates, including AI-generated fill; and OBS for adjustable playback of plates on-set, including the ability to adjust the amount of blur on the background.

What else is important that I haven’t asked about?
A key enabler is our ChromaLight-painted greenscreen studio. ChromaLight has superior color uniformity and spill suppression. Combined with our Blackmagic Ultimatte 12 4K keyers, it enables incredibly clean composites. ChromaLight takes our virtual production quality to the next level.

Silverdraft’s Hardie Tankersley 

Silverdraft makes supercomputing architecture to address the computational needs of high-end rendering, VR, visual effects, virtual production and more.

What tools do you offer for VP workflows?
The Demon series of desktop and rackmount workstations and render servers delivers the high-performance computing that drives faster, better rendering. And the Demon’s Lair is a fully integrated, prebuilt and tested compute stack, from storage to network to compute to management, for more capable VP stages and effects.

Are these workflows part of the higher end or available/affordable for all?
We can scale from small, indie productions up to complex feature and event production. It’s all based on the same pipeline but with bigger sizes and more flexibility at the higher end.

What are you doing to make workflows align with standard practices?
We collaborate deeply with our customers on each installation and with our technology partners like Nvidia to ensure that we leverage common standard practices. We are an integrated part of a community that is working together to develop standards, consistency and predictability for VP workflows.

What standards are missing or incomplete/not generally supported to allow for better integrations?
We are very focused on SMPTE ST 2110 and learning how to build and optimize completely digital pipelines even as vendor support continues to emerge. We believe the future is clearly IP-based networking for all data flows.

How are you working with NeRF — or do you plan to — and what does that offer the industry?
NeRF is just another technique for generating 3D assets, another tool in the toolbox. We are constantly generating volumetric assets in so many different ways. NeRF just gives us a way to generate an asset when all you have is video. But you would still be better off with traditional volumetric capture with lidar and photogrammetry if you can get it.

What haven’t we asked that’s important?
Virtual production as a topic is pretty much endless. You can spend days just talking about network architectures, storage, cloud workflows and, of course, high-performance rendering. It’s impossible to cover everything. But for us, what’s most important is performance. If your compute is fast enough, everyone gets to do a lot less work. High-performance computing generates efficiencies in all parts of the workflow because you don’t have to spend so much time on optimization and testing.

The Studio-B&H’s Michel Suissa

B&H’s The Studio has been providing video solutions to major film studios, TV networks, production companies, post facilities, VFX studios and more.

What tools do you offer for VP workflows?
A lot. LED tiles, traditional lighting and image-based lighting solutions; camera-tracking systems; media servers; render engines; high-end workstations; cameras and lenses; signal management and distribution; monitoring; networking; structural hardware; and power distribution.

Are these products targeting the higher end or available/affordable for all? Or both?
We are targeting both ends of the market with solutions in performance and price.

What can be done to make workflows align with standard practices?
SMPTE RIS, with its OSVP task force, is a fantastic initiative whose goal is to standardize workflows and best practices.

What standards are missing or incomplete/not generally supported to allow for better integrations?
Color integrity through the virtual production pipeline. Also, interoperability, technical project management and metadata integrity.

How do you see the industry employing NeRF? What do people need to know?
The use of NeRF is still a novelty, but it will integrate very well with real-time game engines. It is the beginning of the democratization of creating three-dimensional environments through basic real-life photography.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years.