A Look at SMPTE’s 2022 Media Technology Summit

By Tom Coughlin

A few weeks ago, I attended the 2022 SMPTE Media Technology Summit in Hollywood. There were fascinating talks on virtual production; direct camera-to-cloud video; using machine learning to reduce the size of archives for film dailies; using blockchain technology for distributed production; and a concept for creating a lower-cost, true holographic light field display using microlenses and high-frame-rate video.

There were also tours to three Los Angeles-area virtual studios with LED volumes. I visited Orbital Studios, which has a very high-resolution, 1.5mm-pixel-pitch, curved LED wall that allowed a camera to come within 3 feet of the display before artifacts and pixelation could be seen. Orbital uses an OptiTrack Prime 41 tracking system and had an ARRI Alexa 35 and a Sony camera for demonstration. Pixera was used for video playback. There was also a 1.9mm-pitch Planar CLI VX ceiling LED display.

The stage at Orbital Studios, with the camera frustum visible on the left. The Silverdraft servers that render the content with Unreal Engine are on the right.

Virtual production requires careful planning to be most effective — more so than classical digital production, where things can be fixed in post.

An important element in virtual production is getting the degree of complexity in the image and the motion right up front to reduce the need to “fix it in post.” In addition, the LED wall does not meet most lighting needs, so scenes require auxiliary lighting. The overhead required to render images in real time drives server and memory performance, and rendering’s IO requests for data tend to come in bursts. Current virtual environments require 20Gbps to 50Gbps of communication over 100Gbps networks. As a consequence, a significant amount of fast memory (generally DRAM) is required.
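
To put those bandwidth figures in context, here is a back-of-envelope calculation of the uncompressed video bandwidth needed to drive a large LED wall in real time. The wall resolution, bit depth and frame rate below are assumed values for illustration, not the specs of any particular stage.

```python
# Back-of-envelope bandwidth estimate for driving an LED volume in real time.
# All dimensions and formats here are illustrative assumptions.

def wall_bandwidth_gbps(width_px, height_px, bits_per_px, fps):
    """Uncompressed video bandwidth in gigabits per second."""
    return width_px * height_px * bits_per_px * fps / 1e9

# Assume a curved wall roughly 10,000 x 2,800 pixels, 30-bit RGB, 60fps.
bw = wall_bandwidth_gbps(10_000, 2_800, 30, 60)
print(f"~{bw:.0f} Gbps uncompressed")  # ~50 Gbps, consistent with the
# 20Gbps-50Gbps figure cited for current virtual environments
```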

Some panel discussions in the SMPTE sessions on the first day of the show also explored storage requirements, such as those for cloud storage or collaborative work. DRAM is usually used for active data, which then goes to disk, and the disks in use are often local. Organizing local storage, shared storage and content archives is often very customized to individual facilities. This has led SMPTE to develop its Rapid Industry Solutions (RIS) initiative, of which On-Set Virtual Production is the first. This RIS will address interoperability and connectivity difficulties as well as the lack of resources for training media professionals on these new tools.

Frame.io’s Michael Cioni demonstrating sending Red raw files directly to the cloud.

In a keynote talk, Michael Cioni from Adobe’s Frame.io and producer/DP Graham Sheldon (see main image) gave a live demonstration of direct camera-to-cloud video over a wireless network, including access to an application that let the audience see camera content in real time. They are working with many partners, including major camera companies such as Red, to implement the technology. They said they have done projects of up to 4TB with it and that camera companies are looking into incorporating it into their pro cameras. A speaker from the Video Services Forum (VSF) also spoke about content movement direct from the camera to the cloud.

Corey Carbonara from Baylor University Film and Digital Media and Jim DeFilippis, representing 6P Color, spoke about new ways to enable the broadest color range on displays by using additional primary colors beyond just RGB, which expands the color gamut and color volume. Their colorimetric method (Yxy) provided an approach they said had minimal impact on current workflows, signal transport or image storage. It also doesn’t require white to have the highest brightness, which opens the possibility of super-saturated colors. They had a demo at a booth in the conference exhibit hall running on RGB and multi-primary displays.
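
For readers unfamiliar with the notation, Yxy is the CIE xyY representation: a luminance value Y plus chromaticity coordinates (x, y) that describe a color independently of any particular set of display primaries, which is part of why it can carry multi-primary content with little workflow disruption. The sketch below, using the standard linear-sRGB-to-XYZ (D65) matrix, shows the conversion; it illustrates the representation itself, not the 6P Color pipeline demonstrated at the Summit.

```python
# A minimal sketch of the Yxy (CIE xyY) representation: luminance Y plus
# chromaticity (x, y). Illustrates the encoding, not the 6P Color workflow.

def linear_srgb_to_yxy(r, g, b):
    # Standard linear sRGB -> CIE XYZ matrix (D65 white point).
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    s = X + Y + Z
    if s == 0:
        # Chromaticity is undefined at black; fall back to the D65 white point.
        return 0.0, 0.3127, 0.3290
    return Y, X / s, Y / s

print(linear_srgb_to_yxy(1.0, 0.0, 0.0))  # pure red: Y=0.2126, x~0.64, y~0.33
```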

There was another panel discussion on using machine learning in M&E. This included a talk by Ha Nguyen of Warner Bros. Discovery and Nile Wilson of Microsoft about using machine learning to archive only the action portions of video dailies by identifying the start and stop of each scene. They worked with 45 hours of footage from an action movie split across 191 video files. They pointed out that many frames have a lot of image redundancy that could be eliminated to reduce storage, and they also discussed utterance detection as another method to reduce content storage. They reported a 44% reduction in storage costs with 96% valid recall using those methods.
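
The speakers did not detail their model, but a common baseline for finding scene starts and stops is to flag frames where the color-histogram distance between successive frames spikes. The sketch below (using OpenCV; the threshold is an assumed value) shows that generic approach, not the method the Warner Bros. Discovery/Microsoft team used.

```python
# A generic shot-boundary detector: flag likely cuts where the histogram
# distance between successive frames jumps. Not the presenters' method.
import cv2

def shot_boundaries(path, threshold=0.4):
    """Return frame indices where a likely cut occurs."""
    cap = cv2.VideoCapture(path)
    prev_hist, cuts, i = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # 8x8x8-bin BGR color histogram for the frame.
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Bhattacharyya distance: near 0 for similar frames, near 1 at cuts.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                cuts.append(i)
        prev_hist, i = hist, i + 1
    cap.release()
    return cuts
```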

There was a panel session on sustainability that included some discussion on the impact of data storage technologies on the environment. Speakers advocated storing archive content on magnetic tape because of its lower energy consumption. They also mentioned the possible use of DNA for archive storage in the future.

Some talks on Day Three explored volumetric video with point clouds and mesh encoding. Point clouds can create engaging VR video images, but the data rate requirements are an issue, as shown in the image below.
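
Some rough arithmetic shows why the data rates are an issue: even a modest point cloud adds up quickly. The point count, per-point encoding and frame rate below are assumed values for illustration.

```python
# Rough arithmetic (assumed figures) for raw point-cloud video rates:
# 1M points per frame with position and color, at 30fps.
points = 1_000_000
bytes_per_point = 3 * 4 + 3      # xyz as 32-bit floats + 8-bit RGB
fps = 30
gbps = points * bytes_per_point * 8 * fps / 1e9
print(f"~{gbps:.1f} Gbps uncompressed")  # ~3.6 Gbps for a single stream
```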

There was also a talk about wireless multi-camera video transport from the 5G-RECORDS project. It included wireless camera demonstrations with a media gateway using an NVIDIA Jetson Xavier Arm development kit and a GPU camera interface for 5G connections.

Sessions on the last day looked at the future of video production. David Stump said that he is working on a master glossary to help create common terminology for virtual production, and there was a presentation on secure remote asset creation using blockchain technologies.

Tim Borer spoke on a possibly revolutionary method of creating light field holographic displays. A light field display presents volumetric video in which samples of all of the light rays from a displayed object are visible, giving a stereoscopic image without the need for separate images displayed for each eye.

Pinhole arrays can allow many light rays from an object through, but with a great reduction in light intensity. An array of microlenses makes it possible to display more light rays at higher intensity. If these lenses can be made small enough, then a high-resolution light field display could be possible at much lower cost than current approaches. The combination of a microlens and its underlying pixels is referred to as a hogel (holographic element).

A practical holographic light field display needs a good depth of field to create a true and useful 3D image. This is possible with a microlens display by distributing the light rays across hogels and by distributing ray angles over frames. Thus, higher frame rates are an important element in this approach. Borer said that this could be implemented best with high-frame-rate displays, say 120Hz or higher.
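
As an illustration of why frame rate matters here (the numbers below are assumptions, not figures from Borer’s talk), time-multiplexing a 120Hz panel down to a 30Hz perceived image lets each hogel present four times as many ray directions:

```python
# Illustrative arithmetic (assumed numbers): temporal multiplexing
# multiplies the ray directions a single hogel can present.
pixels_per_hogel = 8 * 8            # pixels under one microlens (assumed)
display_hz = 120                    # high-frame-rate panel
perceived_hz = 30                   # effective refresh the viewer needs
subframes = display_hz // perceived_hz   # 4 subframes per perceived image
rays = pixels_per_hogel * subframes      # ray angles per hogel
print(rays)  # 256 distinct ray directions instead of 64
```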

There were a few storage vendors at the SMPTE Summit. These included iXsystems with its TrueNAS hardware/software platform; a free version of the company’s NAS software is widely used, including in the M&E industry. For the company’s combined hardware/software offerings, HDD storage is available for less than $80 per TB and flash for less than $395 per TB. iXsystems also has hybrid HDD/SSD hardware.

Seagate Technology also had a booth. Among the company’s showings was its Advanced Distributed Autonomic Protection Technology (ADAPT). Seagate said that ADAPT is based on an intelligent parallel architecture that employs smart learning to respond to potential or active failures, and that ADAPT allocates physical storage using the space available on each drive rather than being limited to the minimum capacity across the drive group. The company claims that this technology provides 95% faster drive rebuild times than traditional RAID solutions.
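
Seagate did not present the underlying math, but the usual argument for this style of distributed layout is simple: in traditional RAID, a rebuild funnels the failed drive’s entire contents through a single spare drive, while a distributed layout spreads the rebuild work across every surviving drive. A sketch with assumed drive counts and throughputs:

```python
# Illustrative arithmetic (assumptions, not Seagate's published math) for
# why distributing spare space across all drives speeds rebuilds.
failed_tb = 16                 # capacity of the failed drive (assumed)
drive_mb_s = 200               # sustained throughput per drive (assumed)
n_drives = 48                  # drives in the group (assumed)

# Traditional RAID: the single spare drive is the write bottleneck.
hours_traditional = failed_tb * 1e6 / drive_mb_s / 3600
# Distributed layout: rebuild work is shared by all surviving drives.
hours_distributed = hours_traditional / (n_drives - 1)
print(f"{hours_traditional:.1f}h vs {hours_distributed:.1f}h")  # ~22.2h vs ~0.5h
```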


Tom Coughlin of Coughlin Associates is a digital storage analyst and business and technology consultant. His company consults and publishes books and technology reports, including The Media and Entertainment Storage Report.
