

BBC and 3 Ball Media: Storage for Unscripted Series

By Alyssa Heater

Unscripted reality series pose unique challenges, especially when it comes to storage, because of the sheer amount of footage captured for each episode.

To find out more about how these shows manage all that data, we first spoke with BBC Worldwide post producer Chris Gats, who works on the unscripted series Life Below Zero and its multitude of spinoffs. The remoteness of the series’ shooting locations is a driving factor in its storage needs, and Gats details how tripling the storage plays into the workflow.

We then sat down with 3 Ball Media Group (3BMG) EVP of post Neil Coleman and director of production and operations Scotty Eagleton to learn about storage solutions for high-volume footage series — including one with 28 cameras running 24/7. 3BMG is the force behind reality series including Iron Chef: Quest for an Iron Legend and College Hill: Celebrity Edition. 

BBC Worldwide

It seems like the bulk of your work has been in the reality TV/unscripted realm. What inspired you to go this route?  
I kind of fell into it. I started as a production assistant for the company [Stone & Company Entertainment], which produced game shows and docu-reality shows like Loveline, The Man Show and Shop ‘Til You Drop. I just grew up through post in that company, and it became my niche.

Tell us about the typical workflow on an episode of Life Below Zero, from shooting to post to delivery. 
Reality shows have a tremendous amount more of everything. A scripted show might go out with one camera and one microphone. We go out with several cameras all filming the exact same thing. One hour of footage in the scripted world is several hours in the reality world. That’s the main difference… more cameras and more microphones on set. We film it, we archive it to hard drives, then the hard drives make their way down to Los Angeles. The post team takes it from there.

Specifically on Life Below Zero, because the show is so remote, and it is very difficult and expensive to get the crews in and out, we have them make three copies of their footage. It’s in triplicate, just in case FedEx loses one, a producer’s bag falls into a river, or whatever it could be.

The producer will hold onto one copy, then the two others go to our production hub in Anchorage. Once the footage is handed off, they immediately send one of the hard drives down to post in Los Angeles, and the other one is archived onto their hard drive. So now we have the footage flying to Los Angeles on a hard drive, it’s in Anchorage on a hard drive, and the producer has their copy. And if we lose one, we’re fine because we have two backups.

Once the footage gets to Los Angeles, we archive it to an LTO tape right away. Once we get it on LTO, we tell production up in Anchorage and then they can wipe their two drives and send them back into the field. It’s always on a constant rotation. In Los Angeles, we also copy it into our computer archive system and send the drives back up. Throughout the whole run of post, Anchorage has a copy of our media, it’s in Los Angeles in our edit system, and we also have it archived on an LTO tape.  
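
Gats doesn’t name the tooling behind that hand-off, but the data-management rule it enforces is simple: never wipe a field drive until every copy has been verified against the others. A minimal Python sketch of such a check follows; the mount points and the MD5 manifest approach are illustrative assumptions, not a description of BBC’s or FotoKem’s actual process.

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """Hash a file in chunks so multi-gigabyte camera files don't exhaust memory."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def manifest(root: Path) -> dict:
    """Map each file's path, relative to the drive root, to its checksum."""
    return {str(p.relative_to(root)): checksum(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def copies_match(*roots: str) -> bool:
    """True only if every copy holds identical files with identical hashes."""
    manifests = [manifest(Path(r)) for r in roots]
    return all(m == manifests[0] for m in manifests[1:])

# Hypothetical mount points for the three field copies:
if copies_match("/Volumes/FieldA", "/Volumes/FieldB", "/Volumes/Producer"):
    print("All three copies verified -- safe to archive to LTO and recycle drives.")
else:
    print("Mismatch found -- do not wipe anything.")
```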

What about storage for the post process?  
For most of the shows I run post for, we use the post facility FotoKem. They supply our offline equipment, and we go to them for some online and audio mixing. They invented an all-in-one, on-set color correction and archival machine called nextLab. It allows us to go out in the field, upload our footage, add our LUT to see what the shots are going to look like, automatically sync the audio, and even archive to LTO right on-set. It can also timecode and distribute dailies.  

It also serves as storage by ingesting and archiving full-resolution media. We have at least a petabyte of 4K media in it now. From there, we take the footage, down-rez it, and shoot it over to an Avid Nexis for editing. So, we have two things: the HD/UHD archive in the nextLab, and the low-res on a Nexis.   

Did COVID impact your storage process with hybrid and remote workflows?  
We kind of lucked out with the Life Below Zero shows. It takes a year to film these. We film three episodes at a time, in part because of hunting seasons and crazy weather. Also, when we send humans out to these remote locations, we don’t want them to be there too long. The show features four people an episode, so we film at four different locations. It takes a while to film, but as we start to film and stockpile, we then send to post.

We were in the process of shooting Season 9 when production was shut down. At that time, FotoKem and BBC got together to figure out a solution. Both bought computers to help facilitate a remote post workflow. We were only down for one calendar week. We kicked everybody out of the office, then the following week, I was calling everybody to come back at a scheduled time frame to grab their computer and anything else they needed to take home. We got everyone set up, and then I got together with FotoKem and we started the remote workflow. The next Monday, post was up and running entirely from people’s homes. By the time we were about to run out of footage to edit, the government let us go back to filming.

We were one of the first to get to go back to work because of our remoteness. We sent four people up to film the episodes: a DP, a producer, a DIT/PA-type person and then a “safety manager.” The safety manager handles the cooking and shelter and brings a gun in case of bears or any danger. It’s crazy what they do.

Will you touch on why security played a factor in selecting the right storage solution for Life Below Zero?
Nat Geo is now owned by Disney, and they adhere to the highest security standards. FotoKem dealt with the highest level of security possible from Disney and did whatever was necessary to make our network secure. When editors are working from their homes, they log into FotoKem’s closed network and are, in a way, using FotoKem’s internet connection to access the footage. Several years ago, we were exploring how to work remotely, and then we expedited it due to the pandemic. Now we’re remote, and it’s working great.

3BMG

Tell us about your experience in the reality TV/unscripted arena and what inspired you to explore this niche within the industry.
Neil Coleman: My background is as an editor. I started editing commercials and music videos at a post house, and one of our clients was the talk show The Jenny Jones Show. We used a linear editing system, the Grass Valley CMX, and that’s how I learned. I went from there to The Oprah Winfrey Show, where I spent 16 years, starting in editorial then moving to project management. When The Oprah Winfrey Show ended, there was not much television, media or entertainment work in Chicago, so I moved out to Los Angeles and started working on The Jeff Probst Show. The post supervisor for that show had worked at 3 Ball Media Group (3BMG) previously, and they were looking for somebody to take over their post department. I’ve now been at 3BMG for about nine and a half years. Prior to that, I had never worked in unscripted reality, just talk shows, music videos and commercials. The nomenclature is different, but the work is all very similar.

Scotty Eagleton: When I first got into the industry, I started in the music video space but felt it was quickly going nowhere, so I accepted a job with 3BMG. I worked on a few reality shows, then left 3BMG and shifted my focus to scripted, bouncing around as a project manager on various sitcoms for a long time. I wanted a role with more longevity, so I went back to 3BMG, helping with deliverables and working with the production teams.

Once Neil and I started working together, and then our production services business, Warehouse Studios, came about, I started learning more about post workflows. That knowledge has really helped me collaborate with Neil and find the types of people required to produce content and ensure our clients’ needs are met.

3BMG works on big reality titles, including Iron Chef: Quest for an Iron Legend and College Hill: Celebrity Edition. Tell us about the typical workflow on an episode of an unscripted TV show.
Coleman: Both of those shows shoot a lot of content on fairly concentrated timelines. Iron Chef: Quest for an Iron Legend shot eight episodes in eight days — one episode per day. And they had something like 28 cameras, which was very intimidating for us because the turnaround was very tight. We can absolutely handle that kind of footage, but turning it around that quickly… there is a lot going on. College Hill was also just four weeks, and it had around 28 cameras as well, but they were running 24/7 because they’re embedded in a house, and it’s more documentary-style. There’s just a huge volume of footage.

Our typical workflow, regardless of whether it’s local or across the country, starts with how we receive the footage, which is typically electronic. We use Signiant’s Media Shuttle, which allows us to receive the footage much faster than the traditional way, where everything is offloaded by the DIT onto an external hard drive and then FedEx-ed back to us. For argument’s sake, if you shoot on a Monday, the footage will be ready on Tuesday, and because you’re FedEx-ing it, that means you don’t receive it until Wednesday. If you’re transferring electronically and your speeds are high enough, then when you finish shooting on a Monday, you start receiving the footage right away.

In the unscripted world, media is edited on Avid, so we use AMA (Avid Media Access) to connect to the raw source footage and then transcode for the proxy media. It’s a tried-and-true workflow, but it requires a large number of people because it’s a linear, single-file workflow. If you have one machine, it does one file at a time.

We use something that’s more enterprise-level: a transcoding tool called Telestream Vantage. Our setup transcodes up to 20 files simultaneously — much faster than an assistant editor’s Avid would. For example, we got 28 cameras in for College Hill on a Monday. By Wednesday morning, the footage was already grouped, prepped and ready for our story producers and editors. So it took around 36 hours total, and those are huge shows.

When we go to online, mix and color, we’ll AMA and consolidate the media. When we are finished with the low res, we’ll create the final 4K or HD media. One of the advantages of using Telestream is that when we’re creating the proxy media, we’ll bake a LUT into the log footage so that it looks correct as a proxy. If you AMA it, you have to rely on the Avid to add the LUT. If you’re grouping multiple cameras, that LUT requires processing power just to be able to display it in Avid, whereas in Telestream, it’s baked in, and you don’t need the Avid to do anything.
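
Vantage itself is proprietary, but the pattern Coleman describes, parallel proxy generation with the show LUT baked in, can be sketched with off-the-shelf tools. The Python sketch below fans ffmpeg jobs out across worker threads; the folder names, LUT file and DNxHD 36 offline settings are illustrative assumptions, not 3BMG’s actual configuration.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

LUT = "show_lut.cube"           # hypothetical 3D LUT exported for the show
SRC = Path("camera_originals")  # hypothetical ingest folder
DST = Path("proxies")

def make_proxy(clip: Path) -> None:
    """Transcode one camera original to an offline proxy with the LUT baked in."""
    out = DST / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-y", "-i", str(clip),
        "-vf", f"lut3d={LUT},scale=1920:1080",  # bake the look, down-rez to HD
        "-c:v", "dnxhd", "-b:v", "36M",         # DNxHD 36 offline flavor
        "-pix_fmt", "yuv422p",                  # (valid at 1080p 24/25fps)
        "-c:a", "pcm_s16le",
        str(out),
    ], check=True)

DST.mkdir(exist_ok=True)
# Run several transcodes at once, as an enterprise farm would:
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(make_proxy, sorted(SRC.glob("*.mxf"))))
```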

What is the collaboration process like with other creatives and stakeholders? Do you have anybody that you’re working with outside of your team, like on the production side? 
Coleman: We’re preparing to begin a show for Hulu after the first of the year. As soon as the show is greenlit, we’ll start speaking with the executive producers about what their needs are, how they envision creating the show, what types of cameras they want to use, etc. Those are all very creative conversations about how the DP likes to work. The linchpin of that whole dynamic and interaction is the DIT — the digital imaging technician — who is the point of contact between what is acquired in the field and what ends up being transferred to us. It is imperative to have strong communication and a good relationship with that person, and also for that person to have strong communication with the camera team and the producers out in the field.

Communication is key for everything. Once we’re in post, we have conversations with the post teams about their needs and how they like to work with graphics, music and all those creative elements that require a technical foundation. We like to cater to those needs while still working within our existing infrastructure because we have multiple shows going on simultaneously, and we have a lot of shared resources. We want to make sure that everybody is moving in the same direction yet has some control of their own individual shows.

Neil Coleman

Do you use the same storage systems across all the projects that come through 3BMG, or does it vary per project?  
Coleman: It varies a little bit per project, and that often has to do with restrictions or mandates that come from the network. For instance, we have to go through a security audit for Amazon when we work on their shows. They have, in a way, set the bar for what we provide for everybody: security in the facility, encrypted drives, ensuring that we have the correct IT infrastructure and beyond. Regarding storage, we primarily use Synology Network Attached Storage (NAS) and 45Drives’ Storinator NAS. We also use the Nexis for Avid, as well as archive to LTO. Then some networks, like Amazon, have us upload directly to their S3 in the cloud.

Was security a big factor in determining what kind of storage that you use?  
Coleman: With the Amazon security audit, we made sure that everything we were using hit their specs and met their requirements in order to be approved to work on their show. For the most part, everything that we already had hit those specs; we just had to make sure it was configured according to their requirements.

Has the evolution of hybrid and remote workflows affected your storage needs at all?  
Coleman: I don’t think so. The way we do it here, as I understand it, is fairly similar to other unscripted companies. People remote in from home or wherever they’re working into their edit bays here at our facility, and they work as if they were here in person. Our on-premises infrastructure is the same whether you are in person or remote.

Scotty Eagleton

All of our worlds changed when COVID hit and everybody did work from home, but that’s basically remoting in. We really don’t use the cloud for much of our real day-to-day editing. We primarily use the cloud or high-speed internet for transferring footage from one location to another and then for MediaSilo or Frame.io for review and approval.

Do you have anything on the horizon that you’d like to talk about?  
Eagleton: 3BMG owns a company called Warehouse Studios, which Neil and I oversee. It was founded because independent producers who are selling content have a need for reliable production, post and accounting services. Often, a producer who sells a piece of content to a network won’t have anywhere to take and produce it. That’s where we step in to assign a whole team to ensure that they’re taken care of every step of the way. It’s another layer to the services that we offer.  


Alyssa Heater is a writer and marketer in the entertainment industry. When not writing, you can find her front row at heavy metal shows or remodeling her cabin in the San Gabriel Mountains.

Ultimatte 12

Blackmagic Intros New Ultimatte 12 Models and Software Control App

Blackmagic has introduced four new models of Ultimatte 12 real-time compositing processors, designed for creating next-generation broadcast graphics, and a new Ultimatte Software Control app for Mac and Windows that lets users control all Ultimatte 12 models without an additional hardware control panel.

The new models — Ultimatte 12 HD, Ultimatte 12 4K, Ultimatte 12 8K and Ultimatte 12 HD Mini — allow users to take advantage of a processor designed for their current television standard. Specifically, the Ultimatte 12 HD Mini lets ATEM Mini owners use broadcast-quality keying to build fixed camera virtual sets. Each new model is designed at a lower cost while retaining the same processing for edge handling, color separation, color fidelity and spill suppression.

All Ultimatte 12 models include built-in frame stores, allowing users to key using stills for backgrounds and eliminating the cost of external equipment, which means all compositing can be done in the Ultimatte itself. Each model also produces identical quality compositions by having the same image processing algorithms and internal color space. The processing automatically generates internal mattes so different parts of the image are processed separately based on the colors in each area. This means users get fine edge detail where it’s most needed, like on hair, and smoother transitions between colors or other objects in the scene.
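
Ultimatte’s algorithms are proprietary, but the matte arithmetic this description builds on is textbook compositing. As a rough illustration only (the naive green-limit spill trick below is a generic technique, not Blackmagic’s), the core operations look like this in NumPy:

```python
import numpy as np

def composite(fg: np.ndarray, bg: np.ndarray, matte: np.ndarray) -> np.ndarray:
    """Classic matte compositing: out = fg*matte + bg*(1 - matte).

    fg, bg: float RGB images in [0, 1], shape (H, W, 3)
    matte:  float alpha in [0, 1], shape (H, W, 1); 1.0 = keep foreground
    """
    return fg * matte + bg * (1.0 - matte)

def suppress_green_spill(fg: np.ndarray) -> np.ndarray:
    """Naive spill suppression: limit green to the max of red and blue.
    A crude stand-in for the per-region corrections described above."""
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    return np.stack([r, np.minimum(g, np.maximum(r, b)), b], axis=-1)

# Typical use: pull the matte first, then composite the spill-suppressed
# foreground over the new background:
# out = composite(suppress_green_spill(fg), bg, matte)
```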

Ultimatte 12 HD
Ultimatte 12 features one-touch keying technology that analyzes a scene and automatically sets over 100 parameters so users get keys without extra work. One-touch keying works faster and helps customers accurately pull a key with low effort.

The improved flare algorithms in Ultimatte 12 can remove green tinting and spill so users can fix shadows or transparent objects with reflections. Ultimatte 12 automatically samples the colors, creates mattes for walls, floors and other parts of the image, and then applies the necessary corrections.

The Ultimatte 12 4K and Ultimatte 12 8K models feature 12G-SDI connections so users can operate with current HD video formats as well as future Ultra HD and 8K video formats. 12G-SDI gives users high frame-rate Ultra HD via a single BNC connection that also plugs into all their regular HD equipment.

Ultimatte 12 4K
The Ultimatte 12 HD Mini model allows conversion of SDI camera control to HDMI, meaning an ATEM SDI switcher can control an HDMI-connected Blackmagic Pocket Cinema Camera. All ATEM switchers send camera control over SDI, then the Ultimatte 12 HD Mini can translate it to HDMI for the camera. Users can add a camera number in the Ultimatte utility to get control of the camera color corrector, tally and even remote record trigger.

All Ultimatte 12 models include the Ultimatte Software Control for Mac and Windows. The main window has menus arranged in sections that perform different functions.

According to Blackmagic CEO Grant Petty, “From the Ultimatte 12 HD Mini model up to Ultimatte 12 8K, all models feature the same advanced processing and algorithms for broadcast composites and even cinematic virtual sets.”

Ultimatte 12 8K
The Ultimatte Software Control app is available to download for free. All new Ultimatte 12 models are available now. The Ultimatte 12 HD Mini is priced at $495, Ultimatte 12 HD at $895, Ultimatte 12 4K at $2,495 and Ultimatte 12 8K at $6,995.


Teradek Serv 4K Targets Real-Time 4K HDR Workflows

Teradek has a new 4K HDR hardware/software production-streaming solution called Serv 4K. Serv 4K integrates cloud and local-network platforms in a flexible, streamlined workflow, simplifying real-time creative collaboration and decision-making on- and off-set.

“Serv 4K unifies on-set and remote streaming setups with a 4K HDR-ready device that’s easy to set up, manage and access,” says Greg Smokler, VP of cine product at Creative Solutions, Teradek’s parent. “It’s part of an end-to-end streaming solution that removes barriers to creative collaboration by providing incredible image fidelity, simplified remote viewing and stream management, and double the local streaming capacity.”

Serv 4K combines and expands the local-streaming functionality of Serv Pro with the cloud-streaming tools of the Teradek Cube to offer a 4K HDR hardware/software ecosystem that addresses post-2020 production realities.

Serv 4K boasts a higher-bit-rate encode using H.264 or H.265 (HEVC) with 256-bit encryption and HDR (DCI-P3, PQ2084) throughput, allowing 4K 60p 10-bit 4:2:2 images to be securely streamed live or as instant recordings on-set and to remote viewers. HEVC support delivers high-quality media in smaller file sizes. Setup is straightforward for camera teams because the workflow, the device and web UI, and mobile app management are all intuitively designed for the simplest experience.

Up to 20 devices can stream on a local network using Teradek’s Vuer app, with iOS, Android, PC, MacOS and Apple TV support. Unlimited devices can access cloud-based live streaming and upload instant recordings to Teradek Core and to third-party platforms like Frame.io. If the internet connection is lost, recordings are saved to an SD card for secure upload once the connection has been restored. Streams and recordings are compatible with MacOS, iOS, Android and web browsers.

“Serv 4K is more than a stand-alone hardware device. It’s part of an ecosystem that brings together cloud solutions like Teradek Core, Core TV, Vuer and third-party integrations for a seamless streaming experience on- and off-set,” says Colin McDonald, cine product manager at Creative Solutions. “It’s an all-in-one solution, offering content visibility at every stage of production for creative stakeholders and decision-makers, whether they’re on set, on location, in post or at home.”


Colorfront Updates Streaming Server and Streaming Player

At the 2022 HPA Tech Retreat, Colorfront demonstrated new capabilities for its Streaming Server and Streaming Player, cost-effective, live-streaming systems that deliver secure, sub-second latency and reference-quality pictures and audio for remote synchronous review. Streaming Server/Player now enables the end-to-end review of reference-quality, frame- and color-accurate, HDR Dolby Vision material via public internet.

Using footage shot by DP Claudio Miranda, ASC, at 8.6K on the new Sony Venice 2 camera, Colorfront demonstrated the ease with which 4K Dolby Vision HDR material can now be streamed and reviewed with color accuracy at various target luminance levels on different professional and consumer displays, including a Sony BVM-HX310 professional reference monitor, Apple Pro XDR display, the latest M1 iPad Pro and MacBook Pro M1 Max notebooks with Liquid Retina XDR screens, and the iPhone 12/13 Pro with Super Retina XDR screens.

The company also previewed its forthcoming Streaming Server Mini, a software-only solution that can be easily installed and run on the same workstations that creative artists use to perform editorial, compositing and color-grading tasks. Using Streaming Server Mini, creatives will be able to easily live-stream work-in-progress content to production stakeholders wherever they are in the world.

“Remote collaboration on HDR projects has been a real, global challenge,” says Aron Jaszberenyi, managing director at Colorfront. “However, as our HDR Dolby Vision remote streaming demonstrations at HPA 2022 showed, Colorfront’s latest innovations completely remove the roadblock to high-end collaboration by delivering spectacular end-to-end picture quality and all-important color fidelity on a convenient display device of choice — LED wall, cinema projector, broadcast monitor, notebook, tablet or smartphone. Our initiative with Streaming Server Mini software will open up new and more efficient ways for creative artists to engage with their clients.”

Launched at HPA in 2021, Streaming Server is a 1RU device that can simultaneously stream up to four channels of 4K 4:4:4, 256-bit, AES-encrypted, reference-quality video and up to 16 channels of 24-bit AAC or PCM audio to remote clients anywhere in the world over readily available public internet. On the remote client/review side, Streaming Player enables color-accurate viewing and QC of HDR materials emanating from Streaming Server on professional 4K reference displays, prosumer screens and HDR-capable notebooks, tablets and smartphones.

Warner Bros., Disney, Fox, HBO, Netflix, Light Iron and Streamland Media, among others, have adopted the solution for their high-end streaming needs across different locations, countries and time zones.



Autodesk Acquires Moxion’s Cloud Platform for Dailies

Autodesk has acquired Moxion, the New Zealand-based developer of a cloud-based platform for digital dailies. The solution has been used on such productions as The Midnight Sky, The Marvelous Mrs. Maisel and The Matrix Resurrections. According to Autodesk, the acquisition of Moxion’s talent and technology will expand Autodesk’s own cloud platform for media and entertainment, “moving it beyond post into production, bringing new users to Autodesk while helping better integrate processes across the entire content production chain.”

Moxion’s platform enables professionals to collaborate and review camera footage on-set and remotely with the immediacy required to make creative decisions during principal photography in 4K HDR quality and with studio-grade security. Moxion ensures data security with features like MPAA compliance, multi-factor authentication, visible and invisible forensic watermarking and full digital rights management.

Founded in 2015, Moxion has been honored with an Engineering Excellence Award from the Hollywood Professional Association (HPA), a Workflow Systems Medal from the Society of Motion Picture and Television Engineers (SMPTE) and a Lumiere Award from the Advanced Imaging Society.

“As the content demand continues to boom with pressure on creators to do more for less, this acquisition helps us facilitate broader collaboration and communication and drive greater efficiencies in the production process, saving time and money,” says Diana Colella, SVP Media and Entertainment, Autodesk. “Moxion accelerates our vision for production in the cloud, building on our recent acquisition of Tangent Labs.”

Aaron Morton, a cinematographer who has worked on projects including Orphan Black, Black Mirror, American Gods and Amazon’s new The Lord of the Rings series, used Moxion for several projects. “It’s never fun when decisions are being formed about your work if the dailies aren’t the way you wanted them to look,” explains Morton, NZCS. “With Moxion, it’s what I see on the set, and the decisions I make with the dailies colorist always play out so that production people and producers are seeing what I want them to see. The images are very true to what we see while we’re shooting.”



Review: Adobe MAX Brings Premiere Version 22

By Mike McCarthy

The Adobe MAX creativity conference is taking place virtually for the second year in a row, and with this event comes the release of new versions of many of Adobe’s products. One interesting note relating to this is that Adobe’s versioning of each video application is now Version 22, regardless of the tool’s previous version. This will make the version numbers consistent across the different applications and match the year that the release is associated with. Last year, Premiere Pro 2021 was released, but it was Version 15.0, while After Effects was Version 18.0. Unlike Adobe’s move to redesign its application icons to all look the same (so you can’t easily tell the difference between an AEP file and a Premiere project), this broad consistency change seems like a good idea to make it easier to track versions across time.

The application I am most interested in is Premiere Pro (although at the end of this review, I touch on After Effects and Photoshop). Last year’s Version 15 release added a new approach to captions, which Adobe has continued to flesh out with more automatic speech-to-text tools and better support for new titling options. Other improvements to Version 15 introduced through the year included more control over project item labels and colors in collaborative environments, HDR output on UI displays via DirectX and automatic switching of audio devices to match the OS preferences.

Adobe Premiere Version 22 Updates: HDR and More
HEVC and H.264 files are now color-managed formats, which means that Premiere now correctly supports HDR files in those codecs. This had been a huge hole in the existing HDR workflow because Premiere could export HEVC and H.264 files of HDR content but couldn’t import or view them. The issue is now resolved, opening a host of new HDR workflow options.

Adobe also added support for hardware-accelerated decoding of 10-bit 4:2:2 HEVC files on new Intel CPUs, which is a new format for recording HDR on high-end DSLRs that is not currently accelerated on Nvidia or AMD GPUs. This should allow processing of HDR content on much smaller and lighter systems than are currently required with the existing ProRes-based HDR workflows. Adobe also added color management for XAVC files in SLog color space and better support for Log files from Canon and Panasonic.

One other feature Adobe has announced for Premiere Pro 2022, though it hasn’t been released to the public version, is fully redesigned import and export windows, which consume the entire UI for no apparent reason and do not include all of the functionality of the previous approaches. I believe it might be more consistent with Premiere Rush’s UI, and it may be similar to Resolve’s export options.

The main thing I am missing is the source settings in the export window, which previously allowed you to crop and scale the output in different ways. These results can be achieved by adding export sequences that include the content you are trying to output, but this is not as simple to do on a large scale and can’t be included in presets. Obviously, I am not a fan of these changes and see no upside to the new approach. Currently, the older import and export UI controls are still available in Version 22.0, and they remain available in the beta versions if you send your sequence to Media Encoder. Hopefully these functions will be included in the new approach to exporting before it comes out of beta.

The Lumetri scopes have also gotten some attention as they become more significant for HDR processing. The vectorscope is now colorized, and you can zoom in to any section by double-clicking. The histogram is much more detailed and accurate, offering a more precise view of the underlying content. The Lumetri Curves effect UI now scales horizontally with the panel for more precision. I would prefer to be able to scale it vertically as well, but that is not yet supported. Adobe has also implemented a more powerful AI-assisted Auto Tone function that sets all of the basic controls based on an analysis of the content.

Another new feature coming out of beta is the Simplify Sequence functionality. This creates a new, cleaned-up copy of an existing sequence. The clean version can remove inactive tracks and clips, drop everything to the lowest available track and be further fine-tuned via locked layers. This is a great tool, implemented in a well-thought-out and nondestructive way.

Also arriving in the beta version is a feature called Remix. Originally introduced in Audition, Remix will adjust the duration of music tracks while using AI to preserve the tempo and feel of the original asset. I believe it does this by attempting to remove or loop repetitive sections, and it visually displays where the automatic edits are being made right on the clip in the sequence.

After Effects & Other Apps
After Effects is another application I use, although less and less over time as Premiere gains many of the functions that used to require jumping over to AE. But the big news there is that Adobe is introducing multi-frame rendering to help users tap into the potential processing power of multi-core CPUs. On my high-core-count systems, I am seeing a 3x speed increase when rendering the composited scenes for my Grounds of Freedom animated web series. My main 5K composited scenes used to take 3 to 5 hours to render, and that looks like it will be cut to 1 to 1 ½ hours, which is fantastic.

After Effects is also getting a speculative render feature to try to prepare for smoother playback when your system is idle. Because of the type of work I do, I wouldn’t use this feature much, but I am sure it will be great for some users. I tested out GridIron Nucleo Pro for AE7 15 years ago, and Adobe was playing with both of these functions back then. The old multi-frame render options got bogged down managing that much data, but Adobe seems to have sorted that issue out by now because a 3x increase in real-world speed is nothing to scoff at. Adobe has also added a composition profiler that tells users how much time each layer adds to the render, with that info available right in the layer stack.

Adobe also just completed its acquisition of cloud collaboration tool Frame.io, and as an existing Frame.io user, I am eagerly waiting to see what develops from this. But there are no new details to announce yet.

Photoshop is also getting a number of new features, mostly centered on AI-powered tools and collaboration with iPad and web users. The power of Photoshop for iPad will soon be available directly in a web browser for collaboration through the new Creative Cloud Spaces and Canvas. Users will be able to share their work directly from Photoshop, which will generate a public link to the cloud document for browser-based feedback or editing.


The AI-based object selection tool has been improved to show users what object boundaries have been detected wherever they hover their cursor over the image. There are also improvements in the interoperability between Photoshop and Illustrator, allowing Illustrator layers to be pasted into Photoshop while retaining their metadata and even vector editability. Illustrator is also getting an AI-enhanced vectorizing tool to better convert bitmap imagery to vector art.

Lots of new functionality is coming to Creative Cloud, and you can learn plenty of tips and tricks from the various sessions that are available throughout the free event. Anyone can sign up to attend online, so be sure to check it out.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 10 years later.




Autodesk Updates Flame, Flare, Flame Assist for 4K/UltraHD and HDR

Autodesk has updated Flame, Flare and Flame Assist with new features and streamlined workflows for managing high-end 4K/UltraHD and HDR content. Tool enhancements include HDR-enabled color curves in MasterGrade, improvements for faster conforms and upgraded video preview options of up to 4K/UltraHD content via single-cable 12G-SDI connectivity.

Flame Family improvements help to increase production speed and creativity for artists across the media and entertainment industry.

New Flame Family features include:

HDR Advanced Hue Curves: These allow artists to quickly manipulate specific colors in a shot – without any keying – while preserving the fidelity of imagery. Flame’s MasterGrade tools also now include an expanded set of eight new hue modifiers and additional curve manipulation modes, providing smoother transitions in regions where hues are compressed and further streamlining color correction tasks for artists.

Improvements for Faster Conforms: A new Auto Link Matches function saves artists time when conforming edits, in addition to new speed optimizations when scanning media – specifically when indexing MXF files.

Video Preview Enhancements: Improved live video preview options introduce new 12G-SDI video output support (up to 4K/UltraHD content via a single cable) for video I/O cards from Blackmagic and AJA Video. Compatible video cards include Blackmagic’s UltraStudio and DeckLink, as well as AJA’s T-Tap Pro for Thunderbolt 3-enabled 12G-SDI and HDMI 2.0 monitoring and output to Flame workflows.

Red Intros V-Raptor 8K

Red Intros Next-Gen DSMC3 Camera System with V-Raptor 8K VV

Red Digital Cinema has introduced its new V-Raptor 8K VV camera, the first offering for its next-generation DSMC3 platform. The company reports that this camera offers the highest dynamic range, fastest cinema-quality sensor scan time, cleanest shadows and highest frame rates of any camera in its existing lineup. It is designed for a variety of shooting scenarios.

Priced at $24,500, a white ST version of the camera is available now. The black version will be available in larger quantities before the end of 2021. Red also announced that a forthcoming XL camera body will be released in the first half of 2022. The XL will be ideally suited for studio configurations and high-end productions, based on feedback on the Red Ranger body style.

V-Raptor features a multi-format 8K sensor (40.96mm x 21.60mm) with the ability to shoot 8K large format or 6K Super 35. Joining its predecessor, the Monstro 8K VV sensor, this in-camera option allows shooters to use any of their large-format or S35 lenses with the push of a button and always get over 4K resolution.

The V-Raptor exceeds previous sensor capabilities, offering users the option to capture 8K full sensor at up to 120fps (150fps at 2.4:1), 6K at up to 160fps (200fps at 2.4:1) and 2K (2.4:1) at 600fps, while still capturing over 17 stops of dynamic range.

V-Raptor, as with the other cameras in Red’s ecosystem, uses Red’s proprietary RedCode Raw codec, allowing users to capture 16-bit Raw and leveraging Red’s latest IPP2 workflow and color management tools.

The DSMC3 camera is built on a newly integrated and modernized form factor while featuring a professional I/O array that includes two 4K 12G-SDI outputs, XLR audio with phantom power capability via adapter and built-in USB-C interface, allowing for remote control, ethernet offload and more. All features are packaged in a compact, rugged, water- and dust-resistant design that measures 6 inches x 4.25 inches and weighs just over 4 pounds.

Red Intros V-Raptor 8K

Other highlights include an RF lens mount with locking mechanism; wireless control and preview via Wi-Fi; phase detection autofocus; and a newly designed and easy-to-navigate integrated display located on the side of the camera, which allows for comprehensive controls, including in-camera format selection, customized buttons, status updates and more.

As with Red’s most recent camera, the Red Komodo 6K, V-Raptor uses the updated and streamlined RedCode Raw settings (HQ, MQ, and LQ) to enhance the user experience with simplified format choices optimized for various shooting scenarios and needs.

Additional features include data rates up to 800MB/s using Red branded or other qualified CFexpress media cards; integrated micro V-Mount battery plate; a 60mm fan for quieter and more stable heat management; and wireless connectivity via the free Red Control app, which is available now for iOS and Android devices.

Also announced today from Red is a comprehensive array of first-party and co-designed accessories. The engineering team at Red worked closely with industry-leading partners such as SmallHD, Angelbird, Core SWX and Creative Solutions to create and produce purpose-built products to work with V-Raptor.

Available accessories include:
• DSMC3 Red Touch 7-inch LCD monitor
• V-Raptor wing grip
• Red Pro CFexpress 660GB and 1.3TB (available soon) media cards
• Red CFexpress card reader
• RedVolt Micro-V battery pack
• Red compact dual V-Lock charger

Red is also launching a pre-bundled V-Raptor Starter Pack option that comes with:
• DSMC3 Red Touch 7-inch LCD
• Red Pro CFexpress 660GB card
• Red CFexpress card reader
• 2x RedVolt Micro-V battery pack
• Red compact dual V-Lock charger
• 2x V-Raptor wing grips
• Ext to T/C cable

Later in 2021, along with the launch of the black V-Raptor, an additional pre-bundled production pack option will be available. It will come with an accessory package that includes the DSMC3 Red Touch 7-inch LCD monitor, two Red Pro CFexpress 660GB Cards, Red CFexpress USB-C card reader, four Red Mini V-Lock 98Wh batteries, Red V-Lock charger, V-Raptor tactical top plate with battery adapter; V-Raptor expander module; V-Raptor top handle; V-Raptor Quick Release Platform Pack; Red production grips; V-Raptor side ribs; and a DSMC3 Red 5-pin to dual XLR adapter.


Dalet AmberFin: Faster Performance and New Browser Interface 

Dalet has made updates to its Dalet AmberFin transcoding platform. Significant performance gains and a new browser interface have been added. Additionally, built-in audio normalization support (via specialists Emotion Systems) ensures audio output is optimized for global delivery to multiple listening environments.

“The rapid evolution of digital cinema and video technology has given us images that are stunningly good. 4K resolution, HDR and immersive audio are not only appreciated by today’s viewers, but expected,” explains Eric Carson, director, product strategy, Dalet AmberFin. “Viewing screens, including smartphones, can reveal image quality — good and bad — with forensic precision. Consequently, it’s more important than ever that the original image quality achieved by a film director or TV producer is preserved for audiences to experience as intended. We are committed to keeping Dalet AmberFin on the forefront of media processing with continued improvements in speed and usability.”

Each feature of the Dalet AmberFin platform, including the improvements in the latest release (v11.9), is available both on-premises and in the Dalet AmberFin Cloud Transcoder Service. Dalet AmberFin customers can control their on-premises and cloud transcoding resources from the same workflow engine and API and natively use the same conversion profiles and workflows for both fixed and elastic capacity without the use of a cloud port or service team.

New features summary:

  • Enhancements to the Dalet AmberFin transcoding engine provide, on average, a 30% increase in throughput, offering users an even faster speed-to-market for high-quality conversions.
  • New web-based conversion profile editor provides engineers a modern interface for managing day-to-day configuration.
  • Emotion Systems’ advanced audio processing and loudness normalization capabilities enhance Dalet AmberFin’s video conversion with loudness correction that meets global delivery compliance requirements.

“For many of our clients, media processing is only part of the equation. Packaging for mass distribution is often a requirement. Dalet AmberFin — combined with Dalet Flex and Dalet Galaxy five — brings a much-needed, high-quality transcoding and precision packaging solution for global distribution encoding requirements,” concludes Carson.


Review: AJA T-Tap Pro Output Device for 4K and HDR

By Mike McCarthy

AJA has released the T-Tap Pro, a new video output device for editors, colorists and VFX artists, targeting 4K and HDR workflows. Last fall, I wrote a series of articles about the various components of an HDR editing workflow. Covering software, workstations, GPUs, I/O cards and monitor options, I looked at the state of Adobe-based HDR post production at the time. But new things continue to be developed all the time. One of those things is this new device from AJA. It’s a hardware output solution that is much more tailored to the needs of most editors, who must output and view HDR content.

The Kona 5 is a great tool that supports nearly everything you can think of for both input and output from SD to 8K, but most editors are now using file-based workflows that have no need to input via SDI or HDMI (or even to output those ways, aside from monitoring purposes). And few users are viewing content in 8K. This means that what a majority of editors and other video professionals need is a solid and reliable way to output UHD and 4K content to their monitor or other device, one that offers them support for — and control over — HDR color settings.

This is where the new T-Tap Pro comes in. The original T-Tap was introduced in 2014 for $295 and offered HD-SDI and HDMI 1.3 output in a tiny Thunderbolt 1-connected package. The new T-Tap Pro is a considerable step up from that, supporting 12G-SDI and HDMI 2.0 with HDR simultaneously. It even supports 12-bit RGB for demanding color work while also adding analog audio output and a rotary volume control with visual meters. This necessitated a larger box with a separate power supply and a price tag of $795.

For most people, the T-Tap Pro takes just the features you actually need from a Kona 5 card or Io 4K Plus (single 12G-SDI out, HDMI out, analog out) and places them in a much more affordable package. I tested the T-Tap Pro on my Razer Blade, outputting to my Canon Reference Display, in direct comparison to the Kona 5 card in my workstation.

Being able to output HDR from my laptop to my reference monitor is great, and having full control over the output options from HLG or PQ in various color spaces is very helpful in ensuring that what I’m seeing on the display is an accurate representation of the images I am editing. The volume control is interesting in that it can control either just the analog headphones output or the audio being output and embedded in the SDI and HDMI signals. I can see use cases for both options, but it defaults to headphone control so as to wisely avoid unintentionally tampering with the main output levels.

AJA also released Version 16 of its drivers and utilities for Kona and Io devices. Among many new features, Version 16 includes more support and options for HDR workflows on a wider variety of AJA devices, including new support for HDR over SDI and the recognition of HDR metadata for both capture and playback of movie files. For Kona 5 users, the new drivers allow “fast switching” into 8K firmware without rebooting, which I have been benefitting from during testing. It also improves the integration with Premiere Pro, giving better support for existing HDR options and fixing a UI bug that was introduced by a recent Windows update. AJA also has a big push to support new remote workflows, a few of which it details in a new section of its website.

The other new development that has been announced — relevant to HDR monitoring for Premiere Pro users — is that Adobe is working to support native HDR output for PC users by replacing the existing OpenGL playback engine with a new DirectX 12 playback engine that natively supports HDR on Windows. I have been testing this in the public beta builds, and it is very promising, but it still needs some work before it is ready for prime time. This new engine allows HDR content to be viewed in HDR directly in the UI panels, as long as the UI monitor is HDR-compatible in Windows and/or output from the GPU to an external full-screen display happens via Mercury Transmit.

This approach only supports HDR10 output, as that is the only method by which Windows supports HDR content at the GPU level. But Premiere Pro can convert HLG content to HDR10 on the fly for playback on Windows HDR displays. This offers PC users a cheaper, software-based alternative to hardware I/O out, albeit one limited to HDR10, and with less control over the accuracy of the processing pipeline.
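
For readers curious what that on-the-fly conversion involves mathematically, here is a simplified sketch of an HLG-to-PQ mapping built from the published BT.2100 and ST 2084 formulas. It applies the HLG system gamma per channel rather than on luminance, so treat it as an illustration of the math, not Adobe’s implementation.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def hlg_inverse_oetf(e: np.ndarray) -> np.ndarray:
    """BT.2100 HLG signal -> normalized scene light, both in [0, 1]."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    return np.where(e <= 0.5, e * e / 3.0, (np.exp((e - c) / a) + b) / 12.0)

def pq_encode(y: np.ndarray) -> np.ndarray:
    """Linear display light (1.0 = 10,000 nits) -> PQ signal."""
    yp = np.power(np.clip(y, 0.0, 1.0), M1)
    return np.power((C1 + C2 * yp) / (1.0 + C3 * yp), M2)

def hlg_to_pq(signal, peak_nits: float = 1000.0) -> np.ndarray:
    """Simplified HLG -> PQ: undo the HLG OETF, apply the HLG OOTF system
    gamma (1.2 for a 1,000-nit display) per channel, then PQ-encode.
    Real converters apply the OOTF on luminance, not per channel."""
    scene = hlg_inverse_oetf(np.asarray(signal, dtype=np.float64))
    display_nits = peak_nits * np.power(scene, 1.2)
    return pq_encode(display_nits / 10000.0)

# Peak HLG white lands near the PQ code value for 1,000 nits (~0.75):
print(hlg_to_pq(np.array([1.0])))
```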

This will eventually remove the absolute requirement for a dedicated I/O card for Windows users, although those still offer certain advantages, including more reliable control over the output pipeline, audio sync, and support for SDI signals as well as offloading CPU/GPU tasks. It allows the project, source and program panels to display content in HDR and transmit full-screen output from the GPU, which will benefit HDR editors even if they already have and use a hardware output card. Adobe also added HDR viewing support to After Effects by adding DirectX 12 playback. This requires projects to use 32-bit color processing, set an HDR-compatible working space and enable display color management. It also allows editors or VFX artists on a laptop to connect an HDR TV to their system and start viewing and editing content in HDR.

Mac-based Premiere editors, on the other hand, will require a hardware output to view content in HDR, and the T-Tap Pro is a solid choice for those looking for a hardware video output solution. The device is also supported in FCP, Avid, After Effects and most other applications that support Kona and Io 4K products.

The device does run hot due to its size and capabilities, and the power connector is an unusual four-pin square that looks like an ATX connector, so don’t lose the included power supply. I’d like to see a PCIe version of this, even though that wouldn’t support the volume control and meters. Remember, this is also coming from a PC user who doesn’t have Thunderbolt on every system. AJA provides a list of tested PC systems if you want to ensure compatibility. Mac users will be well-served by the new T-Tap Pro whether they are on a MacBook, an iMac or a Mac Pro. I’d also love to see an HDMI-only version that is fully bus-powered — a true replacement for the original T-Tap since 99% of editors are going to be outputting to an HDR TV over HDMI, especially since Mac users have no alternative for getting HDR content out to their display. But this should meet the monitoring and output needs of most users who are working with HDR content in UHD or 4K, and for much lower cost than AJA’s other existing options.


Mike McCarthy is an online editor/workflow consultant with over 10 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.


AJA Intros Two New 12G-SDI openGear Cards

AJA has released two new 12G-SDI openGear solutions. Featuring support for up to 4K/Ultra HD content, OG-12GM is a 12G-SDI to/from quad-link 3G-SDI muxer/de-muxer, and the OG-FiDO-TR-12G is a 12G-SDI/fiber transceiver. Both are designed for use in high-density openGear 2RU frames.

OG-12GM is an openGear-compatible SDI transport converter that supports single-link 12G-SDI to/from quad-link 3G-SDI, two-sample interleave (2SI) to/from square-division (quadrant) pixel mapping and selectable distribution amplifier (1×4). It provides detailed timing analysis for validating alignment of quad-link SDI inputs via a unique timing analyzer that quickly helps to identify possible timing issues for quad-link signals.
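
The difference between those two mappings is easy to picture: in square-division, each 3G-SDI link carries one spatial quadrant of the 4K raster, while in 2SI each link carries a subsampled copy of the whole picture, so any single link is a viewable quarter-resolution image. Here is a toy NumPy illustration of the partitioning (simplified to single-sample interleave; the SMPTE spec actually interleaves two-sample pairs):

```python
import numpy as np

def quadrant_split(frame: np.ndarray) -> list:
    """Square-division mapping: each 3G link carries one spatial quadrant."""
    h, w = frame.shape[0] // 2, frame.shape[1] // 2
    return [frame[:h, :w], frame[:h, w:], frame[h:, :w], frame[h:, w:]]

def two_sample_interleave(frame: np.ndarray) -> list:
    """2SI mapping (simplified): each link carries a 2x2-subsampled copy of
    the full raster, so any one link is a complete quarter-res picture."""
    return [frame[0::2, 0::2], frame[0::2, 1::2],
            frame[1::2, 0::2], frame[1::2, 1::2]]

# A toy 4K-shaped frame: both schemes yield four 1080p sub-images, but
# they partition the pixels very differently.
frame = np.arange(2160 * 3840).reshape(2160, 3840)
assert all(s.shape == (1080, 1920) for s in quadrant_split(frame))
assert all(s.shape == (1080, 1920) for s in two_sample_interleave(frame))
```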

Ideal for critical broadcast applications where high-quality conversion and reliability are required, OG-12GM features openGear’s high-density architecture and DashBoard support on Windows, macOS or Linux for monitoring and control over a local network or remotely.

OG-FiDO-TR-12G offers flexibility and cost efficiency for 12G-SDI to fiber conversion and fiber to 12G-SDI conversion with single-link LC connectivity, enabling long cable runs up to 10km for single mode. OG-FiDO-TR-12G is compatible with all certified openGear products and supports Ross DashBoard software for convenient remote control and monitoring over a PC or local network to further simplify production workflows.

“As demand for high-raster 4K/Ultra HD content increases, convenient 12G-SDI solutions are critical to simplifying cabling and transport of high-bandwidth content,” says AJA president Nick Rashby. “In response to industry demand, we’re bolstering our lineup of 12G-SDI workflow tools with the new OG-12GM card and OG-FiDO-TR-12G transceiver, which feature industry-wide compatibility in openGear frames.”

OG-12GM and OG-FiDO-TR-12G are now available for $895 and $1,325, respectively.

Atomos Intros Neon 17-Inch and 24-Inch HDR Monitor/Recorders

Atomos has introduced the Neon 17-inch and Neon 24-inch HDR monitor/recorders for production and post. They will start shipping in November. Atomos says the 17-inch is ideal for focus pullers, gaffers and wardrobe, and as a production monitor for mobile laptop edit systems. The 24-inch is suited for the video village, DITs and cinematographers, providing accurate and affordable monitoring of NLE and grading application outputs via industry-standard video I/O devices.

Atomos is considering adding Neon 31-inch and 55-inch models as it assesses feedback from users on how those sizes would fit into the changing workflows for on-set virtual and remote productions.

According to Atomos, Neon 17-inch and Neon 24-inch offer accurate and consistent SDR/HDR monitoring and provide recording functionality for easy shot review or render-free output delivery to Apple ProRes or Avid DNx at up to 4K DCI 60p. Suited to the studio, the edit suite or remote workflows, all screens are factory-calibrated or can easily be user-calibrated with the optional USB calibration cable and X-Rite i1Display Pro Plus.

Neon 17-inch uses an FHD 1920×1080 panel with 10-bit display processing, 4K-to-HD scaling and an option for 1-to-1 display pixel mapping. Neon 24-inch uses a 4K DCI resolution panel with true 10-bit fidelity and HD/2K to 4K UHD/DCI upscaling that avoids interpolation methods. Both displays incorporate Full Array Local Dimming (FALD) backlight technology, offering deep blacks and 1,000-nit, full-screen HDR peak brightness. The combination of display uniformity, a super-wide viewing angle of 180 degrees H/V and a dynamic contrast ratio of 1,000,000:1 provides detail across both the shadows and highlights.

The AtomHDR engine provides the ability to accurately manage input and display Gamma/Gamut. Selectable monitor modes allow users to work in either SDR or HDR settings that match either camera acquisition settings or defined delivery standards, including Rec. 709, Rec. 2100 HLG or ST 2084 PQ. Neon provides native, accurate color with DCI-P3 coverage and wide color gamuts, such as BT.2020. These are accurately processed by the AtomHDR engine to deliver consistent representations. Built-in transforms allow users to convert Log to HDR EOTFs for display on the Neon or downstream to client monitors or to use 3D LUTs for SDR to monitor with a specified show look or exposure print-down.

The foundation of the Neon platform is a modular approach that ensures the I/O of the monitor is easy to maintain, replace and ultimately upgrade without having to take the panel out of commission. The Master Control Unit (MCU) is the brain of the Neon, with support for HDMI 2.0 both in and loop out, which provides support for video input at 4096×2160 4K DCI at up to 60p.

The MCU firmware is easily upgradable and allows the upload and storage of up to eight 3D LUTs in its internal memory, or post pros can use a 2.5-inch HDD or SSD to store an unlimited library that can easily be uploaded to the Neon via the AtomOS app on iOS.

The MCU offers a second level of connectivity via the Xpansion port, and Neon includes the AtomX SDI module, which provides two 12G-SDI links. The configurable ports allow for input of up to 4K 60p video with backward-compatible support for single- or dual-link 1.5G/3G/6G-SDI, or the ability to toggle between A and B cameras. File naming is also supported for both Red and ARRI cameras, allowing for easy offboard proxy recording with accurate timecode and matched file names.

Built into Neon is LE Bluetooth, providing the ability for remote operation via AtomRemote on Apple iOS devices running Version 12 or above.


Telestream Updates Prism With Test Tools for UHD, HDR and Surround

Telestream has a new software update for its Prism waveform monitor. Built for post production, live production and engineering, the latest release supports enhanced test tools for UHD and 4K/8K with RGB 4:4:4 12-bit HDR/WCG and false-color tools as well as the latest immersive audio surround formats.

The latest release streamlines post mastering and compliance workflows for UHD, HDR and advanced audio. Experienced post production staff can often subjectively gauge video quality on a regular monitor for HD/SD content, but that is very difficult to do with UHD/HDR standards. Editors and colorists need to know how much of the image is in the HDR zone and where their lightest highlights and darkest shadows are. They also need tools that confirm the color in the mastered content is correct and error-free. Media services providers like Netflix, Amazon and Hulu will reject noncompliant content, causing post production delays and cost overruns.

Based on user requests, the latest Prism software introduces a new user interface designed to simplify the selection, use and configuration of applications available on Prism. The new UI provides a hot bar that can be customized to include the key applications needed for a given Prism project, whether for live or post, engineering or technical QC. Users can choose from 36 different presets to instantly reconfigure Prism and the hot bar layout for different applications. They can export those presets and download them remotely to multiple Prisms to ensure the whole team is working in a consistent fashion and with the proper configuration for their particular workflows. In addition, users can remotely access and control Prism itself through a web UI — an important capability now that teams are required to work and collaborate from a distance.

Other features in the new Prism release include four active trace tiles, the ability to capture and decode SCTE 104 messages and the ability to select and downmix stereo and 5.1 surround audio channels.

Prism is available now. Existing customers can upgrade to software v2.2 for no additional charge.

Sohonet intros ClearView Pivot for 4K remote post

Sohonet is now offering ClearView Pivot, a solution for realtime remote editing, color grading, live screening and finishing reviews at full cinema quality. The new solution will provide connectivity and collaboration services for productions around the world.

ClearView Pivot offers 4K HDR with 12-bit color depth and 4:4:4 chroma sampling for full-color-quality video streaming with ultra-low latency over Sohonet’s private media network, avoiding the extreme compression required to cope with the contention and latency of public internet connections.

“Studios around the world need a realtime 4K collaboration tool that can process video at lossless color fidelity using the industry-standard JPEG 2000 codec between two locations across a network like ours. Avoiding the headache of the current ‘equipment only’ approach is the only scalable solution,” explains Sohonet CEO Chuck Parker.

Sohonet says its integrated solution is approved by ISE (Independent Security Evaluators) — the industry’s gold standard for security. Sohonet’s solution provides an encrypted stream between endpoints and an auditable usage trail for every session. The Soho Media Network (SMN) connection offers ultra-low latency (measured in milliseconds), and the company says that unlike equipment-only solutions that require the user to navigate firewall and security issues and perform a “solution check” before each session, ClearView Pivot works immediately. As a point-to-multipoint solution, it also lets the user pivot easily from one endpoint to the next to collaborate with multiple people at the click of a button, or even stream to multiple destinations at the same time.

Sohonet has been working closely with productions on lots and on location over the past few years in the ongoing development of ClearView Pivot. In those real-world settings, ClearView Pivot has been put through its paces with trials across multiple departments, and the color technologies have been fully inspected and approved by experts across the industry.

Sony adds 4K HDR reference monitors to Trimaster range

Sony is offering a new set of high-grade 4K HDR monitors as part of its Trimaster range. The PVM-X2400 (24-inch) and the PVM-X1800 (18.4-inch) professional 4K HDR monitors were demo’d at the BSC Expo 2020 in London. They will be available in the US starting in July.

The monitors provide ultra-high definition at a resolution of 3840×2160 pixels and an all-white luminance of 1,000 cd/m². For film production, their wide color gamut matches that of the BVM-HX310 Trimaster HX master monitor. This means both monitors feature accurate color reproduction and greyscale, helping filmmakers make critical imaging decisions and maintain faithful color matching throughout the workflow.

The monitors, which are small and portable, are designed to expand the footprint of 4K HDR production, including applications such as on-set monitoring, nonlinear video editing, studio wall monitoring and rack-mount monitoring in OB trucks or machine rooms.

The monitors also feature a new Black Detail High/Mid/Low mode, which helps maintain accurate color reproduction by reducing the brightness of the backlight to reproduce correct colors and gradations in low-luminance areas. Another new function, Dynamic Contrast Drive, changes backlight luminance to adapt to each scene or frame when transferring images from the PVM-X2400/X1800 to an existing Sony OLED monitor. This functionality allows filmmakers to check the highlight and lowlight balance of content containing both bright and dark scenes.

Other features include:
• Dynamic contrast ratio of 1,000,000:1 via Dynamic Contrast Drive, a new backlight driving system that dynamically adapts backlight luminance to each frame of a scene.
• 4K/HD waveform and vectorscope tools with HDR scales.
• Quad View display and User 3D LUT functionality.
• 12G/6G/3G/HD-SDI with auto configuration.

Storage for UHD and 4K

By Peter Collins

Over the past few years, we have seen a huge audience uptake of UHD and 4K technologies. The increase in resolution offers more detailed imagery, and the adoption of HDR brings bigger, brighter colors.

UHD technologies are a significant selling point and are quickly becoming the “new normal” for many commissioners. VOD providers, in particular, are behind the wheel and pushing things forward rapidly — it’s not just a creative decision, but one that is now required for delivery. Essentially, something cinematographers used to have to fight for is now being mandated by those commissioning the content.

This is all very exciting, but what does this mean for productions in general? There are wide-ranging implications and questions of logistics — timescales for data transfer and processing increase, post production infrastructure and workflows must be adapted, and archiving and retrieval times are extended (to say the least).

With these UHD and 4K productions having storage requirements into the hundreds of terabytes between various stages of the supply chain, the need to store the data in an accessible, secure and affordable manner is critical.
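
The arithmetic behind those numbers is straightforward. The hedged Python sketch below estimates camera-original storage from a codec’s data rate; the 2,000 Mb/s figure and the shoot parameters are illustrative assumptions, not measurements from any particular production.

    def shoot_storage_tb(bitrate_mbps, hours_per_day, days, copies=3):
        """Rough camera-original storage estimate in decimal terabytes.

        bitrate_mbps: codec data rate in megabits per second
        copies: redundant copies held during production
        """
        seconds = hours_per_day * 3600 * days
        terabytes = bitrate_mbps * seconds / 8 / 1e6  # Mb -> MB -> TB
        return terabytes * copies

    # e.g., 4K raw at ~2,000 Mb/s, six camera-hours a day over a 60-day shoot:
    # shoot_storage_tb(2000, 6, 60, copies=1) -> ~324 TB for a single copy
    # shoot_storage_tb(2000, 6, 60)           -> ~972 TB with three copies held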

The majority of production, VFX, post and mastering facilities are currently still working the traditional way — from physical on-premises storage (on-prem for those who like to shave off a couple of syllables) such as NAS, local storage, LTO and SANs to distributed data stores spread across different buildings of a facility.

With UHD and 4K projects sometimes generating north of half a petabyte of data (which needs to stick around until delivery is complete and beyond), it’s not a simple problem to ensure that large chunks of that data are available and accessible to everyone involved in the project who needs it — at least not in the most time-effective way. And as sure as death and taxes, no matter how much storage you have to hand, you will miraculously start running out far sooner than you anticipated. Since this affects all stages of the supply chain, doesn’t it make sense to have some central store of data for everyone to access what they need, when they need it?

Across all areas of the industry, we are seeing the adoption of cloud storage over the traditional on-premises solution and are starting to see opportunities where a cloud-based solution might save money, time or, even better, both! There are numerous cloud “types” out there and below is my overview of the four most widely adopted.

Public: The public cloud can offer large amounts of storage for as long as it’s required (i.e., paid for) and stop charging you for it when it’s not (which is a nice change from having to buy storage with a lengthy support contract). The physical infrastructure of a public cloud is shared with other customers of the cloud provider (this is known as multi-tenancy), however all the resources allocated to you are invisible to other customers. Your data may be spread across several different areas of the data center (or beyond) depending on where the provider’s infrastructure has the most availability.
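
As a concrete (and hypothetical) illustration of the public cloud model, the Python sketch below uses the widely adopted boto3 library to push a camera-original file into S3-compatible object storage, then issue a vendor a time-limited download link instead of shipping a drive. The bucket, key and file names are invented for illustration.

    import boto3

    s3 = boto3.client("s3")

    # Push a camera-original file to shared object storage.
    s3.upload_file(
        Filename="A001C003_230915_R1AB.mov",      # hypothetical clip name
        Bucket="show-ocf-archive",                # hypothetical bucket
        Key="ep101/day01/A001C003_230915_R1AB.mov",
    )

    # Hand a vendor a link that expires after an hour.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "show-ocf-archive",
                "Key": "ep101/day01/A001C003_230915_R1AB.mov"},
        ExpiresIn=3600,  # seconds
    )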

Private: Private clouds (from a storage perspective) are useful for those needing finer grained control over their data. Private clouds are those in which companies build their own infrastructure to support the services they want to offer and have complete control over where their data physically resides.

The downside to private clouds is cost, as the business is effectively paying to be their own cloud provider and maintaining the systems over their lifetime. With this in mind, many of the bigger public cloud providers offer “virtual private clouds,” in which a chunk of their resources are dedicated solely to a single customer (single-tenancy). This of course comes at a slightly higher cost than the plain public cloud offering, but does allow more finely grained control for those consumers who need it.

Hybrid: Hybrid clouds are, as the name suggests, a mixture of the two cloud approaches outlined above (public and private). This offers the best of both worlds and can be a useful approach when flexibility is required, or when certain data-access processes are not practical to run from an off-site public cloud (at the time of writing, a 50fps realtime stream of uncompressed 4K raw to a grade, for example, is unlikely to happen from a vanilla public cloud agreement without some additional bandwidth discussions — and costs).
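
To see why, a little back-of-the-envelope arithmetic helps. The sketch below computes the raw data rate of an uncompressed stream (ignoring blanking and audio); at 4K DCI, 50fps and 12-bit RGB 4:4:4 it lands near 16 Gb/s, well beyond what a vanilla public cloud connection typically sustains.

    def uncompressed_gbps(width, height, fps, bits_per_sample, samples_per_pixel=3):
        """Raw video data rate in gigabits per second (no blanking, no audio)."""
        return width * height * samples_per_pixel * bits_per_sample * fps / 1e9

    # 4K DCI at 50fps, 12-bit RGB 4:4:4:
    # uncompressed_gbps(4096, 2160, 50, 12) -> ~15.9 Gb/s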

Having the flexibility to migrate data between a virtual private cloud and a local private cloud while continuing to work could help minimize the impact on existing local infrastructure, and could also enable workflows and interchange between local and “cloud-native” applications. Certain processes that take up a lot of resources locally could be relocated to a virtual private cloud at lower cost, freeing up local resources for more time-sensitive applications.

Community: Here’s where the cloud could shine as a prospect from a production standpoint. This cloud model is based on businesses and those with a stake in the process pooling their resources and collaborating, coming up with a system and overarching set of processes that they all operate under — in effect offering a completely customized set of cloud services for any given project.

From a storage perspective, this could mean a production company running a virtual private cloud with the cost being distributed across all stakeholders accessing that data. Original camera files, for example, may be transferred to this virtual private cloud during the shoot, with post, VFX, marketing and reversioning houses downloading and uploading their work in turn. As all data transfers are monitored and tracked, the billing from a production standpoint on a per-vendor (or departmental) basis becomes much easier — everyone just pays for what they use.
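
A minimal sketch of that chargeback idea, assuming the platform exposes a log of (vendor, gigabytes moved) records and a flat per-gigabyte rate; the $0.09 figure is illustrative, not a quoted price:

    from collections import defaultdict

    def bill_by_vendor(transfer_log, cost_per_gb=0.09):
        """Roll tracked transfers up into a per-vendor charge.

        transfer_log: iterable of (vendor, gigabytes_moved) records
        cost_per_gb: illustrative egress rate in dollars
        """
        totals = defaultdict(float)
        for vendor, gb in transfer_log:
            totals[vendor] += gb
        return {vendor: round(gb * cost_per_gb, 2) for vendor, gb in totals.items()}

    # e.g. bill_by_vendor([("vfx_house", 1200.0), ("post", 800.0), ("vfx_house", 300.0)])
    # -> {"vfx_house": 135.0, "post": 72.0}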

MovieLabs’ “Envisioning Production in 2030” white paper goes deeper into production-related applications of cloud technologies over the coming decade (among other sharp insights) and is well worth absorbing over a cup of coffee or two.

As production technologies progress, we are only ever going to generate more and more data. For storage professionals, systems managers and project managers looking to improve timeframes and reduce costs, the best solution may not be purely financial or logistical; it may also come down to how easily a system facilitates collaboration and interchange and fosters closer working relationships. On that question, the cloud may well be the best fit.

Studio Images: Goldcrest Post Production / Neil Harrison


Peter Collins is a post professional with experience working in film and television globally. He has worked at the forefront of new production technologies and consults on workflows, project management and industry best practices. He can be contacted on Twitter at @PCPostPro or via email at pcpostpro@icloud.com.