

Review: Dell Precision 5480 Mobile Workstation

By Mike McCarthy

It has been a few years since I’ve tested and reviewed a laptop. Technology has progressed a lot since then, and systems are dramatically more powerful than they were just four years ago — and GPUs have improved more than CPUs by most measures.


I recently had the opportunity to test out the Dell Precision 5480. This is Dell's highest-end small-form-factor laptop. It is a 14-inch system packed with a 14-core 13900H CPU; 64GB of DDR5 memory; and an Nvidia RTX 3000 Ada-generation GPU. There are lots of laptop options out there with a 13900H CPU, which pairs six hyperthreaded performance cores with eight efficiency cores (for a total of 20 processing threads), but not very many of them come in a small, 14-inch frame. And the RTX 3000 Ada is even harder to come by. With 4,608 CUDA cores, 8GB of GDDR6 memory and nearly 20 teraflops of processing power, the RTX 3000 GPU is the physical equivalent of the GeForce 4070 Mobile, but with professional-level drivers. This little laptop packs a punch.

The Display
Now there is no getting around the fact that 14 inches is a very small screen. Personally, I like huge screens, so even an 18-inch laptop screen would seem small to me, but much of my time using any laptop is likely to be spent with it connected to a larger display, whether in the office or at home. For the times when I am using it on the move or at the kitchen table, the 2560×1600 resolution of this WLED screen is a good fit for the 14-inch size. It can be set to 100% scale by eagle-eyed users who covet screen real estate, but most people will have a good experience at 150%.

The Dell Precision 5480 is advertised as supporting 500 nits, which can be helpful when using it outdoors, but it is a glossy screen. Windows reports that the display supports HDR video streaming, but there is no “Use HDR” option for the UI. I am still trying to figure out the logic behind Microsoft’s support for HDR monitoring. The screen also supports blue light filtering at a hardware level to reduce eye strain, which should be better than Windows’ night light software solution. It is also a touch screen, which can be a useful feature on occasion.

The Internals
I am always interested in fitting the maximum amount of useful computing power into the smallest possible package. Back in the day, I remember testing the PNY Prevail Pro, which, at 15 inches, was the smallest VR-capable system. Beyond that, I still have my 13-inch Sony Z1 with a quad-core, 3GHz CPU, a GeForce 330M and dual SSDs. Back in 2010, it could run Adobe Premiere Pro CS5 with full CUDA acceleration in a 3-pound package. (The Dell Precision 5480 is actually very similar to that one in terms of size and weight, but, of course, the Dell is far more powerful.)

A system smaller than 15 inches with a discrete GPU is hard to come by, which is why my HP ZBook X2 with Quadro GPU and a 14-inch, 10-bit display was so unique. But that system is five years old, with no direct replacement available, so I was very excited to see that Dell was stepping up to the plate with a powerful 14-inch pro workstation in a 3.3-pound package that's under ¾ of an inch thick. And with a 13th Gen Intel CPU supporting 20 threads, paired with a new Ada-based RTX GPU delivering 20 teraflops, the Dell Precision 5480 is not lacking in power.

The machine has four Thunderbolt 4 ports, which are all power-delivery-capable, plus an analog audio jack and a MicroSD reader. It comes with a small USB-C device that offers a USB-A port and an HDMI 2.0 output. The keyboard seems solid, with half-size up and down arrows and a fingerprint-enabled power button in the upper right corner, which will be natural for Mac users.

In my initial demo unit, the touchpad had a sticking issue with the click mechanism, but it turned out to have just been a defect. Once replaced, the touchpad worked great. This process did highlight to me just how important a touchpad is on a small laptop, even as a mouse user. Anytime I am using the laptop on the go (which is the point of a small laptop), the touchpad is the main pointing device, so I use it far more than I originally recognized.

The system comes with a USB-C-based power supply, rated for 130 watts, as well as the previously mentioned adapter for HDMI and USB-A ports. It comes packaged in a molded cardboard container inside a folded cardboard packing box for good product protection — and more ecofriendly than the older Styrofoam-based packaging.

A small laptop offers flexibility. In the office, you can use it with a full set of peripherals. When at home, you can plug in your monitor and accessories, and pick up exactly where you left off.

With virtual desktops, you can get a similar experience by working in the cloud on various systems at different locations, but that doesn’t allow you full access when you are in transit or when you are in places with limited internet access. The Dell Precision 5480 seems like an ideal system for anyone who needs editing power on the go and has monitors to plug in to in their primary work environments. (And they don’t need a larger laptop display on the unit itself.)

Battery Life
Admittedly, the configuration of this particular model should be expected to have the worst possible battery life (the most powerful CPU and GPU available, plus a high-resolution screen), but it's not as bad as you'd think. I used this system when I attended the Adobe Max conference, and I did not bring the charger with me during the day. The only time I regretted that is when I accidentally left Adobe Photoshop running in the background for a few hours. Otherwise, I was able to do basic tasks all day long with no issue.

For non-work-related activities such as gaming, I typically got about two hours of usage when playing a 3D game before needing to plug it in. Dell has done a great job of saving power when it is not needed. Power-hungry, performance-based tasks will drain the battery… which is to be expected. But when just doing simple browser-based tasks, I was able to use it all day without issue.

Software
The unit comes with Windows 11 Pro installed. Even after 18 months, I still have not “adapted” to Microsoft's newest OS, and I prefer Windows 10. But, based on my performance tests, Windows 11's scheduler, which uses Intel's Thread Director to distinguish between the performance cores and the efficiency cores on Intel's newest chips, does make a difference. (Windows 10 assigns demanding tasks to the efficiency cores, which take longer to finish them, decreasing overall performance.)

One way around this is to disable the E-Cores in the BIOS and stick with Windows 10, but especially on a laptop, that negates much of the power efficiency of the newer designs. So you are pretty stuck with Windows 11 on these newer systems. But besides that, the Dell Precision 5480 comes with very little bloatware — just drivers and utilities for the various hardware devices and some Dell performance and configuration optimization tools.

The Graphics Processor
The RTX 3000 GPU is the physical equivalent of the GeForce 4070 Mobile, with 4608 CUDA cores, 8GB of GDDR6 memory and nearly 20 teraflops of processing power. It benchmarks with about 25% of the performance of my giant GeForce 4090 desktop card, which is to be expected based on the paper specs. This is actually fine in most cases since I rarely need to harness the full power of that GPU when doing regular editing tasks. And 20 teraflops is twice the performance of the top-end GeForce 2080/RTX 5000 from two generations ago, and it’s now available in a 14-inch laptop.

Key for professional use of a machine this size is external display support, so I also tested the Dell Precision 5480 with a number of external displays, up to and including the Dell UltraSharp UP3218K monitor, which was supported at its full 8K, 60fps resolution by using two USB-C-to-DisplayPort cables. The last HP mobile workstation I tested required a docking station for full support of that display, and my Razer is limited to 30fps unless I use an external GPU. It's good to see that Dell fully supports its own display range on its own system, but I do recognize that's really a function of the GPU and supported output ports. Nonetheless, you can use this system with an 8K monitor if you so desire.

Storage
The SSD reports 4.5GB/s write and 4.8GB/s read speeds in AJA System Test, which isn't the fastest PCIe 4.0 result but is more than enough for 99% of power users. Dell offers SSDs in sizes from 256GB to 4TB, with self-encrypting models at 512GB and 1TB for users with those requirements.

Performance
CPUs are much harder to compare on paper, which is why tools like Maxon’s Cinebench are so valuable. Blender also has a benchmarking tool for comparing system performance. And performance is always a relative measure since we are comparing a specific system (this one) to other potential options.

Usually, reviewers compare systems to others that are very similar, but in this case, I took a different approach for two reasons. First, I don’t have similar current options to compare to. Second, there is value in comparing what you are sacrificing when you scale down to a small laptop. Which tasks can you do effectively on a mobile system, and which can wait until you are in front of (or remoting into) a powerful desktop workstation?

The 13900H, with six performance cores and eight efficiency cores, has 20 threads available to the OS. My desktop with a 12700K CPU also has 20 threads, coming from eight performance cores and four efficiency cores. In most synthetic render tests, this little laptop has about 70% of the CPU processing power of my consumer desktop tower.

In real-world tests exporting cinema-quality files out of Premiere, my results were frustratingly inconsistent. This appears to result from a combination of both Intel's new power-saving technology and Adobe's software optimizations. I ran my entire suite of standard test exports multiple times and got widely varying results. I then reran them repeatedly on my 12700K-based desktop and also got less consistent results than I recall in the past. Most of the time, I test repeatedly with slightly different settings so that I don't repeat the exact same test a number of times. This has really shifted my view on quantifying performance in Premiere.

The best tests would be a live-playback test and potentially a latency test to see how long it takes playback to begin after you press the space bar. But due to the playback optimizations within the program, this is no longer a good way to compare different systems. Puget Systems, which specializes in benchmarking, details the challenges of quantifying performance in Premiere in a great article that dives even deeper into the topic than I have. Regardless of those limitations, here are the raw numbers from my Media Encoder benchmarks for you to evaluate compared to my other systems.

Summing Up
Suffice it to say, this machine can edit and play back nearly any sequence due to Premiere's optimizations, and it can export high-quality output files with decent performance. But for longer renders and Red source footage, it might be best to render on your desktop workstation. This is totally reasonable for a portable laptop — no one should expect a 14-inch notebook to replace server-level hardware. But the Dell Precision 5480 can accomplish most editing tasks with ease.


Mike McCarthy is an online editor/workflow consultant with over 15 years of experience on feature films and commercials. He has been involved in pioneering new solutions for tapeless workflows, DSLR filmmaking and multi-screen and surround video experiences. Check out his site.

 


Foundry Releases Flix 7.0 for Streamlined Preproduction

Foundry has launched Flix 7.0, an update to its preproduction software that helps studios develop stories by managing editorial round-tripping, storyboard revisions, file versioning and more.

Now offering integration with Autodesk Maya, Flix 7.0 enables both 2D and 3D artists to collaborate from anywhere in the world using Flix as a central story hub. Snapshots and playblasts can be imported from Maya into Flix 7.0 as panels, then round-tripped to and from editorial. Flix manages naming, storing and organizing all files, and it allows teams to provide feedback or revisit older ideas as the story is refined.


While Flix also connects to Adobe Photoshop and Toon Boom Storyboard Pro, the Maya integration lets layout and storyboard teams work in tandem. These teams can now collaborate concurrently to identify areas for improvement in the story, such as timing issues, before they become too complicated and expensive to change later in production. 2D artists can bring Flix's Maya panels into their drawing tool of choice so that they can trace over the viewport for faster storyboarding. 3D artists can reference 2D storyboard panels from Flix directly in Maya when building complex scenes or character models, providing additional time savings.

Flix 7.0 simplifies building new extensions with a new Remote Client API. This API allows studios to create custom tools that integrate with Flix using the same API as the built-in extensions for Maya and Photoshop. Documentation and example code for the Remote Client API are provided to help studios build custom integrations with their choice tools or to create entirely custom workflows. Flix 7.0’s new extension management system enables studio supervisors to test, update and audit all extensions, with the added ability to deploy them across production from a single place.
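Foundry says documentation and example code ship with the Remote Client API; the sketch below is purely illustrative, assuming a hypothetical local HTTP endpoint, action name and payload fields rather than Foundry's documented interface, just to show the general shape of a studio-built extension pushing a panel into Flix.

```python
# Hypothetical sketch only: the port, endpoint path, action name and payload
# fields are placeholders, not Foundry's documented Remote Client API.
import json
import urllib.request

FLIX_CLIENT_URL = "http://localhost:8080"  # assumed address of a local Flix client


def send_panel(image_path: str, comment: str) -> dict:
    """Ask the running Flix client to create a new panel from a rendered image."""
    payload = json.dumps({
        "action": "create_panel",      # hypothetical action name
        "source_file": image_path,     # e.g. a Maya playblast frame
        "comment": comment,
    }).encode("utf-8")

    request = urllib.request.Request(
        f"{FLIX_CLIENT_URL}/api/panel",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


if __name__ == "__main__":
    print(send_panel("/shots/sq010/previz_v003.png", "Layout pass from Maya"))
```

In practice, a studio tool following this pattern (a small script talking to the local client and posting panel data) is the kind of extension the new management system would then test, update and audit across a production.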

Flix 7.0 offers single sign-on (SSO), so IT teams can authenticate Flix users through their studio's existing SSO platform to centrally manage secure access to story development assets for both staff and freelancers. Flix also supports multi-factor authentication to provide an added layer of security.

Other new features in Flix 7.0 include:

  • New metadata system — Scene data is now stored directly on each Flix panel. For example, for Maya users, global cameras, locators and file path data will be recorded for assets selected in the viewer.
  • Enhanced Adobe Premiere plugin — A multitude of updates and a new UI for the Flix Premiere Adapter eliminate the limitations of previous versions, providing an efficient editorial workflow.
  • Photoshop plugin redesign — The Photoshop extension has been rebuilt, bringing users new UI customization options.
  • Updated notification preferences — The ability to turn off automatic email updates each time a panel is published or changed.

 


LucidLink Filespaces Available as Panel in Adobe Premiere

LucidLink showed its first integration for a creative application at IBC 2023. LucidLink Filespaces is now available for Adobe Premiere Pro video editing software, allowing creative editors to preemptively cache just the media needed in their edit directly within the Premiere app.

With this new integration within Premiere, creative editors can now either pin just the clips needed within their sequence or, if more precision is needed, cache the clip ranges found within their edit, thus giving editors a faster and more performant experience. The LucidLink Panel for Premiere Pro marks the first in a series of integrations designed to seamlessly integrate with the Adobe Creative Cloud ecosystem. Unlike conventional methods that require downloads and time-consuming transfers, LucidLink streams precisely the essential data required by any creative tool. The LucidLink integration with Premiere brings intelligent, sequence-aware caching directly into the editing tool. By taking away the need to pin entire folders on a desktop level, the LucidLink Panel for Premiere enables creatives to pin and cache only the relevant content needed by the editing timeline.
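Conceptually, sequence-aware caching walks the open timeline, collects only the source ranges each clip actually uses (plus small handles for trims) and hands those ranges to the cache. The sketch below illustrates that idea under assumed data structures; the Clip type and range-gathering helper are hypothetical and are not LucidLink's panel logic or Adobe's Premiere Pro API.

```python
# Illustrative only: Clip and ranges_to_cache() are assumptions for this sketch,
# not LucidLink's panel logic or Adobe's Premiere Pro API.
from dataclasses import dataclass


@dataclass
class Clip:
    source_path: str    # media file inside the Filespace
    in_seconds: float   # first second of the source used in the sequence
    out_seconds: float  # last second of the source used in the sequence


def ranges_to_cache(timeline: list[Clip], handle_seconds: float = 2.0):
    """Collect only the portions of each source file the edit actually touches."""
    wanted: dict[str, list[tuple[float, float]]] = {}
    for clip in timeline:
        start = max(0.0, clip.in_seconds - handle_seconds)  # small handle for trims
        end = clip.out_seconds + handle_seconds
        wanted.setdefault(clip.source_path, []).append((start, end))
    return wanted


if __name__ == "__main__":
    sequence = [
        Clip("interview_a.mov", 120.0, 135.5),
        Clip("broll_city.mov", 10.0, 14.0),
    ]
    for path, ranges in ranges_to_cache(sequence).items():
        # a real integration would ask the file system to pre-fetch these ranges
        print(path, ranges)
```

The point of the sketch is the contrast with folder-level pinning: only the seconds of media actually referenced by the sequence ever need to come down from storage.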

Putting LucidLink's caching technology front and center for creative users allows an editor, for example, to work with high-resolution media despite poor internet speeds and without breaking known and familiar collaborative workflows. All of this is possible without downloading and duplicating media.

Other key features of the LucidLink Panel for Premiere include:

  • Proxy and high-resolution workflow intelligence: Users can tailor their cache to high-rez, proxy or both, ensuring a seamless offline-to-online editorial workflow without needing to download or duplicate enormous amounts of camera-original content.
  • Optimized performance: Users can optimize Premiere settings for peak performance with a one-touch adjustment.

The new panel will also indicate when content is not in a Filespace during the pinning or caching process.

For documentary filmmakers on the go, in-app, sequence-aware caching means a high-performance experience even in remote locations, without wasteful downloading of content that isn't needed for the edit. And post-conform editorial is at last simple, efficient and free from the longstanding headaches that finishing editors universally know.

 

 

New Adobe Extension for Atomos Edit, Larger Shogun Monitor/Recorders

At IBC 2023, Atomos made two announcements that will benefit video editors and filmmakers: an Adobe extension for Atomos Edit and the new larger, brighter and connected Shogun series of camera-mounted monitor/recorders running AtomOS 11.

Atomos Edit is a browser-based, collaborative video editor that offers a rapid way to get content directly from a camera to an editing timeline, simplifying the end-to-end production process. Users can upload media in the field directly from any connected Atomos device (via Atomos Cloud Studio), edit it and then immediately publish it directly to YouTube or Vimeo. Alternatively, users can export a sequence via an XML file to an NLE such as Adobe Premiere Pro.

To enhance and streamline workflows for Adobe users, Atomos has developed an extension for Atomos Edit whereby the browser-based NLE appears as a panel directly inside Premiere Pro.

Editors will see content being uploaded and can access previously edited sequences from Atomos Edit in real time. Then they can drag this content straight into a Premiere Pro project.

Says Trevor Elbourne, CEO of Atomos, “While users capture content with their connected Atomos device, footage is uploaded to the cloud in near-real time. After editing collaboratively with our browser-based Atomos Edit NLE to create sequences, they can immediately start creatively editing directly within Premiere Pro. Connecting the Atomos and Adobe ecosystems like this will change the rules for millions of existing and new Adobe Creative Cloud users.”

This cloud-based workflow will streamline all kinds of production. Users can simultaneously capture from multiple Atomos-connected products, funneling content into Atomos Edit to create sequences and rough edits. Then Adobe Premiere editors can take these sequences and finish the content using all the facilities within Adobe Creative Cloud applications.

With Atomos Edit, web-optimized playback allows teams and external clients to view images, audio and video on any device. It’s easy to review and approve from phone, tablet or desktop. Activity reports show who has viewed, downloaded, commented and more, and push notifications alert users of reviews or approvals.

Key Atomos Edit features include:

  • Professional-style timeline with multiple audio and video tracks
  • Multi-user editing with ability to lock the timeline
  • Collaborative review, comment, tag and approve workflows
  • High speed, interrupt-resilient uploads of large video files
  • Effects and transitions
  • Powerful sequence versioning
  • The ability to directly import stock video content

Also at IBC, Atomos introduced Shogun and Shogun Ultra, the latest generation of Shogun camera-mounted monitor-recorders. Like the recently announced Ninja models, both Shogun devices incorporate a completely new operating system — AtomOS 11 — that delivers a host of new features, including EL Zone exposure-referenced colorized image, ARRI False Color and new scheduled playback and recording tools.

The new Shoguns include more codecs as standard: 6K Apple ProRes RAW (8K with Shogun Ultra), Apple ProRes and DNxHD, as well as H.265, which was previously available only as a paid option. For better consistency, Shogun and Shogun Ultra have the same differentiating features as the new Ninja and Ninja Ultra.

The key differences between the new Shogun and Ninja lines are Shogun’s brighter, 2000-nit, 7-inch screen and its fully integrated connectivity. Because Shogun has both 12G-SDI and HDMI inputs and outputs, it can be used for cross-conversion, unlike a Ninja fitted with Atomos Connect. Shogun has more power options too, with an NP battery slot and an integrated 2.1mm locking jack DC input socket.

Shogun Ultra is designed for use with cinematic cameras and can record full-quality files in Apple ProRes RAW at up to 4K 60p. At the same time, Shogun Ultra records HD 60p to H.265 and supports automatically matching file names, timecode and record triggers from many popular models of ARRI, Canon, Red and Sony cameras.

Both Shogun and Shogun Ultra models have the new 4K camera-to-cloud mode that lets users record and upload higher-bit-rate H.265 video with higher frame rates and customized parameters. The files are lightweight enough for camera-to-cloud workflows but more than good enough in quality for immediate use in social media, sports reporting or newsgathering.

Both Shogun units can take advantage of Atomos RemoteView — a new technology that makes it possible to share live views wirelessly from an Atomos screen with other Atomos monitors and with iPads, Macs and Apple TVs. Users can monitor what’s happening on-set and look through any connected camera that’s taking a shot from anywhere in the world. RemoteView gives production teams features never seen before at this price point.

Shogun Ultra features lower-latency cloud connectivity and higher throughput thanks to Wi-Fi 6E integration.

“With both our Ninja and Shogun lines, we are bringing new functionality to market and at the same time making our different product offerings easier to understand,” Elbourne says. “Ninja and Shogun offer matched performance, but the main differentiation is in screen size and brightness — 5-inch, 1000-nit versus 7-inch, 2000 nit — and cloud connectivity — modular or integrated. We want to offer our customers a clearer choice of price point versus performance, exactly as we do with new Ninja and Ninja Ultra.”

AtomOS 11 will also be made available for free to existing Shogun Connect users via a firmware update. Going forward, Shogun Ultra will supersede Shogun Connect.

Both Shogun and Shogun Ultra will ship starting in early October, priced at $999 and $1,199, respectively.

 


Maxon One Updates: Redshift for Maya, Lens Flares for Premiere

Maxon has announced updates to a series of products in its Maxon One family. Redshift now includes support for Autodesk Maya 2024 on Windows, Camera and Backplates for 3DS Max, a new Principled Hair material preset for Cinema 4D and more. Red Giant's VFX suite adds support for the Real Lens Flares plugin in Adobe Premiere Pro and brings valuable upgrades to Supercomp. The collection of Maxon Capsules continues to grow, this time in the form of expertly crafted Laubwerk Plant assets.

Updates include:

Redshift
Redshift 3.5.16 extends DCC support and features substantial improvements to memory management in Redshift CPU.

Redshift is now available for the latest version of Maya on Windows machines, with support for other operating systems coming soon.

  • Redshift Camera and Backplates are now available in Redshift for 3DS Max. Now 3DS Max users can easily apply different backplates for each camera, either in-render or as a post-effect, with robust frame fitting and adjustment options.
  • A new material preset for Principled Hair in Cinema 4D makes it easier to get up and running with physically based hair. More realistic hair, with nuanced settings for texture and shape variation, can now be created with a few clicks in the Material Manager while Redshift is active.
  • Redshift CPU rendering is now faster in all host software packages, with significantly lower memory requirements (50% on average).
  • General performance and stability updates in 3DS Max, Cinema 4D and Houdini.
  • Added support for animated vertex deformation with motion blur in Redshift for Blender.

VFX
Editors can now add Real Lens Flares to any footage within Adobe Premiere Pro, and Supercomp now offers a more natural simulation of light, allowing editors to quickly match the specific conditions of a shot.

  • Real Lens Flares is now available in Premiere Pro. It works exactly the same as in After Effects, with the same highly detailed level of customization. Videographers can create lens flares based on simulated optical models and raytraced light, with an unprecedented amount of artistic control over the most realistic-looking flares possible.
  • The upgrade to Supercomp provides more realistic and beautiful Light Wrap, Reverse Light Wrap and Diffusion effects with more creative control.

Capsules 
With Maxon's Capsules, artists can use tailor-made assets to enhance their projects. The first 12 Laubwerk Plant Assets are now available, a sampler of plant assets created by vegetation experts Laubwerk.

  • There are 12 new Laubwerk Plant Assets – from trees like maple and cherry to bushes like lavender and Dutch garlic to tropical ones like the Kentia palm.
  • A collection of 28 new Redshift ArchViz materials now makes it easy to render plastic objects. Great for everything from industrial design to architectural visualization.

Users are encouraged to update immediately through the Maxon App.

 


Editing Missing: Screens and Smartphones Tell Story

Sony Pictures’ Missing is a story told almost entirely through computer screens and smartphones. This mystery thriller, streaming now, was directed by Nicholas Johnson and Will Merrick and follows a teen girl named June (Storm Reid), whose mother (Nia Long) goes missing while on vacation with her new boyfriend (Ken Leung). Stuck thousands of miles away in LA, June uses technology to find her mom before it’s too late.


L-R: Editors Austin Keeling and Arielle Zakowski

The filmmakers relied on cloud-based and AI-powered tech to tell their story. Editors Austin Keeling and Arielle Zakowski chose Adobe Premiere, After Effects and Frame.io to edit, build shots and design thousands of graphics simultaneously. The complex and VFX-heavy workflow was custom-built to make the audience feel as if they’re logging in, clicking and typing along with the characters in real time.

Let’s find out more from the editors…

How early did you get involved in the film?
Austin Keeling: We both got started about six months before principal photography began, so we were some of the earliest crew members involved. We spent those first months creating a previz of the entire film by taking temp screenshots of apps on our own computers (we were working from home at the time) and building each scene from scratch.

The directors would take pictures of themselves and record themselves saying all the lines, and we would slot those into the previz timeline to create a sort of animated storyboard of each scene. By the time we were done, we had a completely watchable version of the entire movie. This was a great time to test out the script to see what was working and improve things that weren’t. Nick and Will were still writing at the time, so they were able to incorporate discoveries made in the previz stage into the final script.

This is not your typical film. What were the challenges of telling the story through screens and smartphones?
Arielle Zakowski: This film was unlike anything either of us had ever worked on before. At first the challenges were mostly technical ones. We hadn’t had much experience with Adobe After Effects, so we had to teach ourselves to use it pretty quickly. And none of the film is actually screen-recorded — it’s all built manually out of layered assets (desktop background, Chrome windows, various apps, mouse, etc.), so in some scenes, we were juggling up to 40 layers of graphics.

Once we became comfortable with the technical side of the process, we really dove into the challenges imposed by the unique screen perspective. It gave us a whole new set of tools and cinematic language to play with — building tension with nothing more than a mouse move, for example, or conveying a character’s emotion simply through how they type a message. Ultimately the limitations of the computer screen forced us to make more and more creative storytelling choices along the way.

What direction were you given by Will and Nick?
Keeling: They were very much involved in the post process from day one. They had already edited the previous film in this series, Searching, so we leaned heavily on them in learning the screen-film workflow.

In the previz stage, each of us would take a scene and build it from scratch and then send it to the directors for notes.


From that point on, it became a constant collaboration, and when we moved into a traditional office after principal photography, the directors were with us in the editing rooms every day. They wanted this film to feel bigger than Searching in every way, so they really encouraged us to try new things in the pacing, coverage, transitions, etc. They had a wealth of knowledge about how to tell a screen-life story, so working with them was creatively inspiring.

Was the footage shot traditionally and then put into the screens? If traditionally, was it then treated to look like it’s on phones?
Zakowski: All the footage was shot traditionally and then added into the screen graphics in post. Our cinematographer Steven Holleran used a total of eight different cameras to create a realistic feel for the multiple video outputs we see in the news footage, FaceTime calls, security cameras and iPhones.

Once the footage was incorporated into the graphical elements, we added compression and glitches to some of the footage to further replicate the experience of seeing footage on a laptop screen.

There is a lot happening on the screen. How did you balance all of it to make sure it wasn’t distracting to the viewer?
Keeling: This is partly why editing a screen movie takes so much time. We built the entire computer desktop in a wide shot for each scene and then used adjustment layers to create pans, zooms and close-up shots.

We essentially got to choose how to “cover” each scene in the edit, which allowed for nearly endless possibilities. We were able to tweak and alter the scenes in tons of ways that aren’t possible in a traditional film. We relied a lot on feedback to make sure that the audience wouldn’t get lost along the way. Through multiple test screenings, we were able to figure out which beats were distracting or unclear and then push to find the simplest, most effective way of telling the story.

Do you think the story could have been told in a more traditional way? How did the use of screens and phones help ramp up the drama/mystery/suspense?
Zakowski: The mystery at the core of this movie is thrilling enough that it could probably work in a traditional format, but we think the screen-life storytelling elevates this film into something unique and timely. Watching someone dig around on the internet isn’t inherently thrilling, but by putting the audience in June’s POV and letting them find the clues along with her, we’ve created a fully immersive and intimate version of the story.

Probably everyone has felt some dread before opening an email or anticipation while waiting for a phone call to go through. This format allowed us to really explore the relatable emotions we deal with as we use technology every day.

You edited in Premiere. Why was this system the right one to tell this story?
Keeling: We used Adobe Creative Cloud from start to finish. We edited in Premiere Pro using Productions so we could easily move between projects and share timelines with each other and with our assistant editors. All of the final graphics were made in Illustrator and Photoshop. We used Dynamic Link to send the locked film to After Effects, where we added tons of finishing details. And we used Frame.io to share cuts with the directors and the studio, which made it so easy to get notes on scenes.

We needed programs that were intuitive and collaborative, ones that made it possible to move the film seamlessly from one stage to the next.

Can you talk about using cloud tech and AI on the shots and graphics and the tools you used? What was your workflow?
Zakowski: Because we edited the previz during the pandemic, we relied heavily on cloud-based servers to share projects and assets while working from home. We actually used surprisingly few AI tools during the edit — most of the film was created with straightforward, out-of-the-box Adobe products. The unique nature of this film allowed us to use a lot of morph cuts in the FaceTime footage to combine takes and adjust timing.

Everything Everywhere All at Once Assistant Editors Talk Process

By Randi Altman

Everything Everywhere All at Once is quite a ride – tons of action, some messy family dynamics, true love and some super-weird multiverses. It has it all. But keeping track of all those timelines and action scenes was no easy feat. In fact, the film’s editor, Paul Rogers, has received an Oscar nomination for his role on the film. It was well-deserved, but he had help — namely, assistant editors Zekun Mao and Aashish D’Mello.

Zekun Mao

Everything Everywhere All at Once was directed by Daniel Kwan and Daniel Scheinert, collectively known as the Daniels. It has received 11 Oscar nominations, including Best Picture and Best Director.

We reached out to Rogers’ assistant editors to talk about how they worked on the film, going remote and using Adobe Premiere.

How early did you guys get involved on the film and with Paul Rogers?
Zekun Mao: I received an email about a film looking for an AE who could speak Chinese. I applied and met Paul and the Daniels for the first time when I interviewed for the job in late 2019. I eventually started the job when production began in January 2020.

Aashish D’Mello

Aashish D’Mello: Zekun approached me toward the end of principal photography, as the film was looking for another AE. I started the job in April 2020 after principal photography had wrapped and post production had gone remote.

Can you talk about your roles on the film?
Mao: I was mainly responsible for supporting Paul and the Daniels, which included prepping dailies and adding English subtitles for the non-English dialogue. As we went remote in March 2020, I had to quickly come up with a remote workflow and make sure the edit process was working smoothly for everyone.

D’Mello: I was mainly responsible for coordinating between editorial and the VFX team. I would prep and turn over VFX shots to the artists, keep track of each shot as the edit was constantly changing and, eventually, make sure all the VFX were delivered for the color process. I was also helping out Zekun whenever needed.

How much planning had to go into the editing considering how complicated the story was?
Mao: The script was pretty much the blueprint for the editing process. Paul was editing during production, and he was able to communicate with the Daniels about what they were shooting while they were shooting it. Paul would sometimes ask me to prioritize prepping certain scenes because he wanted to work on them sooner than others to make sure they were playing well in the edit. If he felt there was anything additional needed, he would communicate that to the Daniels.

I imagine that keeping track of all the different timelines was a huge process. How did you pull that off?
Mao: We were dealing with two separate kinds of timelines. One was all the universes in the movie, and the other was the actual editing sequences. The number of universes seemed rather complex initially, but as we got more and more familiar with the footage and the story, things became easier to understand. We ended up referring to scenes and shots by their universes instead of scene numbers because it was easier to remember them that way.

D’Mello: As for the edit sequences, the remote workflow made them difficult to wrangle. There were many different versions of the cut, with multiple people working on it and VFX shots changing all the time. We really had to pay attention to labeling each cut with the date and other specific information, always backing up previous versions so that if Paul and the Daniels wanted to open an older cut, it would be available. Toward picture lock, we were able to tidy up and simplify the Premiere projects and sequences, as a lot of the cut was already pretty final.

Were you guys asked to take a look at Paul’s edits for feedback? Were you given scenes to edit yourselves?
Mao: Yes, absolutely Paul would ask us for feedback. He would not only ask about our overall feelings about each version, but also about any specific moments, especially when there were drastic changes in the cut. We were constantly busy with different tasks each day, so it became difficult to find time to edit scenes ourselves.

Can you walk us through the editing workflow in general? How was the team getting and implementing notes from the Daniels?
Mao: Paul was assembling cuts during principal photography. Sometimes he would go to set to look at cuts with the Daniels. When we went remote in March 2020, Paul and the Daniels initially used Zoom to edit, but most of the time Paul would just post a sequence to Frame.io, and the Daniels would leave comments on it.

Later, they began using Evercast for their edit sessions. The Daniels would edit on their own as well, and I was asked to prepare those sequences in separate project files for them to work on. Because we were using a new feature in Premiere Pro called Productions, it was easier to exchange projects/sequences with everyone involved. Paul would then incorporate these sequences from the Daniels into the cut. Toward picture lock in 2021, the Daniels and Paul went back to an in-person setting, where it was easier for them to bounce ideas off each other.

There are a ton of VFX. Can you talk about the VFX workflow and the edit? Were you using temp VFX?
D’Mello: We had approximately 500 VFX shots in the final movie. VFX supervisor Zak Stoltz was on-set during production and had made an initial list of potential VFX shots. During the edit, Paul and the Daniels would work on temp versions of the VFX shots in Premiere Pro to see how they played in the edit. The final versions of these shots would then be redone by our VFX team. Our VFX team was a total of six people, most of whom were involved since the beginning of post, so it was easy for the Daniels to communicate with everyone directly.

Because we were all remote, Zak and I created an editorial/VFX workflow using Resilio Sync that enabled us to quickly sync prepped and rendered shots. I had two separate sets of Premiere projects: one for editorial and one for VFX prep. I would take the temp VFX from the edit project to the VFX prep project, relink to the raw footage and render the portion needed for the effect. Using that portion, I would then prep the shot in After Effects to match the edit. I would then sync all these files directly with the VFX artists. The Daniels would also do a bunch of VFX work themselves, so it was important to include and incorporate their process into our workflow as well.

Were some scenes more challenging than others? Can you explain why and how the team dealt with that?
Mao: For me, it was the staircase fight toward the end of the movie. That whole portion of the film was shot over the course of a week. We were getting bits of footage of the fight each day, so when I was organizing the scene for Paul, I had to keep track of which portion of the fight was already shot and which parts were missing.

Moreover, there were parts that were shot as pickups later on. In order to help Paul go through this enormous amount of footage, I arranged the shots in order of the actions taking place in them. It just took a while to go through all the footage and get a complete sequence.

D’Mello: The sequence where Evelyn (Michelle Yeoh) is flashing through multiple universes (right before we go to the rock universe) definitely stands out. We had to keep track of so many stock footage shots that were used as backgrounds, even for a single frame. The VFX artists also created some of their own backgrounds for the sequence. We ended up doing a little precoloring on the greenscreen shot of Evelyn’s face and then sending it to VFX to combine with the backgrounds instead of coloring each and every single background separately.

Because of how many backgrounds were involved, we went through multiple versions before the sequence could be finalized. It was a crazy process for a crazy sequence in a crazy movie.

I’m sure there are a ton of things I haven’t touched on. Can you both share something that I might have missed?
Mao: Paul, along with the Daniels, had crafted very specific resizes and complicated speed changes/time remapping in the edit. In order to prep the edit for coloring in the most accurate and precise way, we had to use a very specific workflow for the online process — relinking to all the raw media within Premiere while leaving all the resizes and speed changes untouched. We then rendered the full film to 4K DPX files, which were directly sent to the color house.

Adobe was really helpful during the editing process and was always asking us for feedback about Premiere Pro. We would report certain issues or bugs that occurred while using the software, and they would do their best to get us new versions or provide workarounds that addressed the issues.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 

Chesa’s Acorn Cloud: SaaS-Based MAM for Video Teams

Chesa, a US-based systems integrator supporting media supply chain workflows, has introduced Acorn Cloud, a SaaS-based media asset management platform. Acorn targets small and mid-sized teams that create, edit and deliver video content. This fully managed service, says Chesa, helps lower the barrier to entry for creative teams that need remote collaboration tools.

“With Acorn Cloud, video editors will have the ability to collaborate remotely using their cloud-based software-as-a-service platform. A turnkey solution, Acorn gives editors the ability to ingest, search, find, enrich and retain their assets, all within a work-in-progress platform for Adobe Premiere,” says Chesa’s Lance Hukill.

Built for teams that need agility but lack IT support, Acorn comes fully supported. Acorn sits on Chesa’s centralized control plane called Bones and is built on Amazon Web Services, LucidLink and Quantum CatDV technologies. LucidLink offers an innovative cloud-native file service designed specifically for extensive data access over distance.

Quantum CatDV enables Acorn Cloud to be a powerful media asset management and workflow orchestration platform that provides automation and collaboration tools for large volumes of digital media. Acorn customizes CatDV for features creative teams need. LucidLink Filespaces provides security and high-performance scalability to run file-based workloads on object storage for maximum efficiency and productivity.

Functionality includes:

  • Ingest assets in bulk
  • Drag-and-drop ingest with camera card support
  • Live edit
  • Review and edit proxies
  • Scrub proxies via web UI without opening an asset
  • Log notes about each scene as virtual clips
  • Create deliverables quickly for distribution platforms
  • Extended metadata

Support features include:

  • Comprehensive documentation and video library
  • Intuitive user interface for rapid onboarding
  • Simple new-user setup
  • Production asset management functionality
  • Backed by industry-leading media supply chain experts
  • Fully supported and monitored
  • Support requests within the platform

 

 


Sundance: Seth Anderson on Editing TikTok, Boom. Documentary

Screened at this year’s Sundance, the documentary TikTok, Boom. dissects the incredibly popular social media platform TikTok. The film examines the algorithmic, sociopolitical, economic and cultural impact of the app — the good and the bad. The doc also features interviews with a handful of young people who have found success on TikTok.

Seth Anderson

TikTok, Boom. was directed by Shalini Kantayya and shot by DP Steve Acevedo, who used a Blackmagic Ursa Mini and a Sony camera equipped with Rokinon Cine DS Primes and Canon L Series lenses. Editor Seth Anderson, who cut TikTok, Boom., has worked on a variety of docs, features, TV series and shorts.

Let’s find out about his process on this feature documentary.

How early did you get involved on this film?
I was brought on shortly after shooting began.

What direction were you given for the edit? How often was Shalini Kantayya taking a look at your cut?
We cut remotely, so we each had our own systems, and we used Evercast when working together. I watched her previous films to see what edit style she would want to aim for, and Shalini gave me free rein on my first pass of scenes.


Director Shalini Kantayya

Initially, I assembled all the verité scenes as stand-alone stories, as if we had no interviews to flesh them out. After creating arcs for each of the main characters, we added the characters’ individual interview bites. Then we cut the character arcs down and started intercutting them. After a version of the film was built that way, we started building the experts’ commentary (reporters, tech experts, etc.).

While shooting, she was pretty hands-off, but after principal photography ended, we worked together most days.

Was there a particular scene or scenes that were most challenging?
The biggest challenge was trying to balance making a film that would entertain and inform the users of TikTok — mainly 20-somethings and younger, who already know the inner workings and drama surrounding TikTok — while also giving an introduction and overview of TikTok to non-users. Those are the people who know next to nothing about the app beyond mentions in news articles and jokes by comedians.


Seth Anderson’s editing setup

Can you talk about working on this during the pandemic? How did that affect the workflow?
The pandemic definitely affected our workflow. The production company and media were in LA, and the director and I were in New York, so we had to manage the time difference with requests. Since many things that would quickly be worked out in person had to be done by email, some things took longer than usual.

You used Adobe Premiere running on a Mac. Is there a tool within that system that you used the most?
This was my first long-form job on Premiere, so I’m in a position of needing workflow tips rather than giving them.

How did you manage your time?
They started shooting in June, and I came on at the start of July, so we had a massive push to get a decent cut of the film ready to submit to Sundance. Then we had to keep pushing, with the hope we’d get in. Once we were in, we had to hustle to lock, do sound, VFX and color. We probably squeezed a year’s schedule into six months. I wouldn’t recommend it (laughs).

Did you have an assistant editor on this? If so, how did you work with them? Do you give them scenes to edit?
Yes, we had an assistant editor in LA, Tim Cunningham. This is one area where remote doesn’t help. I always want the relationship with the AE to be more collaborative, but that’s harder with different time zones and no actual face time.

I did give Tim a few scenes to assemble, and the post producers always had him doing things. As you can guess, we had a massive amount of archival material.

How do you manage producers’ expectations with reality/what can really be done?
You do your best. In most cases, producers want things done as quickly as possible, while directors want to think and mull over the work.

How do you manage your time? Do you manage expectations or try everything they ask of you?
If possible, I do all the producers’ notes, at least the ones the director signs off on. The director’s opinion and vision are paramount in making an independent feature, so I will say I do what is possible to do, but avoid the head

How do you take criticism?
I’ve been doing this for a while, so I’ve gotten good at accepting criticism. I think you should always be open to other people’s ideas. You never know where a genius idea will come from.

Finally, any tips for those just starting out?
Be open to learning new programs and techniques. Find out what you need to know for the section of the industry you want to work in.

With editing, you should focus on Avid, Premiere, FCP and other aspects of Creative Cloud. Learn those programs as well as you can. Just because you learned on one program doesn’t mean that program will be the one a potential job needs. Example: Most students nowadays learn to edit on Premiere, but Avid Media Composer is still the primary tool used on most jobs.

All Photos: Courtesy of Sundance Institute

Audio Design Desk Updated, Integrates With Stream Deck

Audio Design Desk has been updated to Version 1.9. The release is compatible with a range of pro audio and video tools and introduces auto-compose, MIDI triggers, instant variations and a dozen other new features for audio and video pros and creatives.

Audio Design Desk has also launched an integration with Stream Deck, letting users trigger sound for video or live streams with simple keystrokes. Additionally, Audio Design Desk (ADD) has introduced a “lite” version of its software as well as ADD Tags, a metadata tagging application that uses ADD’s AI to streamline the tagging process for audio artists.

New features include:

Auto-Compose: ADD can now create a sound pass the moment you drag your video into the timeline. Markers from Final Cut Pro, Adobe Premiere, Avid and Frame.io transfer seamlessly to ADD and can be converted into sound instantaneously.

Variations: Now, users can automatically create any number of variations of combined sounds. One beat can become 100 (or more) with the click of a button. You can combine footsteps with cloth movements or impacts and debris sounds to create an explosion and allow Variations to offer endless alternates in an instant.

MIDI Triggers: Users can perform sounds and control functions in ADD via a MIDI keyboard.

Intelligent Import: Users of sound libraries like Splice, LoopCloud, Pro Sound Effects, Boom Library and others can use ADD or ADD Tags to automatically determine the right metadata and apply it to their sounds.

DAW Integrations: ADD 1.9 now integrates with Pro Tools, Logic, Live, Cubase, Digital Performer, Nuendo and Reaper, functioning like a giant plug-in within these tools.

Spot Mode: For those users who don’t want to leave their editing applications, ADD’s 40,000+ sounds and music cues can now be dragged into Final Cut Pro, Premiere, Avid, Screenflow and any other audio or video editing application.

Stacks: The new Stacks feature provides a non-destructive way to stack any number of sounds into a single region that can be unstacked and changed at any time. This opens up ways for users to create and share combinations of sounds that can be opened, tweaked and closed.

Audio Design Desk Lite: Targets budding filmmakers, audio artists and sound designers. At $8.99 a month (or $89.99 annually), the lite plan gives users access to Audio Design Desk’s 40,000+ sounds, loops and music cues as well as an all-new lite version of ADD that still has its sync and replacement tools.


Julia Documentary Editor Carla Gutierrez

Carla Gutierrez is no newbie when it comes to editing documentaries about female icons — she cut the 2018 Oscar-nominated RBG, about the late Supreme Court Justice Ruth Bader Ginsburg. Most recently, this Emmy- and ACE Eddie-nominated editor reteamed with RBG directors Julie Cohen and Betsy West on Julia, about the iconic TV chef Julia Child. (Read our interview with Gutierrez about editing RBG here.)


Carla Gutierrez

Gutierrez, whose other credits include The Last Out and Pray Away, says she has “a wonderful partnership in the edit room” with Cohen and West.

Let’s find out more…

What’s your relationship like with Julie and Betsy? How often were they looking at your cut?
We laugh a lot together, which I think is very important for the intense process of editing to go well. We truly trust each other and enjoy engaging in conversation. I think trust and laughter make better films! After coming up with an initial story arc, I would show Julie and Betsy segments of the film as soon as I edited a first pass. Once we got to the rough-cut stage, we often watched the entire film all the way through.


Was there a particular scene or scenes that were most challenging?
The last segment of the film was tricky to piece together. Julia did so many things toward the end of her life, but we didn’t want to simply list her many accomplishments at the end of the film.

We needed to close her personal journey while highlighting how she impacted the way that Americans view and connect with food. This segment finally came together when we decided to close the film by going back to the very first episode of The French Chef. Everything came full circle by showing how it all started.

Can you talk about working with all the different formats that the footage came in on?
Julia is a very heavy archival film, and we received the archival footage in all different types of formats. We decided to transcode everything to the same format/codec to make sure the vast amount of archival would not bog our edit system down. Adobe Premiere Pro is great at taking in any kind of video format, but when you are working with such a huge project with hundreds of hours of archival footage and hundreds of photos, it helps to keep everything consistent.

Did you do more than edit on this film?
I was only the editor. I worked closely with our incredible archival researcher, Abby Lieberman, and my partner in crime, associate editor Grace Mendenhall. We had a wonderful team.

How did the pandemic affect the workflow?
Two months into the edit, New York City shut down and we all went home to work. Our associate editor Grace and I were already working on mirrored drives and sharing sequences with each other, so our workflow didn’t change at all.

We shared new archival and production footage over the cloud, and I had feedback sessions with the directors on Zoom. Since the directors and I had worked on another film before, our communication was incredibly fluid already. The trust we had built on our previous film helped us work smoothly through the pandemic.


Carla Gutierrez remote Premiere setup

You edit on Premiere. Is there a tool within that system that you rely on a lot?
Oh, there are so many! The keyboard shortcut I use constantly is the E key, which extends the selected edit to my playhead. I hit the E key at least a hundred times a day.

How did you manage your time?
I’m a mom, so I’m very productive during working hours. I also like to keep reasonable hours. Documentary editing is a mentally demanding job, and adding a lot more hours to the day doesn’t necessarily make the work more productive.

How do you work with assistant editors? Do you see the role of assistant editor as strictly technical or as a collaborator?
I view assistant editors as close, creative collaborators. I rely on them to keep the post moving with tech support, but it is extremely important for me to create opportunities for AEs to learn the craft and be exposed to the creative process. I was lucky to be mentored by the amazing Kim Roberts early in my career and that’s how I learned to edit long-form documentaries.

How do you manage producers’ expectations with reality/what can really be done?
As long as we are trying to capture the essence and the heart of the story, I find that producers and executives are very receptive to what we present to them. Limitations often give you opportunities that can lead to better ways of telling a story. You just have to be open to seeing limitations in a new light.

How do you take criticism? Do you find yourself defensive or accepting of others’ ideas?
I prefer to call it feedback, not criticism. And I absolutely love tough feedback. I think the best way to approach notes is to try to understand what is not working in the cut. Then it is up to us to come up with solutions. Sometimes people suggest fixes or ideas, but we are the ones who know the material best, and we know what is possible. So instead of grabbing onto ideas of how to fix things, I try to really listen to the notes to truly understand what the problems are with our film.

When someone who is starting out asks what they should learn, what do you recommend?
They should focus on honing their voice as storytellers. Learn the craft of editing, not the technical aspects of it. You can always pick up the technical skills you’ll need, but learning how to tell a story with intensity and emotion is how you become an editor. Find mentors, watch as many documentaries as you can, and build a community around you that can support you.

Adobe Premiere

Digital Anarchy Updates AI Search Tool for Adobe Premiere

Digital Anarchy has released PowerSearch 3.0, a new update to its intelligent search engine for Adobe Premiere editors. Designed to scour video sequences for dialogue, PowerSearch integrates directly within Premiere, enabling editors to quickly search an entire project or Premiere Production and instantly locate specific clips and sequences based on those keyword searches.

Adobe Premiere

PowerSearch takes advantage of transcripts generated either by Digital Anarchy’s Transcriptive A.I. transcription technology or by Adobe’s new transcription service to find dialogue and phrases. It’s the editor’s choice which AI service to use.

To further simplify searching Premiere projects, the latest version of PowerSearch offers editors the ability to use common search engine commands, such as minus signs and quotes, for more precise searching. For editors with hundreds of hours of video, PowerSearch 3.0 will scour an entire Premiere project, making it easy for them to find exactly what they’re looking for by showing only relevant search results.

According to Digital Anarchy, the new version of PowerSearch offers a significant performance upgrade, with measurable benefits over Premiere’s internal search tools, especially for editors working with transcripts for all their footage and sequences. Editors can use transcripts generated by either Transcriptive A.I. or Adobe Sensei (via SRT).

Adobe Premiere

The new version of PowerSearch also enables faster indexing and search processing along with additional new search tools. Enhanced integration with Premiere’s Source and Program panels means clicking on search results automatically opens up clips and sequences directly where the dialogue was spoken.

Here’s a short list of some of the key new features in PowerSearch 3.0:

  • Ability to index SRTs: Users can now search all captions simultaneously, versus other options such as the Adobe Text panel, which only allows users to search one SRT at a time.
  • Support for Adobe Sensei transcripts: By importing SRTs into Premiere or Transcriptive Rough Cutter, Adobe Sensei transcripts can be searched.
  • Search improvements: Ability to search with quotes and minus signs for more accurate results (a minimal sketch of this kind of caption search appears after this list).
  • Support for Premiere Productions: Index individual projects in Premiere Productions and easily switch between them.
  • Project switching: New buttons on both screens make it easy to load the index for the active project.
  • Increased indexing and searching speed.
  • Instant loading: Users can now load the database without having to re-index. This eliminates the need to re-index the same project under a different name.
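
As a rough illustration of what indexing SRT captions and searching them with quotes and minus signs involves, here is a small Python sketch. It is not Digital Anarchy’s code; the “captions” folder and sample query are hypothetical.

```python
# Toy illustration only -- not Digital Anarchy's implementation. Builds a
# simple index over SRT caption files and supports quoted phrases plus
# minus-term exclusion, the kind of search-engine syntax described above.
import re
from pathlib import Path

def parse_srt(path):
    """Yield (start_timecode, caption_text) pairs from one SRT file."""
    for block in path.read_text(encoding="utf-8", errors="ignore").split("\n\n"):
        lines = [line for line in block.strip().splitlines() if line]
        if len(lines) >= 3 and "-->" in lines[1]:
            yield lines[1].split("-->")[0].strip(), " ".join(lines[2:])

def search(index, query):
    """Yield entries containing every quoted phrase/term and no minus-terms."""
    phrases = re.findall(r'"([^"]+)"', query)
    terms = re.sub(r'"[^"]+"', " ", query).split()
    must = [p.lower() for p in phrases] + [t.lower() for t in terms if not t.startswith("-")]
    must_not = [t[1:].lower() for t in terms if t.startswith("-")]
    for entry in index:
        text = entry["text"].lower()
        if all(m in text for m in must) and not any(n in text for n in must_not):
            yield entry

index = [
    {"file": srt.name, "start": start, "text": text}
    for srt in Path("captions").glob("*.srt")   # hypothetical folder of SRTs
    for start, text in parse_srt(srt)
]
for hit in search(index, '"beurre blanc" -rerun'):
    print(hit["file"], hit["start"], hit["text"])
```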

PowerSearch 3.0 is available now and is free for all users of Transcriptive Rough Cutter ($199). For new customers or those not using Transcriptive Rough Cutter with Premiere, PowerSearch 3.0 is priced at $99. A free trial is also available from Digital Anarchy. PowerSearch 3.0 is compatible with Premiere Pro 2020 (version 14.0) and above.

 

 

 

Look Designer 2

Color Intelligence’s Look Designer 2 for Adobe Premiere, After Effects

Color Intelligence, a specialist in AI image processing, color management, color grading and film emulation technologies, has released Look Designer 2 for Adobe Premiere Pro and After Effects. Previously available only for Blackmagic DaVinci Resolve, this latest version of Look Designer gives multi-skilled directors and video creators a complete toolset for producing footage with a filmic look and feel. Look Designer 2 is Color Intelligence’s way of democratizing the high-caliber and costly look-creation tools and technologies traditionally reserved for streaming episodic TV shows and high-end feature film production.

Look Designer 2

Available as a plugin for Premiere and After Effects, Look Designer 2 accurately emulates the traditional process associated with film acquisition, development, scanning and printing. It goes beyond a simple film emulation LUT with custom subtractive color CMY processing. Look Designer software offers smarter tools and more efficient workflows for colorists, filmmakers, art directors, game developers and video content creators to tap into the science of color and perception associated with traditional celluloid acquisition, resulting in stronger emotional connections between content and story.

In a limited-time offer, Look Designer 2 for Adobe is available for $99 for a perpetual license that can be installed on two systems.

According to Color Intelligence CEO Dado Valentic, “This is just a first step in our efforts to bring high-end color grading tools to creators far outside of the borders of Hollywood and is just the beginning of our commitment to providing tools for the Adobe platform.”

Look Designer 2

By keeping everything in one place instead of requiring users to switch between separate and sometimes incompatible plugins, Look Designer 2 offers Adobe users an accelerated and intuitive workflow while maintaining an efficient color management pipeline. Cinematographers and filmmakers can now shoot both film and digital while the technology cleverly fills in the gaps, enabling digital footage to closely emulate the highly desirable aesthetics of analog film.

Additionally, video creators can easily create unique LUTs directly in Look Designer, replacing the need to laboriously search for and download incompatible LUTs from the web. Color Intelligence and its suite of color management tools are used by companies such as Activision Blizzard, CBS, Comedy Central, Disney, HBO, Moving Picture Company, NBCUniversal and Warner Bros.
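
For readers curious what a LUT actually is under the hood, the sketch below writes a tiny 3D LUT in the common .cube text format that Lumetri and most grading tools can load. It has nothing to do with Look Designer’s internals; the contrast curve is just a stand-in for a “look.”

```python
# Illustration only: what a 3D LUT (.cube) file actually contains. Writes a
# small LUT with a gentle S-curve applied to each channel -- nothing to do
# with Look Designer's internals, just the text format grading tools load.
SIZE = 17  # a 17x17x17 grid is a common size for creative LUTs

def curve(x):
    return x * x * (3 - 2 * x)  # smoothstep as a stand-in for a "look"

with open("example_look.cube", "w") as f:
    f.write('TITLE "example look"\n')
    f.write(f"LUT_3D_SIZE {SIZE}\n")
    for b in range(SIZE):              # blue varies slowest in .cube files
        for g in range(SIZE):
            for r in range(SIZE):      # red varies fastest
                rgb = (r / (SIZE - 1), g / (SIZE - 1), b / (SIZE - 1))
                f.write(" ".join(f"{curve(c):.6f}" for c in rgb) + "\n")
```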

Highlights:

  • Look Designer DNA fully integrated – Users can access the same powerful tools and technology as Look Designer 2 for DaVinci Resolve.
  • One-click Scene Referred Image Processing, the Netflix industry standard – Users can grade once and export in HDR, SDR, Cinema Screen and other popular formats to multiple display devices while preserving the original source material. The color grade is automatically adapted to any medium.
  • Lumetri friendly – Look Designer complements the Lumetri engine by expanding functionality to achieve stunning looks.
  • Seamless compatibility between apps – Users can create and save presets in Adobe Premiere Pro and import presets into Adobe After Effects, saving valuable time and effort.
  • Ed Lachman Zones – Integrated exposure tool.
  • Accepts source material from multiple cameras – Efficiently handles multiple sources, such as GoPro or iPhone, by matching them and bringing them into a unified color space.
  • Expanded film emulation library – Users can quickly access and apply new film stock profiles.
  • Supports iPhone capture formats HD, 4K, HD (PAL) and 4K (PAL).
  • Supports new cameras — Z-Cam and Kinefinity.
  • New 3D LUT testing feature – Quickly test and evaluate color calibration values before application.
  • Extended output and display formats – Netflix HDR, HDR10, Dolby Vision, Apple Pro displays.

 

 

Review: Adobe MAX Brings Premiere Version 22

By Mike McCarthy

The Adobe MAX creativity conference is taking place virtually for the second year in a row, and with this event comes the release of new versions of many of Adobe’s products. One interesting note is that Adobe’s versioning of each video application is now Version 22, regardless of the tool’s previous version. This makes the version numbers consistent across the different applications and matches the year the release is associated with. Last year, Premiere Pro 2021 was released as Version 15.0, while After Effects was Version 18.0. Unlike Adobe’s move to redesign its application icons to all look the same (so you can’t easily tell the difference between an AEP file and a Premiere project), this consistency change seems like a good idea, making it easier to track versions over time.

The application I am most interested in is Premiere Pro (although at the end of this review, I touch on After Effects and Photoshop). Last year’s Version 15 release added a new approach to captions, which Adobe has continued to flesh out with more automatic speech-to-text tools and better support for new titling options. Other improvements to Version 15 introduced through the year included more control over project item labels and colors in collaborative environments, HDR output on UI displays via DirectX and automatic switching of audio devices to match the OS preferences.

Adobe Premiere Version 22 Updates: HDR and More
HEVC and H.264 files are now color-managed formats, which means that Premiere now correctly supports HDR files in those codecs. This had been a huge hole in the existing HDR workflow because Premiere could export HEVC and H.264 files of HDR content but couldn’t import or view them. The issue is now resolved, opening a host of new HDR workflow options.

Adobe also added support for hardware-accelerated decoding of 10-bit 4:2:2 HEVC files on new Intel CPUs. This is a new format for recording HDR on high-end DSLRs that is not currently accelerated on Nvidia or AMD GPUs, so Intel’s decode support should allow processing of HDR content on much smaller and lighter systems than the existing ProRes-based HDR workflows require. Adobe also added color management for XAVC files in the S-Log color space and better support for Log files from Canon and Panasonic.
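
If you want to check whether a given camera file is that 10-bit 4:2:2 HEVC flavor and whether it carries HDR transfer metadata, a quick ffprobe query will tell you. The sketch below assumes ffprobe is installed and uses a made-up file name; in ffprobe’s output, smpte2084 corresponds to PQ and arib-std-b67 to HLG.

```python
# Hypothetical helper: ask ffprobe (must be installed) whether a clip is
# 10-bit 4:2:2 HEVC and whether it carries HDR transfer metadata
# (smpte2084 = PQ, arib-std-b67 = HLG). The file name is made up.
import json
import subprocess

def probe_video_stream(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,pix_fmt,color_transfer",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["streams"][0]

info = probe_video_stream("A001C002_HLG.MP4")
is_10bit_422_hevc = info.get("codec_name") == "hevc" and info.get("pix_fmt") == "yuv422p10le"
is_hdr = info.get("color_transfer") in {"smpte2084", "arib-std-b67"}
print(info, is_10bit_422_hevc, is_hdr)
```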

One other feature Adobe has announced for Premiere Pro 2022, but has not yet released to the public version, is fully redesigned import and export windows. These consume the entire UI for no apparent reason and do not include all of the functionality of the previous approach. I believe the new design might be more consistent with Premiere Rush’s UI, and it may be similar to Resolve’s export options.

The main thing I am missing is the source settings in the export window, which previously allowed you to crop and scale the output in different ways. The same results can be achieved by adding export sequences that include the content you are trying to output, but this is not as simple to do at scale and can’t be included in presets. Obviously, I am not a fan of these changes and see no upside to the new approach. Currently, the older import and export UI controls are still available in Version 22.0, and even in the beta versions they remain available if you send your sequence to Media Encoder. Hopefully these functions will be included in the new approach to exporting before it comes out of beta.

The Lumetri scopes have also gotten some attention as they become more significant for HDR processing. The vectorscope is now colorized, and you can zoom in to any section by double-clicking. The histogram is much more detailed and accurate, offering a more precise view of the underlying content. The Lumetri Curves effect UI now scales horizontally with the panel for more precision. I would prefer to be able to scale it vertically as well, but that is not yet supported. Adobe has also implemented a more powerful AI-assisted Auto Tone function that sets all of the basic controls based on an analysis of the content.

Another new feature coming out of beta is the Simplify Sequence functionality, which creates a new, cleaned-up copy of an existing sequence. The clean version can remove inactive tracks and clips, drop everything to the lowest available track and be further fine-tuned using locked layers. This is a great tool, implemented in a well-thought-out and nondestructive way.

Also arriving in the beta version is a feature called Remix. Originally introduced in Audition, Remix will adjust the duration of music tracks while using AI to preserve the tempo and feel of the original asset. I believe it does this by attempting to remove or loop repetitive sections, and it visually displays where the automatic edits are being made right on the clip in the sequence.

After Effects & Other Apps
After Effects is another application I use, although less and less over time as Premiere gains many of the functions that used to require jumping over to AE. But the big news there is that Adobe is introducing multi-frame rendering to help users tap into the potential processing power of multi-core CPUs. On my high-core-count systems, I am seeing a 3x speed increase when rendering the composited scenes for my Grounds of Freedom animated web series. My main 5K composited scenes used to take 3 to 5 hours to render, and that looks like it will be cut to 1 to 1 ½ hours, which is fantastic.

After Effects is also getting a speculative render feature that uses idle time to prepare frames for smoother playback. Because of the type of work I do, I wouldn’t use this feature much, but I am sure it will be great for some users. I tested out GridIron Nucleo Pro for AE7 15 years ago, and Adobe was playing with both of these functions back then. The old multi-frame render options got bogged down managing that much data, but Adobe seems to have sorted that issue out by now, because a 3x increase in real-world speed is nothing to scoff at. Adobe has also added a composition profiler that tells users how much time each layer adds to the render, with that info available right in the layer stack.

Adobe also just completed its acquisition of cloud collaboration tool Frame.io, and as an existing Frame.io user, I am eagerly waiting to see what develops from this. But there are no new details to announce yet.

Photoshop is also getting a number of new features, mostly centered on AI-powered tools and collaboration with iPad and web users. The power of Photoshop for iPad will soon be available directly in a web browser for collaboration through the new Creative Cloud Spaces and Canvas. Users will be able to share their work directly from Photoshop, which will generate a public link to the cloud document for browser-based feedback or editing.

 

The AI-based object selection tool has been improved to show users which object boundaries have been detected when they hover their cursor over the image. There are also improvements in the interoperability between Photoshop and Illustrator, allowing Illustrator layers to be pasted into Photoshop while retaining their metadata and even vector editability. Illustrator is also getting an AI-enhanced vectorizing tool to better convert bitmap imagery to vector art.

Lots of new functionality is coming to Creative Cloud, and you can learn plenty of tips and tricks from the various sessions that are available throughout the free event. Anyone can sign up to attend online, so be sure to check it out.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 10 years later.

 

Quantum and Adobe Team

Quantum and Adobe Team for Remote Editing on Premiere

Quantum has introduced the now-available Quantum Collaborative Workflow Solution powered by CatDV. This solution was designed specifically for Adobe Premiere Pro users to address the challenges of remote editing and collaboration for large creative teams seeking maximum productivity. The turnkey solution is the result of months of testing and tuning to ensure that Adobe Premiere users receive a completely integrated solution that is tested and supported by Quantum.

The Quantum Collaborative Workflow Solution is designed for creative teams that need reliable, scalable tools for increasingly dispersed users, with some members working onsite in an office or studio and others working remotely. The solution integrates everything needed for a collaborative, remote workflow based on the Adobe Creative Cloud tool set, including Adobe Premiere Pro.

Quantum StorNext shared storage provides the workflow storage while Quantum CatDV Asset Management, with included CatDV Cloud Panel for Adobe Creative Cloud, offers asset and project management and orchestration. For project archive and asset protection, customers can choose their preferred asset archiving solution, ranging from Quantum Scalar tape to Quantum ActiveScale high-performance object storage, or any S3-compatible cloud storage. The complete, integrated solution is installed and supported by Quantum Professional Services and reseller partners certified to install StorNext and CatDV.

“Collaborative teams who rely on Adobe Creative Cloud apps, like Adobe Premiere Pro, After Effects or Photoshop, need a platform that gives them the performance and flexibility to realize their vision,” says Sue Skidmore, head of partner relations for Adobe Video. “Our users value the tight integration and flexibility of environments provided by Quantum CatDV and StorNext.”

The Quantum Collaborative Workflow Solution will be installed and configured by a combination of Quantum value-added resellers (VARs) and the Quantum Professional Services team, and it is customized to each customer’s environment, needs, choice of archiving technology and size.

Main Image: CatDV Panel embedded within Adobe Premiere Pro

 

 

The Live+Post Workflow

The Live/Post Workflow for Sci-Fi Series Orbital Redux

A hybrid live/post technique is how writer/director Steven Calcote and showrunner Lillian Diaz-Przybyl approached Orbital Redux, an eight-episode adventure that was originally performed and broadcast live, featuring studio musicians, real-time special effects, multiple cameras, live switching, audience interactivity and more. The story follows a former astronaut who is tasked with teaching a new pilot the ropes of the space program.

The Live+Post Workflow

Steven Calcote and Lillian Diaz-Przybyl

Now, following a year of post production, Butcher Bird Studios has released the final version of this series on the sci-fi channel Dust. We recently caught up with the creators to find out why they took their unusual production/post approach.

Why shoot a sci-fi film live?
Lillian Diaz-Przybyl: To some extent, because we could! We’d been doing a lot of experimenting in the live space and worked heavily with our friends at Blackmagic to integrate everything from our cameras to our switchers and recording setup, making everything interoperable and seamless.

Steven Calcote: There’s nothing like the adrenaline of real-time storytelling, but even though we filmed it live, we used a post mindset from the start. We always planned to release the project in two stages and so we needed clean ISO recordings of everything.

Stage layout for live shoot

With as many as 10 cameras operating at any time (including Blackmagic’s Ursa Mini Pro and the Micro Studio 4K), it was crucial that we preserved every frame of the interactive live show (think “rough cut”) for the editing needed to create our definitive version.

While the show was initially live-edited with the ATEM Television Studio 4K, we recorded clean feeds of all cameras through a stack of HyperDeck Studios.

How does prep change when filming for a live audience?
Diaz-Przybyl: Our motto as an organization has always been “fix it in pre,” but you have to double down on that for live, from previsualizations to prepping all of our VFX for the projection screens and ship-board monitors ahead of time.

Calcote: I find that live narrative requires a deeper understanding of the world, characters and plot by everyone on both sides of the camera. But more fundamentally, all the on-set elements require a high degree of functionality that actors can interact with while filming.

The Live+Post Workflow

Shooting live

For instance, our interface designer Jason Milligan used the interactive digital prototyping tool ProtoPie to create functional touch interfaces for the spaceship using NASA UI as a reference. These files were all preserved for post as well in case any compositing touch-ups were needed.

What changed between the live cut and the final cut?
Diaz-Przybyl: We wanted to honor and celebrate the original live cut, so the changes between that and the final are extensive, but generally subtle. We are an Adobe shop, so we pulled all our ISOs and the live line-cut into Premiere (averaging 12 to 15 video tracks). This then allowed us to easily tweak our camera choices, or the timing of cuts.

Calcote: The one area we haven’t been able to conquer for live narrative is post-style color grading with multiple power windows, moving mattes and more. I was excited to create a new workflow with our very patient colorist, Nick Novotny, since we wanted to keep our editing choices flexible even after sending the show to color using Resolve. Rather than collapsing the edit to a single track like a traditional turnover, we preserved eight active camera tracks per episode and transferred editorial sequences from Premiere to Resolve via XML.

The Live+Post Workflow

Given that we significantly updated the look of the show with a color palette, grain and falloff that evoked 1970s Soviet science-fiction films, this allowed us to reexamine some of our edit choices without requiring a new round trip from Premiere.
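
For context on that handoff: Premiere’s XML export for this kind of turnover is typically the old Final Cut Pro 7 “xmeml” format, which Resolve reads directly. Below is a minimal sanity-check sketch that lists each video track and its clip count in such a file so you can confirm all camera tracks survived the trip; the file name is hypothetical and the tag layout reflects typical xmeml exports, not Butcher Bird’s actual project.

```python
# Sanity check on a Premiere-to-Resolve XML turnover. Lists each video track
# and its clip count so you can confirm all camera tracks survived the trip.
# The file name is hypothetical; tag names follow typical FCP7 xmeml exports.
import xml.etree.ElementTree as ET

tree = ET.parse("orbital_redux_ep01.xml")
for seq in tree.iter("sequence"):
    print("Sequence:", seq.findtext("name", default="(unnamed)"))
    video = seq.find("./media/video")
    if video is None:
        continue
    for i, track in enumerate(video.findall("track"), start=1):
        print(f"  V{i}: {len(track.findall('clipitem'))} clip(s)")
```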

What about the mix?
Diaz-Przybyl: That was another critical area — the live sound mix. After sound designer Alex Choonoo played thousands of sound effects live using Ableton for the initial broadcast, he output all of those files as new audio ISOs, and then expanded out the series’ SFX bed with hundreds of new sounds.

From there, post sound mixer Ben Chan took the 32 dialogue and music tracks recorded from the Behringer X32 during the live show and then blew us away with his new Avid Pro Tools mix for the dozens of tracks we brought back online for the final cut.

Is this two-stage release format here to stay?
Diaz-Przybyl: Especially with the rise of virtual production, I think it’s likely that this process will continue, and expand. The tools are there, even for smaller organizations like ours. Live performance gives the audience a reason to tune in and gives “appointment viewing” a lot more appeal. But it’s great to take the time to polish and refine what is essentially a “live rough cut” to get it to that next level, with real staying power.

Calcote: I totally agree with Lillian that “virtual production” is driving a filmmaking renaissance, where we’re striving to get final pixels during shooting. (This is a focus of our next narrative project using Unreal Engine.) But audiences also want to be involved with storytelling like never before, given the rise of platforms like Twitch and TikTok.

Taking a two-stage release — where you shape a live rough cut with your biggest fans and then release a final version featuring a full theatrical-level post process — satisfies audiences in a whole new way. Isn’t that what the evolution of storytelling is all about?