
Category Archives: Adobe Premiere

Foundry Flix 7.0

Foundry Releases Flix 7.0 for Streamlined Preproduction

Foundry has launched Flix 7.0, an update to its preproduction software that helps studios develop stories by managing editorial round-tripping, storyboard revisions, file versioning and more.

Now offering integration with Autodesk Maya, Flix 7.0 enables both 2D and 3D artists to collaborate from anywhere globally using Flix as a central story hub. Snapshots and playblasts can be imported from Maya into Flix 7.0 as panels, then round-tripped to and from editorial. Flix manages the naming, storing and organizing of all files and allows teams to provide feedback or revisit older ideas as the story is refined.

Foundry Flix 7.0

While Flix also connects to Adobe Photoshop and Toon Boom Storyboard Pro, the Maya integration provides the ability for layout and storyboard teams to work in tandem. These teams can now collaborate concurrently to identify areas for improvement in the story such as timing issues before they become too complicated and expensive to change later in production. 2D artists can bring Flix’s Maya panels into their drawing tool of choice so that they can trace over the viewport for faster storyboarding. 3D artists can reference 2D storyboard panels from Flix directly in Maya when building complex scenes or character models, providing additional time savings.

Flix 7.0 simplifies building new extensions with a new Remote Client API. This API allows studios to create custom tools that integrate with Flix using the same API as the built-in extensions for Maya and Photoshop. Documentation and example code for the Remote Client API are provided to help studios build custom integrations with the tools of their choice or to create entirely custom workflows. Flix 7.0’s new extension management system enables studio supervisors to test, update and audit all extensions, with the added ability to deploy them across production from a single place.
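
Foundry provides documentation and sample code for the Remote Client API; as a rough illustration of what a studio-built integration might look like, here is a minimal Python sketch that pushes an image to a Flix server as a new panel over HTTP. The server address, endpoint path, payload fields and bearer-token authentication shown here are assumptions for illustration only, not Foundry’s actual API; consult the official documentation for the real calls.

```python
# Hypothetical sketch of a custom Flix integration client.
# The endpoint path, payload and auth scheme are illustrative assumptions,
# not Foundry's documented Remote Client API.
import requests

FLIX_SERVER = "http://flix.example.studio:8080"  # assumed server address
API_TOKEN = "replace-with-studio-token"          # assumed auth token

def publish_panel(show: str, sequence: str, image_path: str) -> dict:
    """Upload an image as a new storyboard panel (illustrative only)."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            f"{FLIX_SERVER}/api/shows/{show}/sequences/{sequence}/panels",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"panel": image_file},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    panel = publish_panel("my_show", "seq010", "boards/sc010_panel_001.png")
    print("Created panel:", panel)
```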

Flix 7.0 offers single sign-on (SSO), so IT teams can authenticate Flix users through their studio’s existing SSO platform to centrally manage secure access to story development assets for both staff and freelancers. Flix also supports multi-factor authentication to provide an added layer of security.

Other new features in Flix 7.0 include:

  • New metadata system — Scene data is now stored directly on each Flix panel. For example, for Maya users, global cameras, locators and file path data will be recorded for assets selected in the viewer.
  • Enhanced Adobe Premiere plugin — A multitude of updates and a new UI for the Flix Premiere Adapter eliminates limitations of previous versions, providing an efficient editorial workflow.
  • Photoshop plugin redesign — The Photoshop extension has been rebuilt, bringing users new UI customization options.
  • Updated notification preferences — The ability to turn off automatic email updates each time a panel is published or changed.

 

Kirk Baxter on Editing David Fincher’s The Killer

By Iain Blair

David Fincher’s The Killer is a violent thriller starring Michael Fassbender as an unnamed hitman whose carefully constructed life begins to fall apart after a botched hit. Despite his mantra to always remain detached and methodical in his work, he lets it become personal after assassins brutally attack his girlfriend, and soon he finds himself hunting those who now threaten him.

L-R: Kirk Baxter and David Fincher

The Netflix film reunites Fincher with Kirk Baxter, the Australian editor who has worked on all of Fincher’s films since The Curious Case of Benjamin Button and who won Oscars for his work on The Social Network and The Girl with the Dragon Tattoo.

I spoke with Baxter about the challenges and workflow.

How did you collaborate with Fincher on this one?
I try not to weigh David down with too many background questions. I keep myself very reactionary to what is being sent, and David, I think by design, isolates me a bit that way. I’ll read the script and have an idea of what’s coming, and then I simply react to what he’s shot and see if it deviates from the script due to the physicality of capturing things.

The general plan was that the film would be a study of process. When The Killer is in control, everything’s going to be deliberate, steady, exacting and quiet. We live in Ren Klyce’s sound design, and when things deviate from The Killer’s plan, the camera starts to shake. I start to jump-cut, the music from composers Trent Reznor and Atticus Ross comes into the picture, and then all of our senses start to get rocked. It was an almost Zenlike stretching of time in the setup of each story, then a race through each kill. That was the overarching approach to editing the film. Then there were a thousand intricate decisions that we made along the way each day.

I assume you never go on-set?
Correct. I just get dailies. Then David and I go back and forth almost daily while he is shooting. [Fincher and DP Erik Messerschmidt, ASC, shot in a widescreen anamorphic aspect ratio, 2.39:1, with the Red V-Raptor and recorded footage in 8K.] David remains very involved, and he’ll typically text me. Very rarely does he need to call me to talk during an assembly. Our communications are very abbreviated and shorthand. I put assemblies of individual scenes up on Pix, which allows David to be frame-accurate about feedback.

Sometimes I’ll send David selects of scenes, but often on larger scenes, a select sequence can be 30 to 40 minutes long, and it’s difficult for David to consume that much during a day of shooting. So I’ve developed a pattern of sending things that are sort of part edited and part selected. I’ll work out my own path through action, and then I open up and include some multiple choice on performance or nuance — if there are multiple approaches that are worth considering. I like to include David. If I leap to an edit without showing the mathematics of how I got there, often the professor wants to know that you’ve done the research.

The opening Paris stake-out sequence sets up the whole story and tone. I heard that all of Fassbender’s scenes were shot onstage in New Orleans along with the Paris apartment he’s staking out. How tricky was it to put that together?
Yes, it came in pieces. The Paris square and all the exteriors were shot on location in Paris. Then there’s a set inside WeWork, and that came as a separate thing.

What made it more complex was that all the footage of the target across the street came much later than the footage of The Killer’s location. But I still had to create an edit that worked with only The Killer’s side of the story so that Fincher knew he had it. Then he could strike that set and move on. My first version of that scene just had words on the screen [to fill in the blanks] of what was happening across the street. I built it all out with the song “How Soon Is Now?” by The Smiths and The Killer’s inner monologue, which allowed me to work out Fassbender’s best pieces. Then, when I eventually got the other side of the footage, I had to recalibrate all of it so that it wasn’t so pedantic. I had to work out ways to hide the target by the size of the POV or stay on The Killer’s side to allow the scene to stretch to its perfect shooting opportunity, ladling suspense into it.

What were the main challenges of editing the film?
For me, it was a complex film to edit due to how quiet and isolated the lead character is. In the past, I’ve often edited scenes that have a lot of characters and conversation, and the dialogue can help lead you through scenes. There’s a musicality to voices and talking that sometimes makes it obvious how to deliver or exploit the information. Crafting a silent, exacting person moving through space and time called for a different muscle entirely. I often used Fassbender’s most subtle micromovements to push things along. We are always obsessing over detail with Dave’s films, but the observational study of a methodical character seemed to make the microscope more powerful on this one.

It’s very much a world of seeing what he sees, and his temperature controls the pace of the movie. He slows things down; he speeds things up. And that’s the way David covers things — there are always a lot of angles and sizes. There are a lot of choices in terms of how to present information to an audience. It was a very fiddly film to perfect.

As you note, there’s very little dialogue, but there’s a lot of voiceover. Talk about handling that.
Anytime you deal with voiceover, it’s always in flux. It’s quite easy to keep writing a voiceover — keep moving it, keep streamlining it, removing it, bringing it back. That all impacts the picture. We recorded Fassbender performing his monologue four different times, and he became more internal and quieter with the delivery each time. In editing the sniper scene and playing The Smiths in The Killer’s headphones at full volume over all of his POVs, I had to time his voiceover to land on the coverage. That then became a language that we applied to the entire film. POVs never had voiceover, even on scenes when The Killer wasn’t playing music. It created a unique feeling and pacing that we enjoyed.

What was the most difficult scene to cut and why?
The scene with the secretary, Dolores, begging for her life in the bathroom was very challenging because it’s somewhat torturous watching an almost innocent person about to be killed by our lead character. There was a lot of nuance in her performance, so we had to figure out how to manipulate it to make it only slightly unbearable to watch. And that’s always my role. I’m the viewer. I’m the fan. Because I’m not on-set, I’m often the one who’s least informed and trying to make sense of things, learning as I go.

I think the scene with Tilda Swinton (The Expert) was rather difficult as well, probably because she’s so good. The scene was originally a lot longer than it is now, but I had to work the scene out based on what Fassbender was doing, not what Tilda was doing. There are only so many times you can cut to The Killer and his lack of response without diluting that power. So I reduced the scene by about a third in order to give more weight to the lack of vocalization, pushing things forward with the smallest facial performances. That was the scene we played with the most.

There is some humor in the film, albeit dark humor. How tricky was it trying to maneuver that and get it exactly right?
I think there’s always dark humor in David’s movies. He’s a funny guy. I love the humor in it, especially in the fight scene. There’s such brutality in that physical fight scene, and the humor makes it easier for the audience to watch. It gives you pauses to be able to relax and catch up and brace yourself for what’s coming.

There’s also humor in the voiceover throughout the film. I had to work out the best possible timing for the voiceover and decide what we did and didn’t need. There was a lot of experimentation with that.

Did you use a lot of temp sound?
There was some underwhelming temp stuff that we put in just to get by, but usually sound designer Ren Klyce comes in and does a temp mix before we lock the film. From that point on, we continue to edit with all of his mix splits, which is incredibly helpful.

The same goes for Trent Reznor and Atticus Ross. They scored about 40 minutes of music very early in the process, and that’s how I temped the music in the film — using their palette so we didn’t have to do needle-drops from other films. Working with their music and finding homes for the score is probably the most enjoyable part of film editing for me.

What about temping visual effects?
We do temp effects when they’re based on storytelling and timing, and there are always so many split screens. David often keeps shots locked off so that we can manipulate within a frame using multiple takes. There are a lot of quiet visual effects that are all about enhancing a frame. And we are constantly stabilizing camera work — and in this case destabilizing, adding camera shake during the fight or flight scenes.

There’s a lot of that sort of work with David, so I don’t need to get bogged down with it when I’m getting ready to lock a cut. That all comes afterward, and it’s [all about] enhancing. We mostly communicate storytelling and timing and know that we’re secure in our choices — that’s what I need to deal with while editing.

Did you do many test screenings?
There’s always a trusted crowd that David will show it to, but we didn’t do test screenings in the conventional sense of bringing in piles of strangers to see how they respond. David’s more likely to share with filmmakers and friends.

How long did the edit take to complete?
It was close to a year and then David reshot two scenes. When David’s in Los Angeles, I like to work out of his office in Hollywood so he can casually pop in and out of the cutting room. Then we picked up and went to the south of France to Brad Pitt’s property Miraval. They have cutting rooms there. We worked there in the summer for a couple of months, which was incredible and very focused.

I heard you’re not much of a tech head.
For me, editorial is more of a mindfuck. It’s a head game, much like writing. I’m focused on what I’m crafting, not on data management. I can be like that because I’ve got a great team around me that is interested and curious about the tech.

I have no curiosity in the technology at all. It just allows me to do my work efficiently. We cut on Adobe Premiere, and we have done for quite a few movies in a row. It is an excellent tool for us — being able to share and pass back and forth multiple projects quickly and effortlessly.

You’ve cut so many of Fincher’s films, but this was a very different type of project. What was the appeal?
I was very excited about this film. There’s a streamlined simplicity in the approach that I think is quite opposite to a lot of movies being done right now in this type of genre. And it felt somewhat punk rock to strip it back and present a revenge film that applied the rules of gravity to its action.

Finally, what’s next? Have you got another project with him on the horizon?
David’s always sitting on a bunch of eggs waiting for one to hatch. They all have their own incubation speed. I try not to badger him too much about what’s coming until we know it’s in the pipeline.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Adobe Max 2023 Part 2: Tips, Sneaks and Inspiration

By Mike McCarthy

My first article reporting on Adobe Max covered new releases and future announcements. This one will focus on other parts of the conference and what it was like to attend in person once more. After the opening keynote, I spent the rest of that day in various sessions. The second day featured an Inspiration Keynote that was, as you can imagine, less technical in nature, but more on that later. Then came more sessions and Adobe Sneaks, where Adobe revealed new technologies still in development.

In addition to all of these events, Adobe hosted the Creativity Park, a hall full of booths showcasing hardware solutions to extend the functionality of Adobe’s offerings.

Individual Sessions
With close to 200 different Max sessions and a finite amount of time, it can be a challenge to choose what you want to explore. I, of course, focused on video sessions and, within those, specifically the topics that would help me better use After Effects’ newest 3D object functionality. My biggest takeaway came from Robert Hranitzky in his session Get the Hollywood Look on a Budget Using 3D, where he talked about how GLB files work better than the OBJ files I had been using because they embed the textures and other info directly into the file for After Effects to use. He also showed how to break a model into separate parts to animate directly in After Effects. It was the way I had envisioned, but I haven’t yet had a chance to try it in the beta releases.
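
Since GLB packs geometry, materials and textures into a single binary file, one practical prep step is converting existing OBJ assets before bringing them into After Effects. Here is a minimal sketch using the open-source trimesh Python library (my choice for illustration; any DCC or converter that writes GLB works just as well). It assumes the OBJ sits next to a valid .mtl file so the textures can be found and embedded, and the file names are hypothetical.

```python
# Convert an OBJ (plus its .mtl/texture references) into a single GLB file,
# the format After Effects' 3D object support reads with textures embedded.
import trimesh

def obj_to_glb(obj_path: str, glb_path: str) -> None:
    # load() picks up the .mtl and texture files referenced by the OBJ
    scene = trimesh.load(obj_path, force="scene")
    # export() infers GLB from the extension and bundles geometry,
    # materials and textures into one binary container
    scene.export(glb_path)

if __name__ == "__main__":
    obj_to_glb("model/spaceship.obj", "model/spaceship.glb")
```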

Ian Robinson’s Move Into the Next Dimension With After Effects was a bit more beginner-focused, but it pointed out that one unique benefit of Draft3D mode is that the render is not cropped to the view window, giving you an overscan effect that allows you to see what you might be missing in your render perspective. He also did a good job of covering how the Cinema 4D and Advanced 3D render modes allow you to bend and extrude layers and edit materials, while the Classic 3D render mode does not. I have done most of my AE work in Classic mode for the past two decades, but I may start using the new Advanced 3D renderer for adding actual 3D objects to my videos for postviz effects.

Nol Honig and Kyle Hamrick had a Leveling Up in AE session where they showed all sorts of shortcuts and unique ways to use Essential Properties to create multiple varying copies of a single subcomp. One of my favorite shortcuts was hitting “N” while creating a rectangle mask. It sets its mode to None, which allows you to see the layer you are masking while you are drawing the rectangle. (Honestly, it should default to None until you release the mouse button, in my opinion.) A couple other favorites: Ctrl+Home will center objects in the comp, and, even more useful, Ctrl+Alt+Home will recenter the anchor point if it gets adjusted by accident. But they skipped “U,” which reveals all keyframed properties. When pressed again (“UU”), it reveals all adjusted properties. (I think they assumed everyone knew about the Uber-Key.)

I also went to Rich Harrington’s Work Faster in Premiere Pro session, and while I didn’t learn many new things about Premiere (besides the fact that copying the keyboard shortcuts to the clipboard results in a readable text list), I did learn some cool things in Photoshop that can be used in Premiere-based workflows.

Photoshop can export LUTs (lookup tables) that can be used to adjust the color of images in Premiere via the Lumetri color effects. It generates these lookup tables from adjustment layers applied to the image. While many of the same tools are available directly within Premiere, Photoshop has some further options that Premiere does not, and here is how you can use them for video.

First, export a still of a shot you want corrected and bring it into Photoshop as a background image. Then apply adjustment layers — in this case, curves, which is a powerful tool that is not always intuitive to use. For one thing, Alt-clicking the “Auto” button gives you more detailed options in a separate window that I had never even seen. The top-left button in the Curves panel is the Targeted Adjustment tool, which allows you to modify the selected curve by clicking on the area of the image that you want to change. When you do that, the tool will adjust the corresponding point on the curve. In this way, you can use Photoshop to make your still image look the way you want it and then export a LUT for use in Premiere or anywhere else you can use LUTs. (Hey Adobe, I want this in Lumetri.)
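
The file Photoshop writes out is a plain-text .cube LUT: a 3D grid of output RGB values indexed by input RGB, which Lumetri then applies to every frame. As a rough illustration of what that exported file contains (not Adobe’s or Lumetri’s implementation), here is a small Python sketch that parses a .cube file and applies it to a still using nearest-neighbor lookup; real applications interpolate trilinearly, and the file names are hypothetical.

```python
# Toy illustration of applying a Photoshop-exported .cube 3D LUT to a still.
# Nearest-neighbor lookup for brevity; Premiere/Lumetri interpolate.
import numpy as np
from PIL import Image

def load_cube(path):
    """Read a .cube file into an (N, N, N, 3) table of output RGB values."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            parts = line.strip().split()
            if not parts or parts[0].startswith("#"):
                continue
            if parts[0] == "LUT_3D_SIZE":
                size = int(parts[1])
            elif len(parts) == 3 and size is not None:
                try:
                    rows.append([float(v) for v in parts])
                except ValueError:
                    continue  # skip any remaining keyword lines
    # in a .cube file the red index varies fastest, so this yields table[b][g][r]
    return np.array(rows).reshape(size, size, size, 3), size

def apply_lut(img, table, size):
    """Apply the LUT to a float RGB image in [0, 1] via nearest-neighbor lookup."""
    idx = np.clip(np.rint(img * (size - 1)).astype(int), 0, size - 1)
    return table[idx[..., 2], idx[..., 1], idx[..., 0]]  # index order [blue][green][red]

if __name__ == "__main__":
    rgb = np.asarray(Image.open("still.png").convert("RGB")) / 255.0
    lut, n = load_cube("photoshop_grade.cube")
    graded = apply_lut(rgb, lut, n)
    Image.fromarray((graded * 255).astype(np.uint8)).save("graded.png")
```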

Adobe Sneaks
In what is a staple event of the Max conference, Adobe Sneaks brings the company’s engineers together to present technologies they are working on that have not yet made it into specific products. The technologies range from Project Primrose, a digital dress that can display patterns on black and white tiles, to Project Dub Dub Dub, which automatically dubs audio in multiple foreign languages via AI. The event was hosted by comedian Adam Devine, who offered some less technical observations about the new functions.

Illustration isn’t really my thing, but it could be once Project Draw & Delight comes to market. It uses the power of AI to convert the artistic masterpiece on the left into the refined images on the right, with the simple prompt “cat.” I am looking forward to seeing how much better my storyboard sketches will soon look with this simple and accessible technology.

Adobe always has a lot with fonts, and Project Glyph Ease continues that tradition with complete AI-generated fonts based on a user’s drawing of two or three letters.  This is a natural extension of the new type-editing features demonstrated in Illustrator the day before, whereby any font can be identified and matched from a couple letters, even from vectorized outlines.  But unlike the Illustrator feature, this tool can create whole new fonts instead of matching existing ones.

Project See Through was all about removing reflections from photographs, and the technology did a pretty good job on some complex scenes while preserving details.  But the part that was really impressive was when engineers showed how the computer could also generate a full image based on the image in the reflection.  A little scary when you think about the fact that the photographer taking the photo will frequently be the one in the reflection.  So much for the anonymity of being “behind the camera.”

Project Scene Change was a little rough in its initial presentation, but it’s a really powerful concept. It extracts a 3D representation of a scene from a piece of source footage and then uses that to create a new background for a different clip, rendered to match the perspective of that clip’s foreground. The technology is not really limited to backgrounds; that is just the easiest way to explain it with words. As you can see by the character in the scene behind the coffee cup, the technology really is creating an entire environment, not just a background. It will be interesting to see how this gets fleshed out with user controls for higher-scale VFX processes.

Project Res Up appears to be capable of true AI-based generative resolution improvements in video. I have been waiting for this ever since Nvidia demonstrated live AI-generated upscaling of 3D rendered images, which is what allows real-time raytracing to work, but I hadn’t seen it in action until now. If we can create something out of thin air with generative AI, it stands to reason that we should be able to improve something that already exists. But in another sense, I recognize that it is more challenging when you have a specific target to match. This is also why generative video is much harder to do than stills: Each generated frame has to smoothly match the ones before and after it, and any artifacts will be much more noticeable to humans when in motion.

This is why the most powerful demo, by far from my perspective, was the AI-based generative fill for video, called Project Fast Fill. This was something I expected to see, but I did not anticipate it being so powerful yet. It started off with a basic removal of distractions from elements in the background. But it ended with adding a necktie to a strutting character walking through a doorway with complex lighting changes and camera motion… all based on a simple text command and a vector shape to point the AI in the right place. The results were stunning, and if seeing is believing, it will revolutionize VFX much sooner than I expected.

Creativity Park
There was also a hall of booths hosting Adobe’s various hardware and software partners, some of whom had new announcements of their own. The hall was divided into sections, with a quarter of it devoted to video, which might be more than in previous years.

Samsung was showing off its ridiculously oversized wraparound LCD displays, in the form of the 57-inch double-wide UHD display and the 55-inch curved TV display that can be run in portrait mode for an overhead feel. I am still a strong proponent of the 21:9 aspect ratio, as that is the natural shape of human vision, and anything wider requires moving your head instead of your eyes.

Logitech showed its new Action Ring function for its MX line of productivity mice.  I have been using gaming mice for the past few years, and after talking with some of the reps in the booth, I believe I should be migrating back to the professional options.  The new Action Ring is similar to a feature in my Logitech Triathlon mouse, where you press a button to bring up a customizable context menu with various functions available.  It is still in beta, but it has potential.

LucidLink is a high-performance cloud storage provider that presents to the OS as a regular mounted hard drive. LucidLink demonstrated a new integration with Premiere Pro, as a panel in the application that allows users to control which files maintain a local copy, based on which projects and sequences they are used in. I have yet to try LucidLink myself, as my bandwidth was too low until this year, but I can envision it being a useful tool now that I have a fiber connection at home.

Inspiration Keynote
Getting back to the Inspiration Keynote, I usually don’t have much to report from the Day 2 keynote presentation, as it is rarely technical in detail, and mostly about soft skills that are hard to describe. But this year’s presentation stood out in a number of ways.

There were four different presenters with very different styles and messages. First off was Aaron James Draplin, a graphic designer from Portland with a unique style who would appear to pride himself on not fitting the mold of corporate success. His big, loud and autobiographical presentation was entertaining, and its message was that if you work hard, you can achieve your own unique success.

Second was Karen X Cheng, a social media artist with some pretty innovative art, the technical aspects of which I am better able to appreciate. Her explicit mix of AI and real photography was powerful. She talked a lot about the algorithms that rule the social media space, and how they skew our perception of value. I thought her five defenses against the algorithm were important ideas:

  • Money and passion don’t always align – pursue both, separately if necessary.
  • Be proud of your flops – likes and reshares aren’t the only measure of value.
  • Seek respect, not attention; it lasts longer – this one is self-explanatory.
  • Human+AI > AI – AI is a powerful tool, but even more so in the hands of a skilled user.
  • Take a sabbath from screens – it helps keep in perspective that there is more to life.

Up next was Walker Noble, an artist who was able to find financial success selling his art when the pandemic pushed him out of his day job… which he had previously been afraid to leave. He talked about taking risks and self-perception, asking, “Why not me?” He also talked about finding your motivation, which in his case is his family, though there are other possibilities. He also pointed out that he has found success without mastering “the algorithm,” in that he has few social media followers and little influence in the online world. So, “Why not you?”

Last up was Oak Felder, a music producer who spoke about channeling emotions through media, specifically music. He made a case for the intrinsic emotional value within certain tones of music, as opposed to learned associations from movies and the like. The way he sees it, there are “kernels of emotion” within music that are then shaped by a skilled composer or artist. He said that the impact it has on others is the definition of truly making music. He ended his segment by showing a special-needs child being soothed by one of his songs during a medical procedure.

The entire combined presentation was much stronger than the celebrity-interview format they have previously hosted at Max.

That’s It!
That wraps up my coverage of Max, and hopefully gives readers a taste of what it would be like to attend in person, instead of just watching the event online for free, which is still an option.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 10 years later.

 

 


Adobe Max 2023: A Focus on Creativity and Tools, Part 1

By Mike McCarthy

Adobe held its annual Max conference at the LA Convention Center this week. It was my first time back since COVID, but Adobe hosted an in-person event last year as well. The Max conference is focused on creativity and is traditionally where Adobe announces and releases the newest updates to its Creative Cloud apps.

As a Premiere editor and Photoshop user, I am always interested in seeing what Adobe’s team has been doing to improve its products and improve my workflows. I have followed Premiere and After Effects pretty closely through Adobe’s beta programs for over a decade, but Max is where I find out about what new things I can do in Photoshop, Illustrator and various other apps. And via the various sessions, I also learn some old things I can do that I just didn’t know about before.

The main keynote is generally where Adobe announces new products and initiatives as well as new functions to existing applications. This year, as you can imagine, was very AI-focused, following up on the company’s successful Firefly generative AI imaging tool released earlier this year. The main feature that differentiates Adobe’s generative AI tools from various competing options is that the resulting outputs are guaranteed to be safe to use in commercial projects. That’s because Adobe owns the content that the models are trained on (presumably courtesy of Adobe Stock).

Adobe sees AI as useful in four ways: broadening exploration, accelerating productivity, increasing creative control and including community input. Adobe GenStudio will now be the hub for all things AI, integrating Creative Cloud, Firefly, Express, Frame.io, Analytics, AEM Assets and Workfront. It aims to “enable on-brand content creation at the speed of imagination,” Adobe says.

Firefly

Adobe has three new generative AI models: Firefly Image 2, Firefly Vector and Firefly Design. The company also announced that it is working on Firefly Audio, Video and 3D models, which should be available soon. I want to pair the 3D one with the new AE functionality. Firefly Image 2 has twice the resolution of the original and can ingest reference images to match the style of the output.

Firefly Vector is obviously for creating AI-generated vector images and art.

But the third one, Firefly Design, deserves further explanation. It generates a fully editable Adobe Express template document with a user-defined aspect ratio and text options. The remaining fine-tuning for a completed work can be done in Adobe Express.

Firefly Design

For those of you who are unfamiliar, Adobe Express is a free cloud-based media creation and editing application, and that is where a lot of Adobe’s recent efforts and this event’s announcements have been focused. It is designed to streamline the workflow for getting content from the idea stage all the way to publishing on the internet, with direct integration with many social media outlets and a full scheduling system to manage entire social marketing campaigns. It can reformat content for different deliverables and even automatically translate it into 40 different languages.

As more and more of Photoshop and Illustrator’s functionality gets integrated into Express, Express will probably begin to replace them as the go-to for entry-level users. And as a cloud-based app accessed through a browser, it can even be used on Chromebooks and other non-Mac and Windows devices. And Adobe claims that via a partnership with Google, the Express browser extension will be included in all new Chromebooks moving forward.

Photoshop for Web is the next step beyond Express, integrating even more of the application’s functions into a cloud app that users can access from anywhere, once again, also on Chrome devices. Apparently, I’m an old-school guy who has not yet embraced the move to the cloud as much as I could have, but given my dissatisfaction with the direction the newest Microsoft and Mac OS systems are going, maybe browser-based applications are the future.

Similarly, as a finishing editor, I have real trouble posting content that is not polished and perfected, but that is not how social media operates. With much higher amounts of content being produced in narrow time frames, most of which would not meet the production standards I am used to, I have not embraced this new paradigm. That’s why I am writing an article about this event and not posting a video about it. I would have to spend far too much time reframing each shot, color-correcting and cleaning up any distractions in the audio.

Firefly Generative Fill

For desktop applications, within the full version of Photoshop, Firefly-powered generative fill has replaced content-aware fill. You can now use generative fill to create new overlay layers based on text prompts or remove things by overlaying AI-generated background extensions. AI can also add reflections and other image processing. It can “un-crop” images via Generative Expand. Separately, gradients are now fully editable, and there are now adjustment layer presets, including user-definable ones.

Illustrator can now identify fonts in rasterized and vectorized images and can even edit text that has already been converted to outlines. It can convert text to color palettes for existing artwork. It can also use AI to generate vector objects and scenes that are all fully editable and scalable. It can even take in existing images as input to match stylistically. There is also a new cloud-based web version of Illustrator coming to public beta.

Text-based editing in Premiere

From the video perspective, the news was mostly familiar to existing public beta users or to those who followed the IBC announcements: text-based editing, pause and filler word removal, and dialog enhancement in Premiere. After Effects is getting true 3D object support, so my session schedule focused on learning more about the workflows for using that feature. You need to create and texture models and then save them as GLB files before you can use them in AE. And you need to set up the lighting environment in AE before they will look right in your scene. But I am looking forward to being able to use that functionality more effectively on my upcoming film postviz projects.

I will detail my experience at Day 2’s Inspiration keynote as well as the tips and tricks I learned in the various training sessions in a separate article. At the time of this writing, I still had one more day to go at the conference. So keep an eye out. The second half of my Max coverage is coming soon.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 10 years later.

 


Missing

Editing Missing: Screens and Smartphones Tell Story

Sony Pictures’ Missing is a story told almost entirely through computer screens and smartphones. This mystery thriller, streaming now, was directed by Nicholas Johnson and Will Merrick and follows a teen girl named June (Storm Reid), whose mother (Nia Long) goes missing while on vacation with her new boyfriend (Ken Leung). Stuck thousands of miles away in LA, June uses technology to find her mom before it’s too late.

Missing

L-R: Editors Austin Keeling and Arielle Zakowski

The filmmakers relied on cloud-based and AI-powered tech to tell their story. Editors Austin Keeling and Arielle Zakowski chose Adobe Premiere, After Effects and Frame.io to edit, build shots and design thousands of graphics simultaneously. The complex and VFX-heavy workflow was custom-built to make the audience feel as if they’re logging in, clicking and typing along with the characters in real time.

Let’s find out more from the editors…

How early did you get involved in the film?
Austin Keeling: We both got started about six months before principal photography began, so we were some of the earliest crew members involved. We spent those first months creating a previz of the entire film by taking temp screenshots of apps on our own computers (we were working from home at the time) and building each scene from scratch.

The directors would take pictures of themselves and record themselves saying all the lines, and we would slot those into the previz timeline to create a sort of animated storyboard of each scene. By the time we were done, we had a completely watchable version of the entire movie. This was a great time to test out the script to see what was working and improve things that weren’t. Nick and Will were still writing at the time, so they were able to incorporate discoveries made in the previz stage into the final script.

This is not your typical film. What were the challenges of telling the story through screens and smartphones?
Arielle Zakowski: This film was unlike anything either of us had ever worked on before. At first the challenges were mostly technical ones. We hadn’t had much experience with Adobe After Effects, so we had to teach ourselves to use it pretty quickly. And none of the film is actually screen-recorded — it’s all built manually out of layered assets (desktop background, Chrome windows, various apps, mouse, etc.), so in some scenes, we were juggling up to 40 layers of graphics.

Once we became comfortable with the technical side of the process, we really dove into the challenges imposed by the unique screen perspective. It gave us a whole new set of tools and cinematic language to play with — building tension with nothing more than a mouse move, for example, or conveying a character’s emotion simply through how they type a message. Ultimately the limitations of the computer screen forced us to make more and more creative storytelling choices along the way.

What direction were you given by Will and Nick?
Keeling: They were very much involved in the post process from day one. They had already edited the previous film in this series, Searching, so we leaned heavily on them in learning the screen-film workflow.

In the previz stage, each of us would take a scene and build it from scratch and then send it to the directors for notes.

Missing

From that point on, it became a constant collaboration, and when we moved into a traditional office after principal photography, the directors were with us in the editing rooms every day. They wanted this film to feel bigger than Searching in every way, so they really encouraged us to try new things in the pacing, coverage, transitions, etc. They had a wealth of knowledge about how to tell a screen-life story, so working with them was creatively inspiring.

Was the footage shot traditionally and then put into the screens? If traditionally, was it then treated to look like it’s on phones?
Zakowski: All the footage was shot traditionally and then added into the screen graphics in post. Our cinematographer Steven Holleran used a total of eight different cameras to create a realistic feel for the multiple video outputs we see in the news footage, FaceTime calls, security cameras and iPhones.

Once the footage was incorporated into the graphical elements, we added compression and glitches to some of the footage to further replicate the experience of seeing footage on a laptop screen.

There is a lot happening on the screen. How did you balance all of it to make sure it wasn’t distracting to the viewer?
Keeling: This is partly why editing a screen movie takes so much time. We built the entire computer desktop in a wide shot for each scene and then used adjustment layers to create pans, zooms and close-up shots.

We essentially got to choose how to “cover” each scene in the edit, which allowed for nearly endless possibilities. We were able to tweak and alter the scenes in tons of ways that aren’t possible in a traditional film. We relied a lot on feedback to make sure that the audience wouldn’t get lost along the way. Through multiple test screenings, we were able to figure out which beats were distracting or unclear and then push to find the simplest, most effective way of telling the story.

Do you think the story could have been told in a more traditional way? How did the use of screens and phones help ramp up the drama/mystery/suspense?
Zakowski: The mystery at the core of this movie is thrilling enough that it could probably work in a traditional format, but we think the screen-life storytelling elevates this film into something unique and timely. Watching someone dig around on the internet isn’t inherently thrilling, but by putting the audience in June’s POV and letting them find the clues along with her, we’ve created a fully immersive and intimate version of the story.

Probably everyone has felt some dread before opening an email or anticipation while waiting for a phone call to go through. This format allowed us to really explore the relatable emotions we deal with as we use technology every day.

You edited in Premiere. Why was this system the right one to tell this story?
Keeling: We used Adobe Creative Cloud from start to finish. We edited in Premiere Pro using Productions so we could easily move between projects and share timelines with each other and with our assistant editors. All of the final graphics were made in Illustrator and Photoshop. We used Dynamic Link to send the locked film to After Effects, where we added tons of finishing details. And we used Frame.io to share cuts with the directors and the studio, which made it so easy to get notes on scenes.

We needed programs that were intuitive and collaborative, ones that made it possible to move the film seamlessly from one stage to the next.

Can you talk about using cloud tech and AI on the shots and graphics and the tools you used? What was your workflow?
Zakowski: Because we edited the previz during the pandemic, we relied heavily on cloud-based servers to share projects and assets while working from home. We actually used surprisingly few AI tools during the edit — most of the film was created with straightforward, out-of-the-box Adobe products. The unique nature of this film allowed us to use a lot of morph cuts in the FaceTime footage to combine takes and adjust timing.


Editing the Indie Comedy Scrambled

Editor Sandra Torres Granovsky, whose credits include Promised Land, The Opening Act and Alpha, recently cut the SXSW film Scrambled. Torres Granovsky, who studied film theory and anthropology at UC Berkeley, learned her craft from her mentor, editor Dan Lebental, ACE, while he worked on Jon Favreau-directed films such as Elf and Iron Man and other Favreau projects such as Couples Retreat and The Break-Up.

Torres Granovsky started on Scrambled — which follows millennial Nellie Robinson on a hilarious, existential journey as she faces reproductive challenges and decides to freeze her eggs — a week before the film started principal photography. The film’s director, Leah McKendrick, stars along with Yvonne Strahovski, Clancy Brown and June Diane Raphael.

Editor Sandra Torres Granovsky

Let’s find out more…

How did you work with the director, Leah McKendrick?
During production I shared a few scenes so that the director could get a sense of how her footage was coming together. Leah and I had never worked together before, so we decided to connect and work on a couple of scenes during production. This was great for beginning to establish a rapport.

Was there a particular scene or scenes that were most challenging?
One of the most fun and challenging scenes to edit in Scrambled was one of the first scenes of the film. Nellie, our protagonist, takes ecstasy at her best friend’s wedding. She’s alone and we had to convey what that experience was like.

We were tasked with creating a comedic moment when Nellie’s reality shifted and became different from the reality of everyone around her. The director and I also wanted it to be fun and viscerally accurate. There was a lot of trial and error and a lot of laughs. In the end, we played with the speed of the footage, the music and the lighting that DP Julia Swain created. (Swain shot on ARRI Alexa in the OpenGate 4.5K format.) It felt like a wild dance party for Nellie, and it was a great way to start the film.

Can you talk about your editing workflow?
When I edit a project, I eagerly wait for my first completed scene. Once I receive all of the organized dailies for that scene, I will edit according to the script. I move quickly because as I continue to receive footage, it’s important to keep up with camera so I can flag any issues or desired coverage as soon as possible.

I focus on sketching out the scenes and do not allow myself to get bogged down by any footage puzzles. Once the whole scene is sketched out, I fine-tune the cut until I am happy with it. I move forward in this way throughout the whole of production. However, I always go back to the scenes I have edited at least once more with a fresh eye.

Scrambled

Writer/director Leah McKendrick

What editing system did you use?
We used Adobe Premiere Pro because our deadlines and production workflow dictated that we transfer dailies internally.

Is there a tool within that system that was particularly helpful?
The copy and paste features are great in Premiere.

How did you manage your time on the film?
Managing time with editing is somewhat esoteric because it’s such a creative endeavor. The most important thing for me is to have an idea and plan for execution. That doesn’t necessarily come when I sit in front of my computer. Oftentimes it comes when I am doing everyday activities, such as washing dishes, walking my dogs or having my morning coffee. I try to be patient with myself if I don’t have the inspiration or plan right away. I know that once that comes, the execution takes no time at all.

Did you have an assistant editor on this?
My assistant editors (Malcolm Garvey and Jeff Cummings) and I worked remotely on this film. This had actually been the case with most of the projects I worked on shortly before COVID-19 greatly affected our workflows.

Luckily, post technology has enabled a successful remote workflow. However, it requires working with a very strong assistant editor and good communication on both ends. This has also been an interesting way of working because we met in person only a handful of times. Most of the time we communicated by messaging, Zoom and phone calls.

In this case, we sent cuts and footage back and forth with Premiere Productions and Google Drive. Occasionally, when we had tight deadlines or screenings, we worked in person.

How do you manage producers’ expectations with reality/what can really be done?
Communication and confidence are the most effective ways to manage expectations with the reality of what can actually be done. It is important to develop the confidence to know and to communicate that a certain expectation may not result in the best possible work. In these situations, I have found that almost everyone has respected, supported, facilitated and appreciated my desire to do good work.

Scrambled

How do you take criticism?
I try to make sure that I feel very good about the work that I do. If I feel good about my work, then any feedback I receive is a welcome part of the process.

I have also found that a good idea is undeniable, and everyone I have worked with has strived for that. I believe that ideas that don’t work are just as valuable as ideas that do. While it is more challenging to execute an idea I don’t believe in, it’s a wonderful exercise that makes me a better editor. I also very much enjoy the times when I don’t believe in an idea, and it works. It enables me to always have an open mind and to be excited about collaboration.

When someone who is starting out asks what they should learn, what do you recommend?
I would recommend that anyone starting out as an editor be humble and always open to learning. There are so many aspects of editing that require an openness at all levels of experience, from learning new tech to learning how to express new ideas.


Adobe CC 2023 Updates Include Text-Based Editing in Premiere

By Brady Betzel

Adobe has revealed the latest batch of updates for its Creative Cloud video apps to be shown at NAB 2023. During a recent online press conference, the company was touting workflows available from Frame.io as well as generative AI functions — what it’s calling Firefly.

Those updates are great, but what editors have really been asking Adobe for is to solidify Premiere Pro’s foundation — to make it more stable. For its part, Adobe says it has created “the fastest and most stable version of Premiere Pro ever.” Time and testing will tell.

Adobe reports that it’s transitioning its AI-powered, text-based editing out of beta and into the official Premiere Pro release in May. If you’ve used Avid’s ScriptSync, then you are already halfway through learning text-based editing. Premiere Pro’s automated transcription and subtitle workflow is probably the best on the pro market. Adding text-based editing really ties up one of Premiere’s loose ends in the professional editing world. Highlighting sequences in a transcription/script and editing them into the timeline is a great workflow to quickly build rough cuts based solely on what is said.
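
The underlying idea is simple: a transcript with word-level timestamps lets any selected text resolve back to in/out points on the source clip. The toy Python sketch below illustrates that mapping with made-up data; it is a conceptual illustration only, not Adobe’s implementation or API.

```python
# Toy illustration of the idea behind text-based editing: word-level
# timestamps let a selected phrase resolve to clip in/out points.
# Data and logic are illustrative, not Premiere Pro's implementation.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds from clip start
    end: float

transcript = [
    Word("We", 1.20, 1.35), Word("rolled", 1.35, 1.70),
    Word("cameras", 1.70, 2.20), Word("at", 2.20, 2.30),
    Word("dawn", 2.30, 2.80),
]

def cut_from_text(words, phrase):
    """Return (in_point, out_point) for the first occurrence of phrase."""
    tokens = [t.lower() for t in phrase.split()]
    texts = [w.text.lower() for w in words]
    for i in range(len(words) - len(tokens) + 1):
        if texts[i:i + len(tokens)] == tokens:
            return words[i].start, words[i + len(tokens) - 1].end
    raise ValueError("phrase not found in transcript")

print(cut_from_text(transcript, "rolled cameras at dawn"))  # (1.35, 2.8)
```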

Text-Based Editing

In addition, Adobe is continuing to advance Premiere’s automatic tone-mapping feature, which was released in February. Simply put, Premiere will read the metadata of the video clip and apply any LUTs necessary to match your working color space. Other important updates include background auto-save, system reset options, an Effect Manager for plugins and additional GPU acceleration. Adobe has even updated how GPUs work with certain camera codecs, including ARRIRAW (ARRI Alexa 35) and R3D (Red V-Raptor XL). The company has also expanded support for Sony Venice 2 cameras with v2 firmware.
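
For readers unfamiliar with the term, tone mapping just means compressing linear HDR values that exceed SDR range back into a displayable 0-1 range with a roll-off curve rather than a hard clip. The snippet below is a generic Reinhard-style operator offered as a conceptual illustration only; it is not Premiere’s algorithm, and the white_point parameter is an arbitrary assumption.

```python
# Generic illustration of tone mapping: compress linear HDR values (which
# can exceed 1.0) into the 0..1 SDR range with a roll-off curve instead of
# a hard clip. Extended Reinhard operator; not Premiere's implementation.
import numpy as np

def reinhard_tonemap(hdr_linear: np.ndarray, white_point: float = 4.0) -> np.ndarray:
    """Values at or above white_point map to 1.0; everything else rolls off."""
    x = np.asarray(hdr_linear, dtype=np.float64)
    mapped = x * (1.0 + x / white_point**2) / (1.0 + x)
    return np.clip(mapped, 0.0, 1.0)

if __name__ == "__main__":
    sample = np.array([0.1, 0.5, 1.0, 2.0, 8.0])  # linear scene values
    print(reinhard_tonemap(sample))
```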

Other Premiere improvements include upgrading captions to graphics, batch graphic adjustments, alignment tools for titles and graphics, AAF support on Apple Silicon and more. If you are a fan of Premiere Pro’s collaborative multi-user editing workflows, then you’ll appreciate that Adobe has added progressive project loading, sequence locking, presence indicators, publish and update buttons, and a work offline feature.

After Effects
After Effects is also getting some useful updates, including a Properties Panel. This gives creators access to their most important settings — essentially improving efficiency with less timeline twirling. ACES and OpenColorIO color space workflows improve professional color-correcting workflows and keep consistent color between collaborators.

Frame.io
Frame.io has added the ability to sync still-frame photography just like video. Cameras like the Fujifilm X-H2S and X-H2 have been certified to upload directly to Frame.io if they are connected to the internet via Wi-Fi, Ethernet, or smartphone tether. Frame.io has been integrated into Capture One software so photos can go from camera to Frame.io to Capture One.

Look out for some of these updates now and some in the next few months.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and Uninterrupted: The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Move.ai’s Mobile App: Wireless, Suitless Mocap via AI

Move.ai is offering high-fidelity, markerless motion capture to the masses with software that extracts human motion from video using advanced AI. Through its new iOS application, users can generate studio-quality mocap with their phones.

Mocap

Using a minimum of two iPhones and a maximum of six, users can capture motion data anywhere at AAA game-studio quality. The app works with any new, old or refurbished iPhone, from the iPhone 8 up to the latest iPhone 14 model running iOS 15 or above. The beta program has already seen over 20,000 prelaunch signups, and the technology is already being used by AAA studios.

Creators now have the ability to set up in minutes — no suits required — in any environment at a lower cost than those using traditional suit- and sensor-based solutions. The Move.ai software can also capture completely natural movement without the restrictions of wearing suits.

“One of my favorite parts of the app is the simplicity,” says Fred Isaac, head of product at Move.ai. “Anyone anywhere can pick up the app and start capturing human motion with intricate detail, including hands, which are notoriously difficult to capture.

Mocap

“Our greatest achievement so far has been capturing motion from a person using a jetpack, but we’ve also explored the app’s possibilities in various activities such as dance and sports. We’ve even been able to capture fundamental human emotions.”

Move.ai is offering a two-minute free trial and a dollar-a-day yearly subscription.

Pricing is as follows: Free trial — two free minutes of usage to try out the app; Creator — one-off payment of $365 with unlimited usage for 12 months.

 


Adobe Camera to Cloud: Integration With Mo-Sys, More

Adobe has developed a cloud-based collaboration solution for shooting content on virtual sets. Thanks to a new integration between Adobe Camera to Cloud (via Frame.io) and Mo-Sys, video pros can now see their visual effects scenes in Frame.io as they’re being shot on-set. By partnering with Mo-Sys, Adobe hopes to make instant collaboration and the speed of virtual production accessible to more filmmakers, not just big-budget productions.

In addition, Adobe has new integrations that extend Camera to Cloud to even more cameras, along with new versions of Premiere Pro and After Effects that make it faster and easier for video professionals to collaborate.

“With Premiere Pro, Frame.io and Camera to Cloud, we’re connecting the entire video creation process from camera capture to final delivery, allowing customers to collaborate in new ways, from anywhere,” says Scott Belsky, chief product officer/EVP of Creative Cloud.

With Camera to Cloud, footage is accessible instantly, no matter where it’s filmed, allowing editors to start cutting shows and movies while they are still being shot.

“Camera to Cloud has changed the way we think about dailies and editorial. The immediate review capability and seamless integration into Premiere Pro have improved our process and allowed us to work even faster,” says Alex Regalado, head of post at Duplass Brothers Productions. “For the first time, it feels like indie filmmaking is comparable to big-budget productions, and we can’t imagine a production without it.”

The complexity of virtual production adds time and expense to the process of producing content. Mo-Sys’ NearTime cloud rendering in Unreal Engine brings all the benefits of on-set visualization and high-quality visual effects shots without the expensive limitations of achieving real-time playback. Mo-Sys combines camera footage with virtual content, generating a high-fidelity 4K composite and transferring it into a neatly organized Frame.io project, which results in near-instant access to visual effects content for editorial and review from anywhere in the world.

Built on Frame.io, Camera to Cloud allows pros to securely transfer and store hundreds of thousands of assets in the cloud, whether for films, TV shows, commercials, corporate videos, live events or social media content. Teams can now start working together sooner, giving them more time to be creative. New integrations with the Atomos Zato Connect device, Teradek Serv Micro transmitter and Teradek Prism Flex encoder/decoder extend the use of Camera to Cloud to more cameras and production devices than before.

Besides forming new integrations that enable cloud collaboration in virtual production, Adobe has added new features and workflow refinements to Premiere Pro and After Effects that help video pros create their content on increasingly tight timelines.

Premiere Pro:

  • Better titling tools give quick access to editing elements and layers, new fill options for adding textures and graphics inside of text, and new export options for sharing text with the rest of the creative team.
  • Improved audio ducking changes the position of automatically generated fades under dialogue, making it easier to bring out dialogue over music.
  • GPU-accelerated Unsharp Mask and Posterize Time filters save time by offloading work from the CPU to the graphics card (the basic idea behind an unsharp mask is sketched after this list).
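
As a point of reference (generic math, not Adobe’s GPU code), an unsharp mask sharpens by adding back a scaled difference between the image and a blurred copy of itself. A minimal NumPy/SciPy sketch of that idea:

```python
# Generic unsharp mask: sharpen by adding back the difference between the
# image and a Gaussian-blurred copy. Illustrates the filter's math only.
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img: np.ndarray, radius: float = 2.0, amount: float = 1.0) -> np.ndarray:
    """img: H x W x 3 float array in [0, 1]; radius ~ blur sigma; amount ~ strength."""
    blurred = gaussian_filter(img, sigma=(radius, radius, 0))  # blur spatially, not across channels
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

if __name__ == "__main__":
    frame = np.random.rand(1080, 1920, 3)  # stand-in for a video frame
    print(unsharp_mask(frame).shape)
```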

After Effects:

  • New keyframe color labels let users quickly find important parts of an animation based on their color.
  • Track matte layers (public beta) let users choose any layer as a track matte to simplify compositing.
  • Quick exports with native H.264 encoding (public beta) use hardware acceleration to render H.264 files directly from within After Effects.

Good Luck to You, Leo Grande Editor Bryan Mason

Editor Bryan Mason has been busy recently. He edited two films that screened at this year’s Tribeca Film Festival. One was the Maya Newell-directed doc The Dreamlife of Georgie Stone, about the life and memories of Georgie as she prepares for her gender affirmation surgery. The second film, which he both edited and shot, was Good Luck to You, Leo Grande.

Bryan Mason

Starring Emma Thompson as a retired widow who hires a sex worker (Daryl McCormack), Good Luck to You, Leo Grande was shot in the UK in early 2021 during the lockdowns. “The director, Sophie Hyde, and I have a long-term collaboration,” says Mason. “This is our fourth film together.” Post was done in Adelaide, Australia, where he lives.

Let’s find out more from Mason about editing and shooting this film…

How early did you get involved on this film?
Good Luck to You, Leo Grande was in the process of being drafted by writer Katy Brand and director Sophie Hyde when I was asked onto it. It is a nice combination, the shooting and editing, because you are deeply involved from preproduction through the shoot and all the way to delivery. I find it a satisfying way to work.

How did you work with the director? What direction were you given for the edit? How often was the director taking a look at your cut?
Sophie and I worked together in the room through the entirety of the edit for Leo Grande. This is the fourth feature-length project we have made together, and this is the way we tend to work — in-depth, in-person.

Was there a particular scene or scenes that were most challenging? If so, why? And how did you overcome that challenge?
Leo Grande is basically a two-hander that is set almost entirely in one room, so the challenge of that edit was to figure out the best way to keep the audience engaged and interested. Sophie worked intensely with the cast in a five-day rehearsal period to find the rhythm of the dialogue.


Bryan Mason’s editing setup

What system did you use to cut?
I cut both Tribeca films on Adobe Premiere Pro. I have been cutting on Premiere for the last eight years. It’s really intuitive and customizable, which I love.

How did you manage your time?
As with all jobs, you have limited time, and I believe this is true of all films: a film is only as good as you can get it within the time constraints you have. That is certainly true of both of these pieces.

Did you have an assistant editor on this? If so, how did you work with them?
For Good Luck to You, Leo Grande, I worked with long-time assembly/assistant editor Cleland Jones. We were shooting in the UK, so at the end of each shoot day the rushes would be uploaded to Australia, and by the time we got back to set the next day, there was often a first-pass assembly of the previous day’s work. Taking advantage of the international time difference in this way was very useful.

Once we were back in Australia and nearing the end of the edit process, I shared a cut with Cleland to see what he thought of how the film was shaping up. His input was, as always, valuable.

How do you manage producers’ expectations with reality/what can really be done?
Ha! That is a great question. Both films I worked on for this year’s Tribeca Film Festival required a delicate balance to feel right. With Dreamlife, we tried a number of different structures and dug back into the archive numerous times to strike that balance.

With Leo Grande, we managed to find a good rhythm early and then the job was very much to protect that and see it through.

How do you manage your time? Do you manage expectations or try everything they ask of you?
When setting up an edit schedule, it is important to build in time for feedback, as investors, producers and sometimes cast will almost always need opportunities to review the work and provide notes. When working on Leo Grande, Sophie and I considered all the notes that were given and tried our best to address them if they resonated with the film we were trying to make. Sometimes notes come in that are not really in the spirit of the piece you are striving to create.

How do you take criticism?
I feel like it is always good to talk through critique of an edit as you are making a film, even if you don’t agree with what is being said. Tracing the source of the criticism to a particular moment or sequence in the film can offer great insight as to what an audience is or is not getting from the work.

Finally, any tips for those just starting out?
Find something you love doing and get busy doing it. For me, that is making films as a cinematographer, an editor or both.

A Finishing Artist’s Journey to NAB 2022

By Barry Goch

I almost didn’t make it to the first NAB in three years! I was on hold for weekend edit sessions for ABC’s A Million Little Things, and it wasn’t looking good. With some NAB events beginning on Saturday and the exhibits opening on Sunday, there was going to be a high degree of difficulty in making this work.

Thankfully, the post gods aligned, and after finishing my Friday night edit session, I was released for the weekend. I immediately left North Hollywood and started my trek to Las Vegas…all the while knowing I’d have to be back at ColorTime at 9am Monday to continue my work on A Million Little Things.

NAB 2022

I told you!

Since I was driving my Tesla, I had to make a couple of stops for charging. The first was EddieWorld in Yermo, California, home to one of California’s largest gas stations, which also weirdly hosts a roped-off display highlighting a piece of the Lakers basketball court. (Seriously, I’m not kidding — see picture for proof.) My next charging stop was near the world’s largest thermometer (because, why not?) in Baker, California. I finally arrived at my hotel in Vegas at two in the morning.

When I woke up Saturday morning, I decided a hike to the Mary Jane Falls at Mt. Charleston would clear my head in preparation for all the tech I would see at the LVCC. Just one hour outside of Las Vegas, there was snow.

Arriving at the LVCC
After the hike, I hit the convention center and caught the end of the Devoncroft Executive Summit. That included watching Devoncroft’s Josh Stinehour interview the president/CEO of Sinclair Broadcasting, Chris Ripley, about the transition to the cloud.

There was a lot of evidence of support for Ukraine, with banners like this one throughout the exhibit area and conference center. NAB was also handing out lapel pins with the colors of the Ukrainian flag.

After the Devoncroft summit, I attended a Colorfront BBQ at the Artisan Hotel, with Nacho Mazzini manning the grill. Sponsors included Frame.io, Alt Systems, SNS, ARRI, Zeiss and AJA. This was an opportunity to catch up with old friends and make some new ones. It was a wonderful respite before the official start of the show the next day. This type of event is one of the reasons I attend NAB — it’s the opportunity to see people you haven’t seen in years and to make that personal connection.

On Sunday morning, I attended a session by the Colorist Society Hollywood that featured colorist Lou Levinson, who works with Apple in technology development; Company 3 colorist Walter Volpatto; and Warner Bros. colorist John Daro. Lou is quite the character, and he presented his five-step path to grading nirvana, which was a tongue-in-cheek look at his approach to color grading.

Daro talked about trending technologies for colorists, including those for virtual production, AI processing of imagery with matte creation, and advanced ID mattes via Cryptomatte, open-source software created by Jonah Friedman and Andy Jones at VFX studio Psyop.

This year’s NAB convention looked a bit different than those in the past. Gone was the usual post-centric South Hall; that content is now split between the North Hall and the brand-new West Hall. Because the West Hall is a bit of a trek on foot, the LVCC built what it calls The Loop, which connects the West Hall to the North and Central halls. Built by The Boring Company, it uses Tesla cars to drive folks in a loop between the buildings.

The first booth I visited was AWS, which was featured prominently in the West Hall — the largest booth they’ve had, if memory serves. AWS had two separate pods that related to our world of finishing. They were also showing FilmLight Baselight and Autodesk Flame working in the AWS cloud.

Canon

Another welcome change to the show, in my opinion, was the camera test scenes, where camera manufacturers featured musicians instead of having models sit around idly. The image that you see here is from the Canon booth, where there was a live band playing — a refreshing addition to the show this year.

At the Canon booth, I got a demo of Canon’s AMLOS technology, which is a software and camera production suite designed to make hybrid work more interactive. They were featuring Joseph Gordon-Levitt talking about his new short film, A Forest Haunt.

As I zipped around Central Hall a bit more, a couple of things caught my eye, including the headless Sony Venice camera. I had seen a demonstration of this when Jon Landau was talking about using the Sony Venice for the Avatar sequels. It was nice to see this backpack rig in the Sony booth, where the body of the camera is on the camera operator’s back and the head “floats” on the front of this harness.

Adobe

From large cameras to small, I caught a glimpse of the Panasonic GH6 with an attached SSD for recording higher-data-rate media, such as ProRes.

The other fun part of going to NAB for me was seeing behind-the-scenes footage and learning about new techniques of production. With that in mind, I attended a panel discussion for the HBO Max show Our Flag Means Death, with Stargate Digital’s Sam Nicholson and VFX supervisor Dave Van Dyke talking about the ship they built for the show and how they used virtual production to make it appear that it was on the water.

In addition to learning about production, software vendors had demonstrations and classes on the show floor, as you can see from the Maxon booth and also from the Adobe booth where Michael Cioni spoke about Camera to Cloud and the Frame.io integration into Premiere.

Blackmagic

Finally, Blackmagic Cloud, which was one of the bigger pre-NAB announcements, was on display at the Blackmagic booth. You could also see it promoted throughout Las Vegas near the convention center in the form of large Blackmagic banners on the sides of buildings. I went by the booth to see the new cloud hardware in person and to learn about the just-released Resolve 18 and its AI processing, which helps create mattes and cutouts for fine-detail color and effects work.

I finished my fabulous NAB weekend by attending an Adobe cocktail event and a meeting of the Flame Users Group, where I was able to reconnect with people I hadn’t seen for years. It made the exhausting weekend completely worthwhile, and I’m glad I can share my journey with you.


Barry Goch is a post industry veteran, a frequent contributor to postPerspective, the newly elected chair of SMPTE Hollywood, and senior editor at North Hollywood’s ColorTime.

 

Colourlab Ai 2.0

Colourlab Ai 2.0 Adds Premiere, FCP Integration, New UI

Color Intelligence has released the public beta version of Colourlab Ai 2.0, with new features and an entirely new user interface created with the goal of making the color-grading process faster, easier and more accessible. In addition to the existing integration with DaVinci Resolve, Colourlab Ai v2.0 features support for both Adobe Premiere and Apple Final Cut Pro. There are also new subscription pricing options, including Colourlab Ai Pro and the more affordable Colourlab Ai Creator.

Colourlab Ai 2.0

In addition to the key new features, there are many general improvements, including optimizations to the Cinematic Neural Engine and color-matching functionality, as well as overall speed and performance improvements that build on the capabilities of Apple silicon.

“Our mission has always been to make the color-grading process more creative and accessible for everyone. Legacy color-grading tools are complex — requiring significant experience and skill to get Hollywood-quality results,” explains CEO Dado Valentic, who co-founded the company with Mark L. Pederson and Steve Bayes. “Artificial intelligence enables creators to spend more time on the creative aspects of their craft. With this 2.0 release, we have focused on making an app that empowers creators — regardless of their level of experience with color.”

New Features:
–    Timeline Intelligence
Dynamically sort shots based on their similar image characteristics using AI-based analysis from the Cinematic Neural Engine. This enables creators to work exponentially faster by “auto-sorting” together shots that require the same color grade. (A rough illustration of the underlying idea appears after this list.)
–      Adobe Premiere and Apple Final Cut Pro Integration
Seamlessly round-trip timelines directly from Apple and Adobe editing systems. Users can now non-destructively color grade with Colourlab.ai even while they are still editing. Combined with traditional controls like Printer Lights and Lift Gamma Gain, Premiere and Final Cut Pro users can now benefit from a full color-managed pipeline.

–      Ai Powered Auto Color
Leverage the AI-based image analysis of Colourlab Ai’s cinematic neural network to facilitate a “one-click” color adjustment with consistent and superior results across your entire project. This feature can reduce the work of a dailies colorist to a single click and save hours on projects with extreme shooting ratios like reality-based content.

–      Color Tune
This intuitive toolset gives users visual grading options that enable them to fine-tune color more quickly. Color adjustments are simple, fast and easy.

–      Show Look Library
Colourlab Ai 2 ships with a new Show Look Library. Show Looks combine 3D LUTs with the parametric metadata of Color Intelligence’s Look Designer technology, allowing users to easily modify and create variations and leverage the film stock emulations included in Look Designer. You can also import any 3D LUTs, and they will be converted into a Show Look that can be further edited in Look Designer.
–      Smart LUTs
Colourlab Ai 2 introduces Smart LUTs, which answer the questions: What if a color “preset” or “filter” was intelligent? What if it was parametric and could be adjusted and changed? What if it was content-aware? Beyond a 3D LUT, Smart LUTs contain parametric values that can be adjusted in Look Designer. They also contain a unique content fingerprint created from reference frames for use with Colourlab Ai’s cinematic neural network. Colourlab Ai 2 comes preloaded with a selection of stunning Smart LUTs to get users started. Users can modify these Smart LUTs in Look Designer and can even create a Smart LUT from any still image reference.

–      Improved Camera Matching
Instantly color-match shots from different cameras in a full color-managed pipeline. When working with multicam projects, Colourlab Ai’s built-in camera profiles combined with its AI-based color matching will save hours of manual image-tweaking.
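
Colourlab’s Cinematic Neural Engine itself is proprietary, but the underlying idea of grouping shots that can share a grade can be illustrated with something far simpler. The hypothetical Python sketch below groups shots by comparing coarse color histograms of one representative frame per shot, using OpenCV and NumPy; it is a conceptual stand-in for the AI analysis, not Colourlab’s method.

```python
# Illustrative only: group shots whose representative frames have similar
# color distributions, a crude stand-in for AI-based shot analysis.
import cv2
import numpy as np

def frame_signature(image_path: str, bins: int = 8) -> np.ndarray:
    """Coarse 3D color histogram, normalized, as a shot 'fingerprint'."""
    img = cv2.imread(image_path)                     # BGR uint8
    hist = cv2.calcHist([img], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256] * 3)
    hist = hist.flatten()
    return hist / hist.sum()

def group_shots(paths: list[str], threshold: float = 0.25) -> list[list[str]]:
    """Greedy grouping: a shot joins the first group whose reference
    histogram is within `threshold` (chi-square distance).
    The threshold is arbitrary and would need tuning for real footage."""
    groups: list[tuple[np.ndarray, list[str]]] = []
    for p in paths:
        sig = frame_signature(p)
        for ref_sig, members in groups:
            dist = cv2.compareHist(ref_sig.astype(np.float32),
                                   sig.astype(np.float32),
                                   cv2.HISTCMP_CHISQR)
            if dist < threshold:
                members.append(p)
                break
        else:
            groups.append((sig, [p]))
    return [members for _, members in groups]

if __name__ == "__main__":
    shots = ["sh010.png", "sh020.png", "sh030.png"]   # one frame per shot
    for i, g in enumerate(group_shots(shots), 1):
        print(f"Grade group {i}: {g}")
```

Colourlab’s analysis presumably weighs far more than raw color distribution, but the grouping behavior an editor sees in the timeline follows the same basic logic: shots that look alike get sorted together so one grade can be applied to the whole group.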

Pricing and Availability
With the Version 2.0 release, Colourlab Ai is now offered at two subscription levels:

Colourlab Creator includes full Ai functionality – subscription for $129 per year

Colourlab Pro includes Look Designer, Tangent device control, SDI video output, and DaVinci Resolve integration – subscription for $39 per month, $99 per quarter, $299 per year or $599 for a perpetual license.

Both versions come with a fully functional seven-day trial. While it is currently available only for macOS and fully optimized for Mac computers with Apple silicon, a Windows version is coming later this year.

TikTok

Sundance: Seth Anderson on Editing TikTok, Boom. Documentary

Screened at this year’s Sundance, the documentary TikTok, Boom. dissects the incredibly popular social media platform TikTok. The film examines the algorithmic, sociopolitical, economic and cultural impact of the app — the good and the bad. The doc also features interviews with a handful of young people who have found success on TikTok.

Seth Anderson

TikTok, Boom. was directed by Shalini Kantayya and shot by DP Steve Acevedo, who used a Blackmagic Ursa Mini and a Sony camera equipped with Rokinon Cine DS primes and Canon L-series lenses. Editor Seth Anderson, who cut TikTok, Boom., has worked on a variety of docs, features, TV series and shorts.

Let’s find out about his process on this feature documentary.

How early did you get involved on this film?
I was brought on shortly after shooting began.

What direction were you given for the edit? How often was Shalini Kantayya taking a look at your cut?
We cut remotely, so we each had our own systems, and we used Evercast when working together. I watched her previous films to see what edit style she would want to aim for, and Shalini gave me free rein on my first pass of scenes.

TikTok

Director Shalini Kantayya

Initially, I assembled all the verité scenes as stand-alone stories, as if we had no interviews to flesh them out. After creating arcs for each of the main characters, we added the characters’ individual interview bites. Then we cut the character arcs down and started intercutting them. After a version of the film was built that way, we started building the experts’ commentary (reporters, tech experts, etc.).

While shooting, she was pretty hands-off, but after principal photography ended, we worked together most days.

Was there a particular scene or scenes that were most challenging?
The biggest challenge was trying to balance making a film that would entertain and inform the users of TikTok — mainly 20-somethings and younger, who already know the inner workings and drama surrounding TikTok — while also giving an introduction and overview of TikTok to non-users. Those are the people who know next to nothing about the app beyond mentions in news articles and jokes by comedians.

TikTok

Seth Anderson’s editing setup

Can you talk about working on this during the pandemic? How did that affect the workflow?
The pandemic definitely affected our workflow. The production company and media were in LA, and the director and I were in New York, so we had to manage the time difference with requests. Since many things that would quickly be worked out in person had to be done by email, some things took longer than usual.

You used Adobe Premiere running on a Mac. Is there a tool within that system that you used the most?
This was my first long-form job on Premiere, so I’m in a position of needing workflow tips rather than giving them.

How did you manage your time?
They started shooting in June, and I came on at the start of July, so we had a massive push to get a decent cut of the film ready to submit to Sundance. Then we had to keep pushing, with the hope we’d get in. Once we were in, we had to hustle to lock, do sound, VFX and color. We probably squeezed a year’s schedule into six months. I wouldn’t recommend it (laughs).

Did you have an assistant editor on this? If so, how did you work with them? Do you give them scenes to edit?
Yes, we had an assistant editor in LA, Tim Cunningham. This is one area where remote doesn’t help. I always want the relationship with the AE to be more collaborative, but that’s harder with different time zones and no actual face time.

I did give Tim a few scenes to assemble, and the post producers always had him doing things. As you can guess, we had a massive amount of archival material.

How do you manage producers’ expectations with reality/what can really be done?
You do your best. In most cases, producers want things done as quickly as possible, while directors want to think and mull over the work.

How do you manage your time? Do you manage expectations or try everything they ask of you?
If possible, I do all the producers’ notes, at least the ones the director signs off on. The director’s opinion and vision are paramount in making an independent feature, so I will say I do what is possible to do while trying to avoid butting heads.

How do you take criticism?
I’ve been doing this for a while, so I’ve gotten good at accepting criticism. I think you should always be open to other people’s ideas. You never know where a genius idea will come from.

Finally, any tips for those just starting out?
Be open to learning new programs and techniques. Find out what you need to know for the section of the industry you want to work in.

With editing, you should focus on Avid, Premiere, FCP and other aspects of Creative Cloud. Learn those programs as well as you can. Just because you learned on one program doesn’t mean that program will be the one a potential job needs. For example: Most students nowadays learn to edit on Premiere, but Avid Media Composer is still the primary tool used on most jobs.

All Photos: Courtesy of Sundance Institute

HEVC

Hardware-Accelerated HEVC (H.265) in Premiere Pro

By Mike McCarthy

The High Efficiency Video Codec (HEVC), or H.265, is a processing-intensive codec for both encode and decode that delivers higher video quality at lower data rates. For years, both CPUs and GPUs have shipped with dedicated hardware to accelerate HEVC encoding and decoding, but that hardware acceleration requires specific support within software applications to be used. And unlike with software encoders, there is a finite number of supported encoding options that can be accelerated, each of which has to be explicitly supported. The newest updates to Premiere Pro have increased the number of hardware-accelerated options for HEVC workflows, greatly increasing performance with those types of files.
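
As a practical aside, that finite set of accelerated paths is easy to see outside of Premiere, too. The Python sketch below — a rough illustration assuming FFmpeg is installed and on the system PATH — asks FFmpeg which hardware HEVC encoders the local build exposes, such as Intel Quick Sync (hevc_qsv), Nvidia NVENC (hevc_nvenc) or AMD AMF/VCE (hevc_amf).

```python
# Sketch: list which hardware HEVC encoders the local FFmpeg build exposes.
# Assumes FFmpeg is installed and on the PATH; encoder availability still
# depends on the CPU/GPU actually present in the machine.
import subprocess

HW_HEVC_ENCODERS = {
    "hevc_qsv":          "Intel Quick Sync",
    "hevc_nvenc":        "Nvidia NVENC",
    "hevc_amf":          "AMD AMF/VCE",
    "hevc_videotoolbox": "Apple VideoToolbox",
}

def available_hw_hevc_encoders() -> dict[str, str]:
    out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                         capture_output=True, text=True, check=True).stdout
    return {name: desc for name, desc in HW_HEVC_ENCODERS.items()
            if name in out}

if __name__ == "__main__":
    found = available_hw_hevc_encoders()
    if found:
        for name, desc in found.items():
            print(f"{name:20s} -> {desc}")
    else:
        print("No hardware HEVC encoders found in this FFmpeg build.")
```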

CPU-Based Codec Acceleration
Premiere Pro has had CUDA-based GPU acceleration for over a decade, since CS5, but it did not use Nvidia’s accelerated encode and decode hardware until recently. Adobe started with Intel’s hardware-based acceleration for H.264 and HEVC encoding in Version 13, which was limited to 4K at 8-bit on CPUs with Quick Sync video processing. This, from my perspective, was driven by laptop chips. (High-end Xeon CPUs don’t support Quick Sync, including the newest W-3300 chips.)

 HEVC

The quality was also inferior to software encodes in the initial release, but that was fixed shortly thereafter. The next step was hardware-accelerated decoding of H.264 and HEVC, which made editing with those codecs much more doable on less powerful systems, especially when it came to scrubbing through footage, which is usually rough with long GOP compression formats.

GPU-Based Codec Acceleration
Then in June of 2020, Adobe added GPU encoding acceleration to Premiere Pro 14.2, which gave support for hardware acceleration of H.264 and HEVC encoding with both Nvidia and AMD graphics cards, regardless of your CPU. This capability was much more applicable to high-end workstations, which don’t have Intel’s consumer-level Quick Sync feature but have top-end, discrete GPUs. This is when I started using hardware acceleration for more than just testing purposes. It supported up to 8K resolution on newer hardware, but it was still limited to 8-bit color.

The Limits of Hardware Acceleration
Eight-bit color was fine for most web deliverables, which was what many of those types of encodes were geared toward at the time. But that was also about the time we started seeing more HDR workflows being developed…and HDR definitely requires at least 10-bit color. All HDR exports were still using the slower software encoding and required more processing under the hood to render the extra color detail when Max Bit Depth was enabled.

Once accelerated encoding became mainstream, my standard system benchmarking process was to encode 8K Red to 8K HEVC — with hardware encoding to 8-bit Rec. 709, and with software encoding to 10-bit HDR, which took considerably longer. Those benchmarks were not really affected when Adobe added GPU decoding support for H.264 and HEVC in Premiere 14.5, but that support really helped playback performance, especially when using multiple streams (like in a multicam timeline).

New, High-Quality 10-Bit 4:2:2 HEVC Recordings
What about newer, high-quality 10-bit 4:2:2 HEVC recordings? With the most recent release of its Version 22, Adobe added support for accelerated decode of 10-bit 4:2:2 HEVC files. This is specific to Intel Quick Sync because neither Nvidia nor AMD currently support 4:2:2 acceleration in their GPUs. Without hardware acceleration, these newer 4:2:2 HEVC files do not play back well at all on most systems. 4:2:2 refers to the amount of color data in a file, and it used to be much more frequently discussed when the industry was making the jump from SD to HD.

The human eye is more sensitive to brightness than chroma, so higher-resolution images could be encoded more efficiently by focusing on the luminance values over the chroma data. A 4:2:0 video file has basically half-res color detail in both dimensions, while a 4:4:4 file has full color data for every pixel. 4:2:2 sits between the two, with full-vertical but half-horizontal resolution for the color data. It is the default format for SDI connections.
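
To put numbers on that, consider the samples stored per 2x2 block of pixels: 4:4:4 keeps 4 luma plus 8 chroma samples, 4:2:2 keeps 4 luma plus 4 chroma, and 4:2:0 keeps 4 luma plus 2 chroma. The small sketch below works out the resulting uncompressed bits per pixel at 10-bit depth, which is why 4:2:2 sits roughly a third below 4:4:4 and a third above 4:2:0.

```python
# Uncompressed bits per pixel for common chroma subsampling schemes.
# Samples are counted per 2x2 pixel block: (luma, Cb, Cr).
SUBSAMPLING = {
    "4:4:4": (4, 4, 4),   # full chroma resolution
    "4:2:2": (4, 2, 2),   # half horizontal chroma resolution
    "4:2:0": (4, 1, 1),   # half chroma resolution in both dimensions
}

def bits_per_pixel(scheme: str, bit_depth: int = 10) -> float:
    samples_per_block = sum(SUBSAMPLING[scheme])   # per 2x2 block = 4 pixels
    return samples_per_block * bit_depth / 4

for scheme in SUBSAMPLING:
    bpp = bits_per_pixel(scheme)
    rel = bpp / bits_per_pixel("4:4:4")
    print(f"{scheme}: {bpp:.1f} bits/pixel ({rel:.0%} of 4:4:4)")
# 4:4:4: 30.0 bits/pixel (100% of 4:4:4)
# 4:2:2: 20.0 bits/pixel (67% of 4:4:4)
# 4:2:0: 15.0 bits/pixel (50% of 4:4:4)
```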

Because H.264 and HEVC are designed to be delivery formats, they are targeted to carry the detail that is visible to the human eye and, for the sake of efficiency, drop anything that won’t be noticed. But now those codecs are being used in cameras for acquisition, and the lost color detail is becoming more noticeable during grading, when colorists highlight image detail that otherwise wouldn’t have been visible.

Because of this, camera manufacturers — who want the affordable efficiency of HEVC encoding but better-quality imagery — have started using HEVC encoding on 4:2:2 image data. Specifically, the Canon R5 and R6, Sony’s a7S III and other mirrorless cameras use this new format, which is not as widely supported for hardware-accelerated playback. But users of Premiere Pro 22 who have Intel Quick Sync support on their newer CPUs (11th-gen Core processors or newer) should now see much smoother playback of files from those cameras — on the order of 10 times the frame rate for real-time playback and three times faster processing for export or transcoding tasks.

HEVC 10-bit Encoding Acceleration
The most recent feature, which just appeared in the Premiere 22.1.1 beta, is 10-bit HEVC-accelerated encoding. This includes support for HDR output formats and runs on Intel CPUs or Nvidia GPUs. My initial tests showed my standard benchmarking encodes completing four times faster on my workstation (which can already encode pretty fast in software on the top-end CPU) and 16 times faster on my Razer laptop. Hardware acceleration usually makes a bigger difference on less powerful systems because the system has less spare processing power to throw at the software-encoding implementation.

This new 10-bit encoding acceleration will be a big help to those working in HDR, especially if they are using an Intel laptop — and all the more if they don’t have a discrete GPU (a configuration I wouldn’t usually recommend editing on). HEVC export is limited to 4:2:0 chroma subsampling because no one should need to output 4:2:2 HEVC for delivery, and HEVC is not a good choice for intermediate exports, even at 4:2:2. But if you have a top-end mirrorless camera shooting 4:2:2 HEVC files, and you want to edit on a laptop and post your work to YouTube in HDR, then the playback and export of your project is going to be a whole lot better with the newest version of Premiere than it would have been before.
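
Outside of Premiere, the same kind of hardware path can be exercised with FFmpeg. The sketch below is a hypothetical NVENC example of a 10-bit 4:2:0 HEVC encode tagged with PQ/BT.2020 metadata for an HDR deliverable; the exact flags available depend on the FFmpeg build and GPU driver, so treat it as a starting point rather than a recipe, and note that it says nothing about how Adobe’s own exporter is implemented.

```python
# Hypothetical example: 10-bit 4:2:0 HEVC encode on Nvidia hardware via
# FFmpeg, tagged with PQ / BT.2020 metadata for an HDR deliverable.
# Flag availability depends on the local FFmpeg build and GPU driver.
import subprocess

def encode_hdr_hevc(src: str, dst: str) -> None:
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-c:v", "hevc_nvenc",          # Nvidia hardware HEVC encoder
        "-profile:v", "main10",        # 10-bit profile
        "-pix_fmt", "p010le",          # 10-bit 4:2:0 pixel format
        "-color_primaries", "bt2020",  # HDR color tags
        "-color_trc", "smpte2084",     # PQ transfer function
        "-colorspace", "bt2020nc",
        "-b:v", "40M",                 # example target bitrate
        "-c:a", "copy",
        dst,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    encode_hdr_hevc("graded_master.mov", "delivery_hdr.mp4")
```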


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 10 years later.

Avid MediaCentral Panel

Avid MediaCentral Panel Supports Adobe After Effects, Photoshop

Production teams working in broadcast can now bring graphic design work directly into Avid MediaCentral through the MediaCentral panel, which now includes support for Adobe Photoshop and After Effects.

The lightweight software plugin removes the manual steps graphic designers need to take in order to contribute to large-scale production projects and keep track of their content. Adobe Photoshop and After Effects users can store their photos, clips, graphical elements and other content for access anytime on the open MediaCentral platform.

With access to Avid Nexis storage through the MediaCentral panel, graphic designers can store and catalog their assets and projects, enabling teams to easily access, edit on the fly and send content back into Avid production workflows. This allows teams to instantly locate individual design elements in archived content, eliminating productivity drains and the need to save files on a separate storage platform.

“Until now, graphic designers have been kept at the periphery of news and post production teams. With support for Photoshop and After Effects, the MediaCentral panel opens the door to bring them right into the mix and make them part of the production workflow,” says Raúl Alba, director of solutions marketing, media and cloud, Avid. “With continuous third-party integrations coming for the open MediaCentral platform, media enterprises will continue to widen their circle of collaboration and contribution with a lot less complexity to generate content faster, increasing production output and efficiency.”

In addition, Media Composer and Adobe Premiere Pro editors can take advantage of MediaCentral’s capabilities to integrate within existing workflows, providing a seamless editing experience. This offers increased flexibility by opening workflows directly into the production environment, providing an unprecedented level of access and real-time collaborative power. Creatives can also preview MediaCentral-managed assets directly in Adobe Photoshop and After Effects without having to import media.

“Making the connection between Avid and Adobe users is an opportunity for significant productivity gains for media production teams, streamlining processes that can be frustratingly manual,” explains Adobe’s Van Bedient, who is director of strategic business development. “Our goal is to help increase collaboration and improve how graphic designers, editors and other content creators can browse, search, edit, share and distribute content more effortlessly and efficiently.”

 

Adobe Premiere

Digital Anarchy Updates AI Search Tool for Adobe Premiere

Digital Anarchy has released PowerSearch 3.0, a new update to its intelligent search engine for Adobe Premiere editors. Designed to scour video sequences for dialogue, PowerSearch integrates directly within Premiere, enabling editors to quickly search an entire project or Premiere Production for dialogue and instantly locate specific clips and sequences based on those keyword searches.

Adobe Premiere

PowerSearch takes advantage of transcripts generated by either Digital Anarchy’s Transcriptive A.I. transcription technology or by Adobe’s new transcription service to find dialogue and phrases. It’s the editor’s choice on which AI service to use.

To further simplify searching Premiere projects, the latest version of PowerSearch offers editors the ability to use common search engine commands, such as minus signs and quotes, for more precise searching. For editors with hundreds of hours of video, PowerSearch 3.0 will scour an entire Premiere project, making it easy for them to find exactly what they’re looking for by showing only relevant search results.

According to Digital Anarchy, the new version of PowerSearch offers a significant performance upgrade with measurable benefits over Premiere’s internal search tools, especially for editors working with transcripts for all their footage and sequences. Editors can either use transcripts generated by Transcriptive A.I. or Adobe Sensei (via SRT).
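
PowerSearch’s internals aren’t public, but the basic idea of searching caption files with quote and minus semantics is easy to illustrate. The hypothetical Python sketch below parses SRT captions with a small regex and returns the timestamped cues that contain a quoted phrase while excluding any cue containing a minus-prefixed term; it is a conceptual stand-in, not Digital Anarchy’s code.

```python
# Conceptual sketch: search SRT caption cues with quoted-phrase and
# minus-term semantics, returning matching timestamps. Not PowerSearch's
# actual implementation.
import re

CUE_RE = re.compile(
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n(.*?)(?:\n\n|\Z)",
    re.S,
)

def parse_srt(text: str) -> list[tuple[str, str]]:
    """Return (start_timecode, cue_text) pairs from an SRT file's contents."""
    text = text.replace("\r\n", "\n")
    return [(m.group(1), " ".join(m.group(3).split()))
            for m in CUE_RE.finditer(text)]

def search(cues: list[tuple[str, str]], query: str) -> list[tuple[str, str]]:
    """Quoted substrings must appear; terms prefixed with '-' must not."""
    phrases = re.findall(r'"([^"]+)"', query)
    remainder = re.sub(r'"[^"]+"', " ", query)
    exclude = [t[1:] for t in remainder.split() if t.startswith("-")]
    include = [t for t in remainder.split() if not t.startswith("-")] + phrases

    hits = []
    for start, line in cues:
        lower = line.lower()
        if all(term.lower() in lower for term in include) and \
           not any(term.lower() in lower for term in exclude):
            hits.append((start, line))
    return hits

if __name__ == "__main__":
    with open("interview_cam_a.srt", encoding="utf-8") as f:
        cues = parse_srt(f.read())
    for tc, text in search(cues, '"color grade" -rehearsal'):
        print(tc, text)
```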

Adobe Premiere

The new version of PowerSearch also enables faster indexing and search processing along with additional new search tools. Enhanced integration with Premiere’s Source and Program panels means clicking on search results automatically opens up clips and sequences directly where the dialogue was spoken.

Here’s a short list of some of the key new features in PowerSearch 3.0:

  • Ability to index SRTs: Users can now search all captions simultaneously, versus other options such as Adobe Text Panel, which only allows users to search one SRT at a time.
  • Support for Adobe Sensei transcripts: By importing SRTs into Premiere or Transcriptive Rough Cutter, Adobe Sensei transcripts can be searched.
  • Search improvements: Ability to search with quotes for more accurate results.
  • Support for Premiere Productions: Index individual projects in Premiere Productions and easily switch between them.
  • Project switching: New buttons on both screens make it easy to load the index for the active project.
  • Increased indexing and searching speed.
  • Instant loading:  Users can now load the database without having to re-index. This eliminates the need to re-index the same project with a different name.

PowerSearch 3.0 is available now and is free for all users of Transcriptive Rough Cutter ($199). For new customers or those not using Transcriptive Rough Cutter with Premiere, PowerSearch 3.0 is priced at $99. A free trial of PowerSearch 3.0 is also available here. PowerSearch 3.0 is compatible with Premiere Pro 2020 and above (14.0 and above).