
Category Archives: post production

Posting Life in Six Strings With Kylie Olsson

By Oliver Peters

Whether you’re a guitar nerd or just into rock ‘n’ roll history, learning what makes our music heroes tick is always entertaining. Music journalist and TV presenter Kylie Olsson started a YouTube channel during the pandemic lockdown, teaching herself how to play guitar and reaching out to famous guitarists she knew. This became the concept for a TV series called Life in Six Strings With Kylie Olsson that airs on AXS TV. The show is in the style of Comedians in Cars Getting Coffee, with Olsson exploring the passions behind these guitarists and picking up a few guitar pointers along the way.

James Tonkin

I spoke with James Tonkin and Leigh Brooks about the post workflow for these episodes. Tonkin is founder of Hangman in London, which handled the post on the eight-part series. He was also the director of photography for the first two episodes and has handled the online edit and color grading for all of the episodes. Leigh Brooks of Firebelly Films was the offline (i.e. creative) editor on the series, starting with episode three. Together they have pioneered an offline-to-online post workflow.

Let’s find out more…

James, how did you get started on this project?
James Tonkin: Kylie approached us about shooting a pilot for the series. We filmed that in Nashville with Joe Bonamassa and it formed the creative style for the show. We didn’t want to just fixate on the technical side of the guitar and tone of these players, but their geographical base — we wanted to explore the city a little bit. We had to shoot it very documentary style but wrap it up into a 20-25 minute episode. No pre-lighting, just a tiny team following her around, interacting with these people.

Then we did a second one with Nuno Bettencourt and that solidified the look of the show during those two initial episodes. She eventually got distribution through AXS TV in the States for the eight-part series. I shot the first two episodes, and the rest were shot by a US-based crew, which followed the production workflow that we had set up. Not only the look and retaining the documentary format, but also maintaining the highest production value we could give it in the time and budget that we’re working with.

We chose to shoot anamorphic with a cinematic aspect ratio, because it’s slightly different from the usual off-the-cuff reality TV look. Also whenever possible, record in a raw codec, because we (Hangman) were doing all of the post on it, and me specifically being the colorist.

I always advocate for a raw workflow, especially something in a documentary style. People are walking from daylight into somebody’s house and then down to a basement, basically following them around. And Kylie wants to keep interacting with whomever she’s interviewing without needing to wait for cameras to stop and rebalance. She wants to keep it flowing. So when it comes to posting that, you’ve got a much more robust digital negative to work with [if it was shot as camera raw].

Leigh Brooks

What was the workflow for the shows and were there any challenges?
Leigh Brooks: The series was shot mainly with Red and Canon cameras as 6K anamorphic files. Usually, the drive came to me, and I would transcode the rushes or create proxy files and then send the drive to James. The program is quite straightforward and narrative-based, without much scope for doing crazy things with it.

It’s about the nuts and bolts of guitars and the players that use them. But each episode definitely had its own little flavor and style. Once we locked the show, James took the sequence, got hold of the rushes and then got to work on the grade and the sound.

What Kylie’s pulled off on her own is no small feat. She’s a great producer, knows her stuff and really does the research. She’s so passionate about the music and the people that she’s interviewing and that really comes across. The Steve Vai episode was awesome. He’s very holistic. These people dictate the narrative and tell you where the edit is going to go. Mick Mars was also really good fun. That was the trickiest show to do because the A- and B-side camera set-up wasn’t quite working for us. We had to really get clever in the edit.

Resolve is known for its finishing and color grading tools, but you used it to edit the offline as well. Why?
Tonkin: I’ve been a longtime advocate of working inside of Resolve, not just from a grading perspective, but editorial. As soon as the Edit page started to offer me the feature set that we needed, it became a no-brainer that we should do all of our offline in Resolve whenever possible.

On a show like this, I’ve got about six hours of online time and I want to spend the majority being as creative as I can. So, focusing on color correction, looking at anything I need to stabilize, resize, any tracking, any kind of corrective work — rather than spending two or three hours conforming from one timeline into another.

The offline on this series was done in Resolve, except for the first episode, which was cut in Apple Final Cut Pro X. I’m trying to leave editors open to the choice of the application they like to use. My gentlemen’s agreement with Matt [Cronin], who cut the first pilot, was that he could cut it in whatever he liked, as long as he gave me back a .drp (DaVinci Resolve project) file. He loves Final Cut Pro X because that’s what he’s quickest at. But he also knows the pain that conforms can be. So he handled that on his side and just gave me back a .drp file. So it was quick and easy.

From Episode 3 onwards, I was delighted to know that Leigh was also based in Resolve as his primary workflow. Everything just transfers and translates really quickly. Knowing that we had six more episodes to work through together, I suggested things that would help us a lot, both for picture and for audio, which was also being done here in our studio. We’re generating the 5.1 mix.

Brooks: I come from an Avid background. I was an engineer initially before ever starting to edit. When I started editing, I moved from Avid to Final Cut Pro 7 and then back to Avid, after which I made the push to go to Resolve. It’s a joy to edit on and does so many things really well. It’s become my absolute workhorse. Avid is fine in a multi-user operation, but now that doesn’t really matter. Resolve does it so well with the cloud management, and I own the two editor keyboards.

You mentioned cloud. Was any of that a factor in the post on Life in Six Strings?
Tonkin: Initially, when Leigh was reversioning the first two episodes for AXS TV, we were using his Blackmagic Cloud account. But for the rest of the episodes, we were just exchanging files. Rushes either came to me or would go straight to Leigh. He makes his offline cut and then the files come to me for finishing, so it was a linear progression.

However, I worked on a pilot for another project where every version was effectively a finished online version. And so we used Blackmagic Cloud for that all the way through. The editor worked offline with proxies in Resolve. We worked from the same cloud project and every time he had finished, I would log in and switch the files from proxy to camera originals with a single click. That was literally all we had to do in terms of an offline-to-online workflow.
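For readers curious what that proxy-to-original switch looks like under the hood: in the Resolve UI it is a playback-level proxy preference, but the same relink can also be scripted. Below is a minimal sketch using Resolve’s Python scripting API; the proxy folder path and the assumption that proxies share their camera originals’ file names are hypothetical, and API details can vary between Resolve versions.

```python
# Minimal sketch: attach (or detach) proxy media for every clip in the media
# pool's root folder via DaVinci Resolve's Python scripting API.
# PROXY_DIR and the matching-file-name convention are assumptions.
import os
import DaVinciResolveScript as dvr

PROXY_DIR = "/Volumes/Proxies/CurrentShow"  # hypothetical proxy location

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
root_folder = project.GetMediaPool().GetRootFolder()

for clip in root_folder.GetClipList():
    base = os.path.splitext(clip.GetName())[0]
    proxy_path = os.path.join(PROXY_DIR, base + ".mov")
    if os.path.exists(proxy_path):
        clip.LinkProxyMedia(proxy_path)   # play back the lightweight proxy
    else:
        clip.UnlinkProxyMedia()           # fall back to the camera original
```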

Brooks: I’m working on delivering a feature-length documentary for [the band] Nickelback that’s coming out in cinemas later in March. I directed it, cut it in Avid, and then finished in Resolve. My grader is in Portsmouth, and I can sit here and watch that grade being done live, thanks to the cloud management. It definitely has a few snags, but they’re on it. I can phone up Blackmagic and get a voice — an actual person to talk to that really wants to fix my problem.

You’ve both worked with a variety of other nonlinear editing applications. How do you see the industry changing?
Tonkin: Being in post for a couple of decades now and using Final Cut Studio, Final Cut Pro X and a bit of Premiere Pro throughout the years, I find that the transition from offline to online starts to blur more and more these days. Clients watching their first pass want to get a good sense of what it should look like with a lot of finishing elements in place already. So you’re effectively doing these finishing things right at the beginning.

It’s really advantageous when you’re doing both in Resolve. When you offline in a different NLE, not all of that data is transferred or correctly converted between applications. By both of us working in Resolve, even simple things you wouldn’t think of, like timeline markers, come through. Maybe he’s had some clips that need extra work. He can leave a marker for me and that will translate through. You can fudge your way through one episode using different systems, but if you’re going to do at least six or eight of them — and we’re hopefully looking at a season two this year — then you want to really establish your workflow upfront just to make things more straightforward.

Brooks: Editing has changed so much over the years. When I became an engineer, it was linear and nonlinear, right? I was working on the James Bond film, The World Is Not Enough, around 1998. One side of the room was conventional — Steenbecks, bins, numbering machines. The other side was Avid Media Composer. We were viewing 2K rushes on film, because that’s what you can see on the screen. On Avid it was AVR-77. It’s really interesting to see it come full circle. Now with Resolve, you’re seeing what you need to see rather than something that’s subpar.

I’d say there are a lot of editors who are “Resolve curious.” If you’re in Premiere Pro you’re not moving [to a different system], because you’re too tied into the way Adobe’s apps work. If you know Premiere, you know After Effects and are not going to move to Resolve and relearn Fusion. I think more people would move from Avid to Resolve, because simple things in Resolve are very complicated in Avid — the effects tab, the 3D warp and so on.

Editors often have quite strange egos. I find the incessant arguing between platforms is just insane. It’s this playground kind of argument about bloody software! [laugh] After all, these tools are all there to tell stories.


Oliver Peters is an award-winning editor/colorist working in commercials, corporate communications, television shows and films.

Creating Titles for Netflix’s Avatar: The Last Airbender

Method Studios collaborated with Netflix on the recently released live-action adaptation of the series, Avatar: The Last Airbender. The series, developed by Albert Kim, follows the adventures of a young Airbender named Aang, and his friends, as they fight to end the Fire Nation’s war and bring balance to the world. Director and executive producer Jabbar Raisani approached Method Studios to create visually striking title cards for each episode — titles that not only nodded to the original animated series, but also lived up to the visuals of the new adaptation.

The team at Method Studios, led by creative director Wes Ebelhar, concepted and pitched several different directions for the title before deciding to move forward with one called Martial Arts.

“We loved the idea of abstracting the movements and ‘bending’ forms of the characters through three-dimensional brushstrokes,” says Ebelhar. “We also wanted to create separate animations to really highlight the differences between the elements of air, earth, fire and water. For example, with ‘Air,’ we created this swirling vortex, while ‘Earth’ was very angular and rigid. The 3D brushstrokes were also a perfect way to incorporate the different elemental glyphs from the opening of the original series.”

Giving life to the different elemental brushstrokes was no easy task. “We created a custom procedural setup in Houdini to generate the brushstrokes, which was vital for giving us the detail and level of control we needed. Once we had that system built, we were able to pipe in our original previz, and the brushstrokes matched the timing and layouts perfectly. The animations were then rendered with Redshift and brought into After Effects for compositing. The compositing ended up being a huge task as well,” explains Ebelhar. “It wasn’t enough to just have different brush animations for each element; we wanted the whole environment to feel unique for each — the Fire title should feel like it’s hanging above a raging bonfire, while Water should feel submerged with caustics playing across its surface.”

Ebelhar says many people were involved in bringing these titles to life and gives “a special shout out to Johnny Likens, David Derwin, Max Strizich, Alejandro Robledo Mejia, Michael Decaprio and our producer Claire Dorwart.”


Telestream to Intro AI-Powered Tools at NAB 2024

At NAB 2024, Telestream will introduce a new AI-powered suite of media processing tools designed to change how media pros ingest, enhance, and deliver content, optimizing every step for speed, quality and efficiency across the media production life cycle.

The industry’s shift to remote production presents significant challenges for media companies, particularly in accessing high-resolution, mezzanine content. Telestream’s GLIM as a Service, a new cloud-based solution, addresses this issue by offering instant playback of content in any format through a web browser.

This service streamlines remote content access, eliminating the need for extensive downloads or specialized playback hardware. By enabling quicker access and simplifying the production process, GLIM as a Service, according to Telestream, not only accelerates production workflows but also reduces operational costs by eliminating the reliance on physical hardware and streamlining content review and approval processes.

AI-Powered Tools
Telestream is introducing a suite of AI-powered media processing tools, marking a significant advancement in the production and distribution of media content. These solutions are designed to empower production teams, enabling them to produce and distribute high-quality content more efficiently and swiftly than ever before, aligning with the demand for fast turnaround times across diverse platforms.

  • Automated Workflow Creation: By leveraging artificial intelligence, Telestream’s Vantage Workflow Designer automates the configuration of media processing workflows. Telestream says this drastically reduces manual interventions, streamlines operations and minimizes errors, significantly speeding up the production cycle.
  • Intelligent Quality Control (QC): Telestream’s AI-driven QC tools automate the process of ensuring consistent content quality across large volumes of media. This automation supports the delivery of high-quality content at the speed demanded by multiple platforms, eliminating the scalability challenges of manual QC.
  • Efficient Captioning and Subtitling: The integration of AI also extends to captioning and subtitling processes, making them faster and more efficient. This not only enhances content accessibility and global reach but also ensures that content can be quickly turned around to meet the immediate needs of a diverse and widespread audience.

Simplified Adoption and Integration
Understanding the industry’s hesitation toward complex technology adoption, Telestream says it has focused on making its advanced AI solutions accessible and easy to integrate. This approach lowers the barrier to adopting these technologies, enabling media entities to adapt and innovate quickly.

Quality Control
Telestream is offering updates to its Qualify QC:

  • IMF Compliance: Enhanced with Netflix Photon support, ensuring Interoperable Master Format (IMF) packages meet critical industry standards.
  • Harding FPA Test: Incorporates detection capabilities for potentially epileptic content, prioritizing viewer safety.
  • Dolby E Presence and Dolby Vision Validation: Verifies the inclusion of Dolby E audio and the accuracy of Dolby Vision metadata, guaranteeing top-notch audiovisual experiences.
  • Rude Word Detection: A new tool to screen and flag unsuitable language, ensuring content suitability for all audiences.

These enhancements to Qualify QC reflect Telestream’s commitment to advancing quality control processes, making it simpler for media professionals to deliver content that is not only compliant but is also of the highest quality and delivers the best viewer experience.

Live Capture in the Cloud
Telestream is introducing a new cloud-based Live Capture as a Service offering that is designed to simplify the live capture of content from any location in real time, allowing production teams to bypass the traditional hurdles of remote setup and maintenance. With this solution, media companies can overcome the limitations of traditional physical infrastructure, facilitating a faster transition from live capture to broadcast and optimizing production workflows. This new flexibility not only accelerates content delivery but also empowers companies to capture and monetize additional content.

Emerging Protocols
Telestream is introducing the Inspect Monitoring Platform, a monitoring solution crafted for SMPTE ST 2110, SRT and NDI protocols. The platform offers a comprehensive solution for continuous media stream integrity monitoring, in-depth issue analysis and strategic optimization of broadcasting and streaming operations. This approach enables production companies to detect, diagnose and optimize high-quality content delivery across all platforms and protocols.

Managing Media Storage via Diva 9
As the media industry transitions to cloud storage, organizations have to navigate integrating cloud solutions with existing infrastructures, all while safeguarding their assets and ensuring operational continuity. By adopting a hybrid storage strategy, these organizations can strike a balance between ensuring seamless access to content, optimizing storage costs and implementing robust disaster recovery protocols. The challenge lies in executing this balance effectively.

Diva 9 addresses these critical pain points associated with moving content to the cloud by offering a seamless, hybrid content management solution. It facilitates the smart transition of media assets between on-premises and cloud environments, leveraging intelligent media storage policies, advanced Elasticsearch search functions and integrations with MAM, automation and other cloud systems. This approach ensures the secure and scalable storage of content and improves accessibility and cost-effectiveness.


Perrino

ColorNation Adds Colorists Mary Perrino and Ana Rita

Remote color service ColorNation has added two new colorists to its roster — Mary Perrino and Ana Rita.

Perrino is a veteran New York-based colorist who has worked out of her own studio, La Voglia, for nearly a decade. Initially trained as a cinematographer at NYU’s Tisch School of the Arts, she segued into color as her interest in post grew. Collaborating on everything from indie features to commercials, her work as a color artist took on a life of its own, allowing her to elevate but not overpower the visuals with which she’s entrusted. Her reel includes spots for brands like Tiffany, DKNY, Steve Madden, Pink, Canon and Google; ColorNation marks her first representation agreement.

Rita has worked in post production for almost a decade. Based in her native Portugal, she’s worked on short films and commercials, handling assignments from some of the largest agencies in the world and for some of the biggest global brands. During a stint in New York, she worked on several long-form projects, including an indie feature and the YouTube series Made in America.

With a strong representation in food and beverage work, Rita is also adept at lifestyle and fashion spots and has done work in the music video space, with evocative grading seen in videos for indie singer and guitarist Rorey and brightly lit work for the rising jazz fusion saxophone star Grace Kelly.

“Adding Mary and Ana to our roster is part of our plan to offer ColorNation clients access to a diverse talent pool, located in different regions around the world,” says founder/EP Reid Brody. “Both of these artists have amazing showreels, and their work fits perfectly with what the marketplace is looking for today – colorists with a point of view, with an understanding of how to enhance the work and with a wide range of experience in terms of content categories and visual styles.”

Perrino says she joined the roster because she views Brody’s approach to the business as being in step with the times. “What he’s doing with ColorNation is unique,” she observes. “I’ve never signed with anyone before because nothing has ever felt right. My independence is incredibly precious, and I was seeking a relationship that wouldn’t change how I do business, but rather build upon it.”

Perrino says her path to the color suite seems almost pre-ordained: “I enjoyed post-processing photographs and video from a young age,” she recalls. During her years at NYU studying cinematography, she adds, “peers appreciated my aesthetic, and soon realized a huge part of the look I was achieving was through color, so they started asking me to grade their projects.” Once she mastered Resolve, she says, “my aesthetic ideas could flow easily, and I fell even more in love with color as a craft.”

Rita came across ColorNation while researching independent color services and was already looking for a remote option that would allow her to expand her client base and the kinds of projects she was handling. A social media post from current ColorNation artist Vincent Taylor led her to Brody.

“What interests me most about color is its ability to shape the viewer’s emotions,” explains Rita. “It’s truly powerful how subtle adjustments can evoke such varied feelings. Additionally, I find the mathematical aspects fascinating, along with delving into the intricacies of different color spaces and discovering myriad tricks that can yield diverse and impactful results.”

Perrino and Rita join a ColorNation roster that includes colorists Gino Amadori, Cory Berendzen, Calvin Bellas, Yohance Brown, Ben Federman, Andrew Francis, Heather Hay, Lea Mercado, Mark Todd Osborne, Matthew Rosenblum, Vincent Taylor and Matt West.


Rodeo FX Adds Ana Escorse To Lead New Color Suite

VFX, post production, animation and experiential services provider Rodeo FX has added senior colorist Ana Escorse to lead its new color grading suite.

Escorse joins Rodeo FX from Alter Ego. Before that, she did stints at Studio Feather, Nice Shoes and Frame Discreet. She started her career in color grading as a color assistant at Sim Post (now part of Streamland Media). Escorse’s work on Lovezinho earned her the Music Video award at the 2022 FilmLight Colour Awards. Then she joined the 2023 jury panel alongside leading creatives and DPs such as Lawrence Sher, ASC; Greig Fraser, ACS, ASC; and Natasha Braier, ASC, ADF.

By adding color grading to its roster of services, Rodeo FX can now serve its clients’ projects from start to finish. The new suite, located in Toronto, is equipped with FilmLight Baselight. Escorse, who has been using Baselight for many years, will work either remotely or on-site in Toronto.

“Baselight is widely recognized and respected in the film and television industry and allows me to offer our clients and collaborators the most advanced features and highest quality image processing available in post,” Escorse says. “FilmLight’s commitment to continuously developing new technologies and features as well as Baselight’s customization and control make it a very efficient and reliable tool, allowing me to focus on the creative process and client collaboration.”


HPA Tech Retreat 2024: Networking and Tech in the Desert

By Randi Altman

Late last month, many of the smartest brains in production and post descended on the Westin Rancho Mirage Golf Resort & Spa in Palm Springs for the annual HPA Tech Retreat. This conference is built for learning and networking; that’s what it does best, and it starts early. The days begin with over 30 breakfast roundtables, where hosts dig into topics — such as “Using AI/ML for Media Content Creation” and “Apprenticeship and the Future of Post” — while the people at their table dig into eggs and coffee.

Corridor Digital’s Niko Pueringer

The day then kicks further into gear with sessions; coffee breaks inserted for more mingling; more sessions; networking lunches; a small exhibit floor; drinks while checking out the tools; dinners, including Fiesta Night and food trucks; and, of course, a bowling party… all designed to get you to talk to people you might not know and build relationships.

It’s hard to explain just how valuable this event is for those who attend, speak and exhibit. Along with Corridor Digital’s Niko Pueringer talking AI as well as the panel of creatives who worked on Postcard from Earth for the Las Vegas Sphere, one of my personal favorites was the yearly Women in Post lunch. Introduced by Fox’s Payton List, the panel was moderated by Rosanna Marino of IDC LA and featured Daphne Dentz from Warner Bros. Discovery Content Creative Services, Katie Hinsen from Marvel and Kylee Peña from Adobe. The group talked about the changing “landscape of workplace dynamics influenced by #metoo, the arrival of Gen Z into the workforce and the ongoing impact of the COVID pandemic.” It was great. The panelists were open, honest and funny. A definite highlight of the conference.

We reached out to just a few folks to get their thoughts on the event:

Light Iron’s Liam Ford
My favorite session by far was the second half of the Tuesday Supersession. Getting an in-depth walk-through of how AI is currently being used to create content was truly eye-opening. Not only did we get exposed to a variety of tools that I’ve never even heard of before, but we were given insights on what the generative AI components were actually doing to create these images, and that shed a lot of light on where the potential growth and innovation in this process is likely to be concentrated.

I also want to give a shoutout to the great talk by Charles Poynton on what quantum dots actually are. I feel like we’ve been throwing this term around a lot over the last year or two, and few people, if any, knew how the technology was constructed at a base layer.

Charles Poynton

Finally, my general takeaway was that we’re heading into a bit of a Wild West over the next three years.  Not only is AI going to change a lot of workflows, and in ways we haven’t come close to predicting yet, but the basic business model of the film industry itself is on the ropes. Everyone’s going to have to start thinking outside the box very seriously to survive the coming disruption.

Imax’s Greg Ciaccio
Each year, the HPA Tech Retreat program features cutting-edge technology and related implementation. This year, the bench of immensely talented AI experts stole the show.  Year after year, I’m impressed with the practical use cases shown using these new technologies. AI benefits are far-reaching, but generative AI piqued my interest most, especially in the area of image enhancement. Instead of traditional pixel up-rezing, AI image enhancements can use learned images to embellish artists’ work, which can iteratively be sent back and forth to achieve the desired intent.

It’s all about networking at the Tech Retreat.

3 Ball Media Group’s Neil Coleman
While the concern about artificial intelligence was palpable in the room, it was the potential in the tools that was most exciting. We are already putting Topaz Labs Video AI into use in our post workflow, but the conversations are what spark the most discovery. Discussing needs and challenges with other attendees at lunch led to options that we hadn’t considered when trying to get footage from the field back to post. It’s the people that make this conference so compelling.

IDC’s Rosanna Marino
It’s always a good idea to hear the invited professionals’ perspectives, knowledge and experience. However, I must say that the 2024 HPA Tech Retreat was outstanding. Every panel, every event was important and relevant. In addition to all the knowledge and information taken away, the networking and bonding was also exceptional.

Picture Shop colorist Tim Stipan talks about working on the Vegas Sphere.

I am grateful to have attended the entire event this year. I would have really missed out otherwise. The variation of topics and how they all came together was extraordinary. The number of attendees gave it a real community feel.

IDC’s Mike Tosti
The HPA Tech Retreat allows you to catch up on what your peers are doing in the industry and where the pitfalls may lie.

AI has come a long way in the last year, and it is time we start learning it and embracing it, as it is only going to get better and more prevalent. There were some really compelling demonstrations during the afternoon Supersession.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 


GoPro Hero12

Review: GoPro Hero12 Black Action Camera

By Brady Betzel

The updated GoPro Hero12 Black introduces a few features that make it a must-buy for very specific professional-level users. I love it when GoPro releases updates to its cameras and software. It’s always a step forward in quality and features while keeping the familiar form factor that has made GoPro the go-to action camera for years. The GoPro Hero12 Black is no exception, with features like the new GP-Log color profile and wireless audio recording. It’s even better when you bundle it with the Max Lens Mod 2.0.

GoPro Hero12

Whether you are mounting dozens of GoPros on loaders and excavators with an eye toward syncing them in Avid Media Composer later, or you need to closely match color between the Hero12 and a Blackmagic RAW clip, the Hero12 Black is an upgrade you’ll want to consider if you are a pro looking to streamline your workflow. And if you haven’t already subscribed to the GoPro Premiere subscription service, grab yourself a one-year subscription for the sale price of $24.99.

GoPro Hero12 Black Edition Upgraded Specifications

  • Mounting – Built-in mounting with folding fingers, ¼-20 mount
  • Image sensor – 1/1.9″ CMOS, 27.6 MP active pixels (5599×4927)
  • Lens Aperture – F2.5
  • FOV – 156° in 8:7 aspect ratio (35mm Equivalent Focal Length)
    • Min = 12mm
    • Max = 39mm
  • Video Resolutions and Frame Rates
    • 5.3K (8:7) 30/25/24 fps
    • 5.3K (16:9) 60/50/30/25/24 fps
    • 4K (8:7) 60/50/30/25/24 fps
    • 4K (9:16) 60/50/30/25 fps
    • 4K (16:9) 120/100/60/50/30/25/24 fps
    • 2.7K (4:3) 120/100/60/50 fps
    • 2.7K (16:9) 240/200 fps
    • 1080p (9:16) 60/50/30/25 fps
    • 1080p (16:9) 240/200/120/100/60/50/30/25/24 fps
  • Video stabilization – HyperSmooth 6.0
  • Aspect ratio – 16:9, 9:16, 4:3, 8:7
  • HDR video – 5.3K (16:9) 30/25/24 fps; 4K (8:7) 30/25/24 fps; 4K (16:9) 60/50/30/25/24 fps
  • Video compression standard – H.265 (HEVC)
  • Color video bit depth – 8-bit/10-bit (4K and higher)
  • Maximum video bit-rate – 120Mbps
  • Zoom (Video) – Up to 2x
  • Slo-Mo – 8x (2.7K, 1080p); 4x (4K); 2x (5.3K)
  • Live streaming – 1080p60 with HyperSmooth 4.0 + 1080p60 recording
  • Webcam mode – up to 1080p30
  • Timecode synchronization – Yes
  • Wireless Audio – Support for AirPods and other Bluetooth headsets
  • GP-Log encoding with LUTs

There are a lot of specs packed into this tiny little GoPro. But as I mentioned earlier, the Hero12 Black has a few very specific features that pros and semi-pros should really love.

Let’s dig in…

GP-Log Color Profile
First up is the highly sought after (at least by me) GP-Log color profile. I am an online editor, so I deal with video finishing and color correction. From painting out camera crews to stabilizing to noise reduction, I try to make the end product as flawless as possible before it goes to air. So cameras with low noise floors, low moiré and natural-looking stabilization go a long way in my book.

GoPros have been a staple in docuseries and unscripted television shows for years. They can be easily hidden in cars for OTF interviews or discussions between cast members or even buried in the snow to catch a wild animal walking by. If the camera breaks, it’s not the end of the world because they are reasonably priced. The hard part has always been matching the look of an action-cam like a GoPro to that of a higher-end camera system that uses full-frame sensors and multi-thousand-dollar lenses. GoPro has attempted to make that a little easier with the newly added GP-Log color profile.

A Log color profile is a way for the camera to record more steps in dynamic range (think highlights that don’t blow out or shadows that retain details). Log profiles are not meant for everyday filmmakers because, at times, it can be tricky to color-correct Log footage properly versus recording in standard Rec. 709 color space or even in the GoPro HDR color profile. Pros use Log profiles to aid in camera color and aesthetic-matching, with the hope of giving the audience a more filmic feel, with more details in shots with high contrast. This helps the audience not to notice a change from an ARRI Amira to a GoPro Hero12 Black, for example.
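To see why a log curve preserves highlight detail, consider a toy example. The snippet below compares a naive display-referred encode (which clips anything above 1.0) with a generic logarithmic encode; the curve and constants are purely illustrative and are not GoPro’s published GP-Log transfer function.

```python
# Illustrative only: a generic logarithmic encode, not GoPro's actual GP-Log
# curve. Shows why log profiles keep highlight stops distinct that a straight
# display-referred encode would clip to the same white value.
import math

def encode_clipped(x):
    # naive encode: anything over 1.0 scene-linear clips to white
    return min(x, 1.0)

def encode_log(x, grey=0.18, stops=8.0):
    # generic log curve: compresses several stops above middle grey into <1.0
    return math.log2(x / grey + 1.0) / stops

for stops_over_grey in range(0, 6):
    x = 0.18 * (2 ** stops_over_grey)  # scene-linear value, stops over 18% grey
    print(f"{stops_over_grey} stops over grey: "
          f"clipped={encode_clipped(x):.2f}  log={encode_log(x):.2f}")
```

In the clipped encode, everything from about three stops over grey upward lands on the same white value, while the log encode keeps those stops distinct, which is exactly the latitude a colorist wants when working “behind the LUT.”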

As I was working with the GoPro Hero12 Black footage in Blackmagic’s DaVinci Resolve 18.6.5, I was monitoring the footage on a large OLED monitor through a Blackmagic DeckLink 4K Extreme over HDMI. Looking at GoPro footage on a phone or a small tablet does not give the entire story. It is essential to view your footage through proper I/O hardware on a professional monitor — preferably color-calibrated. Otherwise, you might miss crucial issues, like noise in the shadows.

GoPro Hero12

In addition, on the same computer but with a separate screen, I monitored the video signal using Nobe’s OmniScope 1.10.117. OmniScope is an amazing software-based scope that can be used in conjunction with your nonlinear editor or color-correcting software like Resolve. It is giving hardware scopes a huge run for their money these days, and I wouldn’t be surprised if these types of scopes took over. My base computer system includes an AMD Ryzen 9 5950X processor, an Asus ProArt motherboard, 64GB RAM and an Nvidia RTX 4090 Founder’s Edition GPU.

How well does the new GoPro Hero12 Black Edition’s GP-Log color profile work? When looking at footage shot in GP-Log through color scopes, there is more detail retained in the shadows and highlights, but it really isn’t enough to warrant the extra work to get there. Instead, if you turn down the sharpness in GoPro’s HDR mode, you can get to a similar starting point as something shot in GP-Log. Aside from that, one of the benefits of using GP-Log and applying the GoPro LUT is the ability to color “behind the LUT” to expand the highlights or dial in the shadows. But again, I didn’t see as much value as I had hoped, and I tested color in both DaVinci Wide Gamut and Rec. 709 color spaces. The biggest letdown for me was that the GP-Log footage appeared less detailed than HDR or a standard color profile. And it wasn’t as simple as just increasing the sharpness to match. There is something odd about it; the colors seemed “dense,” but the footage felt soft. I just don’t think the GoPro GP-Log color profile is the panacea I was hoping it would be. Maybe future updates will prove me wrong. For now, the HDR mode with low sharpness seems to be a sweet spot for my work.

Syncing Cameras Via Timecode
Another update to the GoPro Hero12 Black that I was excited to see is the ability to sync cameras via timecode. Maybe 10 or 12 years ago, one of the banes of my existence as an assistant editor was transcoding footage from MP4 to a more edit-friendly codec, like ProRes or DNxHD. This would not only help slower editing systems work with the hundreds of hours of footage we received, but it would also insert actual timecode and tape names/IDs into the clips.

This is a crucial step when working in a traditional offline-to-online workflow process. If you skip this step, it can quickly become a mess. The GoPro Hero12 Black inserts timecode into the file to help with syncing and auto-syncing cameras in your favorite NLE, like Adobe Premiere Pro, Media Composer, Apple FCPX or Resolve. You’ll still need to force a proper tape name/camera name/tape ID to clearly distinguish clips from differing dates/times, but with faster computers, the addition of actual timecode could help eliminate a lot of transcoding.
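For anyone still on that older offline path, the transcode step itself can be scripted. The sketch below shells out to ffmpeg to batch-convert GoPro MP4s to ProRes 422 HQ MOVs with a timecode track; the card path, output folder and fixed start timecode are placeholders, and a real pipeline would typically derive timecode and reel naming from each clip’s metadata or handle it in the NLE.

```python
# Hedged sketch: batch-transcode GoPro MP4s to ProRes 422 HQ MOVs with a
# timecode track using ffmpeg. Paths and the start timecode are placeholders.
import subprocess
from pathlib import Path

SRC = Path("/media/gopro_card/DCIM/100GOPRO")   # hypothetical card mount
DST = Path("/media/edit_drive/transcodes")
DST.mkdir(parents=True, exist_ok=True)

for mp4 in sorted(SRC.glob("*.MP4")):
    out = DST / (mp4.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(mp4),
        "-c:v", "prores_ks", "-profile:v", "3",  # ProRes 422 HQ
        "-c:a", "pcm_s16le",                     # uncompressed audio
        "-timecode", "01:00:00:00",              # placeholder start timecode
        str(out),
    ], check=True)
```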

What’s really smart about GoPro’s timecode sync is the workflow. Jump into the Quik app, find a Hero12 that you want to sync, tap the three-dot drop-down menu and tap “Sync Timecode.” The app then displays a QR code; show it to the powered-on GoPro Hero12 Black. Once recognized, you will get a verification on the GoPro that it has been synced. And that’s it! While this feature is a long time coming, it is a welcome addition that will save tons of time for professional creators who run dozens of cameras simultaneously.

Other Updates
Finally, there are a couple of minor updates that also caught my eye. The addition of the ¼-20 mount between the GoPro folding finger mounts is a huge structural change. It’s something that should have been there from the beginning, and it’s nice not to have to purchase GoPro-specific mounts all the time.

Another great update is the ability to pair AirPods or other Bluetooth audio devices for wireless sound recording and voice control. Keep in mind that when using Bluetooth earbuds with built-in microphones, any noise reduction built into the headphones will be hard-coded into the recorded audio file. But hand it to GoPro to record two channels of audio when using a Bluetooth earbud mic. This way, if your wireless mic signal drops out, you won’t be out of luck. The GoPro’s built-in mic will still be recording.

On the accessory front, if you purchase the newest Max Lens Mod 2.0 with the GoPro Hero12 Black, you’ll be able to take advantage of a few new features. Besides the larger 177-degree field of view when shooting 4K at 60fps, GoPro recently released a software update that allows for using the Max Lens Mod 2.0 in Linear lens mode. This means no fish-eye look! So in addition to the HyperView and SuperView recording modes, you can get an even larger field of view than the standard GoPro Hero12 Black lens in Linear mode.

Something to keep in mind: You cannot record in the GP-Log color profile when using the Max Lens Mod 2.0. Hopefully GoPro will continue to lean into the GP-Log color profile, improve the quality and dynamic range, and add it to the recording ability with the Max Lens Mod 2.0. But for now, the Max Lens Mod 2.0 is a great accessory to put on your wish list.

If the GoPro Hero12 Black is above your price range, or you aren’t sure that you want to give it to your 6-year-old to throw around on the water slide like I did, then there are a few lower-priced options that get you pretty close. The Akaso Brave 7 is waterproof for up to 30 minutes and has up to 4K/30fps video, time lapse, hyperlapse and photo-taking abilities. The Akaso Brave 7 retails for $169.99 and not only comes packed with tons of GoPro-like accessories, but also a wireless shutter remote.

While the video recording quality isn’t at the same level as the Hero12 Black, if you’re looking for a well-rounded but not quite pro-level camera, the Brave 7 might be for you. In fact, I might actually prefer the color of the Brave 7, which feels a little more accurate as opposed to the heavily saturated GoPro. Keep in mind that with lower-priced cameras like the Brave 7, the physical quality can be a little lower, and options like frame rates can be minimal. For instance, the Brave 7 does not record in 24p, lacks 10-bit and does not have the GoPro-style folding fingers or ¼-20 mount.

Summing Up
In the end, the GoPro Hero12 Black is a great update if you have an older-model GoPro… think Hero10 or earlier. And while the battery appears to last longer when recording in cold or imperfect conditions, in my tests I found that heat is still the enemy of the Hero12. Anything above 80 degrees in direct sunlight will limit your recording time. Running it for a couple of my son’s baseball games left me guessing whether I would actually be able to record full games because of the heat.

If you have a GoPro Hero11 Black, then I suggest you skip the Hero12 and grab the Media Mod accessory for your Hero11, which will add a higher-quality mic and external inputs. You could also add some sort of shade to keep your camera cool — there are a lot of interesting 3D-printed products on Etsy. The Hero12 Black no longer has a GPS, so if the graphic overlays or metadata were helpful to you, the Hero11 might be where you should stay for now.

However, if you need the new timecode sync, grab the Hero12 Black. That’s a solid feature for those of us who need to sync multiple GoPros at once. I love the Hero12 Black’s Quik QR code syncing feature. The wireless audio recording is a welcome addition as well, but in my testing, the audio didn’t come out as clean as I had wished for. I think using the built-in or a hard-wired mic is still best.

The GoPro Hero12 Black edition currently retails for $349.99, and the Hero12 Black with Max Lens Mod 2.0 currently retails for $429.98.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and Uninterrupted: The Shop. He is also a member of the PGA. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.


Foundry Intros Modo 17.0, Bundles With Otoy OctaneRender

Foundry has released Modo 17.0, an update to its 3D software that overhauls internal systems to provide performance increases. These enhancements help artists by providing the interactivity necessary for modern asset creation workflows, with an additional focus on quality-of-life features in multiple areas. Foundry has also bundled Otoy’s Prime version of OctaneRender, which gives artists a speed increase of up to 50x over traditional CPU renderers straight out of the box.

“With 3D asset creation becoming widely adopted, performance is paramount for the future of DCC apps,” says Greg Brown, product manager at Foundry. “Modo 17.0 sets a foundation for increased performance now plus further enhancements well into Modo’s future. Additionally, bundling the Prime version of the OctaneRender from Otoy with Modo 17.0 will speed up the entire experience, from modeling to final render, reducing performance barriers for artists.”

Artists working on Apple Silicon machines will see an additional speed increase of 50% on average, thanks to Modo’s new native macOS ARM build.

With overhauled core systems and granular performance updates to individual tools, Modo, says Foundry, is poised to re-envision 3D workflows. The Modo community can expect a return to more frequent releases for Modo in 2024, which will build on the foundation of 17.0 to further accelerate more aspects of Modo. This 3D application is tailored to enhance the capabilities of experts while also making those capabilities easier for novices to use.

Foundry has enhanced several capabilities of Modo’s powerful modeling tools, including:

  • Decal workflow — It’s now faster and easier to use decals and wrap flat images onto complex surfaces with minimal distortion and no UV creation.
  • Primitive Slice — Users can now clone multiple slices of the same shape at once, making it easier to produce complex patterns. A new Corner Radius feature rounds corners on rectangles and squares so artists can make quick adjustments without switching between presets.
  • Mesh Cleanup — With this tool, users can automatically fix broken geometry and gaps so they can stay productive and avoid interrupting the creative flow.
  • Radial Align — Radial Align turns a selection into a flat circle, but artists frequently need a partial radius and not a complete circle for things like arches. Modo 17.0 ships with the ability to create a partial radial alignment.
  • PolyHaul — PolyHaul combines many of the most used modeling tools into one streamlined tool. This means artists can spend less time jumping between separate tools, helping them to stay in the flow.

“We are thrilled to bundle OctaneRender with Modo 17.0, bringing instant access to the industry’s first and fastest unbiased GPU render engine. Our mission is to democratize high-end 3D content creation, enabling anyone with a modern GPU to create stunning motion graphics and visual effects at a fraction of the cost and time of CPU architectures. We are excited to see how Modo artists integrate OctaneRender’s GPU-accelerated rendering platform into their creative process, including the ability to scale large rendering jobs across near-unlimited decentralized GPU nodes on the Render Network,” says Otoy founder/CEO, Jules Urbach.


Masters of the Air: Directors and DP Talk Shoot, VFX and Grade

By Iain Blair

World War II drama Masters of the Air is a nine-episode Apple TV+ limited series that follows the men of the 100th Bomb Group as they conduct perilous bombing raids over Nazi Germany and grapple with the frigid conditions, the lack of oxygen and the sheer terror of combat at 25,000 feet in the air. Starring Austin Butler and Barry Keoghan, it’s the latest project from Steven Spielberg, Tom Hanks and Gary Goetzman, the producing team behind Band of Brothers and The Pacific.

Anna Boden and Ryan Fleck

Ranging in locations from the fields and villages of southeast England to the harsh deprivations of a German POW camp, Masters of the Air is enormous in both scale and scope. It took many years and an army of creatives to bring it to life, among them directors Anna Boden and Ryan Fleck and DP Jac Fitzgerald.

Here, Boden and Fleck (Captain Marvel) talk about the challenges of shooting, editing and posting the ambitious show. In a sidebar, Fitzgerald (True Detective) talks about integrating the extensive VFX and the DI.

After doing Captain Marvel, I guess you guys could handle anything, but this was still a massive project. What were the main challenges?
Anna Boden: We did episodes 5 and 6. I’d say for us, Episode 5 was a big challenge in terms of wrapping our heads around it all. Some of the prep challenges were very big because it’s really a long air battle sequence that takes up almost the entire episode, and we had limited prep and not a ton of time to do previz and work everything out ahead of time. Also, simultaneously, we were prepping Episode 6, which was going to take us on location and to a whole bunch of new spaces that the show had never been to before. Finding those new locations and doing both of those things at once required so much planning, so it was challenging.

How did you handle the big air battle sequence and working with the volume stage?
Boden: You don’t want to show up on the day and wing it. As filmmakers, sometimes it’s really fun to get on-set and block the sequence based on what the actors want to do. But you can’t do that when you’re shooting on a volume stage, where you’re projecting a lot of imagery on the wall around you. You have to plan out so much of what’s going to be there. That was new for us. Even though we’d worked on Captain Marvel and used greenscreen, we’d never used those big-volume LED stages before. It was a really cool learning experience. We learned a lot on the fly and ultimately had fun crafting a pretty exciting sequence.

I assume director Cary Joji Fukunaga and his DP, Adam Arkapaw, set the template in the first four episodes for the look of the whole show, and then you had to carry that across your episodes.
Boden: Yeah. They’d obviously started shooting before us, and so we were studying their dailies and getting a sense of their camera movements and the color palettes and the vibe for the show. It was really helpful. And our DP, Jac Fitzgerald, knows Adam pretty well, so I think that they had a close working relationship. Also, we were able to visit the set while Cary was shooting to get a sense of the vibe. Once we incorporated that, then we were on our own to do our thing. It’s not like we suddenly changed the entire look of the show, but we had the freedom to put our personalities into it.

And one of the great things about the point where we took over is that Episode 5 is its own little capsule episode. We tried to shoot some of the stuff on the base in a similar tone to how they were shooting it. But then, once we got to that monster mission, it became its own thing, and we shot it in our own way. Then, with Episode 6, we were in completely different spaces. It’s a real break from the previous episodes because it’s the midpoint of the season, we’re away from the base, and there’s a big shift in terms of where the story is going. That gave us a little bit of freedom to very consciously shift how we were going to approach the visual language with Jac. It was an organic way to make that change without it feeling like a weird break in the season.

Give us some sense of how integrating all the post and visual effects worked.
Ryan Fleck: We were using the volume stage, so we did have images, and for the aerial battles, we had stuff for the actors to respond to, but they were not dialed in completely. A lot of that happened after the shooting. In fact, most of it did. (Jac can probably help elaborate on that because she’s still involved with the post process for the whole show.) It wasn’t like Mandalorian levels of dialed-in visual effects, where they were almost finished, and the actors could see. In this show, it was more like the actors were responding to previz, but I think that was hugely helpful.

On Captain Marvel, so often actors are just responding to tennis balls and an AD running around the set for eyelines. In this case, it was nice for the actors to see an actual airplane on fire outside their window for their performances to feel fresh.

Did you do a lot of previz?
Fleck: Yeah, we did a lot for those battle sequences in the air, and we worked closely with visual effects supervisor Stephen Rosenbaum, who was integral in pulling all that stuff together.

What did Jac bring to the mix? You hadn’t worked together before, right?
Fleck: No, and we like her energy. She has experience on big movies and small movies, as do we, and we like those sensibilities. But I think she just has a nice, calm energy. She likes to have fun when she’s working, and so do we, but she’s also very focused on executing the plan. She’s an organized and creative brain that we really appreciated.

Boden: I think that we had a lot of the same reference points when we first started talking, like The Cold Blue, an amazing documentary with a lot of footage that was taken up in the planes during World War II. Filmmakers actually were shooting up there with the young men who were on missions in these bomber planes. That was a really important reference point for us in terms of determining where the cameras can be mounted inside one of these planes. We tried as much as possible to keep those very real camera positions on the missions so that it felt as reality-based and as visceral as possible and not like a Marvel movie. We used some of the color palette from that documentary as well.

It was also Jac’s working style to go to the set and think about how to block things in the shot list… not that we need to stick to that. Once we get in there and work it through with the actors, we all become very flexible, and she’s very flexible as well. Our work styles are very similar, and we got on really well. We like our sets to be very calm and happy instead of chaotic, and she has a very calm personality on-set. We immediately hired her to shoot our next feature after this show, so we’re big fans.

Was it a really tough shoot?
Boden: Yeah. We started shooting in July and finished in October. That’s pretty long for two episodes, but COVID slowed it all down.

Fleck: I’ve never shot in London or the UK before, but I loved it. I loved the crews; I loved the locations. We got to spend time in Oxford, and I fell in love with the place. I really loved exploring the locations. But yes, there were challenges. I think the most tedious stuff was the aerial sequences because we had mounted cameras, and it was just slow. We like to get momentum and move as quickly as we can when shooting.

Even though this is TV, you guys were involved in post to some degree, yes? 
Fleck: Yes, we did our director’s cuts, and then Gary kept us involved as the cuts progressed. We were able to get back into the edit room even after we delivered our cuts, and we continued to give our feedback to guide the cuts. Typically, TV directors give over their cuts, and then it’s “Adios.” But because we worked so long on it and we had a good relationship with Gary and the actors, we wanted to see this through to the end. So we stayed involved for much longer than I think is typical for episodic directing.

Typically, on our films, we’re involved in all the other post departments, visual effects and sound, every step of the way. But on this series, we were less involved, although we gave notes. Then Jac did all the grading and the rest of the show. She kind of took over and was very involved. She’ll have a lot of insights into the whole DI process. (See Sidebar)

Anna, I assume you love post, and especially editing, as you edited your first four features.
Boden: I love post because it feels like you’ve made all your compromises, and now all you can do is make it better. Now your only job is to make it the best version of itself. It’s like this puzzle, and you have all the time in the world to do the writing again. I absolutely love editing and the process of putting your writing/editing brain back on. You’re forgetting what happened as a director on-set and rethinking how to shape things.

Give us some idea of how the editing worked. Did you also cut your episodes?
Boden: No, we hired an editor named Spencer Averick, who worked on our director’s cut with us. Every director was able to work on their director’s cut with a specific editor, and then there was Mark Czyzewski, the producer’s editor, who worked on the whole series after that. We worked with him after our director’s cut period. We went back into the room, and he was really awesome. We edited in New York for a couple of weeks on the director’s cut, and then we were editing in LA after that in the Playtone offices in Santa Monica.

What were the big editing challenges for both episodes? Just walk us through it a bit.
Boden: I’d say that one of the biggest challenges, at least in terms of the director’s cut, was finding the rhythm of that Episode 5 mission. When you have a long action sequence like that, the challenge is finding the rhythm so that it has the right pace without feeling like it’s barraging you the whole time. It needs places to breathe and places for emotional and character moments, but it still has to keep moving.

Another challenge is making sure viewers know where they are in every plane and every battle throughout the series. That ends up being a big challenge in the edit. You don’t realize it as much when you’re reading a script, but you realize it a lot when you’re in the edit room.

Then, for Episode 6, it was about connecting the stories because in that episode, we have three main characters — Crosby, Rosenthal and Egan — and they’re in three different places on three very separate journeys, in a way. Egan is in a very dark place, and Rosenthal is in a dark place as well, but he finds himself in this kind of palatial place, trying to have a rest. And then Crosby’s having a much lighter kind of experience with a potential love interest. The intercutting between those stories was challenging, just making sure that the tones were connecting and not colliding with each other, or if they were colliding, colliding in a way that was interesting and intentional.

How hands on were Spielberg and Hanks, or did they let you do your own thing?
Fleck: We mostly interacted with Gary Goetzman, who is Tom Hanks’ partner at Playtone. I think those guys [Spielberg and Hanks] were involved with early days of prep and probably late days of post. But in terms of the day-to-day operations, Gary was really the one that we interacted with the most.

Boden: One of the most wonderful things about working with Gary as a producer — and he really is the producer who oversaw this series — is that he’s worked with so many directors in his career and really loves giving them the freedom and support to do what they do best. He gave us so much trust and support to really make the episodes what we wanted them to be.

Looking back now, how would you sum up the whole experience?
Fleck: All of it was challenging, but I think the biggest challenge for us was shooting during COVID. We kept losing crew members day by day, and it got down to the point where everybody had to test every day and wait for their results. We would have crew members waiting three to four hours before they could join us on-set, so that really cut the amount of shooting time we had every day from 11 hours down to six.

Boden: Some days we’d show up and suddenly find out an hour into the day that we weren’t going to get an actor that we were planning to shoot with, so we’d have to rearrange the day and try to shoot without that actor. That was a big challenge.

Fleck: The great thing for me was how much I learned. Back in history class, you get all the big plot points of World War II, but they don’t tell you about how big these B-17s were, how violent it was up in the air for these guys. You think of the D-Day invasion when you think of the great milestones of World War II, but these aerial battles were unbelievably intense, and they were up there in these tin cans; they were so tight and so cold. I just couldn’t believe that these kids were sent into these situations. It was mind-boggling.

Boden: I also learned a lot through the process of reading the material and the research about the history of these specific people in the stories. But I’d say that one of the things that really sticks with me from the experience was working with this group of actors. That felt very special.

DP Jac Fitzgerald on Shooting Masters of the Air

Jac, integrating all the VFX with visual effects supervisor Stephen Rosenbaum must have been crucial.
Yes. When I started the show, I imagined that the majority of the VFX work would be done on the volume stage. But then I realized that he had a whole World War II airfield to create on location. Obviously, we had the tower structure for the airfield, and we had two planes, one of which was being towed. And it was all so cobbled together from the outside.

Jac Fitzgerald

The planes looked like they were complete, but they weren’t moving by themselves. They didn’t have engines in them or anything. What was interesting to me was the extent of the visual effects that Stephen had to do on the exteriors. We only had two plane bodies, but at any one time when you see the airstrip, there are 12 planes there or more. So there was a huge amount of work for him to do in that exterior world, which was actually as important as the VFX in the volume.

What about the DI? Where did you do all the grading?
It was predominantly in LA at Picture Shop with colorist Steven Bodner, who did the whole show. And because of the enormous amount of VFX, it was obvious early on that things were going to need to be done out of order in the DI.

At first, they thought that my two episodes [5 and 6] would be the first ones to have the DI, as Adam Arkapaw was unavailable to do his episodes [1 through 4] because he was working on another film. The thinking was that they would go in, do my episodes and start prepping and setting the look for episodes 1 through 4 as well. Then it became clear that the DI schedule would have to adjust because of the enormity of the VFX.

Stephen Rosenbaum spent a lot of time making the footage we’d shot and all the VFX worlds collide. I think he had an extraordinary number of people from vendors around the world involved in the project, so there was certainly a lot of cleaning up to do. We all did a lot of work on the look in the DI, trying to make it as seamless as possible. And then again, because episodes 1 through 4 needed so much VFX work, we did my episodes and then we did 7, 8 and 9, and then we went back to 1 through 4. It was certainly a lot of jumping around. I wish that we could have mapped it all from beginning to end, but it wasn’t to be.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Life in Tandem: Making an Unexpected Documentary

Though poignant and beautiful, this wasn’t the documentary the filmmakers originally set out to make. Here we talk with one of the directors, Mia Grimes, about how the film unfolded and the process of making it.

L-R: Chris Multop, Joe Litzinger and Mia Grimes

How did you come up with the idea for the short?
My co-director Joe Litzinger discovered a viral YouTube video of Marc Ornstein performing a canoe dancing routine to “Lady in Red” as well as a video of Stephen Colbert poking fun at it. Intrigued by the sport and the individual in the video, we did some research and reached out to Elaine Mravetz, a pivotal figure within the community. We were immediately struck by her warm and inviting demeanor.

Tragically, just days after our initial conversation, Elaine was killed in a car accident. With the blessing of both the freestyle community and Elaine’s family, we pivoted the documentary to follow her husband, Bob (also a canoeist), on his journey of recovery and grief.

The original concept was to take a Best in Show approach to a unique sport, but it evolved into a heartfelt emotional story about a community rallying around a member facing a tragic and unimaginable life change.

Did you guys fund it on your own?
My co-director funded the short through his production company, Interesting Human Media, using personal funds. While we attempted to raise additional money, the unexpected nature of the life event we were documenting meant we had to adapt and tell the story with the resources available to us while it was happening.

And we received a great many contributions of time, resources and work at reduced rates from friends and co-workers, embodying the essence of this project as a true labor of love and a community coming together for a common purpose.

What was the process of just getting it off the ground?
In early February 2022, cinematographer Jeff Smee and I made our way to film at Bob’s house in Cleveland. This initial three-day filming session with Bob was just the first of many. Over the course of the following year, we were invited to document a series of significant events marking Bob’s journey of recovery. These events offered a lens into his resilience and his gradual return to the activities that once brought him joy.

It was during a trip to Florida in February 2023 that we witnessed Bob return to the water in his canoe for the first time since his accident — a symbolic act of reclaiming his passion and a step forward in his healing process. This experience provided a natural and powerful conclusion to our film, capturing the essence of human perseverance and the support of a community rallying around one of its own.

Can you talk script?
Because we were following an event, we did not have a script or outline of any kind, as we were not sure how Bob’s recovery would progress. For pretty much the entire time we were filming, we truly had no idea how the documentary would end.

Was this your first time directing? How did you work with your co-director, Joe?
I started out in logistics and scheduling, but my role quickly expanded as I found myself involved in all aspects of the production process. This transition marked the beginning of a learning experience that extended far beyond my initial responsibilities. Joe, who served not only as my boss but also as my co-director, played a pivotal role in this evolution. In an industry where the hierarchical structure is often rigid, Joe’s decision to trust me with the direction of early scenes was indicative of his inclusive leadership style.

This opportunity allowed me to learn directly from Joe and the cinematographer, Chris Multop, about not only the technical aspects of filmmaking and camera operation but the storytelling.

As the project progressed, our partnership evolved into a collaborative co-directing effort. This collaboration was not limited to just Joe and me; Chris, our co-producer, was integral as well. Together, the three of us functioned as a cohesive unit, with each of us bringing our own perspectives, expertise and visions to the table.

How did you decide on the cameras you used?
To capture the sport’s beauty, we needed high-quality, versatile cameras that were also light, portable and affordable. Most of the documentary was shot using Z cameras in 4K, with a mix of ultrawide, stylistic lenses for interviews and 800mm lenses for paddling and cinematic shots. Other cameras used during production included a Sony FX3, multiple drones and a Blackmagic camera.

Was it shot with natural lighting?
While the canoeing scenes benefited from natural lighting, we used artificial lighting for the indoor interviews to enhance the visual quality.

You had multiple DPs?
Chris Multop, our co-producer, served as the director of photography, but it was a collaborative effort, with Joe, Jeff Smee, me and others on-set contributing to the cinematography alongside archival footage from the canoeists.

You edited on Adobe Premiere. What was that process like?
We have edited a variety of projects on a variety of platforms. We decided on Premiere because we liked how easily it let us send the project to multiple editors to play around with.

One of the things we did early on was hire an experienced AE, Ken Ren, who organized the drive and synced the footage, so our projects started in a way that gave us a leg up throughout the editing process. With about 8TB of footage, we relied on proxies to keep the editing process smooth.
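
How those proxies were made isn’t described, but as a rough illustration, a batch transcode with ffmpeg is one common route: scale the camera originals down to 1080p ProRes Proxy files that Premiere can then attach to the full-resolution media. The folder paths, resolution and codec settings below are assumptions for the sketch, not the production’s actual pipeline.

# Illustrative batch proxy generation with ffmpeg (assumes ffmpeg is on the PATH).
# Paths, resolution and codec choices are examples, not the production's settings.
import subprocess
from pathlib import Path

SOURCE_DIR = Path("/media/rushes")    # hypothetical location of the camera originals
PROXY_DIR = Path("/media/proxies")    # hypothetical proxy destination
PROXY_DIR.mkdir(parents=True, exist_ok=True)

for clip in sorted(SOURCE_DIR.rglob("*.mov")):
    out = PROXY_DIR / clip.name
    cmd = [
        "ffmpeg", "-y", "-i", str(clip),
        "-vf", "scale=-2:1080",                  # downscale to 1080p, keep aspect ratio
        "-c:v", "prores_ks", "-profile:v", "0",  # ProRes Proxy
        "-c:a", "copy",                          # pass audio through untouched
        str(out),
    ]
    subprocess.run(cmd, check=True)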

Who did the actual editing? And what about the audio and color grading?
Editing was a collective effort led by Joe and me, with contributions from Emmy Award-winning editors Matt Mercer and Eric Schrader and assistant editing by Jenny Hochberg. We set out to film a feature, so we were managing a large amount of footage, which presented a significant challenge in crafting a short, concise documentary.

You can watch the doc here:

Colorfront’s New SDR-to-Dolby Vision HDR Conversion Process

At the 2024 HPA Tech Retreat at the end of last month, Colorfront demoed the Colorfront Engine’s new Dolby Vision conversion capability. The conversion process not only transitions SDR to Dolby Vision HDR but also produces unique Dolby Vision metadata, guaranteeing that the Dolby-derived SDR output visually matches the original SDR content. This round-tripping method presents a unified, streamlined, single-source workflow for mastering and distribution.

The Colorfront Engine now allows users to seamlessly upgrade extensive SDR content libraries to the Dolby Vision HDR format, addressing the surge in HDR-ready displays and devices with a straightforward, time-efficient and cost-effective solution.

“This Dolby-specific version of the Colorfront Engine has been developed to facilitate a seamless conversion from SDR to Dolby Vision HDR with perfect round-tripping,” says Colorfront’s Mark Jaszberenyi. “It’s already shipping and has received feedback from content owners, studios, OTTs and streamers for its ability to maintain fidelity to the original SDR content while offering a premium HDR viewing experience.”

Mark Jaszberenyi

Why is this important for our industry? Jaszberenyi says, “The transition from SDR to HDR aims to enhance visual experiences with improved brightness, contrast and colors. Despite this shift, a significant volume of content and many viewing environments remain SDR-based. The Dolby Vision SDR round-trip solution is vital, as it enables the conversion of original SDR libraries to HDR, incorporating Dolby Vision metadata that aligns with the original content.” He says this process ensures that content is remastered for Dolby Vision HDR viewing while preserving the integrity of the SDR original, all within a single Dolby Vision master file. Importantly, this solution helps content owners and distributors maximize the value of their existing SDR libraries by making them accessible to a wider audience with HDR-capable devices.

Content owners and distributors can use this solution to produce and deliver content across various devices and viewing conditions. “It facilitates the display of stunning HDR content on HDR-capable devices, ensuring an optimal viewing experience,” according to Jaszberenyi, adding that it also guarantees that the SDR version, derived from HDR content through the Dolby Vision round-tripping process, closely matches the original SDR master.

How does it work from a user perspective? The conversion process balances automation with the option for manual intervention, starting with the transformation of original SDR content into HDR. “This is followed by generating unique Dolby Vision metadata for a seamless SDR conversion,” says Jaszberenyi. “Mastering professionals have the flexibility to fine-tune the Dolby Vision conversion tool based on the specific attributes of the content, ensuring a workflow that not only respects but enhances the creative vision. Importantly, this process is designed to be scalable; it can automatically convert vast amounts of content with ease, whether on-premises or in the cloud, making it a versatile solution for content libraries of any size.”
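
Colorfront hasn’t published the internals of the Engine, so the following is only a conceptual sketch of what round-tripping implies: expand an SDR frame to an HDR master, derive an SDR rendition back from that master, and check that the derived SDR matches the original. The expansion curve, peak luminance and tolerance are placeholders, not Colorfront’s or Dolby’s actual math.

# Conceptual illustration of SDR -> HDR -> SDR round-trip validation.
# The expansion/derivation functions are placeholders, NOT Colorfront's or Dolby's algorithms.
import numpy as np

def expand_sdr_to_hdr(sdr, peak_nits=1000.0):
    """Toy inverse tone mapping: map normalized SDR (0-1) to linear nits."""
    return (sdr ** 2.4) * peak_nits          # placeholder curve

def derive_sdr_from_hdr(hdr, peak_nits=1000.0):
    """Toy 'trim' back to SDR, the exact inverse of the placeholder expansion."""
    return np.clip(hdr / peak_nits, 0.0, 1.0) ** (1.0 / 2.4)

def round_trip_error(sdr_frame):
    """Mean absolute difference between the original and derived SDR frame."""
    hdr = expand_sdr_to_hdr(sdr_frame)
    sdr_back = derive_sdr_from_hdr(hdr)
    return float(np.abs(sdr_frame - sdr_back).mean())

frame = np.random.rand(1080, 1920, 3)        # stand-in for a decoded SDR frame
print(round_trip_error(frame))               # a perfect inverse returns ~0 (float precision)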

Dalet and Veritone Team on MAM and Monetization Platform

Media technology and service provider Dalet and AI solution provider Veritone have agreed to integrate the Dalet Flex media workflow ecosystem with Veritone’s AI-powered Digital Media Hub (DMH), featuring commerce and monetization capabilities. The integration enables a seamless workflow from content creation through production, curation, packaging and distribution, helping media, sports and entertainment companies to monetize their digital media archives.

The Dalet and Veritone referral partnership enables media and entertainment companies to maximize the return on investment of their content assets to generate new revenue streams. The secure and scalable solution enables media-centric organizations to automatically deliver content to partners while remaining in control of their content catalog.

Key features include:

  • A cloud-native ecosystem to produce, manage, distribute, transact and monetize digital media content and archives.
  • Uniquely advanced rich metadata management to drive content catalog exposure and automated publishing based on business rules.
  • The ability to easily implement branded digital marketplaces with a familiar content shopping experience for B2B clients, partners and affiliates.
  • Customizable B2B portals, flexible monetization business models and granular searches based on extensive metadata, including timecodes.
  • A highly efficient, secure solution with a common vision, a long-term shared road map and outstanding customer service.

“Veritone’s AI-enabled technology has long been the tool of choice for some of the world’s most recognized brands because of its ability to more efficiently and effectively organize, manage and monetize content,” says Sean King, SVP, GM at Veritone. “Veritone and Dalet share a commitment to unlocking the true potential of digital content, and we’re pleased to offer the content monetization capabilities of DMH to complete Dalet’s end-to-end platform and provide endless revenue opportunities to their customer base.”

Post Production World Expands: New Conference Pass and AI Training

Future Media Conferences and NAB Show have expanded the Post Production World (PPW) conference slated for April 12-17. This year the organizers introduced a comprehensive pass that covers an expanded suite of tracks along with AI training and certifications, field workshops and more.

In a move to cater to the broad spectrum of roles in the creative industry, PPW has broadened its scope to include additional past FMC conferences under one ticket. Attendees can now access a diverse array of tracks with a single pass, exploring creative AI, cinematography and directors of photography, visual storytelling, remote production and more. This expansion reflects PPW’s dedication to keeping pace with the rapid advancements in technology and creative techniques.

In addition to a dedicated Creative AI track within the PPW conference program, FMC is offering an additional pass for an AI Training & Certifications track, an initiative designed to equip professionals with the skills necessary to navigate the burgeoning field of artificial intelligence in content creation. Pass add-ons include exam vouchers available for purchase with registration or a choice between two live and in-person AI training courses:

  • AI Broadcast TV Training Workshop: Revolutionizing Broadcasting
  • AI VFX & Motion Training Workshop: Crafting Visual Wonders

Besides these new additions, PPW continues to offer field workshops and other certifications that provide hands-on learning experiences and opportunities to gain recognized credentials in various aspects of production and post production.

“By expanding our tracks and introducing AI Training & Certifications, we’re not just responding to the industry’s current trends; we’re anticipating its future directions,” says Ben Kozuch, president and co-founder of Future Media Conferences. “Our goal is to empower content professionals with the knowledge, skills and insights they need to succeed in a rapidly evolving landscape.”

Information on the new pass options, AI Training & Certifications, field workshops and registration can be found here.

Pure4D

DI4D’s Updated Facial Performance Capture System, Pure4D 2.0

DI4D, a facial capture and animation provider, has introduced Pure4D 2.0, the latest iteration of its proprietary facial performance capture solution. Pure4D has been used to produce hours of facial animation for many AAA game franchises, including Call of Duty: Modern Warfare II and III and F1 21 and 23.

F1

The Pure4D 2.0 pipeline is purpose-built to directly translate the subtleties of an actor’s facial performance onto their digital double. It delivers nuanced facial animation without the need for manual polish or complex facial rigs.

Pure4D 2.0 is built from DI4D’s proprietary facial capture technology, which combines performance data from an HMC (head-mounted camera) with high-fidelity data from a seated 4D capture system to achieve a scale and quality beyond the capabilities of traditional animation pipelines. Pure4D 2.0 is compatible with the DI4D HMC and third-party dual-camera HMCs as well as the DI4D Pro and third-party 4D capture systems.

Behind this process is DI4D’s machine learning technology, which continually learns an actor’s facial expressions, reducing subjective manual clean-up and significantly increasing both the repeatability and efficiency of the pipeline. This makes Pure4D 2.0 ideally suited to AAA video game production.

Pure4D

Call of Duty

Faithfully recreating an actor’s facial performance is key to Pure4D 2.0’s approach, making it possible to emulate the experience of watching an actor in a live-action film or theatrical performance using their digital double.

A digital double refers to an animated character that shares the exact likeness and performance of a single actor, resulting in highly realistic, performance-driven facial animation. It’s a process that preserves the art form of acting while enhancing the believability of the character.

Pure4D’s approach to facial animation has inspired a new short film, Double, starring Neil Newbon, one of the world’s most accomplished video game actors, who won Best Performance at the 2023 Game Awards. Double will use Pure4D 2.0 to capture the nuance of Newbon’s performance, driving the facial animation of his digital double. Scheduled for release during the summer, Double will highlight the increasingly valuable contribution that high-quality acting makes to video game production.


Zach Robinson on Scoring Netflix’s Wrestlers Docuseries

No one can deny the attraction of “entertainment” wrestling. From WWE to NXT to AEW, there is no shortage of muscular people holding other muscular people above their heads and dropping them to the ground. And there is no shortage of interest in the wrestlers and their journeys to the big leagues.

Zach Robinson

That is just one aspect of Netflix’s docuseries Wrestlers, directed by Greg Whiteley, which follows former WWE wrestler Al Snow as he tries to keep the pro wrestling league Ohio Valley Wrestling (OVW) going while fighting off mounting debt and dealing with new ownership. It also provides a behind-the-scenes look at these athletes’ lives outside of the ring.

For the series’ score, Whiteley called on composer Zach Robinson to give the show its sound. “Wrestlers was a dream come true,” says Robinson. “Coming into the project, I was such a huge fan of Greg Whiteley’s work, from Last Chance U to Cheer. On top of that, I grew up on WWE, so it was so much fun to work with this specific group of people on a subject that I really loved.”

Let’s find out more from Robinson, whose other recent projects include Twisted Metal and Florida Man (along with Leo Birenberg) and the animated horror show Fright Krewe.

What was the direction you were given for the score?
I originally thought that Greg and the rest of the team wanted something similar to what I do on Cobra Kai, but after watching the first couple of episodes and having a few discussions with the team, we wanted to have music that served as a juxtaposition to the burly, muscular, sometimes brutal imagery you were seeing on screen.

Greg wanted something dramatic and beautiful and almost ballet-like. The music ends up working beautifully with the imagery and really complements the sleek cinematography. Like Greg’s other projects, this is a character drama with an amazing group of characters, and we needed the music to support their stories without making fun of them.

What is your process? Is there a particular instrument you start on, or is it dependent on the project?
It often starts with a theme and a palette decision. Simply, what are the notes I’m writing, and what are the instruments playing those notes? I generally like to start by writing a few larger pieces to cover a lot of ground and gauge the client’s interest.

In the case of Wrestlers, I presented three pieces (not to picture) and shared them with Greg and the team. Luckily for me, those three pieces were very much in the ballpark of what they were looking for, and I think all three made it into the first episode.

Can you walk us through your workflow on Wrestlers?
Sometimes, working on non-fiction can be a lot different than working on a scripted TV show. We would have spotting sessions (meetings where we watch down the episode and discuss the ins and outs of where the score lives), but as the episodes progressed, I ended up creating more of a library for the editors to grab cues from. That became very helpful for me because the turnaround on these episodes from a scoring standpoint was very, very fast.

However, every episode did have large chunks that needed to be scored to picture. I’m thinking of a lot of the fights, which I really had to score as if I was scoring any type of fight in a scripted show. It took a lot of effort and a lot of direction from the creative team to score those bouts, and finding the right tone was always a challenge.

How would you describe the score? What instruments were used? Was there an orchestra, or were you creating it all?
As I mentioned earlier, the score is very light, almost like a ballet. It’s inspired by a lot of Americana music, like from Aaron Copland, but also, I was very inspired by the “vagabond” stylings of someone like Tom Waits, so you’ll hear a lot of trombone, trumpet, bass, flute and drums.

Imagine seeing a small band performing on the street; that’s kind of what was inspiring to me. This is a traveling troupe of performers, and Greg even referred to them as “the Muppets” during one of our first meetings. We also had a lot of heightened moments that used a large, epic orchestra. I’m thinking especially about the last 30 minutes of the season finale, which is incredibly triumphant and epic in scope.

How did you work with the director in terms of feedback? Any examples of notes or direction given?
Greg and producer Adam Leibowitz were dream collaborators and always had incredibly thoughtful notes and gave great direction. I think the feedback I got most frequently was about being careful not to dip into melodrama through the music. The team is very tasteful with how they portray dramatic moments in their projects, and Wrestlers was no exception.

There were a few times I went a bit too far and big in the music, and Greg would tell me to take a step back and let the drama from the reality of the situation speak for itself. This all made a lot of sense to me, especially because I understood that, coming from scoring mostly scripted programming, I would tend to go harder and bigger on my first pass, which wasn’t always appropriate.

More generally, do you write based on the project (spot, game, film, TV), or do you just write?
I enjoy writing music mostly to picture, whether that’s a movie or TV or videogame. I enjoy it much more than writing a piece of music not connected to anything, and I find that when I have to do the latter, it’s incredibly difficult for me.

How did you get into composing? Did you come from a musical family?
I don’t come from a musical family, but I come from a very creative and encouraging family. I knew I wanted to start composing from a very young age, and I was incredibly fortunate to have a family that supported me every step of the way. I studied music in high school and then into college, and then I immediately got a job apprenticing for a composer right after college. I worked my way up and through a lot of odd jobs, and now I’m here.

Any tips for those just starting out?
My biggest piece of advice is to simply be yourself. I know it sounds trite, but don’t try to mold your voice into what you think people want to hear. I’m still learning that, even with my 10 years in the business. People want to hear unique voices, and there are always great opportunities to try something different.

VFX Supervisor Sam O’Hare on Craig Gillespie’s Dumb Money

By Randi Altman

Remember when GameStop, the aging brick-and-mortar video game retailer, caused a stir on Wall Street thanks to a stock price run-up that essentially resulted from a pump-and-dump scheme?

Director Craig Gillespie took on this crazy but true story in Dumb Money, which follows Keith Gill (Paul Dano), a normal guy with a wife and baby who starts it all by sinking his life savings into GameStop stock. His social media posts start blowing up, and he makes millions, angering the tried-and-true Wall Street money guys, who begin to fight back. Needless to say, things get ugly for both sides.

Sam O’Hare

While this type of film, which has an all-star cast, doesn’t scream visual effects movie, there were 500 shots, many of which involved putting things on computer and phone screens and changing seasons. To manage this effort, Gillespie and team called on New York City-based visual effects supervisor Sam O’Hare.

We reached out to O’Hare to talk about his process on the film.

When did you first get involved on Dumb Money?
I had just finished a meeting at the Paramount lot in LA and was sitting on the Forrest Gump bench waiting for an Uber when I got a call about the project. I came back to New York and joined the crew when they started tech scouting.

So, early on in the project?
It wasn’t too early, but just early enough that I could get a grip on what we’d need to achieve for the film, VFX-wise. I had to get up to speed with everything before the shoot started.

Talk about your role as VFX supervisor on the film. What were you asked to do?
The production folks understood that there was enough VFX on the film that it needed a dedicated supervisor. I was on-set for the majority of the movie, advising and gathering data, and then, after the edit came together, I continued through post. Being on-set means you can communicate with all the other departments to devise the best shoot strategy. It also means you can ensure that the footage you’re getting will work as well as possible in post, which helps keep costs down.

I also acted as VFX producer for the show, so I got the bids from vendors and worked out the budgets with director Craig Gillespie and producer Aaron Ryder. I then distributed and oversaw the shots, aided by my coordinator, Sara Rosenthal. I selected and booked the vendors.

Who were they, and what did they each supply?
Chicken Bone tackled the majority of the bluescreen work, along with some screens and other sequences. Powerhouse covered a lot of the screens, Pete Davidson’s car sequence, the pool in Florida and other elements. Basilic Fly handled the split screens and the majority of the paint and cleanup. HiFi 3D took on the sequences with the trees outside Keith Gill’s house.

I also worked closely with the graphics vendors since much of their work had to be run through a screen look that I designed. Since the budget was tight, I ended up executing around 100 shots myself, mostly the screen looks on the graphics.

There were 500 VFX shots? What was the variety of the VFX work?
The editor, Kirk Baxter, is amazing at timing out scenes to get the most impact from them. To that end we had a lot of split screens to adjust timing on the performances. We shot primarily in New Jersey, with a short stint in LA, but the film was set in Massachusetts and Miami, so there was also a fair amount of paint and environmental work to make that happen. In particular, there was a pool scene that needed some extensive work to make it feel like Florida.

The film took place mostly over the winter, but we shot in the fall, so we had a couple of scenes where we had to replace all of the leafy trees with bare ones. HiFi handled these, placing CG trees with reference to photogrammetry I shot on-set to help with layout.

There was a fair amount of bluescreen, both in car and plane sequences and to work around actors’ schedules when we couldn’t get them in the right locations at the right times. We shot background plates and then captured the actors later with matched lighting to be assembled afterward.

Screens were a big part of the job. Can you walk us through dealing with those?
We had a variety of approaches to the screens, depending on what we needed to do. The Robinhood app features heavily in the film, and we had to ensure that the actors’ interaction with it was accurate. To that end, I built green layouts with buttons and tap/swipe sequences for them to follow, which mimicked the app accurately at the time.

For the texting sequence, we set up users on the phones, let the actors text one another and used as much of it as possible. Their natural movements and responses to texts were great. All we did was replace the bubbles at the top of the screen to make the text consistent.

For Roaring Kitty, art department graphics artists built his portfolio and the various website layouts, which were on the screens during the shoot. We used these when we could and replaced some for continuity. We also inserted footage that was shot with a GoPro on-set. This footage was then treated with a rough depth matte built in Resolve to give it a lo-fi cutout feel and then laid over the top of the graphics for the YouTube section.

The screen look for the close-ups was built using close-up imagery of LED screens, with different amounts of down-rez and re-up-rez to get the right amount of grid look for different screens and levels of zoom. Artists also added aberration, focus falloff, etc.
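
O’Hare doesn’t detail the comps themselves, but the down-rez/re-up-rez idea can be roughed out in a few lines: scale a frame down, scale it back up with nearest-neighbor sampling so the coarse pixel structure reads as a grid, then nudge the red and blue channels apart for a hint of aberration. The factor and offset values here are arbitrary stand-ins.

# Rough sketch of an LED-screen "grid" look: down-rez, nearest-neighbor re-up-rez,
# then a one-pixel red/blue offset as crude chromatic aberration. Numbers are arbitrary.
import numpy as np

def grid_look(frame, factor=6, aberration_px=1):
    h, w, _ = frame.shape
    # Down-rez by sampling every Nth pixel (a box filter would be gentler).
    small = frame[::factor, ::factor]
    # Re-up-rez with nearest-neighbor so the coarse pixel structure reads as a grid.
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)[:h, :w]
    # Crude chromatic aberration: shift red and blue in opposite directions.
    out = up.copy()
    out[..., 0] = np.roll(up[..., 0], aberration_px, axis=1)
    out[..., 2] = np.roll(up[..., 2], -aberration_px, axis=1)
    return out

plate = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in for a graphics plate
treated = grid_look(plate)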

Any other challenging sequences?
We had very limited background plates for the car sequences that were shot. Many had sun when we needed overcast light, so getting those to feel consistent and without repeating took a fair bit of editing and juggling. Seamlessly merging the leafless CG trees into the real ones for the scene outside Keith Gill’s house was probably the most time-consuming section, but it came out looking great.

What tools did you use, and how did they help?
On-set, I rely on my Nikon D750 and Z6 for reference, HDRI and photogrammetry work.

I used Blackmagic Resolve for all my reviews. I wrote some Python pipeline scripts to automatically populate the timeline with trimmed plates, renders and references, all in the correct color spaces, from ShotGrid playlists. This sped up the review process a great deal and left me enough time to wrangle the shots I needed to work on.
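
The scripts themselves aren’t shared, but a stripped-down version of the idea might look like the sketch below, which pulls the versions in a ShotGrid playlist and appends their media to the current Resolve timeline. The site URL, credentials, playlist ID and the sg_path_to_movie field are assumptions about a typical setup; the real scripts also handled trims and color-space assignment.

# Minimal sketch: populate a Resolve timeline from a ShotGrid playlist.
# Site URL, script credentials, playlist ID and the path field are illustrative.
import DaVinciResolveScript as dvr   # ships with DaVinci Resolve
import shotgun_api3

SG_SITE = "https://example.shotgunstudio.com"
PLAYLIST_ID = 123                    # hypothetical review playlist

sg = shotgun_api3.Shotgun(SG_SITE, script_name="resolve_review", api_key="xxxx")
versions = sg.find(
    "Version",
    [["playlists", "is", {"type": "Playlist", "id": PLAYLIST_ID}]],
    ["code", "sg_path_to_movie"],    # assumes renders are referenced by this field
)

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

paths = [v["sg_path_to_movie"] for v in versions if v.get("sg_path_to_movie")]
clips = media_pool.ImportMedia(paths)        # bring the renders into the media pool
media_pool.AppendToTimeline(clips)           # drop them onto the current timeline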

I did all my compositing in Blackmagic Fusion Studio, but I believe all the vendors worked in Foundry Nuke.