Tag Archives: AI

HPA Tech Retreat 2024: Networking and Tech in the Desert

By Randi Altman

Late last month, many of the smartest brains in production and post descended on the Westin Rancho Mirage Golf Resort & Spa in Palm Springs for the annual HPA Tech Retreat. This conference is built for learning and networking; it’s what it does best, and it starts early. The days begin with over 30 breakfast roundtables, where hosts dig into topics — such as “Using AI/ML for Media Content Creation” and “Apprenticeship and the Future of Post” — while the people at their table dig into eggs and coffee.

Corridor Digital’s Niko Pueringer

The day then kicks further into gear with sessions; coffee breaks inserted for more mingling; more sessions; networking lunches; a small exhibit floor; drinks while checking out the tools; dinners, including Fiesta Night and food trucks; and, of course, a bowling party… all designed to get you to talk to people you might not know and build relationships.

It’s hard to explain just how valuable this event is for those who attend, speak and exhibit. Along with Corridor Digital’s Niko Pueringer talking AI as well as the panel of creatives who worked on Postcard from Earth for the Las Vegas Sphere, one of my personal favorites was the yearly Women in Post lunch. Introduced by Fox’s Payton List, the panel was moderated by Rosanna Marino of IDC LA and featured Daphne Dentz from Warner Bros. Discovery Content Creative Services, Katie Hinsen from Marvel and Kylee Peña from Adobe. The group talked about the changing “landscape of workplace dynamics influenced by #metoo, the arrival of Gen Z into the workforce and the ongoing impact of the COVID pandemic.” It was great. The panelists were open, honest and funny. A definite highlight of the conference.

We reached out to just a few folks to get their thoughts on the event:

Light Iron’s Liam Ford
My favorite session by far was the second half of the Tuesday Supersession. Getting an in-depth walk-through of how AI is currently being used to create content was truly eye-opening. Not only did we get exposed to a variety of tools that I’ve never even heard of before, but we were given insights on what the generative AI components were actually doing to create these images, and that shed a lot of light on where the potential growth and innovation in this process is likely to be concentrated.

I also want to give a shoutout to the great talk by Charles Poynton on what quantum dots actually are. I feel like we’ve been throwing this term around a lot over the last year or two, and few people, if any, know how the technology is constructed at a base level.

Charles Poynton

Finally, my general takeaway was that we’re heading into a bit of a Wild West over the next three years. Not only is AI going to change a lot of workflows, and in ways we haven’t come close to predicting yet, but the basic business model of the film industry itself is on the ropes. Everyone’s going to have to start thinking outside the box very seriously to survive the coming disruption.

Imax’s Greg Ciaccio
Each year, the HPA Tech Retreat program features cutting-edge technology and related implementation. This year, the bench of immensely talented AI experts stole the show. Year after year, I’m impressed with the practical use cases shown using these new technologies. AI benefits are far-reaching, but generative AI piqued my interest most, especially in the area of image enhancement. Instead of traditional pixel up-rezzing, AI image enhancements can use learned images to embellish artists’ work, which can iteratively be sent back and forth to achieve the desired intent.

It’s all about networking at the Tech Retreat.

3 Ball Media Group’s Neil Coleman
While the concern about artificial intelligence was palpable in the room, it was the potential in the tools that was most exciting. We are already putting Topaz Labs Video AI into use in our post workflow, but the conversations are what spark the most discovery. Discussing needs and challenges with other attendees at lunch led to options that we hadn’t considered when trying to get footage from the field back to post. It’s the people that make this conference so compelling.

IDC’s Rosanna Marino
It’s always a good idea to hear the invited professionals’ perspectives, knowledge and experience. However, I must say that the 2024 HPA Tech Retreat was outstanding. Every panel, every event was important and relevant. In addition to all the knowledge and information taken away, the networking and bonding was also exceptional.

Picture Shop colorist Tim Stipan talks about working on the Vegas Sphere.

I am grateful to have attended the entire event this year. I would have really missed out otherwise. The variation of topics and how they all came together was extraordinary. The number of attendees gave it a real community feel.

IDC’s Mike Tosti
The HPA Tech Retreat allows you to catch up on what your peers are doing in the industry and where the pitfalls may lie.

AI has come a long way in the last year, and it is time we start learning it and embracing it, as it is only going to get better and more prevalent. There were some really compelling demonstrations during the afternoon Supersession.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 

Dalet and Veritone Team on MAM and Monetization Platform

Media technology and service provider Dalet and AI solution provider Veritone have agreed to integrate the Dalet Flex media workflow ecosystem with Veritone’s AI-powered Digital Media Hub (DMH), featuring commerce and monetization capabilities. The integration enables a seamless workflow from content creation through production, curation, packaging and distribution, helping media, sports and entertainment companies to monetize their digital media archives.

The Dalet and Veritone referral partnership enables media and entertainment companies to maximize the return on investment of their content assets to generate new revenue streams. The secure and scalable solution enables media-centric organizations to automatically deliver content to partners while remaining in control of their content catalog.

Key features include:

  • A cloud-native ecosystem to produce, manage, distribute, transact and monetize digital media content and archives.
  • Uniquely advanced rich metadata management to drive content catalog exposure and automated publishing based on business rules.
  • The ability to easily implement branded digital marketplaces with a familiar content shopping experience for B2B clients, partners and affiliates.
  • Customizable B2B portals, flexible monetization business models and granular searches based on extensive metadata, including timecodes.
  • A highly efficient, secure solution with a common vision, a long-term shared road map and outstanding customer service.

“Veritone’s AI-enabled technology has long been the tool of choice for some of the world’s most recognized brands because of its ability to more efficiently and effectively organize, manage and monetize content,” says Sean King, SVP, GM at Veritone. “Veritone and Dalet share a commitment to unlocking the true potential of digital content, and we’re pleased to offer the content monetization capabilities of DMH to complete Dalet’s end-to-end platform and provide endless revenue opportunities to their customer base.”


Post Production World Expands: New Conference Pass and AI Training

Future Media Conferences and NAB Show have expanded the Post Production World (PPW) conference slated for April 12-17. This year the organizers introduced a comprehensive pass that covers an expanded suite of tracks along with AI training and certifications, field workshops and more.

In a move to cater to the broad spectrum of roles in the creative industry, PPW has broadened its scope to include additional past FMC conferences under one ticket item. Attendees can now access a diverse array of tracks with a single ticket, exploring creative AI, cinematography and directors of photography, visual storytelling, remote production and more. This expansion reflects PPW’s dedication to keeping pace with the rapid advancements in technology and creative techniques.

In addition to a dedicated Creative AI track within the PPW conference program, FMC is offering an additional pass for an AI Training & Certifications track, an initiative designed to equip professionals with the skills necessary to navigate the burgeoning field of artificial intelligence in content creation. Pass add-ons include exam vouchers available for purchase with registration or a choice between two live and in-person AI training courses:

  • AI Broadcast TV Training Workshop: Revolutionizing Broadcasting
  • AI VFX & Motion Training Workshop: Crafting Visual Wonders

Besides these new additions, PPW continues to offer field workshops and other certifications that provide hands-on learning experiences and opportunities to gain recognized credentials in various aspects of production and post production.

“By expanding our tracks and introducing AI Training & Certifications, we’re not just responding to the industry’s current trends; we’re anticipating its future directions,” says Ben Kozuch, president and co-founder of Future Media Conferences. “Our goal is to empower content professionals with the knowledge, skills and insights they need to succeed in a rapidly evolving landscape.”

Information on the new pass options, AI Training & Certifications, field workshops and registration can be found here.

Ketama Collective Merges to Form Experiential Studio Bermuda

Ketama Collective, part of the team that won the Grand Prix for Creative Data at Cannes last year, is merging with its two sister companies, Bitgeyser and Pasto, to form one integrated digital creative, production and technology resource known as Bermuda.

The new entity, which has opened a US office in Miami, spans everything from content production for brands to experiential executions and activations, extended realities, metaverse executions, meta-human creations, AI infusions and prototyping, as well as CG animation and design. It is billed by its founders as a creative technology lab that’s focused on offering proficiencies and specializations that global brands are searching for in today’s social media and experience-based landscape.

According to Nico Ferrero, CEO of Bermuda (and formerly MD at Ketama), this move is a natural evolution: Ketama, Bitgeyser and Pasto have frequently collaborated on complex projects for a roster of global clients, he points out. Collectively, their work has been recognized by the industry’s leading awards shows, including a Grand Prix and Gold Lion at Cannes for Stella Artois and GUT, a Silver Lion for LATAM Airlines and McCann, and a Gold Clio for “The Tweeting Pothole” for Medcom and Ogilvy, to name a few.

As it seeks to expand its footprint in the US market beyond its Buenos Aires base, Bermuda has lined up a national sales operation. On the East Coast, Bermuda will be represented by Minerva, led by Mary Knox and Shauna Seresin. Bermuda has also signed with Marla Mossberg and N Cahoots for West Coast representation and Isabel Echeverry and Kontakto for the US Hispanic market.

Bermuda is led by a group of bilingual executives from the three merged companies whose backgrounds encompass everything from agency creative, production and software engineering to experience design and fabrication. In addition to Ferrero, the company’s leaders include chief creative director Santiago Maiz, head of production Agustín Mende, regional new business director Matias Berruezo and CFO Juan Riva.

“Bermuda has opened for business backed by a combined 30 years of experience creating digital content,” Ferrero explains. “We now have a unified team of 50 experts all under one roof: digital artists, AI engineers, animators, industrial designers, software and fabrication engineers and creative technologists who specialize in multimedia executions, as well as specialists in augmented, virtual and mixed reality content; metaverse executions; and the use of blockchain.”

The new company was born after a whirlwind 2023: In the US, experiential/digital and fabrication projects staged in New Orleans, Miami, San Diego and Chicago were created for such agencies as Area 23, David and McCann, and for clients such as Google, Mastercard and pharmaceutical company Boehringer Ingelheim. The year also marked the debut of Dino Pops, a 52-episode series of five-minute shows created in hyperreal 3D animation, fully executed in Unreal Engine for NBC’s streaming platform Peacock.

As a multi-brand platform, Bermuda has developed unique experiences with personalized content for literally hundreds of products distributed in Tetra Pak packaging. To date the studio has created more than 1,000 digital experiences representing over 150 household brands marketed across 28 countries.

“Our goal is to go even bigger, with more work from the US market, as we flex our muscles across all of our disciplines,” Ferrero states. “Operating as Bermuda will allow us to produce projects on a larger scale while working in different countries at the same time and while handling more complex and challenging projects. And it allows our clients, both on the agency and brand sides, to consolidate the number of entities they have to deal with while making internal collaboration easier and more efficient.” Besides the newly opened base in Miami, Bermuda currently has its HQ in Buenos Aires and offices in L.A. and Colombia to oversee projects throughout the Americas.

As for how they came up with the name, “It’s the idea of the unknown, this mysterious world,” he says, referring obliquely to the legendary Bermuda Triangle. “When you arrive at an idea, it basically comes from a magical place. How well you execute that idea, and the process by which you do it, sums up what Bermuda means to all of us.”


Oscars: Creating New and Old Sounds for The Creator

By Randi Altman

Director Gareth Edwards’ The Creator takes place in 2055 and tells the story of a war between the human race and artificial intelligence. It follows Joshua Taylor (John David Washington), a former special forces agent who is recruited to hunt down and kill The Creator, who is building an AI super weapon that takes the form of a child.

As you can imagine, the film’s soundscape is lush and helps to tell this futuristic tale, so much so that it earned an Oscar nomination for its sound team: supervising sound editors/sound designers Erik Aadahl and Ethan Van der Ryn, re-recording mixers Tom Ozanich and Dean Zupancic and production sound mixer Ian Voigt.

L-R: Ethan Van der Ryn and Erik Aadahl

We reached out to Aadahl to talk about the audio post process on The Creator, which was shot guerrilla-style for a documentary feel.

How did you and Ethan collaborate on this one?
Ethan and I have been creative sound partners now for over 17 years. “Mind meld” is the perfect term for us creatively. I think the reason we work so well together is that we are constantly trying to surprise each other with our ideas.

In a sense, we are a lot harder on ourselves than any director and are happiest when we venture into uncharted creative territory with sound. We’ve joked for years that our thermometer for good sound is whether we get goosebumps in a scene. I love our collaboration that way.

How did you split up the work on this one?
We pretty much divide up our duties equally, and on The Creator, we were blessed with an incredible crew. Malte Bieler was our lead sound designer and came up with so many brilliant ideas. David Bach was the ADR and dialogue supervisor, who was in charge of easily one of the most complex dialogue jobs ever, breaking our own records for number of cues, number of spoken languages (some real, some invented), large exterior group sessions and the complexity of robot vocal processing. Jonathan Klein supervised Foley, and Ryan Rubin was the lead music editor for Hans Zimmer’s gorgeous score.

What did director Gareth Edwards ask for in terms of the sound?
Gareth Edwards wanted a sonic style of “retro-futurism” mixed with documentary realism. In a way, we were trying to combine the styles of Terrence Malick and James Cameron: pure expressive realism with pure science-fiction.

Gareth engaged us long before the script was finished — over six years ago — to discuss our approach to this very different film. Our first step was designing a proof-of-concept piece using location scout footage to get the green light, working with Gareth and ILM.

How would you describe the sound?
The style we adopted was to first embrace the real sounds of nature, which we recorded in Cambodia, Laos, Thailand and Vietnam.

For the sound design, Gareth wanted this retro-futurism for much of it, recalling a nostalgia for classic science fiction using analog sound design techniques like vocoders, which were used in the 1970s for films like THX 1138. That style of science fiction could then contrast with the fully futuristic, high-fidelity robot, vehicle and weapon technology.

Gareth wanted sounds that had never been used before and would often make sounds with his mouth that we would recreate. Gareth’s direction for the NOMAD station, which emits tracking beams from Earth’s orbit onto the Earth’s surface, was “It should sound like you’d get cancer if you put your hand in the beam for too long.” I love that kind of direction; Gareth is the best.

This was an international production. What were the challenges of working on different continents and with so many languages?
The Creator was shot on location in eight countries across Asia, including Thailand, Vietnam, Cambodia, Japan and Nepal. As production began, I was in contact with Ian Voigt, the on-location production mixer. He had to adapt to the guerrilla style of filming, inventing new methods of wireless boom recording and new ways of working with the novel camera technology, in close contact with Oren Soffer and Greig Fraser, the film’s directors of photography.

Languages spoken included Thai, Vietnamese, Hindi and Japanese, and we invented futuristic hybrid languages used by the New Asia AI and the robot characters. The on-location crowds also spoke in multiple languages (some human, some robotic or invented) and required a style of lived-in reality.

Was that the most challenging part of the job? If not, what was?
The biggest challenge was making an epic movie in a documentary/guerrilla style. Every department had to work at the top of its game.

The first giant challenge had to do with dialogue and ADR. Dialogue supervisor David Bach mentioned frequently that this was the most complex film he’d ever tackled. We broke several of our own records, including the number of principal character languages, the number of ADR cues, the amount and variety of group ADR, and the complexity of dialogue processing.

The Creator

Tom Ozanich

Dialogue and music re-recording mixer Tom Ozanich had more radio communication futzes, all tuned to the unique environments, than we’d ever witnessed. Tom also wrangled more robotic dialogue processing channels of all varieties — from Sony Walkman-style robots to the most advanced AI robots — than we’d ever experienced. Gareth wanted audiences to hear the full range of dialogue treatments, from vintage-style sci-fi voices using vocoders to the most advanced tools we now have.

The second big challenge was fulfilling Gareth’s aesthetic goal: Combine ancient and fully futuristic technologies to create sounds that have never been heard before.

What about the tank battle sequence? Walk us through that process.
The first sequence we ever received from Gareth was the tank battle, shot on a floating village in Thailand. For many months, we designed the sound with zero visual effects. A temp title reading “Tank” or “AI Robot” might be all that clued us in to what was happening. Gareth also chose to use no music in the sequence, allowing us to paint a lush sonic tapestry of nature sounds, juxtaposed with the horrors of war.

He credits editors Joe Walker, Hank Corwin and Scott Morris for having the bravery not to use temp music in this sequence and let the visceral reality of pure sound design carry the sequence.

Our goal was to create the most immersive and out-of-the-box soundscape that we possibly could. Ethan and I led an extraordinary team of artists who never settled on “good enough.” As is so often the case in any artform, serendipity can appear, and the feeling is magic.

One example is for the aforementioned tanks. We spent months trying to come up with a powerful, futuristic and unique tank sound, but none of the experiments felt special enough. In one moment of pure serendipity, as I was driving back from a weekend of skiing at Mammoth, my car veered into the serrated highway median that’s meant to keep drivers from dozing off and driving off the road. The entire car resonated with a monstrous “RAAAAAAAAHHHHHHMMM!!” and I yelled out, “That’s the sound of the tank!” I recorded it, and that’s the sound in the movie. I have the best job in the world.

The incoming missiles needed a haunting quality, and for the shriek of their descent, we used a recording we did of a baboon. The baboon’s trainer told us that if the baboon witnessed a “theft,” he’d be offended and vocalize. So I put my car keys on the ground and pretended not to notice the trainer snatch the keys away from me and shuffle off. The baboon pointed and let out the perfect shriek of injustice.

What about the bridge sequence?
For this sequence, rudimentary, non-AI bomb robots named G-13 and G-14 (à la DARPA) sprint across the floating village bridge to destroy Alfie, an AI superweapon in the form of a young girl (Madeleine Yuna Voyles). We used the bomb robots’ size and weight to convey an imminent death sentence, their footsteps growing in power and ferocity as the danger approached.

Alfie has a special power over technology, and in one of my favorite moments, G-14 kneels before her instead of detonating. Alfie puts her hand to G-14’s head, and during that touch, we took out all of the sound of the surrounding battle. We made the sound of her special power a deep, humming drone. This moment felt quasi-spiritual, so instead of using synthetic sounds, we used the musical drone of a didgeridoo, an Aboriginal instrument with a spiritual undercurrent.

A favorite sonic technique of ours is to blur the lines between organic and synthetic, and this was one of those moments.

What about the Foley process?
Jonathan Klein supervised the Foley, and Foley artists Dan O’Connell and John Cucci brilliantly brought these robots to life. We have many intimate and subtle moments in the film when Foley was critical in realistically grounding our AI and robot characters to the scene.

The lead character, Joshua, has a prosthetic leg and arm, and there, Foley was vital to contrasting the organic to the inorganic. One example is when Joshua is coming out of the pool at the recovery center — his one leg is barefoot, and his other leg is prosthetic and robotic. These Foley details tell Joshua’s story, demonstrating his physical and, by extension, mental complexity.

What studio did you work out of throughout the process?
We did all of the sound design and editing at our facility on the Warner Bros. studio lot in Burbank.

We broke our own record for the number of mixing stages across two continents. Besides working at WB De Lane Lea in London, we used Stages 5 and 6 at Warner Bros. in Burbank. We were in Stages 2 and 4 at Formosa’s Paramount stages and Stage 1 at Signature Post. This doesn’t even include additional predub and nearfield stages.

The sound team with Gareth Edwards on Warner’s Stage 5.

In the mix, both Tom Ozanich and Dean Zupancic beautifully [shifted] from the most delicate and intimate moments to the most grand and powerful.

Do you enjoy working on VFX-heavy films and sci-fi in particular? Does it give you more freedom in creating sounds that aren’t of this world?
Sound is half of the cinematic experience and is central to the storytelling of The Creator — from sonic natural realism to pure sonic science fiction. That combination of the ancient and the futuristic made this the most unique project I’ve ever had the joy to work on.

Science fiction gives us such latitude, letting us dance between sonic reality and the unreal. And working with amazing visual effects artists allows for a beautiful cross-pollination between sound and picture. It brings out the best in both of our disciplines.

What were some tools you used in your work on The Creator?
The first answer: lots of microphones. Most of the sounds in The Creator are real and organic recordings or manipulated real recordings — from the nature ambiances to the wide range of technologies, from retro to fully futuristic.

Of course, Avid Pro Tools was our sound editing platform, and we used dozens of plugins to make the universe of sound we wanted audiences to hear. We had a special affinity for digital versions of classic analog vocoders, especially for the robot police vocals.

The Oscar-nominated sound team for The Creator pictured with director Gareth Edwards.

Finally, congrats on the nomination. What do you think it was about this film that got the attention of Academy members?
Our credo is “We can never inspire an audience until we inspire ourselves,” and we are so honored and grateful that enough Academy members experienced The Creator and felt inspired to bring us to this moment.

Gareth and our whole team have created a unique cinematic experience. We hope that more of the world not only watches it, but hears it, in the best environment possible.

(Check out this behind-the-scenes video of the team working on The Creator.)

iZotope AI Voice Enhancement Now Available

Native Instruments has released iZotope VEA, a new AI-powered voice enhancement assistant for content creators and podcasters.

VEA features AI technology that listens first and then enhances audio so creators can feel more confident with their voices and deliver better-sounding content. VEA increases clarity, sets more consistent levels and reduces background noise on any voice recording.

VEA works as a plugin within major digital audio workstations and nonlinear editors. For a list of officially supported hosts, see the system requirements here.

VEA features three simple controls that are intelligently set by iZotope’s AI technology. Those who are more familiar with editing vocal recordings will find a new way to finish productions quickly by consolidating their effects chains and saving on CPU.

Key features:

  • The Shape control ensures audio sounds professional and audience-ready without having to worry about an EQ. Shape is tailored to each voice and matches the sound of top creators or podcasts with the free iZotope Audiolens tool.
  • The Boost control adds loudness and compression as it’s turned up. Users can easily boost the presence and power of voice recordings without spending time struggling with settings. Boost delivers a smooth and even sound to speech for a more engaging listening experience.
  • The Clean control takes background noise out of the spotlight so every voice can shine. VEA learns the noise in the room automatically and preserves speech for light, transparent noise reduction.

VEA is available now for $29.



Baselight 6.0

FilmLight Baselight 6.0 With ML-Based Face Track Now Available

FilmLight has released the latest version of its Baselight grading software, Baselight 6.0, which includes an updated timeline, a new primary grading tool, X Grade, a new look development tool, Chromogen, plus a new machine learning (ML)-based tool, Face Track.

Baselight 6.0

Face Track

Using an underlying ML model, Face Track finds and tracks faces in a scene, adjusting as each face moves and turns. It attaches a polygon mesh to each face, allowing perspective-aware tools such as Paint and Shapes to distort with the mesh. This enables the colorist to copy corrections and enhancements made in Face Track to the timeline with a simple copy and paste. These corrections can also be applied to the same face across an entire sequence or episode.
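Baselight’s Face Track implementation is proprietary, but the general idea of fitting a trackable mesh to a face can be sketched with Google’s open-source MediaPipe library, which fits a dense landmark mesh to each detected face per frame. This is a conceptual sketch only; the clip name is a placeholder.

```python
# Conceptual sketch: MediaPipe Face Mesh fits ~468 landmark vertices to each
# detected face, the same kind of mesh a grade shape could be warped with.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
cap = cv2.VideoCapture("shot.mov")  # hypothetical clip

frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV decodes to BGR
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        # Each landmark is a normalized (x, y, z) mesh vertex; a correction
        # attached to these points would move and turn with the face.
        pts = results.multi_face_landmarks[0].landmark
        print(f"frame {frame_idx}: tracked {len(pts)} mesh points")
    frame_idx += 1
cap.release()
```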

FilmLight has developed a framework called Flexi, which enables the integration of future ML-based tools into Baselight. Also included in Baselight 6.0 is the RIFE ML-based retimer, a reworked Curve Grade, integrated alpha for compositing, a new Gallery for improved searching and sorting, as well as new and enhanced color tools such as Sharpen Luma, a built-in Lens Flare tool, Bokeh for out-of-focus camera effects, Loupe magnification for detailed adjustments, an upgraded Hue Angle and more.

Lobo Uses AI to Create Animated Open for Ciclope Festival

Creative production, design, animation and mixed media studio Lobo created an animated open for Ciclope Festival 2023, which took place in November in Berlin. Blending traditional concepts with AI-enhanced animation techniques, Lobo produced a kaleidoscope of colors and images designed to show off the artistry on display at this year’s show.

The Ciclope Festival is a three-day live event focusing on the advertising and entertainment industries. The recurring theme each year is craft, with 2023 emphasizing artificial intelligence.

“We are all talking about how AI will influence our work and our lives,” explains Francisco Condorelli, founder/organizer of Ciclope. “Lobo developed its titles around that idea using machine learning technology.”

The process began with the creation of 3D models using Autodesk Maya. These initial structures and visual elements were used to craft the basic environment and figures of the animation. Lobo then used Stable Diffusion.

At the core of this process was the use of LoRA, a method known for its efficiency in adapting large neural network models for specific tasks. In this project, LoRA was called on to learn from unique and original artworks created by Lobo’s artists. This method allowed the AI to capture the creative essence and stylistic details of these pieces, effectively using these insights to refine and enhance the 3D models. Through LoRA, the team was able to integrate artistic nuances into the animation, ensuring what they say was a seamless blend of art and technology.

After using LoRA, Lobo used ControlNet as a precision-guiding tool. ControlNet meticulously oversaw the translation of artistic vision into the animated models, ensuring each nuance was accurately reflected. This system was key in aligning the final animations with the intended aesthetic objectives, enabling a faithful and resonant representation of the artists’ original concepts.
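For readers curious what this kind of pipeline looks like in code, here is a minimal sketch using Hugging Face’s diffusers library. The checkpoint IDs are public models; the LoRA path, depth map and prompt are hypothetical stand-ins, not Lobo’s actual assets.

```python
# Sketch of a depth-guided Stable Diffusion + LoRA + ControlNet render pass.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# ControlNet conditioned on depth maps, e.g. rendered from the Maya scene
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# LoRA weights fine-tuned on the studio's original artwork (hypothetical path)
pipe.load_lora_weights("path/to/studio-style-lora")

depth_map = load_image("frame_0001_depth.png")  # depth render of the 3D frame
frame = pipe(
    "kaleidoscopic festival title card in the studio house style",
    image=depth_map,
    num_inference_steps=30,
).images[0]
frame.save("frame_0001_styled.png")
```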

Lobo is no stranger to incorporating advanced technology in its work. For the Google Pixel Show 2023, Lobo was commissioned to produce a teaser for the event. Uniting five of its directors, Lobo brainstormed challenging concepts inspired by the arrival of AI-based image-making technologies. The subsequent short used different styles and techniques, from the figurative to the completely abstract, but they all shared the use of AI tools.

For Unfair City, a short film created with BBDO Dublin, Lobo used AI to highlight the growing inequality of homelessness.

However, the expanding use of artificial intelligence remains just one tool in Lobo’s toolbox.

For VR Vaccine, Lobo was tasked with alleviating a child’s fear of taking vaccine shots. By creating an immersive fantasy world, Lobo was able to position the child as the hero of her own story, using the vaccine as a shield to protect the realm from invaders. The use of a headset and smartphone was integral in creating this environment.

Lobo was also engaged to launch the new Volvo S60. Using WebAR, Lobo simulated a virtual store, including car customizations, test drive scheduling and financing simulations.



How AI-Powered Content Storage Can Revolutionize M&E

By Jonathan Morgan

In the world of media and entertainment, the way content is stored and managed is undergoing a profound transformation, thanks to the integration of AI. As the demand for high-quality, diverse and personalized content continues to surge, traditional storage methods are proving inadequate. AI is emerging as the game-changer, redefining how media assets are stored, organized and accessed. This shift in content storage is not just about managing data efficiently; it’s about unleashing the true potential of creativity and innovation in the industry.

Jonathan Morgan

In this article, I will shed light on how AI-powered storage solutions are helping media companies to stay competitive, deliver more compelling content and engage audiences in innovative ways.

The Evolution of Content Storage and AI in Post Production
In the post world, keeping an active archive of original video footage, including additional shots and camera angles, for an extended period of time is crucial for monetization. Archived content can easily be repurposed, personalized based on audience interests, and monetized — and that’s one area where AI is pushing the needle forward.

Traditionally, AI processing services have focused on providing media companies with a way to use AI-embedded products in the cloud. While cloud-enabled apps, services and tools have become invaluable in post production for their ability to help companies meet deadlines and reduce operational costs, the cloud has unpredictable costs. The time and effort needed to upload and download in the public cloud, not to mention the egress fees, have made the cloud more expensive than anticipated, offsetting many of its benefits through unnecessary complexity. Running AI at the edge avoids much of that cost and time, which is why many post houses are now adopting it.

How AI-powered Content Storage Is Transforming the M&E Industry
AI-driven content storage solutions offer a multitude of benefits for post. One way AI storage is revolutionizing post production is by enabling content processing at the edge in ways that could never have been imagined. A massive amount of video content is generated on-site rather than in the cloud, whether at a production studio, a film set or a sports arena. Instead of uploading data to the cloud, waiting for off-site decisions to be made and then sending it back, the edge allows informed decisions to be made in real time, right at the point of data collection. By feeding a live camera transmission at a sports event into an AI-powered local storage system, content creators can quickly determine the most important shots and then send them back to the studio for live broadcasting, highlights or distribution at a later stage.

By now, most of us are familiar with personalized content at the level of program selection: Netflix has made a fine art out of recommending programs for us to watch. However, what if the news program we were watching was curated with news items based on our personal interests? Or a program was delivered in the exact language we wanted to hear it? AI is enabling this type of interaction with programming by feeding back our preferences into the algorithms. But all of this increased choice requires state-of-the-art storage and solutions to feed those algorithms.

AI operations (AIOps) is another example of innovation in the post environment. Post houses are continually striving to reduce costs, and a key area that incurs cost and risk is tier 1 storage. With AIOps, post houses can apply big data analytics and machine learning toward determining the best storage tier for the use case at hand. AIOps enables a post house to automatically move video assets to tier 1 storage, which offers ultra-fast access for editing. When the editing phase is over, AIOps will transfer the video content to object storage, which provides robust protection against malicious attacks at a reduced cost compared with tier 1 storage. Not only does AIOps decrease costs and risk from attacks, it also reduces the amount of time post houses spend managing the availability of content, freeing them up for more creative tasks.
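As a rough illustration of the tiering logic (not any particular vendor’s API), placement driven by a usage-prediction model might look like the sketch below, with the ML model’s output stubbed in as a plain number.

```python
# Toy AIOps-style tiering: keep hot assets on fast tier 1 storage and
# migrate cold, finished material to cheaper, hardened object storage.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    days_since_last_edit: int
    predicted_accesses_next_week: float  # stand-in for an ML model's output

def choose_tier(asset: Asset) -> str:
    # Actively edited or frequently accessed assets stay on tier 1
    if asset.days_since_last_edit < 7 or asset.predicted_accesses_next_week > 5:
        return "tier1-nvme"
    return "object-storage"

for a in [Asset("ep101_raw.mxf", 2, 14.0), Asset("ep052_master.mov", 90, 0.2)]:
    print(a.name, "->", choose_tier(a))
```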

AI is also empowering advanced capabilities such as automated object recognition, editing and semantic search. Semantic search allows post houses to find relevant content dramatically faster. Editors can instantly locate every slam dunk in a basketball game where the player then turned around to the camera and smiled. Those clips can be made into a personalized highlight reel for audiences who want to watch all of the slam dunks and player reactions. In addition, semantic search improves content discovery, enabling viewers to find the exact content they are looking for.
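Under the hood, this kind of semantic search typically rests on joint text-image embeddings. The toy sketch below scores hypothetical frame grabs against a text query with the openly available CLIP model via Hugging Face transformers; real systems index embeddings for whole libraries of clips.

```python
# Toy semantic search: rank frame grabs by similarity to a text query.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical stills extracted from game footage
frames = [Image.open(p) for p in ["dunk_001.jpg", "freethrow_002.jpg"]]
query = "a player dunks, then turns to the camera and smiles"

inputs = processor(text=[query], images=frames, return_tensors="pt", padding=True)
with torch.no_grad():
    scores = model(**inputs).logits_per_image.squeeze(1)  # one score per frame

best = int(scores.argmax())
print(f"best match: frame {best}, score {float(scores[best]):.2f}")
```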

AI-powered algorithms allow post houses to rapidly identify objects, scenes, faces and text within media files, resulting in faster, more efficient retrieval of media content. By automating repetitive and time-consuming tasks, such as object recognition, post houses can focus on their visions and maximize their creativity.

Generative AI (GenAI) is a term everyone has heard in relation to ChatGPT, and the technology is bringing innovation to the post space by simplifying the creation of edits, lighting effects and even entire video scenes. Visual effects and lighting are make-or-break for post studios: If light is missing in one scene and the next scene is suddenly well-lit, the inconsistency disrupts the viewing experience. With GenAI, editors can automatically pinpoint which scenes require additional lighting or visual effects and then insert what’s needed into a newly rendered version. GenAI can be faster and more accurate than the human eye at detecting missing effects because it’s built to process vast amounts of content. By adopting GenAI for visual effects, post houses can deliver a high-quality production at the lowest possible price.
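The GenAI tools described here are proprietary, but even a naive heuristic conveys the idea of machine-detectable lighting inconsistency. The sketch below flags large jumps in average luminance between representative stills of consecutive scenes; the file names and threshold are placeholders, and real tools analyze far richer features.

```python
# Naive lighting-consistency check: flag big average-luminance jumps
# between representative stills of consecutive scenes.
import cv2

def mean_luma(path: str) -> float:
    # Average luminance of one representative frame of the scene
    img = cv2.imread(path)
    return float(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).mean())

scene_stills = ["scene_01.png", "scene_02.png", "scene_03.png"]  # hypothetical
lumas = [mean_luma(p) for p in scene_stills]
for i in range(1, len(lumas)):
    if abs(lumas[i] - lumas[i - 1]) > 40:  # arbitrary threshold (0-255 scale)
        print(f"possible lighting jump between scenes {i} and {i + 1}")
```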

Conclusion
As the technology continues to evolve, post houses that harness the power of AI in content storage will lead the charge into the future of media and entertainment. AI-powered content storage systems allow post houses to access video footage faster, more efficiently and cost-effectively, helping them find the relevant clips they need to create entertaining content.


Jonathan Morgan is senior VP of product and technology at Perifery, a DataCore Company.

Puget Systems Offers Custom Workstations Built on Threadripper 7000

Puget Systems has a new line of custom workstations built on the AMD Ryzen Threadripper 7000 Series and Threadripper Pro 7000 WX Series processors. They offer new levels of computing performance and innovation for users in multiple industries, especially those in high-end content creation, virtual production and game development.

Here’s what’s new and innovative with the Threadripper 7000 Series processors:

AMD Ryzen Threadripper 7000 Series: These processors for the high-end desktop market offer an overclockable desktop experience along with the highest clock speeds achievable on a Threadripper processor. Power, performance and efficiency are all maximized with the 5nm process and “Zen 4” architecture.

The Threadripper 7000 Series is built to enable powerful I/O for desktop users, with up to 48 PCIe Gen 5.0 lanes for graphics, storage and more. AMD says the 7000 Series is capable of twice the memory bandwidth of typical dual-channel desktop systems, and the processors’ quad-channel DDR5 memory controller can support the most intensive workflows.

AMD Ryzen Threadripper Pro 7000 WX Series: These processors expand on the prior generation’s performance and platform features for the workstation market. Also built on the 5nm Zen 4 architecture, this generation offers ultra-high performance for professional applications and complex multi-tasking workloads.

For multi-threaded workloads, Threadripper Pro processors offer up to 96 cores and 192 threads for complex simulation, generative design, rendering and software compilation tasks. They also provide up to 384MB of L3 cache along with eight channels of DDR5 memory for applications that require high memory capacity and bandwidth.

Puget Systems’ new Threadripper 7000 custom workstations are available immediately for configuration for a wide range of applications. The new Threadripper Pro WX-Series workstation will be available for custom configurations in December.

Lenovo ThinkStation P8: Threadripper Pro 7000 WX, Nvidia RTX GPUs

Lenovo’s new ThinkStation P8 tower workstation features AMD Ryzen Threadripper Pro 7000 WX Series processors and Nvidia RTX GPUs. The ThinkStation P8 builds on the P620, one of the first workstations powered by AMD Ryzen Threadripper Pro processors. In addition to its compute power, the ThinkStation P8 features an optimized thermal design in a versatile Aston Martin-inspired chassis.

Designed for high-intensity environments, the Lenovo ThinkStation P8 is powered by the latest AMD Ryzen Threadripper Pro 7000 WX Series processors built on the leading 5nm “Zen 4” architecture and featuring up to 96 cores and 192 threads. The new sleek, sturdy, rack-optimized chassis offers larger Platinum-rated power supply options to handle more demanding expansion capabilities. For example, it can support up to three Nvidia RTX 6000 Ada generation GPUs to help reduce time to completion in graphics-intensive applications like real-time raytracing, video rendering, simulation or computer-aided design. The combined power also opens up immersive environments, including digital worlds, AR/VR content creation and advanced AI model development.

“The Lenovo ThinkStation P620 with AMD Threadripper Pro technology has been an absolute game-changer for our 3D animation and development workflows over the last two years,” says Bill Ballew, CTO from DreamWorks Animation. “We are looking forward to significantly faster iterations due to the increased performance with the new ThinkStation P8 workstation powered by AMD Threadripper Pro 7000 WX Series in this coming year.”

Configurations
In addition to AMD Ryzen Threadripper Pro 7000 WX Series processors and Nvidia RTX Ada generation GPUs, ThinkStation P8 includes ISV certifications and supports Windows 11 and popular Linux operating systems. It features a range of storage and expansion capabilities that provide flexible and tailored configurations. Highly customizable options allow users to select the best components to handle complex and demanding tasks efficiently. Also, easy access and tool-less serviceability provide scalability and quick replacement of many components.

The P8 workstation can accommodate up to seven M.2 PCIe Gen 4 SSDs with RAID support or up to three HDDs for large-capacity storage, plus up to 2TB of DDR5 memory with octa-channel support. It has seven PCIe slots, including six PCIe Gen 5 slots that offer faster connectivity. The workstation features lower latency and more expansion capability and includes 10 Gigabit Ethernet onboard to help eliminate network bottlenecks.

ThinkStation P8, like all Lenovo desktop workstations, includes built-in hardware monitoring accessible through ThinkStation diagnostics software, and Lenovo Performance Tuner comes with numerous profiles to optimize many ISV applications. ThinkStation P8 also supports Lenovo’s ThinkShield security offerings, which provide protection from BIOS to cloud. Additionally, rigorous testing standards, Premier Support and extended warranty options are available. Users can further manage their investment through Lenovo TruScale, which simplifies procurement, deployment and management of fully integrated IT solutions, all delivered as a service with a scalable, pay-as-you-go model.

ThinkStation P8 will be available starting Q1 2024.



Embracing AI at NAB New York

By Molly Connolly

At NAB New York 2023, the buzz on the floor was AI – it was here, there and everywhere, and the conversation was about how it needs to be controlled, legislated, harnessed and monetized. Scary stuff … sci-fi happening today.

From my perspective, AI is not an either/or situation; it’s an either/and situation. Moreover, I like taking a human view of AI. Specifically, how it benefits us, especially in post production.

Those of us who are or have been in technical product marketing know the thought process and unfortunate effects of FUD: fear, uncertainty and doubt. AI is now in the FUD business. You can read, listen to podcasts and watch the news, and when the topic comes to AI, the FUD is flying.

On the positive side, AI enables. AI frees creative minds to imagine, to create and to use their high-value skills for amazing visuals. Think about all of those mundane, repetitive tasks that are required, such as rotoscoping and keyframing. Yup, AI can come to the rescue. While walking the show floor at NAB New York, my ears perked up hearing how AI-enabled software and hardware products are making post production faster, better and easier — creating real time to value.

At the show, I met with Avid’s Dave Colantuoni, and we had a lively discussion about how AI is now enabling two of Avid’s software products: PhraseFind and ScriptSync. It was music to my ears that the very premise of AI is foundational to these new software products. We passionately agreed that AI is a tool, an enabler and a partner in the process.

In a write-up by Avid’s Rob Gonsalves, “Avid and the Future of AI: Faster Media Creation,” Gonsalves sums it up nicely when discussing the ever-present FUD about AI taking jobs away from humans. He likens AI to a creative assistant in every step of the creative process. It is additive, not subtractive.

Years ago, the Avid tagline was “Make, Manage, Move Media,” and today, with AI-enabled PhraseFind and ScriptSync, post pros can use AI as their own creative assistant to accelerate their time to revenue or their time to going home at a reasonable hour.

Hats off to Avid, Blackmagic Design (and its Neural Engine) and the other companies at NAB New York that are embracing the positives and not the negatives.


Molly Connolly‘s experience includes roles in strategic alliance solution marketing/sales at Dell Technologies, AMD, HP, Compaq and Digital Equipment, focused heavily on the M&E industry. She is currently happily retired. 

Perifery Parent Buys WIN: Automated Workflows via Cloud and AI

DataCore Software has acquired Workflow Intelligence Nexus (WIN), a workflow services and software firm that helps users deploy and automate media workflows using the latest cloud-based and AI-powered solutions. Acquiring WIN extends DataCore’s Perifery business unit, which specializes in managing data from the core to the cloud and the edge for high-growth markets, including media and entertainment. This acquisition is the second of the year for Perifery, following its purchase of Object Matrix.

According to Abhi Dey, GM/COO of Perifery, “WIN has deep roots in the media and entertainment sector; combined with our existing core technologies in hybrid cloud and edge, we will deliver an even more powerful solution portfolio to transform the industries we play in.”

Gartner predicts that by 2026, large enterprises will triple their unstructured data capacity across their on-premises, edge and public cloud locations compared to 2023. With the anticipated growth, Perifery says that removing manual processes and operational obstacles is key to optimizing business outcomes. Organizations need to evolve with automated workflows using constructs like AI that enable faster decision-making and accelerate the execution of their goals, and that’s what this partnership is all about.

WIN has already partnered with media and entertainment companies like Iconic Media and Simple DCP. And with WIN and Perifery solutions, media organizations can evolve from using repetitive manual processes to automated workflows.

“We’re excited to join forces with Perifery, who shares our vision to help customers harness the full potential of their data through optimized workflows,” says Jason Perr, CEO of WIN. “By bringing our AI capabilities to Perifery’s arsenal of tools, we’ll be able to provide an even more robust offering on a global scale.”

Main Image: Perifery’s Abhi Dey and WIN’s Jason Perr at NAB New York


Foundry Ships Nuke 15.0, Intros Katana 7.0 and Mari 7.0 

Foundry has released Nuke 15.0 and will be releasing Katana 7.0 and Mari 7.0. This coordinated approach, says the company, offers better support for upgrading to current production standards and brings enhancements for artists, including faster workflows and increased performance.

According to Foundry, updates to Nuke result in faster creative iteration thanks to native Apple silicon, offering up to 20% faster processing speeds. In addition, training speeds in Nuke’s CopyCat machine learning tool have been boosted by up to 2x.

Mari 7.0’s new baking tools will help artists create geometry-based maps at speed without the need for a separate application. USD updates in Katana 7.0 will minimize the friction and disruption of switching between applications, enabling a more intuitive and efficient creative experience.

Foundry’s new releases support standards across the industry, including compliance with VFX Reference Platform 2023. Foundry is currently testing its upcoming releases on Rocky 9.1 and on matching versions of Alma and RHEL.

Foundry is offering dual releases of Nuke and Katana, enabling clients to use the latest features in production immediately, while testing their pipelines against the latest Linux releases. Nuke 15.0 is shipping with Nuke 14.1, and Katana 7.0 will release along with Katana 6.5. These dual releases offer nearly identical feature sets but with different VFX Reference Platform support.

Foundry is also introducing a tech preview of OpenAssetIO in Nuke 15.0 and 14.1 to support pipeline integration efforts and streamline workflows. Managed by the Academy Software Foundation, OpenAssetIO is an open-source interoperability standard for tools and content management systems that will simplify asset and version management, making it easier for artists to locate and identify the assets they require.

Summary of New Nuke Features:

  • Native Apple silicon support — Up to 20% faster general processing speeds and GPU-enabled ML tools, including CopyCat, in Nuke 15.0.
  • Faster CopyCat training — With new distributed training, it’s faster to share the load across multiple machines using standard render farm applications, and compressing image resolution reduces file sizes for up to 2x faster training.
  • USD-based 3D system improvements (beta) — Improvements include a completely new viewer selection experience with dedicated 3D toolbar and two-tier selections, a newly updated GeoMerge node, updated ScanlineRender2, a new Scene Graph pop-up in the mask knob, plus USD updated to version 23.05.
  • Multi-pixel Blink effects in the timeline — Only in Nuke Studio and Hiero. Users can apply and view Blink effects, such as LensDistortion and Denoise, at the timeline level, so there’s no need to go back and forth between the timeline and comp environments.
  • OCIO version 2.2 — Adds support for OCIO configs to be used directly in a project in Nuke 15.0.

What’s Coming in Katana:

  • USD scene manipulation — Building on the same underlying architecture as Nuke’s new 3D system, Katana will have the pipeline flexibility that comes with USD 23.05.
  • Multi-threaded Live Rendering — With Live Rendering now multi-threaded and compatible with Foresight+, artists can benefit from improved performance and user experience.
  • Optimized Geolib3-MT Runtime — New caching strategies prevent memory bloat and minimize downtime, ensuring the render will fit on the farm.

What’s Coming in Mari:

  • New baking tools — They cut out the need for a separate application or plugin, so users can create geometry-based maps including curvatures and occlusions with ease and speed.
  • Texturing content — With new Python Examples and more procedural nodes, users can access an additional 60 grunge maps, courtesy of Mari expert Johnny Fehr.
  • Automatic project backups — With regular autosaving, users can revert to any previously saved state, either locally or across a network.
  • Upgraded USD workflows — Reducing pipeline friction, the USD importer is now more artist-friendly, plus Mari now supports USD 23.05.
  • Shader updates — Shaders for both Chaos Group’s V-Ray 6 and Autodesk’s Arnold Standard Surface have been updated, ensuring what users see in Mari is reflected in the final render.
  • Licensing improvements — Team licensing is now available, enabling organization admins to manage the usage of licenses for Mari.

Nuke Trial Extension
With slates and projects being paused across the industry, Foundry is extending its free Nuke 15.0 trial from 30 to 90 days for a limited period. Sign up here.


Adobe Max 2023: A Focus on Creativity and Tools, Part 1

By Mike McCarthy

Adobe held its annual Max conference at the LA Convention Center this week. It was my first time back since COVID, but Adobe hosted an in-person event last year as well. The Max conference is focused on creativity and is traditionally where Adobe announces and releases the newest updates to its Creative Cloud apps.

As a Premiere editor and Photoshop user, I am always interested in seeing what Adobe’s team has been doing to improve its products and improve my workflows. I have followed Premiere and After Effects pretty closely through Adobe’s beta programs for over a decade, but Max is where I find out about what new things I can do in Photoshop, Illustrator and various other apps. And via the various sessions, I also learn some old things I can do that I just didn’t know about before.

The main keynote is generally where Adobe announces new products and initiatives as well as new functions to existing applications. This year, as you can imagine, was very AI-focused, following up on the company’s successful Firefly generative AI imaging tool released earlier this year. The main feature that differentiates Adobe’s generative AI tools from various competing options is that the resulting outputs are guaranteed to be safe to use in commercial projects. That’s because Adobe owns the content that the models are trained on (presumably courtesy of Adobe Stock).

Adobe sees AI as useful in four ways: broadening exploration, accelerating productivity, increasing creative control and including community input. Adobe GenStudio will now be the hub for all things AI, integrating Creative Cloud, Firefly, Express, Frame.io, Analytics, AEM Assets and Workfront. It aims to “enable on-brand content creation at the speed of imagination,” Adobe says.

Firefly

Adobe has three new generative AI models: Firefly Image 2, Firefly Vector and Firefly Design. The company also announced that it is working on Firefly Audio, Video and 3D models, which should be available soon. I want to pair the 3D one with the new AE functionality. Firefly Image 2 has twice the resolution of the original and can ingest reference images to match the style of the output.

Firefly Vector is obviously for creating AI-generated vector images and art.

But the third one, Firefly Design, deserves further explanation. It generates a fully editable Adobe Express template document with a user-defined aspect ratio and text options. The remaining fine-tuning for a completed work can be done in Adobe Express.

Firefly Design

For those of you who are unfamiliar, Adobe Express is a free cloud-based media creation and editing application, and that is where a lot of Adobe’s recent efforts and this event’s announcements have been focused. It is designed to streamline the workflow for getting content from the idea stage all the way to publishing on the internet, with direct integration to various social media outlets and a full scheduling system to manage entire social marketing campaigns. It can reformat content for different deliverables and even automatically translate it into 40 different languages.

As more and more of Photoshop and Illustrator’s functionality gets integrated into Express, Express will probably begin to replace them as the go-to for entry-level users. And as a cloud-based app accessed through a browser, it can even be used on Chromebooks and other non-Mac and Windows devices. And Adobe claims that via a partnership with Google, the Express browser extension will be included in all new Chromebooks moving forward.

Photoshop for Web is the next step beyond Express, integrating even more of the application’s functions into a cloud app that users can access from anywhere, once again including Chrome devices. Apparently, I’m an old-school guy who has not yet embraced the move to the cloud as much as I could have, but given my dissatisfaction with the direction the newest Microsoft and Mac OS systems are going, maybe browser-based applications are the future.

Similarly, as a finishing editor, I have real trouble posting content that is not polished and perfected, but that is not how social media operates. With so much content being produced on tight timelines, most of it falling short of the production standards I am used to, I have not embraced this new paradigm. That’s why I am writing an article about this event rather than posting a video about it: I would have to spend far too much time reframing each shot, color-correcting and cleaning up distractions in the audio.

Firefly Generative Fill

For desktop applications, within the full version of Photoshop, Firefly-powered generative fill has replaced content-aware fill. You can now use generative fill to create new overlay layers based on text prompts or to remove objects by overlaying AI-generated background extensions. AI can also add reflections and apply other image processing, and it can “un-crop” images via Generative Expand. Separately, gradients are now fully editable, and there are new adjustment layer presets, including user-definable ones.

Illustrator can now identify fonts in rasterized and vectorized images and can even edit text that has already been converted to outlines. It can generate color palettes from text prompts to recolor existing artwork, and it can generate vector objects and scenes that are fully editable and scalable. It can even take existing images as input to match stylistically. There is also a new cloud-based web version of Illustrator coming to public beta.

Text-based editing in Premiere

From the video perspective, the news was mostly familiar to existing public beta users or to those who followed the IBC announcements: text-based editing, pause and filler-word removal, and dialogue enhancement in Premiere. After Effects is getting true 3D object support, so my session schedule focused on learning the workflows for using that feature. You need to create and texture models and save them as GLB files before you can use them in AE, and you need to set up the lighting environment in AE before they will look right in your scene. But I am looking forward to being able to use that functionality more effectively on my upcoming film postviz projects.
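If your models start life in Blender, that GLB export step can even be scripted. Here is a minimal sketch using Blender’s bundled glTF exporter, run from inside Blender; the output path is hypothetical, and it assumes your objects are already modeled and textured:

    import bpy  # Blender's built-in Python API

    # Export the selected objects as a binary glTF (.glb), the format
    # After Effects' 3D object import expects.
    bpy.ops.export_scene.gltf(
        filepath="/path/to/postviz_asset.glb",  # hypothetical output path
        export_format='GLB',    # single binary file with textures embedded
        use_selection=True,     # export only the currently selected objects
    )

The lighting setup still has to happen on the AE side, as noted above.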

I will detail my experience at Day 2’s Inspiration keynote as well as the tips and tricks I learned in the various training sessions in a separate article. At the time of this writing, I still had one more day to go at the conference. So keep an eye out. The second half of my Max coverage is coming soon.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007 and broadened his focus with TechWithMikeFirst 10 years later.

 

Review: AMD Radeon Pro W7800 and W7900 GPUs

By Brady Betzel

The main players in the discrete GPU game, AMD and Nvidia, have released a barrage of new GPUs this past year. From the Nvidia 4090 Founders Edition I reviewed last October to the latest AMD W7800 and W7900, technology and energy efficiency have improved dramatically.

With AI at the forefront of everyone’s mind, whether because of questionable deepfake videos or the amazing ability to take hours of work down to minutes using Magic Mask in Blackmagic’s DaVinci Resolve, one of the most important pieces of hardware you can own is a powerful GPU.

AMD has always been in the race with Nvidia, but once Apple decided to design its own GPUs in-house, AMD struggled to find its footing… until now. The AMD Radeon Pro W7800 and W7900 are the company’s latest professional GPUs, and they are powerful. The AMD Radeon Pro W7800 is a 32GB GPU that retails for $2,499 (from online retailer B&H Photo), while the AMD Radeon Pro W7900 48GB GPU retails for $3,999 (also from B&H). Yes, the prices can cause a bit of sticker shock if you are comparing against consumer-level cards like the Nvidia 4090, but for those who need an enterprise-level, workstation-certified GPU, $3,999 is actually pretty reasonable. For comparison, the Nvidia RTX 6000 Ada retails for just under $7,000. But AMD isn’t trying to beat Nvidia at the moment; it is providing a much more reasonably priced alternative that may quench your GPU thirst without breaking the bank.

A Closer Look
First up is a basic comparison between the AMD Radeon Pro W7800 and W7900 in advertised specs:

(Specs are listed as W7800 / W7900; a single value applies to both cards.)

  • GPU architecture: AMD RDNA 3
  • Hardware raytracing: yes
  • Lithography: TSMC 5nm GCD, 6nm MCD
  • Stream processors: 4480 / 6144
  • Compute units: 70 / 96
  • Peak half-precision (FP16) performance: 90.5 TFLOPS / 122.64 TFLOPS
  • Peak single-precision matrix (FP32) performance: 40.5 TFLOPS / 61.3 TFLOPS
  • Transistor count: 57.7B
  • OS support: Windows 11 and Windows 10 (64-bit editions), Linux x86_64
  • External power connectors: 2x 8-pin
  • Total board power (TBP): 260W peak
  • PSU recommendation: 650W
  • Dedicated memory: 32GB GDDR6 / 48GB GDDR6
  • AMD Infinity Cache: 64MB / 96MB
  • Memory interface: 256-bit / 384-bit
  • Peak memory bandwidth: up to 576GB/s / up to 864GB/s
  • Form factor: PCIe 4.0 x16 (PCIe 3.0 backward-compatible), active cooling
  • Display outputs: 3x DisplayPort 2.1 and 1x Enhanced Mini DisplayPort 2.1
  • Display configurations: 4x 4096x2160 (4K DCI) at 120Hz with DSC; 2x 6144x3456 (6K) 12-bit HDR at 60Hz uncompressed; 1x 7680x4320 (8K) 12-bit HDR at 60Hz uncompressed; 1x 12288x6912 (12K) at 120Hz with DSC
  • Display support: HDR, 8K, 10K and 12K
  • Dimensions: full height, 11 inches (280mm) long, double slot / triple slot
  • Supported rendering formats: 1x encode and decode (AV1); 2x decode (H.265/HEVC, 4K H.264); 2x encode (H.265/HEVC, 4K H.264, with the W7900’s listing adding AV1 encode and decode here)
  • Supported technologies: AMD Viewport Boost, AMD Remote Workstation, AMD Radeon Media Engine, AMD Software: Pro Edition, AMD Radeon VR Ready Creator, AMD Radeon ProRender, 10-bit and 12-bit display color output, 3D stereo support

What sets the W7900 apart from the W7800 is the jump to 48GB of dedicated memory, AMD Infinity Cache increased to 96MB, a memory interface boosted to 384-bit, peak memory bandwidth of up to 864GB/s, a triple-slot form factor and the addition of AV1 encode and decode on the encode engines.
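As a side note, the bandwidth figures fall straight out of the bus widths: peak bandwidth is the bus width in bytes multiplied by the per-pin data rate. The advertised numbers imply 18Gbps GDDR6 on both cards, which is my inference rather than a published spec:

    # Peak memory bandwidth = (bus width / 8 bits per byte) * per-pin data rate.
    # The 18Gbps GDDR6 rate is inferred from the advertised figures, not quoted by AMD.
    for card, bus_bits in [("W7800", 256), ("W7900", 384)]:
        print(f"{card}: {bus_bits // 8 * 18}GB/s")  # W7800: 576GB/s, W7900: 864GB/s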

AMD Radeon Pro W7800
Up first in the benchmarking tests is the AMD Radeon Pro W7800 inside DaVinci Resolve 18.1.2 and Adobe Premiere Pro 2023, as well as a few other apps and plugins. For testing inside Resolve and Premiere, I used the same UHD (3840×2160) sequences and effects that I have used in previous reviews. The clips include:

  • ARRI RAW: 3840×2160 24fps – 7 seconds, 12 frames
  • ARRI RAW: 4448×1856 24fps – 7 seconds, 12 frames
  • BMD RAW: 6144×3456 24fps – 15 seconds
  • Red RAW: 6144×3072 23.976fps – 7 seconds, 12 frames
  • Red RAW: 6144×3160 23.976fps – 7 seconds, 12 frames
  • Sony a7siii: 3840×2160 23.976fps – 15 seconds

I then duplicated the sequence and added Blackmagic’s noise reduction, sharpening and grain. Finally, I replaced the noise reduction with Neat Video’s noise reduction.

From there, I exported multiple versions: DNxHR 444 10-bit OP1a MXF file, DNxHR 444 10-bit MOV, H.264 MP4, H.265 MP4, AV1 MP4 and then an IMF package using the default settings.
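I queued these exports by hand, but anyone repeating this kind of test could script it: DaVinci Resolve Studio ships with a Python scripting API that can load render presets and run the jobs so every pass is set up identically. Here is a rough sketch, where the preset names and output directory are hypothetical placeholders for ones you have saved yourself:

    import time
    import DaVinciResolveScript as dvr  # bundled with DaVinci Resolve Studio

    resolve = dvr.scriptapp("Resolve")
    project = resolve.GetProjectManager().GetCurrentProject()

    # Queue one render job per deliverable (preset names are hypothetical)
    for preset in ["DNxHR444_MXF", "DNxHR444_MOV", "H264_MP4", "H265_MP4", "AV1_MP4"]:
        project.LoadRenderPreset(preset)
        project.SetRenderSettings({"TargetDir": "/renders"})  # hypothetical output dir
        project.AddRenderJob()

    start = time.time()
    project.StartRendering()  # renders every queued job
    while project.IsRenderingInProgress():
        time.sleep(1)
    print(f"All exports finished in {time.time() - start:.0f} seconds")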

AMD Radeon Pro W7800

Resolve 18 exports, min:sec (DNxHR = DNxHR 444 10-bit)

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4   AV1 MP4     IMF
Color Correction Only               00:24       00:22       00:20       00:18       00:27       00:38
CC + Resolve Noise Reduction        02:21       02:21       02:21       02:22       02:22       02:23
CC, Resolve NR, Sharpening, Grain   03:04       03:04       03:03       03:03       03:03       03:05
CC + Neat Video Noise Reduction     02:59       03:00       03:03       03:01       03:02       03:00

For comparison’s sake, here are the results from the Nvidia RTX 4090:

Nvidia RTX 4090

Resolve 18 exports, min:sec (DNxHR = DNxHR 444 10-bit)

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4   AV1 MP4     IMF
Color Correction Only               00:27       00:27       00:22       00:22       00:23       00:49
CC + Resolve Noise Reduction        00:57       00:56       00:55       00:55       00:55       01:04
CC, Resolve NR, Sharpening, Grain   01:14       01:14       01:12       01:12       01:12       01:19
CC + Neat Video Noise Reduction     02:38       02:38       02:34       02:34       02:34       02:41
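To put that noise-reduction gap in plain terms, here is a quick, purely illustrative calculation using the DNxHR MXF times from the two tables above:

    def seconds(t):
        """Convert a min:sec string like '02:21' to total seconds."""
        m, s = t.split(":")
        return int(m) * 60 + int(s)

    # CC + Resolve noise reduction, DNxHR 444 10-bit MXF export
    w7800, rtx4090 = seconds("02:21"), seconds("00:57")
    print(f"RTX 4090 advantage: {w7800 / rtx4090:.1f}x")  # roughly 2.5x on this pass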

 

AMD Radeon Pro W7800

Adobe Premiere Pro 2023, individual exports in Media Encoder, min:sec (DNxHR = DNxHR 444 10-bit)

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4
Color Correction Only               02:17       01:51       01:18       01:19
CC + NR, Sharpening, Grain          13:38       34:21       33:54       33:07

AMD Radeon Pro W7800

Adobe Premiere Pro 2023, simultaneous exports in Media Encoder, min:sec

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4
Color Correction Only               03:27       03:32       03:32       03:51
CC + NR, Sharpening, Grain          15:15       37:12       15:14       15:14

Again, here are the results from the Nvidia RTX 4090:

Nvidia RTX 4090

Adobe Premiere Pro 2023, individual exports in Media Encoder, min:sec (DNxHR = DNxHR 444 10-bit)

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4
Color Correction Only               01:28       01:46       01:08       01:07
CC + NR, Sharpening, Grain          13:07       34:52       34:12       33:54

Nvidia RTX 4090

Adobe Premiere Pro 2023, simultaneous exports in Media Encoder, min:sec

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4
Color Correction Only               02:17       01:44       01:08       01:11
CC + NR, Sharpening, Grain          13:47       34:13       15:54       15:54

Benchmarks
Blender Benchmark CPU samples per minute:

  1. Monster: 179.475890
  2. Junkshop: 124.988030
  3. Classroom: 86.279909

Blender Benchmark GPU samples per minute:

  1. Monster: 1306.493713
  2. Junkshop: 688.435718
  3. Classroom: 630.02515

 

Blackmagic Proxy Generator (H.265 10-bit, 4:2:0, 1080p):

  • Red R3D: 2 files – 50fps
  • Sony a7iii .mp4: 46 files – 267fps

 

Neat Video HD: GPU-only 69.5 frames/sec

Neat Video UHD: GPU-only 16.4 frames/sec

PugetBench for After Effects 0.95.7, After Effects 23.4×53:

  • Overall Score: 1018
  • Multi-Core Score: 202.6
  • GPU Score: 76.8
  • RAM Preview Score: 101.4
  • Render Score: 106.4
  • Tracking Score: 93.6

PugetBench for Premiere Pro 0.98.0, Premiere Pro 23.4.0:

  • Extended Overall Score: 532
  • Standard Overall Score: 828
  • LongGOP Score (Extended): 79.8
  • Intraframe Score (Extended): 80.9
  • RAW Score (Extended): 26
  • GPU Effects Score (Extended): 47.7
  • LongGOP Score (Standard): 112.9
  • Intraframe Score (Standard): 95.5
  • RAW Score (Standard): 75.6
  • GPU Effects Score (Standard): 57.8

PugetBench for Resolve 0.93.1, DaVinci Resolve Studio 18.5:

  • Standard Overall Score: 2537
  • 4K Media Score: 175
  • GPU Effects Score: 123
  • Fusion Score: 463

That is a ton of numbers and comparisons, so the important thing to note is this: The W7800 is a little pricier than the 4090, but it requires almost 200W less power and includes DisplayPort 2.1 connectivity if your display is compatible. Finally, keep in mind that the AMD Radeon Pro W7800 is an enterprise-level card that is made to run flawlessly 24 hours a day, 365 days a year. For similar guarantees, you would need to jump to something like the Nvidia RTX A5000, which currently retails from B&H for $1,899.99 but has less memory, among other differences.

AMD Radeon Pro W7900
Up next, I ran the same set of benchmarks on the AMD Radeon Pro W7900:

AMD Radeon Pro W7900

Resolve 18 exports, min:sec (DNxHR = DNxHR 444 10-bit)

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4   AV1 MP4     IMF
Color Correction Only               00:30       00:28       00:23       00:21       00:31       00:50
CC + Resolve Noise Reduction        01:45       01:41       01:44       01:44       01:45       01:47
CC, Resolve NR, Sharpening, Grain   02:17       02:09       02:18       02:18       02:18       02:19
CC + Neat Video Noise Reduction     03:03       03:00       03:04       03:04       03:05       03:04

 

AMD Radeon Pro W7900

Adobe Premiere Pro 2023, individual exports in Media Encoder, min:sec (DNxHR = DNxHR 444 10-bit)

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4
Color Correction Only               02:11       01:42       01:05       01:06
CC + NR, Sharpening, Grain          14:12       34:27       33:48       33:54

AMD Radeon Pro W7900

Adobe Premiere Pro 2023, simultaneous exports in Media Encoder, min:sec

                                    DNxHR MXF   DNxHR MOV   H.264 MP4   H.265 MP4
Color Correction Only               03:20       03:24       02:41       02:42
CC + NR, Sharpening, Grain          15:21       37:32       15:21       15:22

Benchmarks

Blender Benchmark CPU samples per minute:

  1. Monster: 181.802109
  2. Junkshop: 125.356688
  3. Classroom: 86.608965

Blender Benchmark GPU samples per minute:

  1. Monster: 1095.478227
  2. Junkshop: 969.553103
  3. Classroom: 865.631865

Blackmagic Proxy Generator (H.265 10-bit, 4:2:0, 1080p):

  • Red R3D: 2 files – 27fps
  • Sony a7iii .mp4: 46 files – 266fps

Neat Video HD: GPU-only 89 frames/sec

Neat Video UHD: GPU-only 24.4 frames/sec

PugetBench for After Effects 0.95.7, After Effects 23.4×53:

  • Overall Score: 1038
  • Multi-Core Score: 203.9
  • GPU Score: 82.3
  • RAM Preview Score: 103.4
  • Render Score: 109.4
  • Tracking Score: 93.4

PugetBench for Premiere Pro 0.98.0, Premiere Pro 23.4.0:

  • Extended Overall Score: 567
  • Standard Overall Score: 891
  • LongGOP Score (Extended): 80.3
  • Intraframe Score (Extended): 82.5
  • RAW Score (Extended): 26.6
  • GPU Effects Score (Extended): 58.7
  • LongGOP Score (Standard): 114.9
  • Intraframe Score (Standard): 97.7
  • RAW Score (Standard): 78.3
  • GPU Effects Score (Standard): 71.6

PugetBench for Resolve 0.93.1, DaVinci Resolve Studio 18.5:

  • Standard Overall Score: 2847
  • 4K Media Score: 179
  • GPU Effects Score: 173
  • Fusion Score: 502

These benchmarks are heavily weighted toward video editors, content creators and even colorists, so some of the benefits, like the 48GB of memory on the W7900, may go unused, which could be a reason to stick with the W7800. Between the AMD Radeon Pro W7800 and the W7900, most of the performance gains will show up in large designs and renders, such as heavy Blender scenes or Unreal creations.

Summing Up
After using the AMD Radeon Pro W7800 and W7900 for a couple of months in and out of DaVinci Resolve (versions 18 through 18.5) and Premiere Pro 2023, I felt very comfortable keeping the W7800 as my daily driver. I didn’t experience any GPU-related crashes or errors. I was actually a little surprised at how comfortable I was with the W7800 and W7900 after using the Nvidia RTX 4070 Ti and 4090 for so long.

Keep in mind that the AMD Radeon Pro series of GPUs is certified to run without error with specific versions of certain software applications. You can search AMD’s certified-application listings for the specific apps you rely on.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and Uninterrupted: The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.