
Category Archives: A.I.

Telestream to Intro AI-Powered Tools at NAB 2024

At NAB 2024, Telestream will introduce a new AI-powered suite of media processing tools designed to change how media pros ingest, enhance, and deliver content, optimizing every step for speed, quality and efficiency across the media production life cycle.

The industry’s shift to remote production presents significant challenges for media companies, particularly in accessing high-resolution, mezzanine content. Telestream’s GLIM as a Service, a new cloud-based solution, addresses this issue by offering instant playback of content in any format through a web browser.

This service streamlines remote content access, eliminating the need for extensive downloads or specialized playback hardware. By enabling quicker access and simplifying the production process, GLIM as a Service, according to Telestream, not only accelerates production workflows but also reduces operational costs by eliminating the reliance on physical hardware and streamlining content review and approval processes.

AI-Powered Tools
Telestream is introducing a suite of AI-powered media processing tools, marking a significant advancement in the production and distribution of media content. These solutions are designed to empower production teams, enabling them to produce and distribute high-quality content more efficiently and swiftly than ever before, aligning with the demand for fast turnaround times across diverse platforms.

  • Automated Workflow Creation: By leveraging artificial intelligence, Telestream’s Vantage Workflow Designer automates the configuration of media processing workflows. Telestream says this drastically reduces manual interventions, streamlines operations and minimizes errors, significantly speeding up the production cycle.
  • Intelligent Quality Control (QC): Telestream’s AI-driven QC tools automate the process of ensuring consistent content quality across large volumes of media. This automation supports the delivery of high-quality content at the speed demanded by multiple platforms, eliminating the scalability challenges of manual QC.
  • Efficient Captioning and Subtitling: The integration of AI also extends to captioning and subtitling processes, making them faster and more efficient. This not only enhances content accessibility and global reach but also ensures that content can be quickly turned around to meet the immediate needs of a diverse and widespread audience.

Simplified Adoption and Integration
Understanding the industry’s hesitation toward complex technology adoption, Telestream says it has focused on making its advanced AI solutions accessible and easy to integrate. This approach lowers the barrier to adopting these technologies, enabling media entities to adapt and innovate quickly.

Quality Control
Telestream is offering updates to its Qualify QC:

  • IMF Compliance: Enhanced with Netflix Photon support, ensuring Interoperable Master Format (IMF) packages meet critical industry standards.
  • Harding FPA Test: Incorporates detection capabilities for potentially epileptic content, prioritizing viewer safety.
  • Dolby E Presence and Dolby Vision Validation: Verifies the inclusion of Dolby E audio and the accuracy of Dolby Vision metadata, guaranteeing top-notch audiovisual experiences.
  • Rude Word Detection: A new tool to screen and flag unsuitable language, ensuring content suitability for all audiences.

These enhancements to Qualify QC reflect Telestream’s commitment to advancing quality control processes, making it simpler for media professionals to deliver content that is not only compliant but is also of the highest quality and delivers the best viewer experience.

Live Capture in the Cloud
Telestream is introducing a new cloud-based Live Capture as a Service offering that is designed to simplify the live capture of content from any location in real time, allowing production teams to bypass the traditional hurdles of remote setup and maintenance. With this solution, media companies can overcome the limitations of traditional physical infrastructure, facilitating a faster transition from live capture to broadcast and optimizing production workflows. This new flexibility not only accelerates content delivery but also empowers companies to capture and monetize additional content.

Emerging Protocols
Telestream is introducing the Inspect Monitoring Platform, a monitoring solution crafted for SMPTE ST 2110, SRT and NDI protocols. The platform offers a comprehensive solution for continuous media stream integrity monitoring, in-depth issue analysis and strategic optimization of broadcasting and streaming operations. This approach enables production companies to detect, diagnose and optimize high-quality content delivery across all platforms and protocols.

Managing Media Storage via Diva 9
As the media industry transitions to cloud storage, organizations have to navigate integrating cloud solutions with existing infrastructures, all while safeguarding their assets and ensuring operational continuity. By adopting a hybrid storage strategy, these organizations can strike a balance between ensuring seamless access to content, optimizing storage costs and implementing robust disaster recovery protocols. The challenge lies in executing this balance effectively.

Diva 9 addresses these critical pain points associated with moving content to the cloud by offering a seamless, hybrid content management solution. It facilitates the smart transition of media assets between on-premises and cloud environments, leveraging intelligent media storage policies, advanced Elasticsearch search functions and integrations with MAM, automation and other cloud systems. This approach ensures the secure and scalable storage of content and improves accessibility and cost-effectiveness.
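
Diva 9’s Elasticsearch-backed search is the piece most easily illustrated in code. As a rough, hypothetical sketch (the index name, fields and endpoint below are placeholders, not Diva’s actual schema), a metadata query against an Elasticsearch index might look like this:

```python
# Hypothetical sketch: querying an Elasticsearch index of media-asset
# metadata. Index name, fields and endpoint are illustrative placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

results = es.search(
    index="media-assets",
    query={
        "bool": {
            "must": [{"match": {"title": "postcard earth"}}],
            "filter": [{"term": {"location": "cloud"}}],  # on-prem vs. cloud tier
        }
    },
)
for hit in results["hits"]["hits"]:
    print(hit["_source"].get("title"), hit["_source"].get("location"))
```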

HPA Tech Retreat 2024: Networking and Tech in the Desert

By Randi Altman

Late last month, many of the smartest brains in production and post descended on the Westin Rancho Mirage Golf Resort & Spa in Palm Springs for the annual HPA Tech Retreat. This conference is built for learning and networking; it’s what it does best, and it starts early. The days begin with over 30 breakfast roundtables, where hosts dig into topics — such as “Using AI/ML for Media Content Creation” and “Apprenticeship and the Future of Post” — while the people at their table dig in to eggs and coffee.

Corridor Digital’s Niko Pueringer

The day then kicks further into gear with sessions; coffee breaks inserted for more mingling; more sessions; networking lunches; a small exhibit floor; drinks while checking out the tools; dinners, including Fiesta Night and food trucks; and, of course, a bowling party… all designed to get you to talk to people you might not know and build relationships.

It’s hard to explain just how valuable this event is for those who attend, speak and exhibit. Along with Corridor Digital’s Niko Pueringer talking AI as well as the panel of creatives who worked on Postcard from Earth for the Las Vegas Sphere, one of my personal favorites was the yearly Women in Post lunch. Introduced by Fox’s Payton List, the panel was moderated by Rosanna Marino of IDC LA and featured Daphne Dentz from Warner Bros. Discovery Content Creative Services, Katie Hinsen from Marvel and Kylee Peña from Adobe. The group talked about the changing “landscape of workplace dynamics influenced by #metoo, the arrival of Gen Z into the workforce and the ongoing impact of the COVID pandemic.” It was great. The panelists were open, honest and funny. A definite highlight of the conference.

We reached out to just a few folks to get their thoughts on the event:

Light Iron’s Liam Ford
My favorite session by far was the second half of the Tuesday Supersession. Getting an in-depth walk-through of how AI is currently being used to create content was truly eye-opening. Not only did we get exposed to a variety of tools that I’ve never even heard of before, but we were given insights on what the generative AI components were actually doing to create these images, and that shed a lot of light on where the potential growth and innovation in this process is likely to be concentrated.

I also want to give a shoutout to the great talk by Charles Poynton on what quantum dots actually are. I feel like we’ve been throwing this term around a lot over the last year or two, and few people, if any, knew how the technology was constructed at a base layer.

Charles Poynton

Finally, my general takeaway was that we’re heading into a bit of a Wild West over the next three years.  Not only is AI going to change a lot of workflows, and in ways we haven’t come close to predicting yet, but the basic business model of the film industry itself is on the ropes. Everyone’s going to have to start thinking outside the box very seriously to survive the coming disruption.

Imax’s Greg Ciaccio
Each year, the HPA Tech Retreat program features cutting-edge technology and related implementation. This year, the bench of immensely talented AI experts stole the show.  Year after year, I’m impressed with the practical use cases shown using these new technologies. AI benefits are far-reaching, but generative AI piqued my interest most, especially in the area of image enhancement. Instead of traditional pixel up-rezing, AI image enhancements can use learned images to embellish artists’ work, which can iteratively be sent back and forth to achieve the desired intent.

It’s all about networking at the Tech Retreat.

3 Ball Media Group’s Neil Coleman
While the concern about artificial intelligence was palpable in the room, it was the potential in the tools that was most exciting. We are already putting Topaz Labs Video AI into use in our post workflow, but the conversations are what spark the most discovery. Discussing needs and challenges with other attendees at lunch led to options that we hadn’t considered when trying to get footage from the field back to post. It’s the people that make this conference so compelling.

IDC’s Rosanna Marino
It’s always a good idea to hear the invited professionals’ perspectives, knowledge and experience. However, I must say that the 2024 HPA Tech Retreat was outstanding. Every panel, every event was important and relevant. In addition to all the knowledge and information taken away, the networking and bonding was also exceptional.

Picture Shop colorist Tim Stipan talks about working on the Vegas Sphere.

I am grateful to have attended the entire event this year. I would have really missed out otherwise. The variation of topics and how they all came together was extraordinary. The number of attendees gave it a real community feel.

IDC’s Mike Tosti
The HPA Tech Retreat allows you to catch up on what your peers are doing in the industry and where the pitfalls may lie.

AI has come a long way in the last year, and it is time we start learning it and embracing it, as it is only going to get better and more prevalent. There were some really compelling demonstrations during the afternoon of Supersession.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 


Puget Systems Debuts Custom Laptops and SDS Storage

Puget Systems has expanded its product offerings beyond custom desktop workstations into the mobile computing market with the introduction of an entirely new category of custom mobile workstations.

Debuting at this year’s HPA Tech Retreat in Palm Springs, the new Puget Mobile 17-inch will feature high-performance hardware with Intel’s Core i9 14900HX CPU and Nvidia’s GeForce RTX 4090 mobile GPU, all built into a notebook chassis. The 17.3-inch QHD screen has a 240Hz refresh rate and high color accuracy. This combination of high-performance components makes the Puget Mobile 17-inch a good solution for content creators who demand performance, reliability, quality and ultra-smooth workflows in a mobile form factor.

According to Puget Systems, this move signals the expansion of its strategy to provide broader, more comprehensive solutions for its users’ workflow and performance requirements as they continually seek more flexible, reliable and powerful systems. Based on customer feedback, Puget is looking to partner with companies its users trust for white-glove service, support and industry-specific expertise.

Throughout the early development process of the new Puget Mobile 17-inch, the Puget Labs and R&D teams worked closely with select users from multiple industries to collect feedback and ensure they were on track.

“This laptop is about as close as you can get to the performance of a PC tower while actually having something that still works as a laptop,” reports Niko Pueringer, the co-founder of Corridor Digital, who has been using Puget computers for years. “And it provided all the qualities I’d expect out of a Puget system. Oh, and I also like that it’s not loaded up with promotional bloatware…

“There are a lot of machines out there with high specs. Anyone (with enough money) can buy a 4090 and sling it in a case,” continues Pueringer. “What makes Puget special is that all the supporting pieces get the attention they deserve. With Puget, I know that I don’t have any hidden compromises or bottlenecks. All my USB ports will work at the same time. The heat management is capable of handling 100% loads for extended time. I know that all the pipes between the shiny GPUs and CPUs are big and beefy and ready to handle anything I throw at it. This laptop was no exception.”

The Puget Mobile 17-inch custom laptops will be available for configuration for a wide range of applications beginning in Q2.

Embracing Storage, MAM and Archiving
At HPA, Puget has also debuted a new family of custom software-defined storage (SDS) solutions. The new Puget Storage solution — in partnership with OSNexus — uses OSNexus’ QuantaStor platform to provide scalable and agile media asset storage for both on-site and remote users.

Available in a 12-bay and a 24-bay 2U form factor, Puget Storage solutions are capable of up to 1.5TB of RAM and provide growing and established studios with simple, flexible storage with end-to-end security. These scalable, agile media asset storage solutions are ideal for post workflows, media asset management applications and archival services with stringent requirements for the ideal combination of capacity, performance, security and scalability.

Partnering with OSNexus to integrate its QuantaStor platform provides Puget Storage users with a number of key benefits, including:

  • Storage grid technology: Grid technology unifies management of QuantaStor systems across racks, sites and clouds.
  • Security: Advanced RBAC and end-to-end encryption support; complies with NIST 800-53, NIST 800-171, HIPAA and CJIS; FIPS 140-2 L1 certified.
  • Hardware integration: QuantaStor is integrated with a broad range of systems and storage expansion units, including Seagate, Supermicro and Puget Systems rackmount storage platforms for media and entertainment.
  • Scalable: Integrated with enterprise-grade open storage technologies (Ceph and ZFS)
  • Unified file, block and object: All major storage protocols are supported, including NFS/SMB, iSCSI/FC/NVMeoF and S3.
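
To make the object side of that last point concrete, here is a hedged sketch of what talking to an S3-compatible endpoint looks like with Python’s boto3 client; the endpoint URL, credentials, bucket and file names are all placeholders rather than QuantaStor specifics:

```python
import boto3

# Point the standard AWS SDK at an on-prem S3-compatible endpoint.
# All connection details below are illustrative placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.local",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload a mezzanine file, then list the bucket's contents.
s3.upload_file("final_master.mxf", "media-archive", "masters/final_master.mxf")
for obj in s3.list_objects_v2(Bucket="media-archive").get("Contents", []):
    print(obj["Key"], obj["Size"])
```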

The new Puget Storage SDS solutions will be available for configuration for a wide range of applications beginning in Q2.

 

 

 



Wasabi Acquires GrayMeta’s Curio AI: Intelligent Storage Product to Come

Cloud storage company Wasabi Technologies has acquired Curio AI from GrayMeta. The deal includes both the intellectual property and the team behind Curio, including GrayMeta CEO Aaron Edell, who will join Wasabi as SVP of AI and machine learning. Wasabi will incorporate the Curio AI technology into a new class of AI-powered intelligent storage for the media and entertainment industry, which it plans to release this spring.

Curio AI creates a second-by-second index of video stored in Wasabi. “A video archive without detailed metadata is like a library without a card catalog,” says Wasabi CEO David Friend. “This is where AI comes in. AI can find faces, logos, objects and even specific voices. Without it, finding exactly the segments you are looking for requires tedious and time-consuming manual effort. The acquisition of Curio AI will allow us to revolutionize media storage.”

Curio AI is an intelligent data platform that uses AI to generate rich metadata for media libraries and lets editors and producers instantly search and retrieve specific media segments based on people, places, events, emotions, logos, landmarks, background audio and more. Curio AI can also detect and transcribe speech in over 50 spoken languages. Users benefit from more personalized experiences with hyperspecific detail, allowing companies to deliver relevant content as fast as possible.

Wasabi says that Curio AI-powered storage will provide users like Liverpool Football Club with the metadata it needs to manage troves of digital assets quickly.

“AI-powered storage will allow Wasabi customers to instantly find exactly what they need amongst millions of hours of footage and unleash the value in their archives. We believe this will be the most significant advance in the storage industry since the invention of object storage,” says Edell.

“With the acquisition of Curio AI, we are now set to introduce the industry’s first AI-powered intelligent cloud storage,” adds Friend. “Like Wasabi’s standard cloud storage, our Curio AI-powered storage will be simple, fast, reliable and inexpensive.”

 

 


Baselight 6.0

FilmLight Baselight 6.0 With ML-Based Face Track Now Available

FilmLight has released the latest version of its Baselight grading software, Baselight 6.0, which includes an updated timeline; a new primary grading tool, X Grade; a new look development tool, Chromogen; plus a new machine learning (ML)-based tool, Face Track.

Baselight 6.0

Face Track

Using an underlying ML model, Face Track finds and tracks faces in a scene, adjusting as each face moves and turns. It attaches a polygon mesh to each face, allowing perspective-aware tools such as Paint and Shapes to distort with the mesh. This enables the colorist to copy corrections and enhancements made in Face Track to the timeline with a simple copy and paste. These corrections can also be applied to the same face across an entire sequence or episode.

FilmLight has developed a framework called Flexi, which enables the integration of future ML-based tools into Baselight. Also included in Baselight 6.0 are the RIFE ML-based retimer, a reworked Curve Grade, integrated alpha for compositing, a new Gallery for improved searching and sorting, as well as new and enhanced color tools such as Sharpen Luma, a built-in Lens Flare tool, Bokeh for out-of-focus camera effects, Loupe magnification for adjustments, an upgraded Hue Angle and more.


Lobo Uses AI to Create Animated Open for Ciclope Festival

Creative production, design, animation and mixed media studio Lobo created an animated open for Ciclope Festival 2023, which took place in November in Berlin. Blending traditional concepts with AI-enhanced animation techniques, Lobo produced a kaleidoscope of colors and images designed to show off the artistry on display at this year’s show.

The Ciclope Festival is a three-day live event focusing on the advertising and entertainment industries. The recurring theme each year is craft, with 2023 emphasizing artificial intelligence.

“We are all talking about how AI will influence our work and our lives,” explains Francisco Condorelli, founder/organizer of Ciclope. “Lobo developed its titles around that idea using machine learning technology.”

The process began with the creation of 3D models using Autodesk Maya. These initial structures and visual elements were used to craft the basic environment and figures of the animation. Lobo then used Stable Diffusion.

At the core of this process was the use of LoRA, a method known for its efficiency in adapting large neural network models for specific tasks. In this project, LoRA was called on to learn from unique and original artworks created by Lobo’s artists. This method allowed the AI to capture the creative essence and stylistic details of these pieces, effectively using these insights to refine and enhance the 3D models. Through LoRA, the team was able to integrate artistic nuances into the animation, ensuring what they say was a seamless blend of art and technology.
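
Lobo’s exact training setup isn’t public, but the core LoRA mechanism is simple to sketch: the pretrained weight matrix is frozen, and only a small low-rank update is trained on the reference artwork. A minimal, illustrative PyTorch version (shapes and hyperparameters are assumptions):

```python
# Minimal sketch of the LoRA idea: the frozen base weight W is augmented
# with a trainable low-rank update B @ A, so fine-tuning on a small set of
# original artworks only touches a few parameters. Shapes are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = W x + scale * B A x; only the low-rank path learns the style.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
y = layer(torch.randn(1, 768))
```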

After using LoRA, Lobo used ControlNet as a precision-guiding tool. ControlNet meticulously oversaw the translation of artistic vision into the animated models, ensuring each nuance was accurately reflected. This system was key in aligning the final animations with the intended aesthetic objectives, enabling a faithful and resonant representation of the artists’ original concepts.
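
For readers who want a concrete picture of ControlNet-guided generation, here is a minimal sketch using the open-source Hugging Face diffusers library. The checkpoints and conditioning image are common public examples, not Lobo’s actual pipeline:

```python
# Minimal sketch of ControlNet-guided Stable Diffusion with diffusers.
# Checkpoints and file names are common public examples, used here
# purely for illustration.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# A depth (or edge) render of the 3D scene acts as the structural guide.
control_image = load_image("maya_scene_depth.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "kaleidoscopic title sequence in the studio's house style",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("frame_0001.png")
```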

Lobo is no stranger to incorporating advanced technology in its work. For the Google Pixel Show 2023, Lobo was commissioned to produce a teaser for the event. Uniting five of its directors, Lobo brainstormed challenging concepts inspired by the arrival of AI-based image-making technologies. The subsequent short used different styles and techniques, from the figurative to the completely abstract, but they all shared the use of AI tools.

For Unfair City, a short film created with BBDO Dublin, Lobo used AI to highlight the growing inequality of homelessness.

However, the expanding use of artificial intelligence remains just one tool in Lobo’s toolbox.

For VR Vaccine, Lobo was tasked with alleviating a child’s fear of taking vaccine shots. By creating an immersive fantasy world, Lobo was able to position the child as the hero of her own story, using the vaccine as a shield to protect the realm from invaders. The use of a headset and smartphone was integral in creating this environment.

Lobo was also engaged to launch the new Volvo S60. Using WebAR, Lobo simulated a virtual store, including car customizations, test drive scheduling and financing simulations.

 



How AI-Powered Content Storage Can Revolutionize M&E

By Jonathan Morgan

In the world of media and entertainment, the way content is stored and managed is undergoing a profound transformation, thanks to the integration of AI. As the demand for high-quality, diverse and personalized content continues to surge, traditional storage methods are proving inadequate. AI is emerging as the game-changer, redefining how media assets are stored, organized and accessed. This shift in content storage is not just about managing data efficiently; it’s about unleashing the true potential of creativity and innovation in the industry.

Jonathan Morgan

In this article, I will shed light on how AI-powered storage solutions are helping media companies to stay competitive, deliver more compelling content and engage audiences in innovative ways.

The Evolution of Content Storage and AI in Post Production
In the post world, keeping an active archive of original video footage, including additional shots and camera angles, for an extended period of time is crucial for monetization. Archived content can easily be repurposed, personalized based on audience interests and monetized — and that’s one area where AI is moving the needle.

Traditionally, AI processing services have focused on providing media companies with a way to use AI-embedded products in the cloud. While cloud-enabled apps, services and tools have become invaluable in post production for their ability to help companies meet deadlines and reduce operational costs, the cloud has unpredictable costs. The time and effort needed to upload and download in the public cloud, not to mention the egress fees, have made the cloud more expensive than previously anticipated, offsetting many of its benefits through unnecessary complexity. Using AI at the edge, by contrast, saves post houses a substantial amount of money and time, which is why many are now making that move.

How AI-powered Content Storage Is Transforming the M&E Industry
AI-driven content storage solutions offer a multitude of benefits for post. One way AI storage is revolutionizing post production is by enabling content processing at the edge in ways that could never have been imagined. A massive amount of video content is generated on-site rather than in the cloud, whether at a production studio, a film set or a sports arena. Instead of uploading data to the cloud, waiting for off-site decisions to be made and then sending it back, the edge allows informed decisions to be made in real time, right at the point of data collection. By feeding a live camera transmission at a sports event into an AI-powered local storage system, content creators can quickly determine the most important shots and then send them back to the studio for live broadcasting, highlights or distribution at a later stage.
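
A minimal sketch of that edge pattern, with a placeholder scoring function standing in for whatever model a facility actually runs (the feed URL and threshold are hypothetical):

```python
import cv2

def score_frame(frame) -> float:
    # Placeholder for an on-prem ML model (player/ball/face detection, etc.).
    # Here: a crude sharpness/detail proxy via Laplacian variance.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture("srt://camera-feed.example:9000")  # hypothetical live feed
highlights = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if score_frame(frame) > 500.0:   # threshold would be tuned per venue
        highlights.append(frame)     # queue these for send-back to the studio
cap.release()
print(f"kept {len(highlights)} candidate highlight frames")
```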

By now, most of us are familiar with personalized content at the level of program selection: Netflix has made a fine art out of recommending programs for us to watch. However, what if the news program we were watching was curated with news items based on our personal interests? Or a program was delivered in the exact language we wanted to hear it? AI is enabling this type of interaction with programming by feeding back our preferences into the algorithms. But all of this increased choice requires state-of-the-art storage and solutions to feed those algorithms.

AI operations (AIOps) is another example of innovation in the post environment. Post houses are continually striving to reduce costs, and a key area that incurs cost and risk is tier 1 storage. With AIOps, post houses can apply big data analytics and machine learning toward determining the best storage tier for the use case at hand. AIOps enables a post house to automatically move video assets to tier-one storage, which offers ultra-fast access for editing. When the editing phase is over, AIOps will transfer the video content to object storage, which provides robust protection against malicious attacks and a reduced cost compared with tier 1 storage. Not only does AIOps decrease costs and risk from attacks, it also reduces the amount of time post houses spend managing the availability of content, freeing them up for more creative tasks.
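
The tiering logic itself is straightforward to sketch. The policy below is an assumption for illustration: assets idle for more than 14 days are demoted from tier 1 to object storage, with a print standing in for the real storage API call:

```python
import time
from dataclasses import dataclass

TIER1_MAX_IDLE = 14 * 24 * 3600  # assumed policy: demote after 14 idle days

@dataclass
class Asset:
    name: str
    tier: str           # "tier1" (fast, expensive) or "object" (cheap, hardened)
    last_access: float  # epoch seconds

def apply_policy(assets: list[Asset]) -> None:
    # Demote assets that editors are no longer touching; in production the
    # assignment below would be a call to the storage vendor's API instead.
    now = time.time()
    for asset in assets:
        if asset.tier == "tier1" and now - asset.last_access > TIER1_MAX_IDLE:
            asset.tier = "object"
            print(f"demoted {asset.name} to object storage")

apply_policy([Asset("ep104_interview.mov", "tier1", time.time() - 30 * 24 * 3600)])
```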

AI is also empowering advanced capabilities such as automated object recognition, editing and semantic search. Semantic search allows post houses to find relevant content dramatically faster. Editors can instantly locate every slam dunk in a basketball game where the player then turned around to the camera and smiled. Those clips can be made into a personalized highlight reel for audiences who want to watch all of the slam dunks and player reactions. In addition, semantic search improves content discovery, enabling viewers to find the exact content they are looking for.
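
Under the hood, semantic search of this kind typically matches a text query embedding against precomputed per-segment embeddings. A toy sketch with cosine similarity (the embeddings and segment index here are random placeholders):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between a query embedding and a segment embedding.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy per-segment index: (clip id, start time in seconds, embedding).
rng = np.random.default_rng(0)
index = [
    ("game_cam2", 312.0, rng.random(512)),
    ("game_cam2", 845.0, rng.random(512)),
    ("game_cam5", 101.0, rng.random(512)),
]

def search(query_embedding: np.ndarray, top_k: int = 2):
    # Rank segments by similarity to the query and return the best matches.
    ranked = sorted(index, key=lambda seg: cosine(query_embedding, seg[2]), reverse=True)
    return ranked[:top_k]

# In practice the query embedding would come from the same model that
# embedded the segments, e.g. embed("slam dunk, player smiles at camera").
for clip, start, _ in search(rng.random(512)):
    print(f"{clip} @ {start:.0f}s")
```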

AI-powered algorithms allow post houses to rapidly identify objects, scenes, faces and text within media files, resulting in faster, more efficient retrieval of media content. By automating repetitive and time-consuming tasks, such as object recognition, post houses can focus on their visions and maximize their creativity.

Generative AI (GenAI) is a term that everyone has heard in relation to ChatGPT, and the technology is bringing innovation to the post space by simplifying the creation of edits, lighting effects and even entire video scenes. Visual effects and lighting can make or break a post studio. If light is missing in one scene and the next scene is suddenly well-lit, the inconsistency disrupts the viewing experience. With GenAI, editors can automatically pinpoint which scenes require additional lighting and visual effects and then insert what’s needed into a newly rendered version. GenAI can be faster and more accurate than the human eye at detecting which effects are missing because it is built to process vast amounts of content. By adopting GenAI for visual effects, post houses can deliver a high-quality production at the lowest possible price.

Conclusion
As the technology continues to evolve, post houses that harness the power of AI in content storage will lead the charge into the future of media and entertainment. AI-powered content storage systems allow post houses to access video footage faster, more efficiently and cost-effectively, helping them find the relevant clips they need to create entertaining content.


Jonathan Morgan is senior VP of product and technology at Perifery, a DataCore Company.


Lenovo ThinkStation P8: Threadripper Pro 7000 WX, Nvidia RTX GPUs

Lenovo’s new ThinkStation P8 tower workstation features AMD Ryzen Threadripper Pro 7000 WX Series processors and Nvidia RTX GPUs. The ThinkStation P8 builds on the P620, one of the first workstations powered by AMD Ryzen Threadripper Pro processors. In addition to its compute power, the ThinkStation P8 features an optimized thermal design in a versatile Aston Martin-inspired chassis.

Designed for high-intensity environments, the Lenovo ThinkStation P8 is powered by the latest AMD Ryzen Threadripper Pro 7000 WX Series processors built on the leading 5nm “Zen 4” architecture and featuring up to 96 cores and 192 threads. The new sleek, sturdy, rack-optimized chassis offers larger Platinum-rated power supply options to handle more demanding expansion capabilities. For example, it can support up to three Nvidia RTX 6000 Ada generation GPUs to help reduce time to completion in graphics-intensive applications like real-time raytracing, video rendering, simulation or computer-aided design. The combined power also opens up immersive environments, including digital worlds, AR/VR content creation and advanced AI model development.

“The Lenovo ThinkStation P620 with AMD Threadripper Pro technology has been an absolute game-changer for our 3D animation and development workflows over the last two years,” says Bill Ballew, CTO from DreamWorks Animation. “We are looking forward to significantly faster iterations due to the increased performance with the new ThinkStation P8 workstation powered by AMD Threadripper Pro 7000 WX Series in this coming year.”

Configurations
In addition to AMD Ryzen Threadripper Pro 7000 WX Series processors and Nvidia RTX Ada generation GPUs, ThinkStation P8 includes ISV certifications and supports Windows 11 and popular Linux operating systems. It features a range of storage and expansion capabilities that provide flexible and tailored configurations. Highly customizable options allow users to select the best components to handle complex and demanding tasks efficiently. Also, easy access and tool-less serviceability provide scalability and quick replacement of many components.

The P8 workstation can accommodate up to seven M.2 PCIe Gen 4 SSDs with RAID support or up to three HDDs for large-capacity storage, up to 2TB of DDR5 memory with octa-channel support and seven PCIe slots, including six PCIe Gen 5 slots that offer faster connectivity. The workstation features lower latency and more expansion capability and includes 10 Gigabit Ethernet onboard to help eliminate network bottlenecks.

ThinkStation P8, like all Lenovo desktop workstations, includes built-in hardware monitoring accessible through ThinkStation diagnostics software, and Lenovo Performance Tuner comes with numerous profiles to optimize many ISV applications. ThinkStation P8 also supports Lenovo’s ThinkShield security offerings, which provide protection from BIOS to cloud. Additionally, rigorous testing standards, Premier Support and extended warranty options are available. Users can further manage their investment through Lenovo TruScale, which simplifies procurement, deployment and management of fully integrated IT solutions, all delivered as a service with a scalable, pay-as-you-go model.

ThinkStation P8 will be available starting Q1 2024.

 

 


Embracing AI at NAB New York

By Molly Connolly

At NAB New York 2023, the buzz on the floor was AI – it was here, there and everywhere, and the conversation was about how it needs to be controlled, legislated, harnessed and monetized. Scary stuff … sci-fi happening today.

From my perspective, AI is not an either/or situation, it’s an either/and situation. Moreover, I like taking a human view of AI. Specifically, how it benefits us, especially in post production.

Those of us who are or have been in technical product marketing know the thought process and unfortunate effects of FUD: fear, uncertainty and doubt. AI is now in the FUD business. You can read, listen to podcasts and watch the news, and when the topic comes to AI, the FUD is flying.

On the positive side, AI enables. AI frees creative minds to imagine, to create and to use their high-value skills for amazing visuals. Think about all of those mundane, repetitive tasks — such as rotoscoping and keyframing — that are required. Yup, AI can come to the rescue. While walking the show floor at NAB New York, my ears perked up hearing how AI-enabled software and hardware products are making post production faster, better and easier — creating real time to value.

At the show, I met with Avid’s Dave Colantuoni, and we had a lively discussion about how AI is now enabling two of Avid’s software products: PhraseFind and ScriptSync. It was music to my ears that the very premise of AI is foundational to these new software products. We passionately agreed that AI is a tool, an enabler and a partner in the process.

In a write-up by Avid’s Rob Gonsalves, “Avid and the Future of AI: Faster Media Creation,” Gonsalves sums it up nicely when discussing the ever-present FUD about AI taking jobs away from humans. He likens it to a creative assistant in every step of the creative process. It is additive not subtractive.

Years ago, the Avid tagline was “Make, Manage, Move Media,” and today, with AI-enabled PhraseFind and ScriptSync, post pros can use AI as their own creative assistant to accelerate their time to revenue or their time to going home at a reasonable hour.

Hats off to Avid, Blackmagic Design (and its Neural Engine) and the other companies at NAB New York that are embracing the positives and not the negatives.


Molly Connolly‘s experience includes roles in strategic alliance solution marketing/sales at Dell Technologies, AMD, HP, Compaq and Digital Equipment, focused heavily on the M&E industry. She is currently happily retired. 

Perifery Parent Buys WIN: Automated Workflows via Cloud and AI

DataCore Software has acquired Workflow Intelligence Nexus (WIN), a workflow services and software firm that helps users deploy and automate media workflows using the latest cloud-based and AI-powered solutions. Acquiring WIN extends DataCore’s Perifery business unit, which specializes in managing data across the core to the cloud and the edge for high-growth markets, including media and entertainment. This acquisition is the second of the year for Perifery, following its purchase of Object Matrix.

According to Abhi Dey, GM/COO of Perifery, “WIN has deep roots in the media and entertainment sector; combined with our existing core technologies in hybrid cloud and edge, we will deliver an even more powerful solution portfolio to transform the industries we play in.”

Gartner predicts that by 2026, large enterprises will triple their unstructured data capacity across their on-premises, edge and public cloud locations compared to 2023. With the anticipated growth, Perifery says that removing manual processes and operational obstacles is key to optimizing business outcomes. Organizations need to evolve with automated workflows using constructs like AI that enable faster decision-making and accelerate the execution of their goals, and that’s what this partnership is all about.

WIN has already partnered with media and entertainment companies like Iconic Media and Simple DCP. And with WIN and Perifery solutions, media organizations can evolve from using repetitive manual processes to automated workflows.

“We’re excited to join forces with Perifery, who shares our vision to help customers harness the full potential of their data through optimized workflows,” says Jason Perr, CEO of WIN. “By bringing our AI capabilities to Perifery’s arsenal of tools, we’ll be able to provide an even more robust offering on a global scale.”

Main Image: Perifery’s Abhi Dey and WIN’s Jason Perr at NAB New York

 

Foundry Ships Nuke 15.0, Intros Katana 7.0 and Mari 7.0 

Foundry has released Nuke 15.0 and will be releasing Katana 7.0 and Mari 7.0. This coordinated approach, says the company, offers better support for upgrading to current production standards and brings enhancements for artists, including faster workflows and increased performance.

According to Foundry, updates to Nuke result in faster creative iteration thanks to native Apple silicon, offering up to 20% faster processing speeds. In addition, training speeds in Nuke’s CopyCat machine learning tool have been boosted by up to 2x.

Mari 7.0’s new baking tools will help artists create geometry-based maps at speed without the need for a separate application. USD updates in Katana 7.0 will minimize the friction and disruption of switching between applications, enabling a more intuitive and efficient creative experience.

Foundry’s new releases support standards across the industry, including compliance with VFX Reference Platform 2023. Foundry is currently testing its upcoming releases on Rocky 9.1 and on matching versions of Alma and RHEL.

Foundry is offering dual releases of Nuke and Katana, enabling clients to use the latest features in production immediately, while testing their pipelines against the latest Linux releases. Nuke 15.0 is shipping with Nuke 14.1, and Katana 7.0 will release along with Katana 6.5. These dual releases offer nearly identical feature sets but with different VFX Reference Platform support.

Foundry is also introducing a tech preview of OpenAssetIO in Nuke 15.0 and 14.1 to support pipeline integration efforts and streamline workflows. Managed by the Academy Software Foundation, OpenAssetIO is an open-source interoperability standard for tools and content management systems that will simplify asset and version management, making it easier for artists to locate and identify the assets they require.

Summary of New Nuke Features:

  • Native Apple silicon support — Up to 20% faster general processing speeds and GPU-enabled ML tools, including CopyCat, in Nuke 15.0.
  • Faster CopyCat training — With new distributed training, it’s faster to share the load across multiple machines using standard render farm applications, and compressing image resolution to reduce file sizes enables up to 2x faster training.
  • USD-based 3D system improvements (beta) — Improvements include a completely new viewer selection experience with dedicated 3D toolbar and two-tier selections, a newly updated GeoMerge node, updated ScanlineRender2, a new Scene Graph pop-up in the mask knob, plus USD updated to version 23.05.
  • Multi-pixel Blink effects in the timeline — Only in Nuke Studio and Hiero. Users can apply and view Blink effects, such as LensDistortion and Denoise, at the timeline level, so there’s no need to go back and forth between the timeline and comp environments.
  • OCIO version 2.2 — Adds support for OCIO configs to be used directly in a project in Nuke 15.0.

What’s Coming in Katana:

  • USD scene manipulation — Building on the same underlying architecture as Nuke’s new 3D system, Katana will have the pipeline flexibility that comes with USD 23.05.
  • Multi-threaded Live Rendering — With Live Rendering now multi-threaded and compatible with Foresight+, artists can benefit from improved performance and user experience.
  • Optimized Geolib3-MT Runtime — New caching strategies prevent memory bloat and minimize downtime, ensuring the render will fit on the farm.

What’s Coming in Mari:

  • New baking tools — They cut out the need for a separate application or plugin, so users can create geometry-based maps including curvatures and occlusions with ease and speed.
  • Texturing content — With new Python Examples and more procedural nodes, users can access an additional 60 grunge maps, courtesy of Mari expert Johnny Fehr.
  • Automatic project backups — With regular autosaving, users can revert to any previously saved state, either locally or across a network.
  • Upgraded USD workflows — Reducing pipeline friction, the USD importer is now more artist-friendly, plus Mari now supports USD 23.05.
  • Shader updates — Shaders for both Chaos Group’s V-Ray 6 and Autodesk’s Arnold Standard Surface have been updated, ensuring what users see in Mari is reflected in the final render.
  • Licensing improvements — Team licensing is now available, enabling organization admins to manage the usage of licenses for Mari.

Nuke Trial Extension
With slates and projects being paused across the industry, Foundry is extending its free Nuke 15.0 trial from 30 to 90 days for a limited period. Sign up here.

 

Adobe Max 2023: A Focus on Creativity and Tools, Part 1

By Mike McCarthy

Adobe held its annual Max conference at the LA Convention Center this week. It was my first time back since COVID, but Adobe hosted an in-person event last year as well. The Max conference is focused on creativity and is traditionally where Adobe announces and releases the newest updates to its Creative Cloud apps.

As a Premiere editor and Photoshop user, I am always interested in seeing what Adobe’s team has been doing to improve its products and improve my workflows. I have followed Premiere and After Effects pretty closely through Adobe’s beta programs for over a decade, but Max is where I find out about what new things I can do in Photoshop, Illustrator and various other apps. And via the various sessions, I also learn some old things I can do that I just didn’t know about before.

The main keynote is generally where Adobe announces new products and initiatives as well as new functions to existing applications. This year, as you can imagine, was very AI-focused, following up on the company’s successful Firefly generative AI imaging tool released earlier this year. The main feature that differentiates Adobe’s generative AI tools from various competing options is that the resulting outputs are guaranteed to be safe to use in commercial projects. That’s because Adobe owns the content that the models are trained on (presumably courtesy of Adobe Stock).

Adobe sees AI as useful in four ways: broadening exploration, accelerating productivity, increasing creative control and including community input. Adobe GenStudio will now be the hub for all things AI, integrating Creative Cloud, Firefly, Express, Frame.io, Analytics, AEM Assets and Workfront. It aims to “enable on-brand content creation at the speed of imagination,” Adobe says.

Firefly

Adobe has three new generative AI models: Firefly Image 2, Firefly Vector and Firefly Design. The company also announced that it is working on Firefly Audio, Video and 3D models, which should be available soon. I want to pair the 3D one with the new AE functionality. Firefly Image 2 has twice the resolution of the original and can ingest reference images to match the style of the output.

Firefly Vector is obviously for creating AI-generated vector images and art.

But the third one, Firefly Design, deserves further explanation. It generates a fully editable Adobe Express template document with a user-defined aspect ratio and text options. The remaining fine-tuning for a completed work can be done in Adobe Express.

Firefly Design

For those of you who are unfamiliar, Adobe Express is a free cloud-based media creation and editing application, and that is where a lot of Adobe’s recent efforts and this event’s announcements have been focused. It is designed to streamline the workflow for getting content from the idea stage all the way to publishing on the internet, with direct integration to many various social media outlets and a full scheduling system to manage entire social marketing campaigns. It can reformat content for different deliverables and even automatically translate it into 40 different languages.

As more and more of Photoshop and Illustrator’s functionality gets integrated into Express, Express will probably begin to replace them as the go-to for entry-level users. And as a cloud-based app accessed through a browser, it can even be used on Chromebooks and other non-Mac and Windows devices. And Adobe claims that via a partnership with Google, the Express browser extension will be included in all new Chromebooks moving forward.

Photoshop for Web is the next step beyond Express, integrating even more of the application’s functions into a cloud app that users can access from anywhere, once again, also on Chrome devices. Apparently, I’m an old-school guy who has not yet embraced the move to the cloud as much as I could have, but given my dissatisfaction with the direction the newest Microsoft and Mac OS systems are going, maybe browser-based applications are the future.

Similarly, as a finishing editor, I have real trouble posting content that is not polished and perfected, but that is not how social media operates. With much higher amounts of content being produced in narrow time frames, most of which would not meet the production standards I am used to, I have not embraced this new paradigm. That’s why I am writing an article about this event and not posting a video about it. I would have to spend far too much time reframing each shot, color-correcting and cleaning up any distractions in the audio.

Firefly Generative Fill

For desktop applications, within the full version of Photoshop, Firefly-powered generative fill has replaced content-aware fill. You can now use generative fill to create new overlay layers based on text prompts or remove things by overlaying AI-generated background extensions. AI can also add reflections and other image processing. It can “un-crop” images via Generative Expand. Separately, gradients are now fully editable, and there are now adjustment layer presets, including user-definable ones.

Illustrator can now identify fonts in rasterized and vectorized images and can even edit text that has already been converted to outlines. It can convert text to color palettes for existing artwork. It can also use AI to generate vector objects and scenes that are fully editable and scalable, and it can even take existing images as input to match stylistically. There is also a new cloud-based web version of Illustrator coming to public beta.

Text-based editing in Premiere

From the video perspective, the news was mostly familiar to existing public beta users or to those who followed the IBC announcements: text-based editing, pause and filler word removal, and dialog enhancement in Premiere. After Effects is getting true 3D object support, so my session schedule focused on learning more about the workflows for using that feature. You need to create and texture models and then save them as GLB files before you can use them in AE. And you need to set up the lighting environment in AE before they will look right in your scene. But I am looking forward to being able to use that functionality more effectively on my upcoming film postviz projects.
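
If that GLB prep step needs scripting, an open-source library such as trimesh can handle the conversion; a minimal sketch with placeholder file names, not an Adobe-documented workflow:

```python
import trimesh

# Load an existing textured model and re-export it as binary glTF (.glb),
# the format After Effects' 3D importer expects. File names are placeholders.
scene = trimesh.load("postviz_prop.obj")
scene.export("postviz_prop.glb")
```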

I will detail my experience at Day 2’s Inspiration keynote as well as the tips and tricks I learned in the various training sessions in a separate article. At the time of this writing, I still had one more day to go at the conference. So keep an eye out. The second half of my Max coverage is coming soon.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 10 years later.

 

Review: AMD Radeon Pro W7800 and W7900 GPUs

By Brady Betzel

The main players in the discrete GPU game, AMD and Nvidia, have released a barrage of new GPUs this past year. From the Nvidia 4090 Founders Edition I reviewed last October to the latest AMD W7800 and W7900, technology and energy efficiency have improved dramatically.

With AI on the forefront of everyone’s mind — whether it is because of the questionable deep fake videos or the amazing ability to take hours of work down to minutes when using Magic Mask in Blackmagic’s DaVinci Resolve — one of the most important pieces of hardware you can have is a powerful GPU.

AMD has always been in the race with Nvidia, but once Apple decided to work internally and create its own GPU, AMD struggled to find its footing… until now. The AMD Radeon Pro W7800 and W7900 GPUs are the latest professional GPUs from the company, and they are powerful. The AMD Radeon Pro W7800 is a 32GB GPU that retails for $2,499 (from online retailer B&H Photo), while the AMD Radeon Pro W7900 48GB GPU retails for $3,999 (also from B&H). Yes, the prices may give you a bit of sticker shock if you are comparing consumer-level cards like the Nvidia 4090, but for those in need of an enterprise-level, professional workstation-compatible GPU, $3,999 is actually pretty reasonable for the best. For comparison, the Nvidia RTX 6000 Ada retails for just under $7,000. But AMD isn’t trying to beat Nvidia at the moment; it is providing a much more reasonably priced alternative that may quench your GPU thirst without breaking the bank.

A Closer Look
First up is a basic comparison between the AMD Radeon Pro W7800 and W7900 in advertised specs:

Where the two cards differ, values are listed as W7800 / W7900.

  • GPU architecture: AMD RDNA 3
  • Hardware raytracing: Yes
  • Lithography: TSMC 5nm GCD, 6nm MCD
  • Stream processors: 4480 / 6144
  • Compute units: 70 / 96
  • Peak half precision (FP16) performance: 90.5 TFLOPS / 122.64 TFLOPS
  • Peak single precision matrix (FP32) performance: 40.5 TFLOPS / 61.3 TFLOPS
  • Transistor count: 57.7B / 57.7B
  • OS support: Windows 11 64-bit, Windows 10 64-bit, Linux x86_64
  • External power connectors: 2x 8-pin
  • Total board power (TBP): 260W peak
  • PSU recommendation: 650W
  • Dedicated memory: 32GB GDDR6 / 48GB GDDR6
  • AMD Infinity Cache technology: 64MB / 96MB
  • Memory interface: 256-bit / 384-bit
  • Peak memory bandwidth: up to 576GB/s / up to 864GB/s
  • Form factor: PCIe 4.0 x16 (3.0 backwards compatible), active cooling
  • DisplayPort: 3x DisplayPort 2.1 and 1x Enhanced Mini DisplayPort 2.1
  • Display configurations: 4x 4096x2160 (4K DCI) @ 120Hz with DSC; 2x 6144x3456 (6K) 12-bit HDR @ 60Hz uncompressed; 1x 7680x4320 (8K) 12-bit HDR @ 60Hz uncompressed; 1x 12288x6912 (12K) @ 120Hz with DSC
  • Display support: HDR, 8K, 10K and 12K
  • Dimensions: full height, 11-inch (280mm) length; double slot / triple slot
  • Supported rendering formats: 1x encode & decode (AV1); 2x decode (H.265/HEVC, 4K H.264); 2x encode (H.265/HEVC, 4K H.264), with the W7900 adding AV1 encode and decode
  • Supported technologies: AMD Viewport Boost, AMD Remote Workstation, AMD Radeon Media Engine, AMD Software: Pro Edition, AMD Radeon VR Ready Creator, AMD Radeon ProRender, 10-bit and 12-bit display color output, 3D stereo support

What sets the W7900 apart from the W7800 are the increased dedicated memory of 48GB, AMD Infinity Cache increased to 96MB, a memory interface boosted to 384-bit, peak memory bandwidth increased up to 864GB/s, the triple-slot size and the addition of AV1 encode and decode.

AMD Radeon Pro W7800
Up first in benchmarking tests is the AMD Radeon Pro W7800 inside of DaVinci Resolve 18.1.2 and Adobe Premiere 2023 as well as a few other apps and plugins. For testing inside of Resolve and Premiere, I used the same UHD (3840×2160) sequences and effects that I have used in previous reviews. The clips include:

  • ARRI RAW: 3840×2160 24fps – 7 seconds, 12 frames
  • ARRI RAW: 4448×1856 24fps – 7 seconds, 12 frames
  • BMD RAW: 6144×3456 24fps – 15 seconds
  • Red RAW: 6144×3072 23.976fps – 7 seconds, 12 frames
  • Red RAW: 6144×3160 23.976fps – 7 seconds, 12 frames
  • Sony a7siii: 3840×2160 23.976fps – 15 seconds

I then duplicated the sequence and added Blackmagic’s noise reduction, sharpening and grain. Finally, I replaced the noise reduction with Neat Video’s noise reduction.

From there, I exported multiple versions: DNxHR 444 10-bit OP1a MXF file, DNxHR 444 10-bit MOV, H.264 MP4, H.265 MP4, AV1 MP4 and then an IMF package using the default settings.

AMD Radeon Pro W7800 — Resolve 18 exports (DNxHR 444 10-bit MXF | DNxHR 444 10-bit MOV | H.264 MP4 | H.265 MP4 | AV1 MP4 | IMF):

  • Color Correction Only: 00:24 | 00:22 | 00:20 | 00:18 | 00:27 | 00:38
  • CC + Resolve Noise Reduction: 02:21 | 02:21 | 02:21 | 02:22 | 02:22 | 02:23
  • CC, Resolve NR, Sharpening, Grain: 03:04 | 03:04 | 03:03 | 03:03 | 03:03 | 03:05
  • CC + Neat Video Noise Reduction: 02:59 | 03:00 | 03:03 | 03:01 | 03:02 | 03:00

For comparison’s sake, here are the results from the Nvidia RTX 4090:

Nvidia RTX 4090 — Resolve 18 exports (DNxHR 444 10-bit MXF | DNxHR 444 10-bit MOV | H.264 MP4 | H.265 MP4 | AV1 MP4 | IMF):

  • Color Correction Only: 00:27 | 00:27 | 00:22 | 00:22 | 00:23 | 00:49
  • CC + Resolve Noise Reduction: 00:57 | 00:56 | 00:55 | 00:55 | 00:55 | 01:04
  • CC, Resolve NR, Sharpening, Grain: 01:14 | 01:14 | 01:12 | 01:12 | 01:12 | 01:19
  • CC + Neat Video Noise Reduction: 02:38 | 02:38 | 02:34 | 02:34 | 02:34 | 02:41

 

AMD Radeon Pro W7800 — Adobe Premiere Pro 2023, individual exports in Media Encoder (DNxHR 444 10-bit MXF | DNxHR 444 10-bit MOV | H.264 MP4 | H.265 MP4):

  • Color Correction Only: 02:17 | 01:51 | 01:18 | 01:19
  • CC + NR, Sharpening, Grain: 13:38 | 34:21 | 33:54 | 33:07

AMD Radeon Pro W7800 — Adobe Premiere Pro 2023, simultaneous exports in Media Encoder (same formats):

  • Color Correction Only: 03:27 | 03:32 | 03:32 | 03:51
  • CC + NR, Sharpening, Grain: 15:15 | 37:12 | 15:14 | 15:14

Again, here are the results from the Nvidia RTX 4090:

Nvidia RTX 4090 — Adobe Premiere Pro 2023, individual exports in Media Encoder (DNxHR 444 10-bit MXF | DNxHR 444 10-bit MOV | H.264 MP4 | H.265 MP4):

  • Color Correction Only: 01:28 | 01:46 | 01:08 | 01:07
  • CC + NR, Sharpening, Grain: 13:07 | 34:52 | 34:12 | 33:54

Nvidia RTX 4090 — Adobe Premiere Pro 2023, simultaneous exports in Media Encoder (same formats):

  • Color Correction Only: 02:17 | 01:44 | 01:08 | 01:11
  • CC + NR, Sharpening, Grain: 13:47 | 34:13 | 15:54 | 15:54

Benchmarks
Blender Benchmark CPU samples per minute:

  1. Monster: 179.475890
  2. Junkshop: 124.988030
  3. Classroom: 86.279909

Blender Benchmark GPU samples per minute:

  1. Monster: 1306.493713
  2. Junkshop: 688.435718
  3. Classroom: 630.02515

 

Blackmagic Proxy Generator (H.265 10-bit, 4:2:0, 1080p):

  • Red R3D: 2 files – 50fps
  • Sony a7iii .mp4: 46 files – 267fps

 

Neat Video HD: GPU-only 69.5 frames/sec

Neat Video UHD: GPU-only 16.4 frames/sec

PugetBench for After Effects 0.95.7, After Effects 23.4×53:

  • Overall Score: 1018
  • Multi-Core Score: 202.6
  • GPU Score: 76.8
  • RAM Preview Score: 101.4
  • Render Score: 106.4
  • Tracking Score: 93.6

PugetBench for Premiere Pro 0.98.0, Premiere Pro 23.4.0:

  • Extended Overall Score: 532
  • Standard Overall Score: 828
  • LongGOP Score (Extended): 79.8
  • Intraframe Score (Extended): 80.9
  • RAW Score (Extended): 26
  • GPU Effects Score (Extended): 47.7
  • LongGOP Score (Standard): 112.9
  • Intraframe Score (Standard): 95.5
  • RAW Score (Standard): 75.6
  • GPU Effects Score (Standard): 57.8

PugetBench for Resolve 0.93.1, DaVinci Resolve Studio 18.5

  • Standard Overall Score: 2537
  • 4K Media Score: 175
  • GPU Effects Score: 123
  • Fusion Score: 463

Those are a ton of numbers and comparisons. The important thing to note is this: The W7800 is a little pricier than the 4090 but requires almost 200W less power and includes DisplayPort 2.1 technology if your display is compatible. Finally, keep in mind that the AMD Radeon Pro W7800 is an enterprise-level card that is made to run flawlessly 24 hours a day, 365 days a year. For similar guarantees, you would need to jump to something like the Nvidia RTX A5000, which currently retails from B&H for $1,899.99 but has less memory and some other differences.

AMD Radeon Pro W7900
Up next, we’ve performed similar benchmarks for the AMD Radeon Pro W7900:

AMD Radeon Pro W7900 — Resolve 18 exports (DNxHR 444 10-bit MXF | DNxHR 444 10-bit MOV | H.264 MP4 | H.265 MP4 | AV1 MP4 | IMF):

  • Color Correction Only: 00:30 | 00:28 | 00:23 | 00:21 | 00:31 | 00:50
  • CC + Resolve Noise Reduction: 01:45 | 01:41 | 01:44 | 01:44 | 01:45 | 01:47
  • CC, Resolve NR, Sharpening, Grain: 02:17 | 02:09 | 02:18 | 02:18 | 02:18 | 02:19
  • CC + Neat Video Noise Reduction: 03:03 | 03:00 | 03:04 | 03:04 | 03:05 | 03:04

 

AMD Radeon Pro W7900

Adobe Premiere Pro 2023 (Individual Exports in Media Encoder)

Export | DNxHR 444 10-bit MXF | DNxHR 444 10-bit MOV | H.264 MP4 | H.265 MP4
Color Correction Only | 02:11 | 01:42 | 01:05 | 01:06
CC + NR, Sharpening, Grain | 14:12 | 34:27 | 33:48 | 33:54

AMD Radeon Pro W7900

Adobe Premiere Pro 2023 (Simultaneous Exports in Media Encoder)

Export | DNxHR 444 10-bit MXF | DNxHR 444 10-bit MOV | H.264 MP4 | H.265 MP4
Color Correction Only | 03:20 | 03:24 | 02:41 | 02:42
CC + NR, Sharpening, Grain | 15:21 | 37:32 | 15:21 | 15:22

Benchmarks

Blender Benchmark CPU samples per minute:

  1. Monster: 181.802109
  2. Junkshop: 125.356688
  3. Classroom: 86.608965

Blender Benchmark GPU samples per minute:

  1. Monster: 1095.478227
  2. Junkshop: 969.553103
  3. Classroom: 865.631865

Blackmagic Proxy Generator (H.265 10-bit, 4:2:0, 1080p):

  • Red R3D: 2 files – 27fps
  • Sony a7iii .mp4: 46 files – 266fps

Neat Video HD (GPU only): 89 frames/sec

Neat Video UHD (GPU only): 24.4 frames/sec

PugetBench for After Effects 0.95.7, After Effects 23.4×53:

  • Overall Score: 1038
  • Multi-Core Score: 203.9
  • GPU Score: 82.3
  • RAM Preview Score: 103.4
  • Render Score: 109.4
  • Tracking Score: 93.4

PugetBench for Premiere Pro 0.98.0, Premiere Pro 23.4.0:

  • Extended Overall Score: 567
  • Standard Overall Score: 891
  • LongGOP Score (Extended): 80.3
  • Intraframe Score (Extended): 82.5
  • RAW Score (Extended): 26.6
  • GPU Effects Score (Extended): 58.7
  • LongGOP Score (Standard): 114.9
  • Intraframe Score (Standard): 97.7
  • RAW Score (Standard): 78.3
  • GPU Effects Score (Standard): 71.6

PugetBench for Resolve 0.93.1, DaVinci Resolve Studio 18.5:

  • Standard Overall Score: 2847
  • 4K Media Score: 179
  • GPU Effects Score: 173
  • Fusion Score: 502

These benchmarks are heavily weighted toward video editors, content creators and even colorists, so some of the W7900's advantages — like its 48GB of memory — may go unused, which could be a reason to stick with the W7800. Between the AMD Radeon Pro W7800 and the W7900, most of the performance gains will show up in large designs and renders — heavy Blender scenes or even Unreal creations.

Summing Up
After using the AMD Radeon Pro W7800 and W7900 for a couple of months in and out of DaVinci Resolve (versions 18-18.5) and Premiere Pro 2023, I felt very comfortable keeping the W7800 as my daily driver. I didn't experience any GPU-related crashes or errors. I was actually a little surprised at how comfortable I was with the W7800 and W7900 after using the Nvidia RTX 4070 Ti and 4090 for so long.

Keep in mind that the AMD Radeon Pro series of GPUs is certified with certain software application versions to run without error. You can search for specific applications here.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and Uninterrupted: The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

 

New Hush Pro AI-Powered Plugin for Ambient Noise Removal

The newly introduced Hush Pro audio plugin removes ambient noise and room reflections from recorded speech. Powered by machine learning and optimized for Apple Silicon, Hush Pro cleans up dialogue quickly and transparently — with minimal artifacts.

The plugin is designed from the ground up for audio post, integrating seamlessly with Pro Tools. It includes two separate modes — or sub-plugins — that use the same engine under the hood but support different workflows. Hush Mix allows users to rebalance dialogue, noise and reverb with a mixer-style UI, where they can preview the results in real time. Hush Split renders all three elements as separate clips for more fine-grained, nondestructive edits.
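To picture how the two modes relate, here is a purely conceptual Python sketch (not Hush Pro's actual code or API) that assumes a hypothetical separation engine returning dialogue, noise and reverb stems:

    import numpy as np

    def separate(audio: np.ndarray) -> dict:
        """Hypothetical stand-in for the ML engine: split a clip into
        dialogue, noise and reverb stems that sum back to the input."""
        dialogue = 0.8 * audio   # placeholder estimates; a real model
        noise = 0.15 * audio     # would infer these from the signal
        reverb = audio - dialogue - noise
        return {"dialogue": dialogue, "noise": noise, "reverb": reverb}

    def hush_mix(audio: np.ndarray, gains: dict) -> np.ndarray:
        """Mix-style mode: rebalance the stems with per-stem gains."""
        stems = separate(audio)
        return sum(gains[name] * stem for name, stem in stems.items())

    def hush_split(audio: np.ndarray) -> dict:
        """Split-style mode: return the stems as separate clips."""
        return separate(audio)

    # Example: keep dialogue, duck the room tone, keep a touch of reverb.
    clip = np.zeros(48000)  # one second of 48kHz audio, for illustration
    cleaned = hush_mix(clip, {"dialogue": 1.0, "noise": 0.1, "reverb": 0.3})

The point of the split is nondestructive control: once the stems exist, rebalancing (Mix) and clip-level editing (Split) are just two front ends over the same engine.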

Hush Pro offers even cleaner audio than the stand-alone Hush app, especially on outdoor ambience and in highly reflective rooms. Under the hood, the plugin uses a new, more powerful AI model that takes advantage of the faster GPUs on Apple's M-series Pro, Max and Ultra chips. The model will continue to evolve, with future updates — including improved support for singing and other vocals — coming later this fall.

After three months of beta testing, Hush Pro is already being used to repair dialogue for film, television, podcasts, and more.

“I’m using Hush on a feature film right now,” says dialogue editor Jason Freeman, whose credits include John Wick: Chapter 4 and Spider-Man: Across the Spider-Verse. “It saved us from having to ADR an entire scene with the lead actor.”

Hush Pro is available now for $249 at hushaudioapp.com along with a 21-day, full-featured free trial. Users who purchased the original Hush app can upgrade for $179.


AI in Post: Training and Education

Future Media Concepts (FMC) and the NAB Show have launched a new series of training programs dealing with the impact of AI on creative services. Their upcoming AI Creative Summit, which takes place September 14-15, is a virtual event that will feature practical demonstrations as well as discussions with post pros who are using AI in some way to help their workflows.

Ben Kozuch

We recently reached out to FMC president and co-founder Ben Kozuch to find out more about AI and post…

Why was now the time to tackle this topic?
We are doing this now because AI tech is evolving rapidly, infiltrating creative workflows and creating excitement and FOMO as well. When people are concerned about a trend, they should at least learn what it’s about. Many of our clients are seeking AI training to stay competitive. This is similar to the digital editing revolution in the mid-‘90s; if you didn’t adapt and embrace it, you had to exit the industry.

Whether you’re an enthusiast or not, there’s no denying that AI is here to stay and will continue to grow exponentially. This will increase the demand for new skills and new approaches to production, budgeting and workflows among content creators. It’s crucial to empower them with the knowledge to navigate this transformation effectively.

To do this, we offer various training options: Workshops and sessions at NAB Show Las Vegas and NAB Show New York; quarterly online live conferences like the AI Creative Summit; year-round online workshops focusing on AI’s effects on editors, graphic designers and sound professionals; and a new industry certification in AI.

What do you expect to be the three most important takeaways about AI's impact on production and post?
  • Efficiency: AI can streamline and automate tasks, saving time and resources from script to delivery.
  • Creativity: AI tools can enhance creativity by offering new possibilities and freeing up time from lengthy software tasks.
  • Data-Driven Insights: AI can provide valuable creative insights, such as data analysis of viewers’ demographics, watching habits, etc.

What would you say to those who are scared to jump into this technology at the moment?
I would suggest beginning with small steps, such as enrolling in introductory courses or exploring YouTube for tutorials on AI tools in less critical projects. Building a foundation of the basics can help reduce concern and boost confidence gradually.

What do you think pros should know about using AI in their daily workflows?
AI doesn’t yet provide a 100% finished project and still needs the human touch for final delivery. However, it still saves a lot of time. Others may be surprised to learn how some of their competitive advantages have disappeared, as less experienced people can now write well-researched scripts and proposals and even achieve a higher level of creative projects.

Is the AI Creative Summit also addressing the ethical considerations posed by AI in the workplace and society?
Yes, we’re exploring the ethical side of AI, especially within the creative community. It’s important to discuss how AI impacts things like intellectual property ownership, pricing of creative work, privacy, fairness and transparency when we’re creating with AI tools. We want to ensure that AI is used responsibly and respects the values of the creative world, so we’re diving into these topics to help creators navigate the AI landscape with confidence and integrity.

AI technology raises new challenges when it comes to visual projects. For example, who owns the rights to AI-created media? And should you inform clients that things take less time to create now, or should you keep charging the old rates?

Mobius Labs’ Multimodal AI Search for Video Content Libraries

Mobius Labs, a developer of next-gen, AI-powered metadata technology, will soon unveil its latest Multimodal AI technology, which the company says represents a breakthrough in how organizations can “chat” with their content libraries in the same way the world has learned to chat with large language model (LLM) systems such as OpenAI’s ChatGPT and Google’s Bard. The Mobius Labs system has been designed specifically for the media and entertainment industry, and it is efficient enough to be hosted locally, in the cloud or in a combination of both.

When humans look at a piece of video, they use their vision, hearing and language capabilities to understand the content. Mobius Labs has trained foundational models based on computer vision, audio recognition and LLMs to interpret media in the same way.

“Imagine having a private conversation with your content library about what is happening in a scene or episode using natural language prompts,” says Appu Shaji, CEO/chief scientist at Mobius Labs. “Multimodal AI technology lets us combine what the AI sees, hears and reads to create a more nuanced understanding of what is happening within the content. Once AI can summarize and understand what the content is, things like search and recommendation becomes infinitely more powerful.”

As an extension of Mobius Labs’ Visual DNA — the company’s AI-based metadata tagging solution — this new technology changes how content can be described and indexed without any human involvement. In the past few years, AI solutions have begun to address search and recommendation challenges, but those solutions required extensive development, customization and engineering effort. With new multimodal solutions, the technology works “out of the box” to cover a wide range of use cases.
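To make that concrete, here is a toy Python sketch of the general technique (multimodal embedding search), using invented clip names and stand-in encoders; it is not Mobius Labs' SDK or API:

    import zlib
    import numpy as np

    DIM = 512  # assumed shared embedding dimension

    def encode(text: str) -> np.ndarray:
        """Toy stand-in for a real encoder: map each word to a seeded
        random vector so overlapping descriptions land near each other."""
        vec = np.zeros(DIM)
        for word in text.lower().split():
            rng = np.random.default_rng(zlib.crc32(word.encode()))
            vec += rng.normal(size=DIM)
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    def index_clip(visual: str, audio: str, transcript: str) -> np.ndarray:
        """Fuse what the system 'sees', 'hears' and 'reads' into one vector."""
        fused = encode(visual) + encode(audio) + encode(transcript)
        return fused / np.linalg.norm(fused)

    # A tiny library; in practice the descriptions would come from vision,
    # audio-recognition and language models, not hand-written strings.
    library = {
        "ep01_scene04": index_clip("two people argue in a kitchen",
                                   "raised voices, dishes clattering",
                                   "I told you not to call him"),
        "ep01_scene09": index_clip("car chase at night in the rain",
                                   "engines, sirens, screeching tires",
                                   "hold on, we have company"),
    }

    def search(query: str) -> list:
        """Rank clips by cosine similarity to a natural-language query."""
        q = encode(query)
        return sorted(((name, float(q @ vec)) for name, vec in library.items()),
                      key=lambda kv: -kv[1])

    print(search("car chase at night")[0][0])  # "ep01_scene09" ranks first

A production system would swap the toy encoder for real vision, audio and language models sharing one embedding space, which is the fusion Shaji describes.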

To ensure that users maintain full ownership of their data, these solutions offer a headless SDK that adheres to the principle of “bringing the code to the data, rather than bringing the data to the code.” This approach, says Mobius Labs, not only reduces expensive network communication but also incorporates privacy by design.

As data volumes continue to grow exponentially, a key design tenet for the team is keeping the code efficient enough to bring the marginal cost of running Mobius Labs’ AI solution to near zero.

“We have some fundamental R&D within the company that can make our model smaller and in some cases as much as 20 times more efficient than the competition,” says Shaji.