Nvidia’s GTC 2023 – New GPUs and AI Acceleration

By Mike McCarthy

This week, Nvidia held its GTC conference and made several interesting announcements. Most relevant in the M&E space are the new Ada Lovelace-based GPUs. Joining the existing RTX 6000 are a new RTX 4000 small-form-factor card and five new mobile GPUs offering various levels of performance and power usage.

New Mobile GPUs
The new mobile options all offer performance that exceeds the next tier up in the previous generation. This means the new RTX 2000 Ada is as fast as the previous A3000, the new RTX 4000 Ada exceeds the previous top-end A5500, and the new mobile RTX 5000 Ada chip, with 9,728 CUDA cores and 42 teraflops of single-precision compute performance, should outperform the previous A6000 desktop card or the GeForce RTX 3090 Ti. If true, that is pretty impressive, although there’s no word yet on battery life.
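
For context, headline single-precision numbers like that usually come from a rule of thumb rather than a benchmark: each CUDA core can retire one fused multiply-add, counted as two floating-point operations, per clock. The quick sketch below is my own back-of-the-envelope math, not an official Nvidia formula; it just backs out the boost clock that 42 teraflops from 9,728 cores would imply.

// Back-of-the-envelope check: peak FP32 TFLOPS ~= 2 ops (FMA) x CUDA cores x boost clock (GHz) / 1000.
// Core count and teraflop figure are the ones quoted above; the formula is a rule of thumb.
#include <cstdio>

int main() {
    const double cudaCores = 9728.0;   // mobile RTX 5000 Ada, per the announced specs
    const double peakTflops = 42.0;    // quoted single-precision peak
    const double impliedClockGhz = peakTflops * 1000.0 / (2.0 * cudaCores);
    printf("Implied boost clock: %.2f GHz\n", impliedClockGhz);  // roughly 2.16 GHz
    return 0;
}

If that rule of thumb holds, the quoted 42 teraflops works out to a boost clock of roughly 2.16GHz for the mobile part.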

New Desktop GPU
The new RTX 4000 small-form-factor Ada takes the performance of the previous A4000 GPU, ups the memory buffer to 20GB and fits it into the form factor of the previous A2000 card: a low-profile, dual-slot PCIe card that draws only the 75 watts supplied by the PCIe bus. This allows it to be installed in small-form-factor PCs or in 2U servers that don’t have the full-height slots or PCIe power connectors that most powerful GPUs require. Strangely, it is lower-performing, at least on paper, than the new mobile RTX 4000, with 20% fewer cores and 40% lower peak performance (if the specs I was given are correct). This is possibly due to the power limit of the 75W PCIe slot.

The naming conventions across the various product lines continue to get more confusing and less informative, which I am not a fan of. My recommendation is to call them the Ada 19 or the Ada 42, based on peak teraflops. That way it is easy to see how they compare, even across generations, against the Turing 8 or the Ampere 24. This should work for at least the next four to five generations, until we reach petaflops and the numbering needs to be reset again.
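
To make the idea concrete, here is a toy sketch of that naming scheme: architecture name plus rounded peak single-precision teraflops. The function and the specific teraflop values are illustrative, taken from the figures discussed in this article.

// Toy illustration of the proposed "architecture + peak TFLOPS" naming scheme.
#include <cstdio>
#include <cmath>
#include <string>

std::string proposedName(const std::string& architecture, double peakTflops) {
    return architecture + " " + std::to_string((long long)std::llround(peakTflops));
}

int main() {
    printf("%s\n", proposedName("Ada", 19).c_str());     // RTX 4000 SFF Ada    -> "Ada 19"
    printf("%s\n", proposedName("Ada", 42).c_str());     // mobile RTX 5000 Ada -> "Ada 42"
    printf("%s\n", proposedName("Turing", 8).c_str());   //                     -> "Turing 8"
    printf("%s\n", proposedName("Ampere", 24).c_str());  //                     -> "Ampere 24"
    return 0;
}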

New Server Chips
There are also new announcements targeted at supercomputing and data centers. The Hopper GPU is focused on AI and large language model acceleration and is usually installed in sets of eight SXM modules in a DGX server. Nvidia’s previously announced ARM-based Grace CPU Superchip is also now in production. Nvidia offers these chips as dual-CPU processing boards or combined into an integrated Grace Hopper Superchip, with a shared interconnect and memory between the CPU and GPU. Apple’s new Apple Silicon processors use a similar unified memory approach.
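
To give a sense of what a shared CPU/GPU memory space means in practice, here is a minimal sketch using standard CUDA managed memory. This is the generic unified-memory API available on any recent Nvidia GPU, not anything Grace Hopper-specific, and the kernel and sizes are purely illustrative.

// Minimal unified-memory sketch: one allocation that both the CPU and GPU touch,
// with no explicit host-to-device copies. (Generic CUDA managed memory shown here;
// a coherent CPU-GPU link is intended to back the same programming model in hardware.)
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));     // one pointer, visible to CPU and GPU

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);  // GPU reads and writes the same pointer
    cudaDeviceSynchronize();

    printf("data[0] = %.1f\n", data[0]);             // CPU reads the result, again with no copies
    cudaFree(data);
    return 0;
}

On a discrete GPU, the driver services this pattern by migrating pages over PCIe behind the scenes; the appeal of a combined CPU/GPU package is that, at least in principle, this single-allocation style becomes the natural default rather than an optimization.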

There are also new PCIe-based accelerator cards, starting with the H100 NVL, which puts the Hopper architecture on a PCIe card with 94GB of memory for transformer processing. The transformer is the “T” in ChatGPT (Generative Pre-trained Transformer), by the way. There are also Lovelace architecture-based options, including the single-slot L4 for AI video processing and the dual-slot L40 for generative AI content creation.

Four of these L40 cards are included in the new OVX-3 servers, designed for hosting and streaming Omniverse data and applications. These new servers from various vendors will have options for either Intel Sapphire Rapids- or AMD Genoa-based platforms and will include the new BlueField-3 DPU cards and ConnectX-7 NICs. They will also be available in a predesigned SuperPOD of 64 servers with a Spectrum-3 switch for companies that have a lot of 3D assets to deal with.

Omniverse Updates
On the software side, Omniverse has a variety of new applications that support the popular USD (Universal Scene Description) format for easier interchange, and it now supports a real-time, raytraced, subsurface scattering shader (maybe RTRTSSSS for short?) for more realistic surfaces. Nvidia is also partnering closely with Microsoft to bring Omniverse to Azure and to Microsoft 365, which will allow Microsoft Teams users to collaboratively explore 3D worlds together during meetings.

Generative AI
Nvidia Picasso, which uses generative AI to convert text into images, videos or 3D objects, is now available to developers like Adobe. So in the very near future, we will reach a point where we can no longer trust the authenticity of any image or video we see online. It is not difficult to see where that might lead us. One way or another, it will be much easier to add artificial elements to images, videos and 3D models. Maybe I will finally get into Omniverse myself when I can just tell it what I have in mind and it creates a full 3D world for me. Or maybe I could use it when I just need to add a helicopter to my footage for a VFX shot, with the right speed and perspective. That would be helpful.

Some of the new AI developments are concerning from a certain perspective, but hopefully these new technologies can be harnessed to effectively improve our working experience and our final output. Nvidia’s products are definitely accelerating the development and implementation of AI across the board.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 15 years later.

