
Adobe Max 2023: A Focus on Creativity and Tools, Part 1

By Mike McCarthy

Adobe held its annual Max conference at the LA Convention Center this week. It was my first time back since COVID, but Adobe hosted an in-person event last year as well. The Max conference is focused on creativity and is traditionally where Adobe announces and releases the newest updates to its Creative Cloud apps.

As a Premiere editor and Photoshop user, I am always interested in seeing what Adobe’s team has been doing to improve its products and my workflows. I have followed Premiere and After Effects pretty closely through Adobe’s beta programs for over a decade, but Max is where I find out what new things I can do in Photoshop, Illustrator and various other apps. And via the various sessions, I also learn some old things I could already do that I just didn’t know about before.

The main keynote is generally where Adobe announces new products and initiatives as well as new functions to existing applications. This year, as you can imagine, was very AI-focused, following up on the company’s successful Firefly generative AI imaging tool released earlier this year. The main feature that differentiates Adobe’s generative AI tools from various competing options is that the resulting outputs are guaranteed to be safe to use in commercial projects. That’s because Adobe owns the content that the models are trained on (presumably courtesy of Adobe Stock).

Adobe sees AI as useful in four ways: broadening exploration, accelerating productivity, increasing creative control and including community input. Adobe GenStudio will now be the hub for all things AI, integrating Creative Cloud, Firefly, Express, Frame.io, Analytics, AEM Assets and Workfront. It aims to “enable on-brand content creation at the speed of imagination,” Adobe says.

Firefly

Adobe has three new generative AI models: Firefly Image 2, Firefly Vector and Firefly Design. The company also announced that it is working on Firefly Audio, Video and 3D models, which should be available soon. I am particularly eager to pair the 3D model with the new After Effects functionality discussed below. Firefly Image 2 has twice the resolution of the original model and can ingest reference images to match the style of the output.

Firefly Vector is obviously for creating AI-generated vector images and art.

But the third one, Firefly Design, deserves further explanation. It generates a fully editable Adobe Express template document with a user-defined aspect ratio and text options. The remaining fine-tuning for a completed work can be done in Adobe Express.


For those of you who are unfamiliar, Adobe Express is a free cloud-based media creation and editing application, and it is where a lot of Adobe’s recent efforts, and this event’s announcements, have been focused. It is designed to streamline the workflow for getting content from the idea stage all the way to publishing on the internet, with direct integration to many social media outlets and a full scheduling system for managing entire social marketing campaigns. It can reformat content for different deliverables and even automatically translate it into 40 different languages.

As more of Photoshop’s and Illustrator’s functionality gets integrated into Express, it will probably begin to replace them as the go-to tool for entry-level users. As a cloud-based app accessed through a browser, it can even be used on Chromebooks and other non-Mac, non-Windows devices. Adobe also says that, via a partnership with Google, the Express browser extension will be included on all new Chromebooks moving forward.

Photoshop for Web is the next step beyond Express, bringing even more of the full application’s functions into a cloud app that users can access from anywhere, once again including ChromeOS devices. Apparently, I’m an old-school guy who has not embraced the move to the cloud as much as I could have, but given my dissatisfaction with the direction the newest Windows and macOS releases are going, maybe browser-based applications are the future.

Similarly, as a finishing editor, I have real trouble posting content that is not polished and perfected, but that is not how social media operates. With far more content being produced on much tighter time frames, most of it falling short of the production standards I am used to, I have not embraced this new paradigm. That’s why I am writing an article about this event instead of posting a video about it: I would have to spend far too much time reframing each shot, color-correcting and cleaning up distractions in the audio.

Firefly Generative Fill

For desktop applications, within the full version of Photoshop, Firefly-powered Generative Fill has effectively replaced Content-Aware Fill. You can now use Generative Fill to create new overlay layers based on text prompts or remove objects by overlaying AI-generated background extensions. The AI can also add reflections and apply other image processing, and it can “un-crop” images via Generative Expand. Separately, gradients are now fully editable, and there are new adjustment layer presets, including user-definable ones.

Illustrator can now identify fonts in rasterized and vectorized images and can even edit text that has already been converted to outlines. It can generate new color palettes for existing artwork from text prompts. It can also generate vector objects and scenes with AI that are fully editable and scalable, and it can even take existing images as input to match stylistically. There is also a new cloud-based web version of Illustrator coming to public beta.

Text-based editing in Premiere

From the video perspective, the news was mostly familiar to existing public beta users or to those who followed the IBC announcements: text-based editing, pause and filler word removal, and dialog enhancement in Premiere. After Effects is getting true 3D object support, so my session schedule focused on learning more about the workflows for using that feature. You need to create and texture models and then save them as GLB files before you can use them in AE. And you need to set up the lighting environment in AE before they will look right in your scene. But I am looking forward to being able to use that functionality more effectively on my upcoming film postviz projects.
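Since AE only ingests the finished models, the creation and texturing step happens in a separate 3D package. As a rough illustration of that handoff (my own sketch of one possible route, not a workflow Adobe demonstrated), here is how a textured scene could be exported to GLB using Blender’s built-in Python API; the output path is hypothetical.

    import bpy

    # Pack external texture files into the .blend so they get embedded
    # in the export rather than referenced by file path.
    bpy.ops.file.pack_all()

    # Export the whole scene as a single binary glTF (.glb) container,
    # bundling meshes, materials and textures into one file for AE.
    bpy.ops.export_scene.gltf(
        filepath="/path/to/postviz_asset.glb",  # hypothetical destination
        export_format="GLB",   # binary glTF, the format AE imports
        use_selection=False,   # export the full scene, not just the selection
    )

From there, the .glb file can be imported into an AE project like any other asset, with the lighting environment set up inside AE itself, as noted above.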

I will detail my experience at Day 2’s Inspiration keynote as well as the tips and tricks I learned in the various training sessions in a separate article. At the time of this writing, I still had one more day to go at the conference. So keep an eye out. The second half of my Max coverage is coming soon.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 10 years later.

 

