
Adobe Max 2023 Part 2: Tips, Sneaks and Inspiration

By Mike McCarthy

My first article reporting on Adobe Max covered new releases and future announcements. This one will focus on other parts of the conference, and what it was like to attend in person once more. After the opening keynote, I spent the rest of that day in various sessions. The second day featured an Inspiration Keynote that was, as you can imagine, less technical in nature, but more on that later. Then more sessions and Adobe Sneaks, where they revealed new technology still in development.

In addition to all of these events, Adobe hosted the Creativity Park, a hall full of booths showcasing hardware solutions to extend the functionality of Adobe’s offerings.

Individual Sessions
With close to 200 different Max sessions and a finite amount of time, it can be a challenge to choose what you want to explore. I, of course, focused on video sessions and, within those, specifically the topics that would help me better use After Effects’ newest 3D object functionality. My main takeaway came from Robert Hranitzky’s session, Get the Hollywood Look on a Budget Using 3D, where he explained that GLB files work better than the OBJ files I had been using because they embed the textures and other info directly into the file for After Effects to use. He also showed how to break a model into separate parts to animate directly in After Effects. It works the way I had envisioned, but I haven’t yet had a chance to try it in the beta releases.

Ian Robinson’s Move Into the Next Dimension With After Effects was a bit more beginner-focused, but it pointed out that one unique benefit of Draft 3D mode is that the render is not cropped to the view window, giving you an overscan effect that lets you see what you might be missing in your render perspective. He also did a good job of covering how the Cinema 4D and Advanced 3D render modes allow you to bend and extrude layers and edit materials, while the Classic 3D render mode does not. I have done most of my AE work in Classic mode for the past two decades, but I may start using the new Advanced 3D renderer for adding actual 3D objects to my videos for postviz effects.

Nol Honig and Kyle Hamrick had a Leveling Up in AE session where they showed all sorts of shortcuts and unique ways to use Essential Properties to create multiple varying copies of a single subcomp. One of my favorite shortcuts was hitting “N” while creating a rectangle mask. It sets the mask mode to None, which allows you to see the layer you are masking while you are drawing the rectangle. (Honestly, it should default to None until you release the mouse button, in my opinion.) A couple of other favorites: Ctrl+Home will center objects in the comp, and, even more useful, Ctrl+Alt+Home will recenter the anchor point if it gets adjusted by accident. But they skipped “U,” which reveals all keyframed properties and, when pressed twice (“UU”), reveals all adjusted properties. (I think they assumed everyone knew about the Uber-Key.)

I also went to Rich Harrington’s Work Faster in Premiere Pro session, and while I didn’t learn many new things about Premiere (besides the fact that copying the keyboard shortcuts to the clipboard results in a readable text list), I did learn some cool things in Photoshop that can be used in Premiere-based workflows.

Photoshop can export LUTs (lookup tables) that can be used to adjust the color of images in Premiere via the Lumetri color effects. It generates these lookup tables from adjustment layers applied to the image. While many of the same tools are available directly within Premiere, Photoshop has some further options that Premiere does not, and this is how you can use them for video.

First, export a still of a shot you want corrected and bring it into Photoshop as a background image. Then apply adjustment layers — in this case, curves, which is a powerful tool that is not always intuitive to use. For one thing, Alt-clicking the “Auto” button gives you more detailed options in a separate window that I had never even seen. Then the top-left button in the curves panel is the Targeted Adjustment Tool, which allows you to modify the selected curve by clicking on the area of the image that you want to change. When you do that, the tool will adjust the corresponding point on the curve. In this way, you can use Photoshop to make your still image look the way you want it and then export a LUT for use in Premiere or anywhere else you can use LUTs. (Hey Adobe, I want this in Lumetri.)
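For those curious about what is actually inside the LUT file this process exports, below is a rough sketch in Python of how a standard .cube 3D LUT might be parsed and used to remap a single pixel. This is purely illustrative, not anything Adobe ships: the file name is hypothetical, and real implementations such as Lumetri typically interpolate between table entries rather than snapping to the nearest one.

    # Rough sketch: parse a .cube 3D LUT and look up one color.
    # Assumes a standard LUT_3D_SIZE header and RGB values in the 0-1 range.
    import numpy as np

    def load_cube(path):
        """Parse a .cube file into an (N, N, N, 3) array plus its size N."""
        size, rows = None, []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or line.startswith("TITLE"):
                    continue
                if line.startswith("LUT_3D_SIZE"):
                    size = int(line.split()[-1])
                elif line[0].isdigit() or line[0] == "-":
                    rows.append([float(v) for v in line.split()[:3]])
        table = np.array(rows, dtype=np.float32)
        # .cube data is listed with red varying fastest, so index as [b][g][r]
        return table.reshape(size, size, size, 3), size

    def apply_lut(rgb, table, size):
        """Map one normalized RGB value through the LUT (nearest-neighbor)."""
        r, g, b = (min(int(round(c * (size - 1))), size - 1) for c in rgb)
        return table[b, g, r]

    table, n = load_cube("my_grade.cube")  # hypothetical LUT exported from Photoshop
    print(apply_lut((0.5, 0.25, 0.75), table, n))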

Adobe Sneaks
In what is a staple event of the Max conference, Adobe Sneaks brings the company’s engineers together to present technologies they are working on that have not yet made it into specific products. The technologies range from Project Primrose, a digital dress that can display patterns on black and white tiles, to Project Dub Dub Dub, which automatically dubs audio into multiple foreign languages via AI. The event was hosted by comedian Adam Devine, who offered some less technical observations about the new functions.

Illustration isn’t really my thing, but it could be once Project Draw & Delight comes to market. It uses the power of AI to convert a crude, hand-drawn “masterpiece” into refined images, with the simple prompt “cat.” I am looking forward to how much better my storyboard sketches will soon look with this simple and accessible technology.

Adobe always has a lot going on with fonts, and Project Glyph Ease continues that tradition by generating complete AI-created fonts from a user’s drawing of just two or three letters. This is a natural extension of the new type-editing features demonstrated in Illustrator the day before, whereby any font can be identified and matched from a couple of letters, even from vectorized outlines. But unlike the Illustrator feature, this tool can create whole new fonts instead of matching existing ones.

Project See Through was all about removing reflections from photographs, and the technology did a pretty good job on some complex scenes while preserving details. But the part that was really impressive was when the engineers showed how the computer could also generate a full image based on what appears in the reflection. That’s a little scary when you consider that the photographer taking the photo will frequently be the one in the reflection. So much for the anonymity of being “behind the camera.”

Project Scene Change was a little rough in its initial presentation, but it’s a really powerful concept. It extracts a 3D representation of a scene from a piece of source footage and then uses it to create a new background for a different clip, with the background rendered to match the perspective of the foreground clip. The technology is not really limited to backgrounds; that is just the easiest way to explain it in words. As you could see from the character in the scene behind the coffee cup, the technology really is creating an entire environment, not just a background. It will be interesting to see how this gets fleshed out with user controls for higher-scale VFX processes.

Project Res Up appears to be capable of true AI-based generative resolution improvements in video. I have been waiting for this ever since Nvidia demonstrated live AI-generated upscaling of 3D-rendered images, which is what allows real-time raytracing to work, but I hadn’t seen it applied to video until now. If we can create something out of thin air with generative AI, it stands to reason that we should be able to improve something that already exists. On the other hand, I recognize that it is more challenging when you have a specific target to match. This is also why generative video is much harder to do than stills: each generated frame has to smoothly match the ones before and after it, and any artifacts are much more noticeable to humans when in motion.

This is why the most powerful demo, by far from my perspective, was the AI-based generative fill for video, called Project Fast Fill. It was something I expected to see, but I did not anticipate it being so powerful yet. It started off with a basic removal of distracting elements from the background. But it ended with adding a necktie to a strutting character walking through a doorway with complex lighting changes and camera motion… all based on a simple text command and a vector shape to point the AI in the right place. The results were stunning, and if seeing is believing, this will revolutionize VFX much sooner than I expected.

Creativity Park
There was also a hall of booths hosting Adobe’s various hardware and software partners, some of whom had new announcements of their own. The hall was divided into sections, with a quarter of it devoted to video, which might be more than in previous years.

Samsung was showing off its ridiculously oversized wraparound LCD displays, in the form of the 57-inch double-wide UHD display and the 55-inch curved TV display that can be run in portrait mode for an overhead feel. I am still a strong proponent of the 21:9 aspect ratio, as that is the natural shape of human vision, and anything wider requires moving your head instead of your eyes.

Logitech showed its new Action Ring function for its MX line of productivity mice. I have been using gaming mice for the past few years, and after talking with some of the reps in the booth, I believe I should be migrating back to the professional options. The new Action Ring is similar to a feature on my Logitech Triathlon mouse, where you press a button to bring up a customizable context menu with various functions available. It is still in beta, but it has potential.

LucidLink is a high-performance cloud storage provider whose storage presents to the OS as a regular mounted hard drive. LucidLink demonstrated a new integration with Premiere Pro: a panel in the application that allows users to control which files maintain a local copy, based on which projects and sequences they are used in. I have yet to try LucidLink myself, as my bandwidth was too low until this year, but I can envision it being a useful tool now that I have a fiber connection at home.

Inspiration Keynote
Getting back to the Inspiration Keynote, I usually don’t have much to report from the Day 2 keynote presentation, as it is rarely technical in detail, and mostly about soft skills that are hard to describe. But this year’s presentation stood out in a number of ways.

There were four different presenters with very different styles and messages. First up was Aaron James Draplin, a graphic designer from Portland with a distinctive style, who appears to pride himself on not fitting the mold of corporate success. His big, loud, autobiographical presentation was entertaining, and its message was that if you work hard, you can achieve your own unique success.

Second was Karen X Cheng, a social media artist with some pretty innovative art, the technical aspects of which I am better able to appreciate. Her explicit mix of AI and real photography was powerful. She talked a lot about the algorithms that rule the social media space, and how they skew our perception of value. I thought her five defenses against the algorithm were important ideas:

Money and passion don’t always align – pursue both, separately if necessary
Be proud of your flops – likes and reshares aren’t the only measure of value
Seek respect, not attention; it lasts longer – this one is self-explanatory
Human+AI > AI – AI is a powerful tool, but even more so in the hands of a skilled user
Take a sabbath from screens – it helps keep in perspective that there is more to life

Up next was Walker Noble, an artist who was able to find financial success selling his art when the pandemic pushed him out of his day job… which he had previously been afraid to leave. He talked about taking risks and self-perception, asking, “Why not me?” He also talked about finding your motivation, in his case his family, though there are other possibilities. He also pointed out that he has found success without mastering “the algorithm,” in that he has few social media followers and little influence in the online world. So, “Why not you?”

Last up was Oak Felder, a music producer who spoke about channeling emotions through media, specifically music. He made a case for the intrinsic emotional value within certain tones of music, as opposed to learned associations from movies and the like. The way he sees it, there are “kernels of emotion” within music that are then shaped by a skilled composer or artist. To him, the impact it has on others is the definition of truly making music. He ended his segment by showing a child with special needs being soothed by one of his songs during a medical procedure.

The entire combined presentation was much stronger than the celebrity-interview format they have previously hosted at Max.

That’s It!
That wraps up my coverage of Max, and hopefully gives readers a taste of what it would be like to attend in person, instead of just watching the event online for free, which is still an option.


Mike McCarthy is a technology consultant with extensive experience in film post production. He started posting technology info and analysis at HD4PC in 2007. He broadened his focus with TechWithMikeFirst 10 years later.
