Creating Titles for Netflix’s Avatar: The Last Airbender

Method Studios collaborated with Netflix on the recently released live-action adaptation of the animated series Avatar: The Last Airbender. The series, developed by Albert Kim, follows the adventures of a young Airbender named Aang and his friends as they fight to end the Fire Nation’s war and bring balance to the world. Director and executive producer Jabbar Raisani approached Method Studios to create visually striking title cards for each episode — titles that not only nodded to the original animated series, but also lived up to the visuals of the new adaptation.

The team at Method Studios, led by creative director Wes Ebelhar, concepted and pitched several different directions for the title before deciding to move forward with one called Martial Arts.

“We loved the idea of abstracting the movements and ‘bending’ forms of the characters through three-dimensional brushstrokes,” says Ebelhar. “We also wanted to create separate animations to really highlight the differences between the elements of air, earth, fire and water. For example, with ‘Air,’ we created this swirling vortex, while ‘Earth’ was very angular and rigid. The 3D brushstrokes were also a perfect way to incorporate the different elemental glyphs from the opening of the original series.”

Giving life to the different elemental brushstrokes was no easy task. “We created a custom procedural setup in Houdini to generate the brushstrokes, which was vital for giving us the detail and level of control we needed. Once we had that system built, we were able to pipe in our original previz, and the strokes matched the timing and layouts perfectly. The animations were then rendered with Redshift and brought into After Effects for compositing. The compositing ended up being a huge task as well,” explains Ebelhar. “It wasn’t enough to just have different brush animations for each element; we wanted the whole environment to feel unique for each — the Fire title should feel like it’s hanging above a raging bonfire, while Water should feel submerged, with caustics playing across its surface.”
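
The specifics of Method’s Houdini network aren’t public, but the core idea of procedurally generating a stroke — sweeping a tapered ribbon along a noise-displaced guide curve — can be sketched in a few lines. Below is a minimal, illustrative Python/NumPy version; the noise model, taper profile and function names are assumptions, not the studio’s actual setup.

    import numpy as np

    def brushstroke(p0, p1, n=200, width=0.3, seed=0):
        """Toy procedural brushstroke: a tapered ribbon swept along a
        noise-displaced guide curve (illustrative, not Method's setup)."""
        rng = np.random.default_rng(seed)
        t = np.linspace(0.0, 1.0, n)[:, None]
        spine = (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
        # Low-frequency sinusoidal wobble stands in for curl noise.
        freq = rng.uniform(1.0, 3.0, 3)
        phase = rng.uniform(0.0, 2.0 * np.pi, 3)
        spine = spine + 0.15 * np.sin(2.0 * np.pi * freq * t + phase)
        # Taper the ribbon toward both ends, like a brush lifting off.
        w = width * np.sin(np.pi * t[:, 0])
        tangent = np.gradient(spine, axis=0)
        tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
        side = np.cross(tangent, [0.0, 0.0, 1.0])   # ribbon side vector
        return spine + side * w[:, None], spine - side * w[:, None]

    # Two rails of the ribbon; triangulating between them yields the mesh.
    left, right = brushstroke([0, 0, 0], [4, 1, 0])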

Ebelhar says many people were involved in bringing these titles to life and gives “a special shout out to Johnny Likens, David Derwin, Max Strizich, Alejandro Robledo Mejia, Michael Decaprio and our producer Claire Dorwart.”

Pure4D

DI4D’s Updated Facial Performance Capture System, Pure4D 2.0

DI4D, a facial capture and animation provider, has introduced Pure4D 2.0, the latest iteration of its proprietary facial performance capture solution. Pure4D has been used to produce hours of facial animation for many AAA game franchises, including Call of Duty: Modern Warfare II and III and F1 21 and 23.

F1

The Pure4D 2.0 pipeline is purpose-built to directly translate the subtleties of an actor’s facial performance onto their digital double. It delivers nuanced facial animation without the need for manual polish or complex facial rigs.

Pure4D 2.0 is built from DI4D’s proprietary facial capture technology, which combines performance data from an HMC (head-mounted camera) with high-fidelity data from a seated 4D capture system to achieve a scale and quality beyond the capabilities of traditional animation pipelines. Pure4D 2.0 is compatible with the DI4D HMC and third-party dual-camera HMCs as well as the DI4D Pro and third-party 4D capture systems.

Behind this process is DI4D’s machine learning technology, which continually learns an actor’s facial expressions, reducing subjective manual clean-up and significantly increasing both the repeatability and efficiency of the pipeline. This makes Pure4D 2.0 ideally suited to AAA video game production.

Pure4D

Call of Duty

Faithfully recreating an actor’s facial performance is key to Pure4D 2.0’s approach, making it possible to emulate the experience of watching an actor in a live-action film or theatrical performance using their digital double.

A digital double refers to an animated character that shares the exact likeness and performance of a single actor, resulting in highly realistic, performance-driven facial animation. It’s a process that preserves the art form of acting while enhancing the believability of the character.

Pure4D’s approach to facial animation has inspired a new short film, Double, starring Neil Newbon, one of the world’s most accomplished video game actors, who won Best Performance at the 2023 Game Awards. Double will use Pure4D 2.0 to capture the nuance of Newbon’s performance, driving the facial animation of his digital double. Scheduled for release during the summer, Double will highlight the increasingly valuable contribution that high-quality acting makes to video game production.


New Atlux λ Plugin for Lighting, Cinematics, Rendering in Unreal 

Indie software company Vinzi has released Atlux λ, a plugin for Unreal Engine that helps 3D artists produce hyperrealistic images and animations with ease. Formerly known as MetaShoot, Atlux λ has been reimagined with an array of features designed to simplify lighting and rendering workflows and achieve real-time results.

Built on Epic Games’ robust Unreal Engine platform, Atlux λ serves as a digital-twin photo studio with highly realistic lighting assets, camera animation presets and a one-click render interface. The plugin’s intuitive design and simplified workflow make it an ideal entry point for 3D artists seeking to harness Unreal Engine’s real-time capabilities without being encumbered by technical intricacies.

Early adopters of Atlux include Hashbane Interactive, Sentient Arts, FD Design and R3plica.

“Atlux λ is a labor of love based on my years of experience working in the 3D industry as an artist and engineer,” says Vinzi founder Jorge “Valle” Hurtado. “The goal is to make lighting and visualization in Unreal Engine as creative and fast as possible. We have customers using Atlux λ for games, architecture, character development, product viz and even automotive.

“Atlux λ is not just a rebrand of MetaShoot; it’s a fully rewritten and optimized plugin that now introduces light painting, a sequence tab for animation and even an AI-based studio randomizer. There’s a lot there! With Atlux λ, 3D artists can create showcase animations from camera motion presets without the complexity of the Unreal render queue or sequencer modules.”

Early adopters of Atlux have quickly incorporated the tool into their workflow. According to Anthony Carmona, founder of 3D production studio Sentient Art, “MetaShoot, and now Atlux, blows away all our expectations. Having access to assets and instant lighting results speeds up our ability to produce amazing work for our clients. It’s perfect for rendering our highly detailed models and material work — from concepts to flawless portfolios.”

What’s New in Atlux λ:

  • AI Studio Randomizer: New studio randomizer builds unlimited photo studio setups in seconds.
  • Light Painting: A new interactive way to place lights based on visual feedback, cursor placement and keyboard shortcuts.
  • Sequence Tab: A new sequence tab with assets and Rig Rail presets to quickly build animations. Includes a NeRF maker and automatic level sequence creation.
  • New lighting and Photo Studio presets.
  • Keyboard shortcuts: easy camera selection and toggling between targets and lights.
  • Optimized render settings and UI.

Atlux λ Features:

  • 12 Photo Studio presets with lighting setups and cyclorama backdrops.
  • 14 realistic assets, including studio lighting with rigs and rail systems.
  • 360-degree turntable for product and model animations and visualization.
  • One-click render workflow with simplified interface.
  • 360-degree camera for HDRI creation.
  • Light painting, batch rendering, shortcuts and more workflow efficiencies.
  • Support for Engine versions 5.1 to 5.3 on Windows. (Mac version coming soon.)

Atlux λ is available as a one-time purchase for $349.50 at atlux.ai. The rental option is $29.50 per month.

Sarofsky Creates Title Sequence for Marvel’s Echo

The first series under the Marvel Studios Spotlight banner, Marvel’s Echo follows Maya Lopez as she faces her past, reconnects with her Native American roots and embraces the meaning of family and community in the hope of moving forward. The series is directed by Sydney Freeland (also a producer) and Catriona McKenzie; Kevin Feige, Brad Winderbaum, Stephen Broussard and Richie Palmer are also producers.

The producers called on Chicago’s Sarofsky to create Echo’s main title sequence. Creative director Stefan Draht and producer Kelsey Hynes led the project for Sarofsky, which created a 90-second sequence that is scored with the anthemic track “Burning” from Yeah Yeah Yeahs.

For the main title’s storytelling foundation, Freeland and the series’ producers wanted to establish a strong sense of place, emotionally connecting Tamaha, Oklahoma, and New York City. Next, to introduce Maya, her ancestors and Kingpin, the briefing called for themes of duality, tension, danger and Maya’s deafness and use of American Sign Language (ASL).

“One of the first visual themes we explored was using magical reality to express duality – using imagery that was sometimes consonant and other times dissonant,” explains Draht. “By blending various footage sources into visuals that stand outside of literal reality, we were able to bring a sense of mystery to the images.”

Working with designers and animators, including Ariel Costa, Matthew Nowak, Jens Mebes, Dan Moore, João Vaz Oliveira, Mollie Davis, and Andrei Popa, the Sarofsky team also developed a second visual theme: using hands and shadows in their storytelling. “Hands play an essential role in the series as Maya’s means of communicating using ASL – and in the telling of the creation story of the Choctaw Nation, which is told using shadow puppets in the series,” says Draht. “Developing these visual motifs amplified the core story and characters while allowing us to add meaning and tone. We use shadows to express history, danger and Maya’s ancestral connections.”

In Sarofsky’s contributions to Marvel Studios projects, the design pipeline involves visual effects, color and finishing. For Echo’s main titles, the team used Adobe After Effects with Maxon Cinema 4D.

“Because the meaning and structure of the shots were so specific and carefully designed, we leaned quite heavily on intense compositing and reconstruction of images using Adobe After Effects,” says Draht.

With most shots consisting of a combination of show footage, stock and original designs, the team used Cinema 4D to recreate scenes in three dimensions, projecting 2D imagery against CG elements. “This approach aided in building shots with camera motion and a dramatic sense of depth,” explains Draht.
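
Camera projection of this kind has a simple mathematical core: each vertex of the stand-in geometry looks up its color in the 2D plate through a pinhole camera. Here is a rough Python sketch of that mapping; the camera values and function names are illustrative, not Sarofsky’s actual Cinema 4D setup.

    import numpy as np

    def project_uv(points, K, Rt, plate_size):
        """Map world-space points to normalized UVs on a projected 2D plate."""
        homog = np.c_[points, np.ones(len(points))]   # world -> homogeneous
        cam = Rt @ homog.T                            # world -> camera space
        pix = K @ cam                                 # camera -> pixel coords
        pix = pix[:2] / pix[2]                        # perspective divide
        return pix.T / plate_size                     # pixels -> 0..1 UVs

    K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # intrinsics
    Rt = np.eye(4)[:3]                       # camera at origin, looking down +z
    verts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 6.0]])  # geometry in front
    print(project_uv(verts, K, Rt, plate_size=(640, 480)))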

As the final touch, artists used Blackmagic DaVinci Resolve to align the color palette across every shot and apply a signature look to the sequence.

“This is one of my favorite types of projects; it exists somewhere in the middle between pure design and visual effects,” concludes Draht. “This series has been produced with so much attention to detail. Being allowed to explore and create something so fantastical to introduce the project is a great honor.”

Pixel 8

Bespoke Digital Helps Launch Google’s Mint Pixel 8

Creative studio Bespoke Digital continues to grow its innovative approach to content production with its latest project for tech giant Google and the Pixel 8, marking the studio’s third consecutive launch collaboration with the brand.

Bespoke’s team of CG artists, editors and colorists handled every facet of the project, from inception to post, as well as the final behind-the-scenes film. Because of the scope of the job, the studio worked for months on the execution. Along with Brooklyn-based artist Ricardo Gonzalez, aka It’s A Living, who hails from Durango, Mexico, Bespoke teased the expansion of Google’s color options for its Pixel 8 and Pixel 8 Pro, culminating in a live mural event in SoHo.

Pixel 8

The deliverables for this Google Mint Pixel 8 launch included the aforementioned behind-the-scenes film; social media assets; original CG animation for 250 digital-out-of-home electronic kiosks across Manhattan and Brooklyn; an artist-painted static billboard at 389 Canal Street; commercial teasers (including CG elements); editorial; sound; color grading; sourcing and coordinating the artist; production of 100 custom phone cases for a giveaway and 100 custom paint canvases for a painting class at the event; locations and permits; media recording; live-streaming; and the art installation at the Google store.

“Having worked in advertising for a number of years, I find it exciting to witness the evolution beyond conventional commercials, seamlessly transitioning into the realm of experiential marketing,” says Eui-Jip Hwang, Bespoke’s EP on the project. “Our journey has not merely involved crafting traditional CG commercials; rather, we’ve pushed the boundaries, crafting immersive experiences that redefine advertising creation, revolutionizing how it is consumed.”

In terms of tools, Bespoke called on Adobe Photoshop for retouching, Adobe Premiere for editing, Blackmagic DaVinci Resolve for color grading, SideFX Houdini for 3D and Adobe After Effects for animation.

Lobo Uses AI to Create Animated Open for Ciclope Festival

Creative production, design, animation and mixed media studio Lobo created an animated open for Ciclope Festival 2023, which took place in November in Berlin. Blending traditional concepts with AI-enhanced animation techniques, Lobo produced a kaleidoscope of colors and images designed to show off the artistry on display at this year’s show.

The Ciclope Festival is a three-day live event focusing on the advertising and entertainment industries. The recurring theme each year is craft, with 2023 emphasizing artificial intelligence.

“We are all talking about how AI will influence our work and our lives,” explains Francisco Condorelli, founder/organizer of Ciclope. “Lobo developed its titles around that idea using machine learning technology.”

The process began with the creation of 3D models using Autodesk Maya. These initial structures and visual elements were used to craft the basic environment and figures of the animation. Lobo then used Stable Diffusion.

At the core of this process was the use of LoRA (Low-Rank Adaptation), a method known for its efficiency in adapting large neural network models for specific tasks. In this project, LoRA was called on to learn from unique and original artworks created by Lobo’s artists. This method allowed the AI to capture the creative essence and stylistic details of these pieces, effectively using those insights to refine and enhance the 3D models. Through LoRA, the team was able to integrate artistic nuances into the animation, ensuring what they say was a seamless blend of art and technology.

After LoRA, Lobo used ControlNet as a precision-guiding tool, meticulously overseeing the translation of the artistic vision into the animated models and ensuring each nuance was accurately reflected. This system was key in aligning the final animations with the intended aesthetic objectives, enabling a faithful and resonant representation of the artists’ original concepts.
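
For readers who want a concrete picture, here is a minimal sketch of a LoRA-plus-ControlNet image pass using the open-source Hugging Face diffusers library. The model IDs, LoRA path and depth conditioning are placeholders for illustration; Lobo has not published its exact pipeline.

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    # ControlNet steers generation with a control image (here, a depth pass
    # rendered from the kind of Maya scene described above).
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16).to("cuda")

    # LoRA weights fine-tuned on the studio's own artwork (hypothetical path).
    pipe.load_lora_weights("./lora/studio_style")

    depth = load_image("frame_0001_depth.png")    # depth pass of one frame
    frame = pipe("hand-painted kaleidoscopic stage, studio style",
                 image=depth, num_inference_steps=30).images[0]
    frame.save("frame_0001_styled.png")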

Lobo is no stranger to incorporating advanced technology in its work. For the Google Pixel Show 2023, Lobo was commissioned to produce a teaser for the event. Uniting five of its directors, Lobo brainstormed challenging concepts inspired by the arrival of AI-based image-making technologies. The subsequent short used different styles and techniques, from the figurative to the completely abstract, but they all shared the use of AI tools.

For Unfair City, a short film created with BBDO Dublin, Lobo used AI to highlight the growing inequality of homelessness.

However, the expanding use of artificial intelligence remains just one tool in Lobo’s toolbox.

For VR Vaccine, Lobo was tasked with alleviating a child’s fear of getting vaccine shots. By creating an immersive fantasy world, Lobo was able to position the child as the hero of her own story, using the vaccine as a shield to protect the realm from invaders. The use of a headset and smartphone was integral in creating this environment.

Lobo was also engaged to launch the new Volvo S60. Using WebAR, Lobo simulated a virtual store, including car customizations, test drive scheduling and financing simulations.

Puget Systems Offers Custom Workstations Built on Threadripper 7000

Puget Systems has a new line of custom workstations built on the AMD Ryzen Threadripper 7000 Series and Threadripper Pro 7000 WX Series processors. They offer new levels of computing performance and innovation for users in multiple industries, especially those in high-end content creation, virtual production and game development.

Here’s what’s new and innovative with the Threadripper 7000 Series processors:

AMD Ryzen Threadripper 7000 Series: These processors for the high-end desktop market offer an overclockable desktop experience along with the highest clock speeds achievable on a Threadripper processor. Power, performance and efficiency are all maximized with the 5nm process and “Zen 4” architecture.

The Threadripper 7000 Series is built to enable powerful I/O for desktop users, with up to 48 PCIe Gen 5.0 lanes for graphics, storage and more. AMD says the 7000 Series is capable of twice the memory bandwidth of typical dual-channel desktop systems, and the processors’ quad-channel DDR5 memory controller can support the most intensive workflows.

AMD Ryzen Threadripper Pro 7000 WX Series: These processors expand on the prior generation’s performance and platform features for the workstation market. Also built on the 5nm Zen 4 architecture, this generation offers ultra-high performance for professional applications and complex multi-tasking workloads.

For multi-threaded workloads, Threadripper Pro processors offer up to 96 cores and 192 threads for complex simulation, generative design, rendering and software compilation tasks. They also provide up to 384MB of L3 cache along with eight channels of DDR5 memory for applications that require high memory capacity and bandwidth.

Puget Systems’ new Threadripper 7000 custom workstations are available immediately for configuration for a wide range of applications. The new Threadripper Pro WX-Series workstation will be available for custom configurations in December.

Lenovo ThinkStation P8: Threadripper Pro 7000 WX, Nvidia RTX GPUs

Lenovo’s new ThinkStation P8 tower workstation features AMD Ryzen Threadripper Pro 7000 WX Series processors and Nvidia RTX GPUs. The ThinkStation P8 builds on the P620, one of the first workstations powered by AMD Ryzen Threadripper Pro processors. In addition to its compute power, the ThinkStation P8 features an optimized thermal design in a versatile Aston Martin-inspired chassis.

Designed for high-intensity environments, the Lenovo ThinkStation P8 is powered by the latest AMD Ryzen Threadripper Pro 7000 WX Series processors built on the leading 5nm “Zen 4” architecture and featuring up to 96 cores and 192 threads. The new sleek, sturdy, rack-optimized chassis offers larger Platinum-rated power supply options to handle more demanding expansion capabilities. For example, it can support up to three Nvidia RTX 6000 Ada generation GPUs to help reduce time to completion in graphics-intensive applications like real-time raytracing, video rendering, simulation or computer-aided design. The combined power also opens up immersive environments, including digital worlds, AR/VR content creation and advanced AI model development.

“The Lenovo ThinkStation P620 with AMD Threadripper Pro technology has been an absolute game-changer for our 3D animation and development workflows over the last two years,” says Bill Ballew, CTO of DreamWorks Animation. “We are looking forward to significantly faster iterations due to the increased performance with the new ThinkStation P8 workstation powered by AMD Threadripper Pro 7000 WX Series in this coming year.”

Configurations
In addition to AMD Ryzen Threadripper Pro 7000 WX Series processors and Nvidia RTX Ada generation GPUs, ThinkStation P8 includes ISV certifications and supports Windows 11 and popular Linux operating systems. It features a range of storage and expansion capabilities that provide flexible and tailored configurations. Highly customizable options allow users to select the best components to handle complex and demanding tasks efficiently. Also, easy access and tool-less serviceability provide scalability and quick replacement of many components.

The P8 workstation can accommodate up to seven M.2 PCIe Gen 4 SSDs with RAID support or up to three HDDs for large-capacity storage, plus up to 2TB of DDR5 memory with octa-channel support. Seven PCIe slots, including six PCIe Gen 5, offer faster connectivity. The workstation features lower latency and more expansion capability and includes 10 Gigabit Ethernet onboard to help eliminate network bottlenecks.

ThinkStation P8, like all Lenovo desktop workstations, includes built-in hardware monitoring accessible through ThinkStation diagnostics software, and Lenovo Performance Tuner comes with numerous profiles to optimize many ISV applications. ThinkStation P8 also supports Lenovo’s ThinkShield security offerings, which provide protection from BIOS to cloud. Additionally, rigorous testing standards, Premier Support and extended warranty options are available. Users can further manage their investment through Lenovo TruScale, which simplifies procurement, deployment and management of fully integrated IT solutions, all delivered as a service with a scalable, pay-as-you-go model.

ThinkStation P8 will be available starting Q1 2024.


Behind the Title: Ataboy Head of Production Rasha Clark

Rasha Clark is head of production at Ataboy, a New York City-based studio that provides design and animation. Her responsibility is to make sure “my team is able to do their best work and have what they need when they need it and that our clients feel heard, supported and are happy.”

Let’s find out more from Clark…

What is your typical day filled with?
I’m involved in creative calls when we first hear about a project, I create bids and schedules, I create the team and I oversee the day-to-day running of Ataboy. I check in with clients and add input when needed. I also get to go to events, dinners and drinks — so there are those perks too (laughs).

What would surprise people the most about what falls under that title?
Hmm, that I also change the toilet paper in the office? I think anyone familiar with an HOP knows what I do.

Vitamin Water

How has your section of the industry changed since COVID? The good and the bad?
Living in Lancaster, Pennsylvania, and having two young kids, I am grateful for the fact that remote work is an option. I remember the days of having to head back into the office after a short maternity leave and just wishing I had the opportunity to work and be with my son — or at least be home for dinner. Since we don’t work a traditional 9 to 5, the WFH option is really important to me. I trust everyone I work with to put in 100% effort regardless of where they are sitting.

I also think you need to be more intentional with how you communicate and how you train/mentor others and how and when you have video meetings. It might mean a little more work or planning for those in senior/leadership roles, but for me, it’s worth it.

The bad? I miss hanging with grown-ups on a regular basis. Seeing my colleagues and going to events occasionally is a real treat, and I love that everyone is so happy to be around everyone else.

Do you see some of these workflow changes remaining with us going forward?
Yes, I do. Work/life balance has always been important to me, and now it’s a common thing. Back in the day, I’d have to put up a bit of a stink to ensure that staff had the time to indulge in other passions outside of work. We’ve proven that we can be just as productive and creative wherever we are if we care about our work — it’s not about where we spend those working hours.

Rasha Clark

Munchkin

What’s your favorite part of the job?
Working with my team and seeing their ideas blow clients’ minds.

What is your least favorite?
Excel, and making prepro decks. Ugh.

What is your most productive time of the day?
Whenever. I’m a morning, afternoon and night person. It definitely helps if my kids are in school.

If you didn’t have this job, what would you be doing instead?
I’d be acting on stage, doing VO work. Or I’d be leading divers out into the Caribbean waters. Maybe I can do both one day.

Can you name some recent projects you have worked on?
We really enjoyed creating an animation around Halsey’s original art for a recent Coke spot. I’m really proud of how that turned out, and we were singing that song for ages!

Rasha Clark

Coke

We designed and animated a beautiful piece for Munchkin that emphasizes its commitment to green practices. And we worked on something superfun and catchy for Vitamin Water.

We also finished up an animated piece with insightful, meaningful and powerful content, explaining Native American economic practices and ideology.

It’s been really refreshing. We’ve been given a lot of creative freedom recently.

Why did you choose this profession?
I fell into this position. I came across a receptionist posting for a VFX company; it paid more than my other two jobs, and the hours allowed me to study for the college courses I was taking. So I applied for it and started to work in the “biz.”

After a year I was promoted to assistant producer and then worked my way up and through VFX, edit and production houses. Before that receptionist job, I didn’t know anything about the industry at all; it wasn’t something in my consciousness. But once I started, I knew I’d be doing this forever.

Rasha Clark

Munchkin

Do you listen to music while you work? Care to share your favorite music to work to?
Yes, I do! It depends on what I’m doing and my mood, though. If I’m lagging and have a lot to do, pop is my go-to: Harry Styles, Dua Lipa, Dominic Fike, etc.

If I really need to concentrate, Indian or Arabic instrumental music is what I choose, or the album “Awake” by Dream Theater (it’s a long story!).

If I’m feeling goofy, then the old crooners are what I blast and sing along to!

Name three pieces of technology you can’t live without.
The app How Much Phe?, which helps us track our son’s rare metabolic disorder. I’d cry without it!

The kids’ Kindles. They get to use them when we travel long distances, and it makes things much easier.

Oh, and my phone!

Rasha Clark

Vitamin Water

What do you do to de-stress from it all?
I remind myself that there’s only so much I can control and prepare for, then I have to go with the flow — it’s just work.

I’ve started taking Taekwondo. I love that I need to really think as well as move — that pushes work thoughts out. And venting helps too.

Making sure that my free time is filled with people and things I love is key.

Would you have done anything different along your path?
There was a moment early in my career when I was asked if I wanted to take a junior VFX supervision job in New Zealand, but I had just accepted a job with a post house in NYC. I had to turn the New Zealand job down, and I later found out that the gig was for the first Lord of the Rings!

My life would have been totally different because that path would have changed everything, but I don’t regret it. I appreciate all the experiences I’ve had, even the crappy ones.

The people that I’ve met and the things I’ve done have all led me here to this place and to the person I am. So no, I wouldn’t have done anything differently.

Finally, any tips for others who are just starting out?
I would tell them to trust their gut. You can never predict the future, so think about that and do what feels right in the moment… and make a change if/when it doesn’t feel right anymore. There are no mistakes, just lessons to learn from.

Matthias Hoene

Behind the Title: Director Matthias Hoene

Matthias Hoene is a director at WTP Pictures, a creative production studio that produces commercials, feature films, documentaries, TV series and music videos. The company is based in Detroit and LA but shoots worldwide with filmmakers and creators working in Spain, New York, Mexico and more. The Berlin-bred Hoene directs spots and films.

Let’s find out more…

How would you describe directing?
To me, making a film is like cooking a meal. You need to think about the flavor, visual appeal and texture, and then find the right ingredients to make the “dish” come to life. Do you want it to be sweet? Spicy? Umami? Hearty? Light? Nourishing? Vegan? Low-carb? You gather your spices and your carbs and veggies. Then you prepare it and serve it up with an exciting presentation, steaming hot: an olfactory journey for the senses, an adventure for the taste buds, a titillating theme with surprises and an emotional finish… and, most importantly, remember to leave some space for dessert.

Matthias Hoene

Adidas

What was it about directing that attracted you?
The act of storytelling is a primal and important part of human life. As an artist, I always wanted to move people, inspire them and make them feel alive. This can be done through a unique way of looking at the world, an insightful comment on a current matter or just a playful take on an everyday situation. I love creating worlds or surreal situations or just showing the audience something heartfelt, funny or emotional.

Everyone in my family used to tinker in a workshop making furniture, soldering custom hi-fi equipment together or making handmade fireworks (please don’t try this at home). That, combined with my interest in comic books, drawing, painting and photography, led me to filmmaking.

What I love about directing is that it sits at the intersection of technology and art. To be successful, you have to be intuitive and creative, working from gut instinct while also being tech-savvy, super-organized and methodical. At times, you need to know how to improvise and stick it all together with spit and chewing gum, all in the service of creating something wonderful.

Chanel

What continues to keep you interested?
Filmmaking is an art that keeps us humble. There is always more to learn, try out, experiment and express. I love working and am excited about how storytelling keeps evolving across new platforms and media. The bottom line is that people will never run out of the need to hear stories to help them make sense of the world (or escape it for a moment), and I’m excited to be part of that journey… and I would love to win an Academy Award one day (laughs).

How do you pick the people you work with on a project?
Directing is teamwork, and I love the families we create to bring each project to fruition. Picking your team is like casting actors. You want to make sure everyone’s unique talent brings out the right flavors in the project. I have a regular go-to crew, but I also pick and choose specific talent when appropriate for specific jobs.

The metaphor is that everyone should have a sandbox to play in and have fun, but within the parameters the story requires. The goal is to combine our varied talents and make something that is bigger than the sum of its parts.

Adidas

How do you work with your DP? How do you describe the look you are after?
I am very specific about the visual style of each film and spend a lot of time taking photos and filming the locations in prep for the shoot. I share those photos along with visual references and movies with my DP so we can develop the look together.

I always have a camera with me, as a visual sketchbook, to train my eye, discover the world and hone my craft. Plus, I love taking pictures. Because a picture says more than a thousand words, how you stage, frame and light each shot is an intrinsic part of the storytelling and can enhance every commercial or film.

Do you get involved with post at all?
My work can be post-heavy, so I like to be part of the process, especially if it involves character animation. I love bringing extra nuance and a bit of joyful spirit to CGI characters. So when it comes to fine-tuning the details of a performance, you might catch me acting out the performance of an animated ogre or a tap-dancing penguin or whatever else is required.

Chanel

Music and sound design are also crucial to my storytelling. I usually like to work closely with my composer to evolve the music. We keep going until we find the perfect sound and melody, trying to create something cool, unique and memorable. Of course, I also understand that in commercials, sometimes it’s good to step back and let everyone else work on the final polish, so I’ll adapt to each situation as appropriate.

How did the pandemic affect your process and your work?
I remote-directed a few commercials during the pandemic, and I shot the first season of my TV show, Theodosia, during lockdown. I have to say that I don’t miss Zoom-directing or working with masks and having to stick to our social bubbles. But going through this has made a few aspects of the craft easier to organize, and using video conferencing certainly helps with the carbon footprint.

That said, I love the hustle and bustle of a film set, and as a director, I believe that my energy helps shape great performances and get the best out of the crew, so I’m glad we’re back to IRL.

Can you name some recent projects?
I recently completed my third feature film, Little Bone Lodge, which is a contained thriller and has some cracking performances from the entire cast, including Joely Richardson (Nip/Tuck). I recently directed a spot for Lenovo Legion that shows a great crossover between live action and animation, and a film for Adidas about cliff diver Anna Bader that is beautiful and has a worthwhile message. There was also a Chanel spot.

What project are you most proud of?
My first commercial for Club 18-30 won a Golden Lion at Cannes. I was fresh out of college and totally blown away by its success, but the film holds up and still makes me giggle. I directed a couple of 3- to 5-minute shorts for cell phone network Giffgaff that are a lot of fun. I love the magical world of my McCain commercials and I love the escapist world-building of my Lenovo Legion spot.

Was there a particular film or show that inspired you to get into filmmaking?
I have a weird and eclectic bunch of influences, from Terminator 2 and Aliens to Fight Club, The Insider, Best in Show and Amélie. Nevertheless, you’ll find traces of those disparate influences in my work, which ranges from dark and action-packed to whimsical and sweet.

What’s your favorite part of the job?
My favorite part of the job is that there are so many different parts… throughout pitching, development, financing, prep, shoot, post production and release, you constantly have to shift gears and get to do so many different things that it never gets boring. For me, it’s priceless when you see an actor bring a special moment to life; your heart beats a little faster and you remember why you got into this business in the first place.

What’s your least favorite?
My least favorite part of the job is the empty-nester feeling when the project is over and I have to let it go. That said, that’s when promotion starts, and you share it with the rest of the world, so it’s not so bad.

If you didn’t have this job, what would you be doing?
I would work for NASA and build a spaceship to take us beyond our solar system into deep space, marking the beginning of mankind’s journey to explore the rest of the universe.

Matthias Hoene

Matthias Hoene

How early did you know this would be your path?
I grew up in Berlin in a family of scientists. I knew no one in the industry, nor did I have any close role models who had made it in the film industry. But I loved movies, especially science-fiction and fantasy. So I started drawing and painting everything that popped into my head before picking up my first film camera at St. Martin’s College in London.

Name three pieces of technology you can’t do without.
The truth is boring: My laptop. My phone. My camera. But, looking beyond that, I love vintage lenses, the mechanical beauty of a Bolex 16mm camera, and my Nikon FM2.

What do you do to de-stress from it all?
Before I made my first feature film, I ran the New York City Marathon. Committing to one thing for that long — the training and then the run itself — was such an empowering experience that it still gives me strength now, and I’ve been running ever since. Nothing is better for de-stressing than a double endorphin hit, feet to the ground, fresh air and nature.

While filming in China, I picked up meditation and now use a pick ‘n’ mix of techniques ranging from mindfulness via transcendental meditation to using the Waking Up app.

And finally, I love traveling, reading, cooking and hanging out with friends… everything that grounds me in reality.

AMD Intros New Radeon Pro W7000 Graphics Cards

AMD has added two new products to the AMD Radeon Pro W7000 Series product line: the AMD Radeon Pro W7600 and AMD Radeon Pro W7500 workstation graphics cards. They are designed to tackle mainstream workloads across a range of industries, including media and entertainment. With these new cards, creators now have a broader selection of AMD Radeon Pro workstation graphics offerings.

AMD Radeon Pro W7600 and Radeon Pro W7500 graphics cards accelerate everyday professional workflows by focusing on efficiency and increasing performance. The new graphics cards take advantage of AMD RDNA 3 architecture and are optimized for price/performance as well as outstanding stability and reliability. Both cards feature 8GB of high-speed GDDR6 memory to support data-intensive tasks and enable raytraced renderings with detail and realism.

“Our goal is to offer more choice for professional users, and these graphics cards do exactly that – built to address the largest market segment focusing on mainstream workloads,” says Scott Herkelman, senior VP/GM, Graphics Business Unit at AMD. “AMD Radeon Pro W7600 and Radeon Pro W7500 graphics cards provide exceptional performance for a variety of professional applications while offering incredible levels of visual fidelity and setting a new performance standard for midrange professional graphics.”

Key features include:

  • AMD RDNA 3 Architecture – Features redesigned compute units with unified raytracing and AI accelerators, second-generation AMD Infinity Cache technology and second-generation raytracing technology. It also offers optimizations for 3D modeling, animation, rendering, video editing and general multitasking workflows.
  • Dedicated AI Acceleration – New AI instructions and increased AI throughput deliver, on average, more than twice the performance of the previous AMD RDNA 2 architecture.
  • 8GB GDDR6 Memory – Allows creators to handle data-intensive tasks and enable raytraced renderings with incredible detail and realism.
  • AMD Radiance Display Engine With DisplayPort 2.1 – With 12-bit HDR color support and over 68 billion colors, display outputs support next-generation displays and multi-monitor configuration options, creating an ultraimmersive visual environment.
  • AV1 Encode/Decode – Dual encode/decode media engines unlock new multimedia experiences with full AV1 encode/decode support designed for high resolutions, wide color gamut and high-dynamic range enhancements.
  • Optimized Driver Performance and Professional Application Certification – All AMD Radeon Pro workstation graphics are supported by AMD Software: Pro Edition, which provides a modern and intuitive user interface. AMD continues to work with leading professional software application vendors on a comprehensive certification program. The company is also working to ensure AMD Radeon Pro graphics cards are built for demanding 24/7 environments and tested to meet exceptional standards. The list of certified applications is here.

The new AMD Radeon Pro W7000 Series workstation graphics cards are available now. The Radeon Pro W7600 sells for $599, while the Radeon Pro W7500 model costs $429. Availability in OEM workstations is expected to begin later this year.

AMD will display the new cards at SIGGRAPH 2023.

SIGGRAPH Technical Paper: Advancing Digital Humans

Innovation in computer graphics over the past 50 years has given us realistic CGI that helped blast traditional filmmaking and gaming into a new era. One example of CGI innovation is the ongoing advances in character animation. Many of these advances are driven by motion capture technology that consistently delivers efficient, state-of-the-art visual effects prevalent in our beloved superhero blockbusters.

The creative potential in the field of character animation research remains endless. “Character animation represents a truly unique field within computer graphics. The goal of character animation is to replicate the intelligence and behavior of living beings, and this extends not only to humans and animals, but also to imaginary creatures,” says Libin Liu, assistant professor at Peking University, who will be presenting new research along with his team, as part of the SIGGRAPH 2023 Technical Papers program.

“Over the years, the research community has explored many approaches toward achieving this goal… and with the exciting progress we’ve seen in AI, there will be a boom in research that utilizes large language models or more comprehensive multi-modal models as the ‘brain’ for the character, coupled with the development of new motion representation and generation frameworks to translate the ‘thoughts’ of this brain into realistic actions,” says Liu. “It’s an exciting time for all of us in this field.”

As a preview of the Technical Papers program, here is a spotlight of three unique approaches that showcase innovation in advancing the character animation field even further.

Body Language
Many of us unconsciously converse or express ourselves using physical gestures. Some of us may gesture with our hands, shift our body posture to make a point or bring another body part (eyes or legs) into play while we talk. Indeed, speech and communication go hand in hand with physical gesturing — a complicated sequence to represent digitally.

A team of researchers from Peking University in China and the National Key Lab of General AI has introduced a sophisticated computational framework that captures the detailed nuances of physical human speech gestures. And this framework does so while allowing users to control those details using a broad range of input data, including a piece of text description, a short clip of demonstration or even data representing animal gestures, such as video of a bird flapping or spreading its wings.

The key component underpinning the team’s new system is a novel representation of motions, specifically the quantized latent motion embeddings, coupled with diffusion models — one of the key components behind recent AI-driven image-generation techniques. This representation significantly reduces ambiguity and ensures the naturalness and diversity of movements. Additionally, researchers enhanced the CLIP model developed by OpenAI with the ability to interpret style descriptions in multiple forms, and they have developed an efficient technique to analyze sentences, enabling the digital character to understand the speech’s semantics and determine the optimal time to gesture.
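
As a rough intuition for what a quantized latent motion embedding is, the sketch below implements a toy vector-quantization bottleneck in PyTorch: continuous per-frame motion features get snapped to the nearest entry of a learned codebook. The sizes and names are illustrative; this is not the authors’ code.

    import torch
    import torch.nn as nn

    class MotionVQ(nn.Module):
        def __init__(self, num_codes=512, dim=64):
            super().__init__()
            self.codebook = nn.Embedding(num_codes, dim)  # learned motion tokens

        def forward(self, z):            # z: (batch, frames, dim) features
            flat = z.reshape(-1, z.shape[-1])
            dist = torch.cdist(flat, self.codebook.weight)
            idx = dist.argmin(dim=1)     # nearest codebook entry per frame
            quant = self.codebook(idx).view_as(z)
            # Straight-through estimator: gradients bypass the argmin.
            return z + (quant - z).detach(), idx.view(z.shape[:-1])

    vq = MotionVQ()
    tokens, ids = vq(torch.randn(2, 120, 64))   # 120 frames of 64-D features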

Building on the advances made in the digital human space, this new work addresses the challenge of producing digital characters that have the capability to perform physical gestures during conversations, and with minimal direction or instruction. The system supports style prompts in the form of short texts, motion sequences or video clips and provides body part-specific style control, such as combining the gesture of a yoga pose (warrior one) with gestures of feeling happy or sad.

“With this work, we’ve moved another step closer to making digital humans behave like their real-life counterparts. Our system equips these virtual characters with the capability to perform natural and diversified gestures during conversations, thereby considerably enhancing the realism and immersion of interactions,” says Liu, a lead author of the research and assistant professor at Peking University’s School of Intelligence Science and Technology.

“Perhaps the most exciting aspect of this technology is its ability to let users intuitively control the character’s motion using language and demonstrations. This also allows the system to interface seamlessly with advanced artificial intelligence like ChatGPT, bringing an increased level of intelligence and lifelikeness to our digital characters.”

Liu and his collaborators, Tenglong Ao and Zeyi Zhang, both at Peking University, are set to demonstrate their new work at SIGGRAPH 2023. View the team’s paper and accompanying video on their project page.

Realistic Robots in Motion
Who doesn’t love a dancing robot? Yet easily replicating or simulating legged robots and their dynamic motions remains a challenge in the field of character animation. In new research, an international team from Disney Research Imagineering and ETH Zürich describes an innovative technique that enables the optimal retargeting of expressive physical motions onto freely walking robots.

Retargeting motion, or editing existing motions from motion capture data or other sources of digital artistic creation, is a quicker way to simulate physical motion in the digital world. However, significant differences in proportions, mass distributions and the number of degrees of freedom make editing that motion onto a different system especially challenging.

To that end, this new technique enables the retargeting of captured or artist-provided motion onto legged robots of vastly different proportions and mass distributions.

“We can take an input motion and then automatically solve for the best possible way that a robot can execute that motion,” note the researchers. Their method takes into account the robot dynamics and the robot’s actuation limits, which means that even highly dynamic motions can be successfully retargeted. The result is that the robot can perform the motion without losing its balance — not an easy feat.

Balance is a major hurdle the team has overcome with this new approach. Because of the significant differences in sizes and shapes between animals or artist-created rigs and a legged robot, retargeting motions is difficult to achieve with standard optimal control techniques and manual trial-and-error methods.

The researchers’ approach is a differentiable optimal control (DOC) technique that allows them to solve for a comprehensive set of parameters to make the retargeting agnostic to changes in proportions, mass distributions and differences in the number of degrees of freedom between the source of input motion and the actual physical robot.
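
The following toy example conveys the flavor of differentiable optimal control: roll out a simple, differentiable dynamics model, score how well it tracks a reference motion under an actuation limit, and backpropagate through the whole rollout to improve the controls. It is a deliberately tiny point-mass stand-in, not the Disney/ETH solver itself.

    import torch

    dt, T = 0.05, 100
    ref = torch.sin(torch.linspace(0, 2 * torch.pi, T))  # reference motion
    u = torch.zeros(T, requires_grad=True)               # control inputs
    opt = torch.optim.Adam([u], lr=0.05)
    u_max = 3.0                                          # actuation limit

    for step in range(500):
        x = v = torch.zeros(())
        cost = torch.zeros(())
        for t in range(T):
            a = u_max * torch.tanh(u[t] / u_max)  # smoothly saturated actuator
            v = v + a * dt                        # toy double-integrator dynamics
            x = x + v * dt
            cost = cost + (x - ref[t]) ** 2       # track the target motion
        opt.zero_grad()
        cost.backward()                           # gradients flow through rollout
        opt.step()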

The team behind DOC includes Ruben Grandia, Espen Knoop, Christian Schumacher and Moritz Bächer at Disney Research and Farbod Farshidian and Marco Hutter at ETH Zürich. They will showcase their work as part of SIGGRAPH 2023 Technical Papers program. For the paper and video, visit the team’s project page.

Tennis, Anyone?
The ultimate dream of computer gaming enthusiasts is to be able to control their players in the virtual world in a way that mirrors the players’ athleticism and movement in the physical world. The authenticity of the game is what counts.

A global team of computer scientists, one of whom is also a tennis expert and NCAA tennis champion, has developed a physics-based animation system that can produce diverse and complex tennis-playing skills while using only motion data from videos. The team’s computational framework empowers two simulated characters to engage in extensive tennis rallies, guided by controllers learned from match videos featuring different players.

“Our work demonstrates the exciting possibility of using abundant sports videos from the internet to create virtual characters that can be controlled. It opens up a future where anyone can bring virtual characters to life using their own videos,” says Haotian Zhang, first author of the new research and a Ph.D. student studying under Kayvon Fatahalian, associate professor of computer science at Stanford University. Zhang and Fatahalian are set to present their work at SIGGRAPH 2023, along with collaborators from Nvidia, University of Toronto, Vector Institute, and Simon Fraser University.

“Imagine the incredible creativity and convenience as people from all backgrounds can now take charge of animation and make their ideas come alive,” adds Zhang. “The potential is limitless, and the ability to animate virtual characters is now within easy reach for everyone.”

The researchers demonstrate the system via a number of examples, including replicating different player styles, such as a right-handed player using a two-handed backhand and vice versa, and capturing diverse tennis skills such as the serve, forehand topspin and backhand slice, to name a few. The novel system can produce two physically simulated characters playing extended tennis rallies with simulated racket- and ball-handling dynamics.

Along with Zhang and Fatahalian, collaborators Ye Yuan, Viktor Makoviychuk and Yunrong Guo of Nvidia, Sanja Fidler of Nvidia and University of Toronto, and Xue Bin Peng of Nvidia and Simon Fraser University will demonstrate their work at SIGGRAPH 2023. For the full paper and video, visit the team’s page.

Get a glimpse of this year’s content by watching the SIGGRAPH 2023 Technical Papers trailer or visit the SIGGRAPH 2023 website to learn more about the program and registration details.

Roof Studio Animates 3D Short for 4-H Canada

Animation studio Roof Studio has produced a 3D animated short for 4-H Canada, a not-for-profit organization focused on positive youth experiences. Conceived by Edelman Canada, 4-H Forever demonstrates the impact the organization has on preparing youth for the future and the role volunteers play in making that happen.

4-H Canada

From the film’s inception, there was a clear need to create a world that both complemented and highlighted the journey of the four young people at the film’s core, says Lucas Camargo, who led the project with Roof creative directors Vinicius Costa and Guto Terni.

“Their unique, stylized forms spawned the world-building around them. The spaces they interact with are never squared perfectly; we let the forms flow to infuse realism in our render with an organic spatial dynamic,” explains Camargo, who also took on the roles of director and art director on the film. “The overall mood we wanted was to feel like a dream, a memory.”

One of the challenges that Roof faced was interpreting this written story with constantly jumping timelines, “showing how four friends grew and transformed in a visual story within 90 seconds,” says Costa. “We spent a lot of time figuring out how to condense the actions and scenes to their essential meanings so viewers could easily understand. Each child had a distinct characteristic, and for each timeline jump, we had to highlight their unique traits through each shot. The advanced concept of the story during the pitch phase helped us a lot later in production, when several of our initial ideas were executed.”

Another challenge was time. “Given our limited time frame, we created models and designed characters spanning various ages,” says Camargo. “How we used and dressed them was crucial. Drawing from prior experience, planning and detailed design were essential. Anticipating every requirement helped streamline 3D production. For instance, grooming hair for over 20 characters in just a few weeks seemed impossible. However, we developed a unique approach by blending stop-motion aesthetics with 3D grooming.”

Roof also used timelapse to showcase the youths’ teamwork and evolution in building a community center. This visual device allowed the camera to be a silent observer, providing a sense of continuity and continuous creation.

Sunlight was just one of the natural elements featured in the short. Nature and all its components were pivotal throughout the piece, with landscapes acting as story elements. “This focused approach crafts an immersive world, where sunlight not only enhances realism but also warmly illuminates the children’s journey, adding depth and symbolizing hope and unity,” concludes Costa.

The studio called on Autodesk Maya and Redshift.


SIGGRAPH 2023: Challenges in 3D Printing, Textile Production and Beyond

The research expected to debut as part of the SIGGRAPH 2023 conference Technical Papers program is full of “ones to watch” — exciting new technologies, ideas and algorithms that span all areas of graphics and interactive techniques. Particularly noticeable this year is the emergence of research methods that extend beyond the digital world and address the creation of real-life content.

“A lot of research in computer graphics has been about determining the best way to visualize various real-world phenomena using computers, and in doing so, there are often details that are elided to fit such complicated things into the computational framework,” says Jenny Lin, a lead author of one of the featured new research projects that will be showcased at SIGGRAPH 2023. Lin and her collaborators have devised a formal semantics framework for machine knitting programs, applying mathematics to describe anything a knitting machine can make.

“Going from virtual representations to the physical world involves addressing details that we often take for granted when going about our daily lives,” she adds. “There’s a very natural connection between these two directions, and there’s something very gratifying about using the language of computers to understand and improve something as tactile and grounded as knitting.”

As a preview of the Technical Papers program, here is a sampling of three novel computational methods and their unique approaches to real-world applications.

Equipped to Make It Fit
Additive manufacturing, better known as 3D printing, brings digitally designed products to life. It allows unprecedented freedom in 3D geometries and gives manufacturers the ability to produce parts on demand and locally. With 3D printing, supply chain management can be simplified, and it is easier to transition from engineering iterations to full manufacturing.

Left: The benchmark densely packed into a cuboid with a packing density of 35.77%; the packing is free of interlocking. Right: A close-up view highlighting densely packed objects. “Dense, Interlocking-free and Scalable Spectral Packing of Generic 3D Objects” © 2023 Cui, Rong, Chen, Matusik

3D printing is undergoing a transition from being a prototyping technology to being a manufacturing technology. However, the main roadblock is the overall cost of the manufactured part. 3D printing hardware, materials, and human labor all drive the cost of the technology. The drive for higher cost efficiency requires printing in batches where parts are tightly packed in the 3D printer’s build volume to maximize the number of printed parts per batch. One of the main limitations of this process is the limited utilization of the build volume due to the computational complexity of the packing process.

In a collaboration between MIT and Inkbit, a 3D manufacturer specializing in polymer parts, researchers are addressing the complex problem — and headache — of digitally packing many parts into a single container under multiple constraints. To date, many part models are virtually placed in the printing tray (a process also referred to as “nesting”), and the printer executes the job, printing the whole tray. The problem with this approach is that the container isn’t densely packed, and there is no efficient method to automate the process and ensure the 3D printers are printing the maximum volume of parts in a designated container.

The team of researchers, led by Wojciech Matusik, CTO at Inkbit and professor of electrical engineering and of computer science at MIT, developed a novel computational method to maximize the throughput of 3D printers by packing objects as densely as possible and accounting for interlocking-avoidance (between many parts with different shapes and sizes) and scalability. Their approach leverages the Fast Fourier Transform, or FFT, a powerful algorithm that has made it possible to quickly perform complex signal processing operations that were previously impossible or prohibitively expensive.

Coupled with FFT, “our work is making the individual placement of a 3D part into a partially filled build volume as fast as possible,” says Matusik. “Our algorithms are not only extremely fast, but they can now achieve print volumes with much higher densities (40% or more). The higher print efficiency will unlock lower costs for manufactured parts.”
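
To see why the FFT helps, consider a simplified 2D version of the problem: score every possible placement of a part in a partially filled tray at once by cross-correlating occupancy grids. The Python/NumPy sketch below illustrates that trick under those simplifying assumptions; it is not the authors’ 3D implementation.

    import numpy as np

    def feasible_offsets(container, part):
        """Boolean grid of offsets where `part` overlaps nothing in `container`."""
        H, W = container.shape
        h, w = part.shape
        # Overlap count for every shift at once = cross-correlation via FFT.
        F = np.fft.rfft2(container)
        G = np.fft.rfft2(part, s=container.shape)
        overlap = np.round(np.fft.irfft2(F * np.conj(G), s=container.shape))
        # Only non-wrapping offsets correspond to real placements.
        return overlap[: H - h + 1, : W - w + 1] == 0

    container = np.zeros((64, 64)); container[:, :10] = 1  # partially filled tray
    part = np.ones((8, 12))                                # rectangular part mask
    print(np.argwhere(feasible_offsets(container, part))[0])  # first free offset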

Lead author Qiaodong Cui is set to present the new work at SIGGRAPH 2023. The team also includes Victor Rong of MIT and Desai Chen, a research engineer at Inkbit. Visit the team’s page for the full paper and video.

The Refined Knitting Machine
Some may say that knitting is a relatively easy technique, or craft, to learn — and could even serve as a relaxing stress-reducer. Automating the needle-and-yarn technique with machine knitting is well established in the fashion and textile industries, and it has seen a recent surge in research interest due to increased understanding of the scope and complexity of the objects — fabrics, patterns — that can be generated automatically.

While the technique of knitting has been automated, existing systems still struggle to support everything that a knitting machine can make and to generate precisely what a user wants. To date, say the researchers, no system guarantees correctness over the complete scope of machine knitting programs.

“Semantics and Scheduling for Machine Knitting Compilers” © 2023 Lin, Narayanan, Ikarashi, Ragan-Kelley, Bernstein, McCann

A multi-institutional team of computer scientists from Carnegie Mellon University, MIT and the University of Washington has created a novel computational framework to optimize machine knitting tasks. Their formal semantics for the low-level domain-specific language used by knitting machines provides a precise definition of correctness over the exponentially large space of knitting machine programs.

The researchers built their framework on knot theory, addressing key properties that humans care about in knitting but that existing concepts from knot theory capture poorly. To that end, they devised an extension of knot theory called “fenced tangles” as a mathematical basis for defining when two machine-knit objects are equivalent.

Our method “can describe anything a knitting machine can make: not just your standard sweaters and hats, but also dense, shaped structures useful in architecture, and multi-yarn structures that allow for colorwork and soft actuation,” says Jenny Lin, the paper’s lead author and a PhD student in the Carnegie Mellon lab of James McCann, an assistant professor of robotics and another author of the work.

She adds, “This is important, because as we develop more nuanced systems for generating more complicated knitting machine programs, we can now always answer the question of whether two machine knit objects — the object you want and the object your program makes — are truly the same.”

As a proof of concept, the team has implemented a foundational computational tool for applying program rewrites that preserve a knit program’s meaning. The approach could also be extended from machine knitting to hand knitting, which is a more flexible and variable fabrication technique.
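
To give a flavor of what such a rewrite tool does, here is a deliberately simplified, hypothetical Python sketch. The real system proves equivalence against the paper’s fenced-tangle semantics; the instruction format and the independence test below are stand-ins of our own, not the authors’ DSL.

from dataclasses import dataclass

@dataclass(frozen=True)
class Instr:
    op: str       # e.g., "knit", "tuck", "xfer"
    needle: int   # needle index on the machine bed
    carrier: int  # yarn carrier used

def independent(a, b):
    # Hypothetical, deliberately conservative check: operations touching
    # different needles with different yarn carriers may be reordered.
    return a.needle != b.needle and a.carrier != b.carrier

def swap_adjacent(prog, i):
    # Apply the rewrite only when it preserves the program's meaning.
    if independent(prog[i], prog[i + 1]):
        out = list(prog)
        out[i], out[i + 1] = out[i + 1], out[i]
        return out
    return prog

prog = [Instr("knit", 3, 1), Instr("tuck", 7, 2), Instr("knit", 4, 1)]
prog2 = swap_adjacent(prog, 0)  # safe: disjoint needles and carriers

A real scheduler chains many such provably safe rewrites, steering a program toward a form a particular machine can execute efficiently while the semantics guarantee the knitted object stays the same.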

The team behind “fenced tangles” also includes Vidya Narayanan, an applied scientist at Amazon who was advised by James McCann at Carnegie Mellon; Yuka Ikarashi, a PhD candidate at MIT Computer Science & Artificial Intelligence Laboratory (CSAIL); Jonathan Ragan-Kelley, also of MIT CSAIL; and Gilbert Bernstein, assistant professor of computer science and engineering at the University of Washington. They will present their work at SIGGRAPH 2023. The paper and team page can be found here.

Linked and Characterized
Medieval chainmail armor, small metal rings linked together in a pattern to form a mesh, has been used for thousands of years as protective gear for soldiers in battle. Picture a knight in their metal “suit,” wearing chainmail armor as an additional layer of protection. Fast forward to the wide landscape of materials and fabrics in the modern era, and chainmail-like materials remain a physical structure that is challenging to represent computationally while accounting for all of their unique mechanical properties.

An international team of researchers from ETH Zürich in Switzerland and Université de Montréal in Canada draws inspiration from medieval chainmail armor, generalizing it to the concept of discrete interlocking materials, or DIM.

“Beyond Chainmail: Computational Modeling of Discrete Interlocking Materials” © 2023 Tang, Coros, Thomaszewski

“These materials possess remarkable flexibility, allowing them to adapt to necessary shapes, while also demonstrating impressive strength beyond a certain range of deformation,” says Pengbin Tang, the lead author of the research and PhD student advised by Bernhard Thomaszewski, a senior scientist at ETH Zürich and adjunct professor at the Université de Montréal.

“These unique properties make DIM attractive in robotics, orthotics, sportswear and many other areas of application,” adds Stelian Coros, collaborator and head of the computational robotics lab (CRL) at ETH Zürich.

The researchers have developed a method for the computational modeling, mechanical characterization and macro-scale simulation of these 3D-printed chainmail-like fabrics, which are made of quasi-rigid interlocking elements such as the rings or links in chainmail.

A key challenge the new method addresses is accurately representing the deformation limits the quasi-rigid fabric exhibits as it bends, folds and adopts different shapes. Unlike conventional elastic materials, the mechanics of DIM are governed by contacts between individual elements, and their particular structure leads to extremely high contrast in deformation resistance. To obtain the deformation limits of a given DIM, the researchers developed a computational approach involving thousands of virtual deformation tests across the entire deformation space.
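
As a rough illustration of that characterization loop (not the authors’ code; the admissibility test below is a placeholder for the paper’s contact-based simulation of the interlocked elements), each virtual test pushes the material along one direction in a reduced strain space and searches for the largest admissible magnitude:

import numpy as np

def admissible(strain):
    # Placeholder: a real test solves the rigid elements' positions under
    # the imposed macro strain and checks that no interlocked rings
    # interpenetrate; a toy norm limit stands in for that simulation.
    return np.linalg.norm(strain) < 0.3

def limit_along(direction, tol=1e-4):
    # Binary-search the largest deformation magnitude the DIM admits.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if admissible(mid * direction) else (lo, mid)
    return lo

# Sample many directions (three strain components for illustration) and
# record one deformation limit per direction, i.e., one virtual test each.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
limits = np.array([limit_along(d) for d in dirs])

Limits sampled this way can then inform the macro-scale model, so the simulator never has to resolve every individual ring contact at runtime.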

The novel method offers an intuitive, systematic approach to macro-mechanical characterization that can pave the way to using DIM in garment design, note the researchers. Their analysis has largely focused on kinematic motion and, consequently, considers neither friction nor elastic deformation within the structure. In future work, an extension of the macro-scale model could account for internal friction to simulate friction-dominated scenarios, and could explore geometric detail at the element level, which may be important for additional applications.

Pengbin Tang is excited to present this work at SIGGRAPH 2023. View the paper and video on the team page.

Each year, the SIGGRAPH Technical Papers program spans research areas from animation, simulation and imaging to geometry, modeling, human-computer interaction, fabrication, robotics and more. Visit the SIGGRAPH 2023 website to learn more about the program and for registration details.

Nexus

Nexus Studios Adds Commercial Director Emily Dean

London- and LA-based Nexus Studios, which works with directors across animation, live-action and immersive, has signed Emily Dean to its roster for commercial representation globally. Dean’s experience working with animation for adult audiences was showcased recently through her directorial work on the Emmy Award-winning Netflix animated series Love, Death & Robots.

Dean is an Asian Australian writer, director and artist living in LA. Alongside her directorial work, she has made significant story artist and visual consultant contributions to major films such as The Lego Batman Movie, The Lego Movie 2, Scoob!, Hotel Artemis and the Oscar-winning Hair Love. During this time, Dean worked closely with Pixar, Warner Bros., Animal Logic, Lionsgate and Sony Pictures Animation.

In 2023, Dean was awarded an Annie Award for Best Storyboarding TV/Media for her episode of Love, Death & Robots called “The Very Pulse of the Machine.”

Dean’s journey in filmmaking began in rural Australia, where she developed a passion for drawing and storytelling at a young age. Driven by her dedication to animation, she pursued further education at the Australian Film, Television and Radio School and later at the California Institute of the Arts.

Her independent animated short film Forget Me Not was inspired by her family’s experience with Alzheimer’s disease and earned her a nomination for Best Short Animation at the Australian Academy Awards.

Dean also works with live action, as evidenced by her live-action sci-fi short film Andromeda, which toured film festivals including LA Shorts International Film Festival and was picked up by sci-fi streaming platform Dust.

MPC Creates 896 VFX Shots for Transformers: Rise of the Beasts

Paramount’s Transformers: Rise of the Beasts, directed by Steven Caple Jr., transports audiences to a world where Optimus Prime and the Autobots take on their biggest challenge yet. When a new threat capable of destroying the entire planet emerges, they must team up with a powerful faction of Transformers known as the Maximals to save Earth.

Visual effects studio MPC was called on to help tell that story. Over 1,000 artists and production crew members, collaborating across MPC’s studios in London, Montreal, Bangalore, LA, Toronto and Adelaide, delivered 896 shots and 18 of the movie’s characters, including Arcee, Bumblebee, Mirage, Optimus Primal, Optimus Prime, Rhinox, Scourge and the planet-eating Unicron.

Major sequences included the start of the story in New York and the abandoned warehouse scene, where the film’s heroes meet the Autobots; the Ellis Island battle; the Switchback mountain chase; and the pivotal sequence where the Autobots meet the Maximals.

The film’s overall VFX supervisor, Gary Brozenich, worked with MPC VFX supervisors Richard Little and Carlos Caballero Valdés and MPC VFX producers Cindy Deringer and Nicholas Vodicka.

Brozenich met with the filmmakers in 2021 to meticulously plan how to bring the director’s vision of a new Transformers film to life. MPC’s on-set crew traveled to Montreal, New York and Peru to gather data from the shoot for the VFX work. Meanwhile, in LA, MPC’s visualization team, supervised by Abel Salazar, worked alongside the director and VFX supervisor to help craft the previz for many of the film’s sequences. They then helped to ensure a smooth transition into VFX by providing postviz for shots. Salazar and a team of artists continued into post, delivering over 2,000 postviz shots that helped provide a solid foundation for MPC’s VFX teams to build upon.

Character development began with concept art created by the production’s art department. Over the course of production, MPC’s art department worked further on some of the designs. MPC art director Leandre Lagrange and a team of six artists worked on concepts for volcano environments, details of Unicron’s design, Arcee’s face design, Optimus Prime’s weapon and various holograms, including Arcee’s scan hologram. For the transformations, MPC developed a new proprietary tool that allowed animators to slice, separate and transform geometry on any asset in any given shot. The transformations were a joint effort between multiple departments, including R&D, animation mechanic TDs and CG lighters.

MPC’s environments team built multiple large-scale full-CG environments and digital set extensions, from jungles to mountains to cities. One of the largest tasks was to change the present-day New York skyline back to its 1994 appearance. “It was really interesting to see how much Manhattan has changed over the last 30 years,” says MPC’s Little.

“We created a huge CG build of Manhattan based on images from photography and footage gathered from the early ‘90s. We had some incredible images of the skyline given to me by New Yorkers I worked closely with on the shoot in Manhattan. Some of these images came from their family’s personal photography collections. The Williamsburg Bridge, which is heavily featured in the sequence when Noah meets the Autobots, was scanned and photographed to help our environments team with the build. We were very fortunate that the Manhattan authorities were so helpful in allowing us to collect the photography we needed.”

“From asset creation and design to the creation of highly complex, full-screen environments, the [MPC] teams nailed the brief with flair,” reports Brozenich. “The robots were exceptionally crafted and animated. I was pleased to uphold the legacy of the franchise with them.”

In addition to its own tools, MPC called on Maya, Houdini, RenderMan, Katana, ZBrush, Unreal and Nuke.