HPA Tech Retreat Virtual Roundtable: Tech in the Desert

By Randi Altman

The HPA Tech Retreat is a yearly destination for tech heads working in post and production. It’s not a trade show, it’s a conference — one that limits the number of attendees in order to keep that “summer camp” feel. It’s always held in and around Palm Springs, with the most recent gatherings located in Rancho Mirage.

In addition to sessions that often offer a deep dive into a project — this year was Avatar: The Way of Water — there are other set “events” designed to spur conversation. (Here is a link to the session schedule so you can get a feel for what was covered.) There are the breakfast roundtables, where attendees can talk about a specific technology or trends; group lunches, dinners, and cocktails; and the Innovation Zone, which is the only place the Tech Retreat even slightly resembles NAB or IBC or your traditional trade show.

At any moment in time, you can look around and see people — some of whom you might have worked with in another life — sipping coffee, catching up and developing relationships, all while surrounded by pretty mountains and golfers.

This year we reached out to some of the attendees to get their take on what’s become a yearly destination.

Payton List, Director, Production & Post Technology, Fox Entertainment

What were some highlights of the show for you in terms of the sessions?
I really enjoyed seeing the presentations of the MovieLabs papers. Observing how the technology and workflows were implemented in real scenarios was helpful. It was also very cool to see that sustainability was a topic people are starting to recognize, along with accessibility and the talent deficit in the industry.

What other parts of the Tech Retreat did you enjoy and why?
I had a great time networking with similar minds across the industry, from studios to vendors. We’re all working on the same problems, and by gathering in this space (the HPA Tech Retreat), we have recognized that sharing information and working together in order to solve them is important. Having disparate systems, workflows and technology isn’t going to work for us much longer.

Josh Rizzo, VP, Technical Operations, Sony Pictures Television

What were some highlights of the show for you in terms of the sessions?
While the Avatar: The Way of Water deep dive was rich and well-executed, it deserves its own category for highlights.

For the main sessions, it was both daunting and exciting to see 1) the rapid realization of an industry-wide dearth of engineering talent, and 2) the slow realization that AI maybe, kinda, sorta, one day can fill that gap — but only if we find more really smart folks (I think someone said grad students) to create bespoke, entertainment-first expressions of the tools.

What other parts of the Tech Retreat did you enjoy and why?
The Innovation Zone is always a favorite. The ability to get one-on-one time with engineers and subject matter experts in such a relaxed and collegial environment is unparalleled.

Marc Zorn, Content Protection & Production Security, Marvel Studios

What were some highlights of the show for you in terms of the sessions?
The Tuesday Supersession is always a highlight. With the focus this year on Avatar: The Way of Water, it’s obvious now that remote collaboration is the way to accomplish a project of this magnitude and complexity.

The absence of a central theme [for the conference] is actually an advantage. The variety of subject matter kept me engaged with every session. From the 34 different roundtables each morning to sessions of just the right length, the Tech Retreat somehow finds that elusive balance.

What other parts of the Tech Retreat did you enjoy and why?
The most important reason I come to the Tech Retreat is for the networking. In between the sessions is a meal, a break or some sort of reception. The Tech Retreat is just full of opportunities to meet or reconnect with friends from all over the industry. Hands down, it’s my favorite professional event.

Renard Jenkins, SVP, Production Integration & Creative Technology Services, Warner Bros. Discovery

What were some highlights of the show for you in terms of the sessions?
The MovieLabs 2030 Showcase was packed with industry-leading tech and processes. Ron Gonsalves’ Year in Review of AI/ML Developments for Media Production was so much fun and so informative. He taught us that ChatGPT has a sense of humor or maybe an inflated ego… it was hard to tell, but very funny.

What other parts of the Tech Retreat did you enjoy and why?
I enjoyed the morning roundtables and the Women in Post Luncheon. That panel was so authentic and honest. It showed us how far we have to go, but it also gave us a good glimpse of how far we have come and the incredible effects that betting on women can net.

Neil Coleman, EVP Post Production, 3BMG

What were some highlights of the show for you in terms of the sessions?
The deep dive into Avatar was fascinating. I was really blown away by how many versions needed to be created/delivered in such a short amount of time. It was a master class in organization and workflow on a large scale.

What other parts of the Tech Retreat did you enjoy and why?
I always find the breakfast roundtables to be of great interest. They’ve consistently been a way to have great conversations about specific areas of interest. That said, as the years progress, they seem to be veering into more of a sales pitch from potential vendors rather than discussion topics for fellow attendees.

Out of the three that I attended, Cloud Workflows for Reality/Nonfiction TV with Steve Marshall from Moxion was a bit helpful, if only to confirm our workflows are current and working well.

Sarah Xu, Associate Project Manager Production & Post Technology, Fox Entertainment

What were some highlights of the show for you in terms of the sessions?
The MovieLabs 2030 Panel and Showcase were among the many highlights of the retreat. The retreat presented an opportunity to share innovative ideas and collaborate to work toward a few common goals. The MovieLabs sessions offered a glimpse into the industry’s future, where technology and innovation will increasingly shape the landscape.

From the Royal Opera House’s adoption of a more efficient work process to developing Black Panther in the cloud, the MovieLabs Showcase provided insights into how the industry is adapting to a new way of working and its strategic adoption of new technologies. The productive discussion with technology leaders and the case study showcase offered valuable knowledge on the industry’s progress toward achieving the 2030 Vision as well as a roadmap for future success.

What other parts of the Tech Retreat did you enjoy and why?
The networking opportunities were endless. Whether it was chatting during a refreshment break or over a bowl of ice cream, I found it immensely beneficial to be surrounded by professionals in the same industry to connect, build relationships and share ideas that can lead to new collaborations and opportunities. Through these conversations, I also gained a deeper understanding of the latest trends and innovations and walked away with valuable insights and practical solutions that can be applied to both my current work and future projects. Overall, I believe the HPA Tech Retreat continues to foster a strong sense of community within the industry and provide an incredibly enriching experience.

Greg Ciaccio, Senior Director, Post Production Original Content, IMAX

What were some highlights of the show for you in terms of the sessions?
Avatar: The Way of Water, for sure. I loved all the anecdotes and seeing behind the scenes, not to mention the sheer magnitude of effort to produce so many versions to ensure the highest quality images, sound and localization for the widest audience.

Also, I always look forward to Mark Schubin’s presentation and the many cloud success stories that show how remote workflows are bringing the world closer together while leveraging a worldwide talent pool.

What other parts of the Tech Retreat did you enjoy and why?
Always love how everyone’s in one room — no multiple tracks to choose from. The breakfast roundtables are always a good way to increase your knowledge in an intimate way while ensuring you start off nourished.

The Innovation Zone is like a tiny NAB show floor and highly accessible. Last, the hotel is a very relaxing place to hang out with many industry colleagues and friends. A captive setting in only the best way.

Mike Tosti, Director, Production Engineering, IDC LA

The Supersession! They had fantastic talent on-site and remote working to put Avatar together.

Many kudos to Kevin Rosenberger (KDR Designs) and the Christie engineer who set up a fantastic 3D and 2D screening room in a hotel ballroom. They outdid themselves. I was excited to reconnect with Kevin.

I always enjoy the CES data dump. I never get to go to CES, so I don’t know the themes and weird products they have there. I wish that presentation had gone longer.

The networking and breakfast roundtables are the best part of the retreat, to be honest. The only problem with the roundtables is the tables fill fast. Plus, the font on the sandwich board is too small for us old duffers to read quickly.

The first roundtable I sat in on was about on-set workflows by Avid.

The second one was about security and run by Juan Reyes and Mathew [Gilliat-Smith] from Convergent Risks. That was a nice, lively discussion. They are our security consultants at IDC.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 

 

 

Avatar: The Way of Water Visual Effects Roundtable

By Ben Mehlman     

James Cameron’s 2009 film Avatar reinvented what we thought movies could accomplish. Now, after 13 years, the first of four announced sequels, Avatar: The Way of Water, has finally hit theaters. Original cast members Sam Worthington, Zoe Saldaña, Sigourney Weaver and Stephen Lang return, joined by new additions Kate Winslet and Edie Falco.

While postPerspective spoke to Weta FX’s Joe Letteri about his role as VFX supervisor on the film back in January, we felt the Oscar-nominated visual effects were worth a deeper look. During a recent press day at Disney, I was able to sit down with six of the VFX department heads from Weta and Lightstorm Entertainment — including Letteri once more — to discuss how they accomplished the visual effects that appear in Avatar: The Way of Water. I met with them in pairs, so this article is broken down into three sections.

The Jungles of Pandora and Things That Move

The first pair I interviewed was Weta FX senior animation supervisor Dan Barrett, who, in his own words, was “in charge of most things that moved,” such as characters, creatures and vehicles. Joining him was Weta FX VFX supervisor Wayne Stables, who oversaw a team of artists to accomplish the jungle scenes that make up the first third of the film.

Wayne Stables

Where does one even begin tackling something like the jungles, where the details feel literally never-ending?
Wayne Stables: I mean, we’re lucky, right? We had a really strong basis to work from with the template that got turned over to us from the motion capture stage. The good thing about that is, with Jim, he has pretty much composed and choreographed his shots, so you know if there’s a big tree inside the frame, he wants a big tree there because he’s framing for it.

Then you look at what we did in the first film, and we also look to nature and spend an awful lot of time doing that research to dress it out.

There are also the details that make it a Pandoran jungle, like the luminescent plant life. What goes into making those choices?
Stables: That’s about the amount of exotic plants and how many slightly odd color elements, like purple or red, you like in the shot. We got good at knowing that you need a couple of big splashes of color here and there to remind the audience that they’re on Pandora, and Jim also had us put bugs everywhere.

Dan Barrett

Dan Barrett: Our amazing layout team would hand-dress those plants in.

Dan, is this where you would come in to have the wildlife interact with the jungle?
Barrett: Exactly. That’s the department I’ll complain to (laugh). “You can’t put a plant there; something’s supposed to be walking through there.” But yes, we work quite closely with the layout team. That’s the terrain where our characters are going to be climbing a tree or walking across dirt.

When it comes to movement, what makes something feel more realistic?
Barrett: In terms of creatures, there’s a couple of things. Their physiology needs to make sense; it needs to look like something that could’ve evolved. That’s something that the art department at Lightstorm does an amazing job of. We also do a lot of motion tests during design to make sure it can move properly.

And the characters’ faces were a giant focus. Obviously, you want a body to move naturally, and hands are also a big focus for us. But for an audience to connect, you can’t get away with missing even the subtlest detail in a face.

Wayne, when you’re building these environments, are you only building as much as the camera can see, or are you building the entire environment?
Stables: Typically, we’ll build what we call a “master layout,” because that’s how Jim works as well. He decides on the environment he wants to do a scene in, then, on a set, he shoots the performance capture around that location through a number of different setups. Then we break things down shot by shot.

Can you both talk about the software and hardware you used?
Barrett: For years and years, we used the same facial system. We call it the FACS, the Facial Action Coding System, and it worked well. It’s a system where essentially the surface of the face is what moves. This tends to be more expression-based than muscle-based. It’s also a system that, unless you’re very careful, can start breaking things — or what we call “going off model.” That’s when you over-combine shapes, and all of a sudden it doesn’t look like the character you’re supposed to be animating.

For this film we spent a lot of time working out how to do it differently. Now the face has been basically broken down into muscles, meaning the muscles have separated from the skin. So when we get an actor’s performance, we now know what the muscles themselves are doing, and that gets translated to the character. The beauty of this is that we can still go for all of the emotional authenticity while staying much more anatomically plausible.

How about you, Wayne?
Stables: Our biggest in-house software that drives everything is the renderer we created called Manuka, which is a spectral path-tracing renderer. The reason it’s become a cornerstone for us is that it drives all our lighting, camera, shading and surfacing tools. We developed much more physically accurate lighting models, which let our people light shots by adjusting stops and exposure so that everything fits into real-world photography that we understand.
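
Lighting in stops is a photographic idea more than a software-specific one. As a minimal, generic illustration of what stop-based exposure means in a linear, scene-referred pipeline (a sketch of the concept, not Weta FX's Manuka code):

```python
# Each stop doubles or halves the scene-referred light reaching the virtual
# sensor, mirroring how a photographer reasons about real-world exposure.
def apply_exposure(radiance, stops):
    """Scale linear, scene-referred radiance by a number of stops."""
    return radiance * (2.0 ** stops)

# Opening up one stop doubles the value; stopping down one stop halves it.
print(apply_exposure(0.18, +1.0))  # 0.36
print(apply_exposure(0.18, -1.0))  # 0.09
```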

Barrett: One of the other things, since there’s obviously a lot of water in the film, is a coupled simulation system we’ve been developing where you can put characters into a body of water. These simulations couple the water against the hair, against the clothes. It’s a very powerful tool.

Stables: We create a lot of fire and explosions, so we start with the simple thing first. Like for fire, we started with a candle. That way you start to understand that if you have a candle burning, you’ve got an element that’s generating heat and affecting the gas around it. This causes other effects to come through, like low pressure zones, and it shows the coupling effect.

It’s through that understanding that we were able to couple everything, whether it was water to gas or other simulations. That’s what really got us to where we needed to be for the film. But that’s a pretty big step to take on a film because you can’t just rush into it straight away and say, “What’s our final picture?” We first need to figure out how to get there and what we need to understand. Because if you can’t make a candle work, it’s going to be pretty hard to make an explosion work.

Dan, the character of Kiri is Grace’s daughter, and they’re both played by Sigourney Weaver. How did you differentiate the characters even though they’re meant to look similar?
Barrett: Once we’re given a character design, the essence of which we’re ultimately going to keep, we start testing it and seeing how the face moves. One of the things we did very early on was to study Sigourney when she was younger. (Sigourney gave us access to family photographs of when she was young.) We also referred to her body of work from early in her career.

The animation team spent many hours with early facial rigs, trying to match what we were seeing in Sigourney’s earliest work to see if we believed it. That meant the model started to evolve from what was given to us at the start so that it moved in ways that felt like a young Sigourney.

All the things we learned there meant we could then take her performance for this film and apply it to the motions we built for the younger character. But it’s still an incredible performance by Sigourney Weaver, who can play a 14-year-old girl like you wouldn’t believe.

Since Pandora is its own planet, does it have its own rules about when the sun sets or how the light hits?
Stables: It’s really driven by Jim. Obviously, things like the eclipse and time of day are all narrative-driven. Sometimes we strongly followed the template. For example, there’s a scene where Neteyam, Jake and Neytiri are landing in the forest during an eclipse, with these beautiful little orange pits of light coming through. When I talked about it with Jim, we both agreed that we liked the template and were going to stick with it.

But then there were other moments, like when Quaritch and his team are going through the jungle, that we broke away from the template because there were other films Jim referenced that he really liked. So he had us do some experiments to see what happens when we give the jungle a different look, even if it’s just for this one scene. I believe the reference he had was Tears of the Sun. So we created a very misty jungle look.

Basically, we stray as much as Jim allows us. Sometimes he lets us experiment a bit more, and other times he lets us know that he very much likes what he worked out.

Speaking of homages, did you work on the Apocalypse Now shot of Jake Sully coming out of the water? I assume this was a conscious homage.
Barrett: I did. Often when an animator submits something, they’ll include picture-in-picture references. So we certainly had versions of that shot with Martin Sheen popping out of the water in the reference picture, except it’s Sam [Worthington] popping out of the water.

Stables: I think even if it was never explicitly mentioned, everybody knew what that shot was. It’s a beautiful homage.

What’s an individual moment you worked on that you’re most proud of?
Barrett: I look back fondly at the sequence in the tent, when Jake is insisting that they need to leave high camp. We basically took these rigs we already had, threw them away and built a whole new system. So that was a sequence where a lot of development took place, with a lot of iterations of those shots. They were also done really early, and I hadn’t looked at those shots in a couple of years. So seeing how good it looked when we watched the film last night, after having worked on that sequence, is something that will stay with me for a long time.

Stables: For me, I really enjoyed the stuff we did with the nighttime attack inside the jungle with the rain. It’s a lot of fun to do big guns in the rain inside a jungle while also blowing stuff up.

The funny thing is, the two parts of the film that are my absolute favorite are ones I had nothing to do with. I just loved the part where Kiri has the anemone attack the ship. I thought that was phenomenal. The other moment toward the end with Jake, Lo’ak, Neytiri, Tuk and Kiri — hands down my favorite part. I wish I’d worked on that because it was just beautiful.

From Template Prep to the Final Image

My second interview was with executive producer and Lightstorm Entertainment VFX supervisor Richie Baneham, who helped prep the movie and produce a template and then worked directly with Weta FX to take the film to completion. He was joined by Weta FX senior VFX supervisor Joe Letteri, who took the templates Baneham handed over to create everything we see on the screen in its final form.

Richie Baneham

Avatar productions feel unique. Can you talk about the workflow and how it may differ from other productions you’ve worked on?
Joe Letteri: It starts with Jim working out the movie in what we call a template form, where he’s working on a stage with minimal props — before actor performance capture — to block it out and virtual cameras to lay the whole thing out. Richie has a big part in that process, working directly with Jim.

Richie Baneham: Yes, it is very different and unique. I’d actually call it a filmmaking paradigm shift. We don’t storyboard. We do what we call “a scout,” where we block scenes with a troupe. Once we stand up the scout — by figuring out if the blocking works and developing the environment — then we look at it from a production design standpoint, and then we bring in our actors.

Once we get the performance capture, we have to down-select to focus on the real performances we want. That is an editorial process, which is different from the norm because we introduce editorial into the pipeline before we have shots. This also includes working with our head of animation, Erik Reynolds, who works under Dan Barrett, to create a blocking pass for every element we would see before we get into shot construction. It’s a very unusual way to make movies.

Joe Letteri

Baneham: Then we get into shot creation, which is when we start to do proxy lighting. We try to realize as much as possible before we have the editors reintroduced, and once they get involved, it becomes a cut sequence. Then that cut sequence can be turned over to Weta.

Letteri: It’s designed upfront to be as fast and interactive as possible. We want Jim to be able to move things around like he’s moving something on-set. If you want to fly a wall out, no problem. Move a tree? A vehicle? No problem. It’s designed for fast artistic feedback so we can get his ideas out there as quickly as possible… because our part is going to take a lot longer.

We have to work in all the details, like fine-tuning the character moments, translating the actors’ expressions onto their characters, finish all the lighting and rendering — going from the virtual cinematography to the cinematography you’ll see in the final image. The idea is being able to be as creatively engaged as possible while still giving us the room to add the kind of detail and scope that we need.

So the performance capture allows you to make whatever shots you might want once they’re in the world you’ve created?
Baneham: Correct. There’s no camera on-set in the same way you would have in live action. Our process is about freeing up the actors to give the best possible performance and then protect what they’ve done all the way until the final product.

As far as shot creation is concerned, it’s completely limitless. Think of it as a play. On any given night, one actor could be great, and the next night, the opposing actor is great. We’re able to take all of our takes and combine the best moments so we can see the idealized play. It’s a plus being able to add in a camera that can give exactly what you want to tell the story. That’s the power of the tool. 

How does that kind of limitless potential affect what your relationship looks like?
Letteri: It doesn’t. That’s the whole point of the front part of the process. It’s to work out the best shots, and then we’ll jump in once Richie lets us know they’re close on something. We then try to start working with it as soon as we know nothing needs to go back to Richie and his team.

Baneham: Being down to that frame edit allows for the world to be built. The action can go forward once we know we’re definitely working with these performances, and then Weta can get started. Even after we hand that off, we still evolve some of the camera work at Weta because we may see a shot and realize it would work better, for example, if it were 15 degrees to the right and tilted up slightly or have a slow push-in. This allows us a second, third or fourth bite at the cherry. As long as the content and environment don’t change, we’re actually really flexible until quite late in the pipeline.

Letteri: That happened a lot with the water FX shots because you can’t do simulations in real time. If you’ve got a camera down low in the water with some big event happening, like a creature jumping up or a ship rolling over, then it’s going to generate a big splash. Suddenly the camera gets swamped by this huge wave, and you realize that’s not going to work. You don’t want to shrink the ship or slow down the creature because that will lessen the drama. So instead, we find a new camera angle.

Can you tell us about the software and hardware you used?
Baneham: One of the great advantages of this show is that we integrated our software with Wētā. First time around, we shot in a stand-alone system that was outside of the Wētā pipeline. This time around, we were able to take the virtual toolset Wētā employs across all movies and evolve it to be a relatively seamless file format that can be transferred between Lightstorm and Wētā. So when we were done shooting the proxy elements, they could be opened up at Weta directly.

Letteri: We wrote two renderers. One is called Gazebo, which is a real-time renderer that gets used on the stage. The other is Manuka, which is our path tracer. We wrote them to have visual parity within the limits of what you can do on a GPU. So we know everything Richie is setting up in Gazebo can be translated over to Manuka.

We tend to write a lot of our own software, but for the nuts and bolts, we’ll use Maya, Houdini, Nuke and Katana because you need a good, solid framework to develop on. But there’s so much custom-built for each show, especially this one.

Baneham: We’re inside a DCC, MotionBuilder, but it’s a vessel that now holds a version of the Weta software that allows us to do virtual production.

With a movie like this, are you using a traditional nonlinear editing system, or is it a different process entirely?
Baneham: We edit in Avid Media Composer. Jim’s always used Avid. Even when we’re doing a rough camera pass, or when Jim is on the stage, we would do a streamed version of it, which is a relatively quick capture. It’s got flexible frame buffering. It isn’t synced to timecode, so it would have to be re-rendered to have true sync, but it gives pretty damn close to a real-time image. We can send the shot to the editors within five minutes, which allows Jim or me to request a cut. It’s a rough edit, but it allows the editors to get involved as early as possible and be as hands-on as possible.

What was your most difficult challenge? What about your proudest moment?
Baneham: One of the more difficult things to do upfront was to evolve the in-water capture system. Ryan Champney and his team did an amazing job with solving that. From a technical standpoint, that was a breakthrough. But ultimately, the sheer volume of shots that we have at any given time is a challenge in and of itself.

As far as most proud, for me, it’s the final swim-out with Jake and Lo’ak. There’s something incredibly touching about the mending of their relationship and Lo’ak becoming Jake’s savior. I also think visually it worked out fantastically well.

Letteri: What Richie is touching on is character, and to me that’s the most important thing. The water simulations were technically, mathematically and physically hard, but the characters are how we live and die on a film like this. It’s those small moments that you may not even be aware of that define who the characters are. Those moments where something changes in their life and you see it in their eyes, that’s what propels the story along.

Metkayina Village and Water Simulations

My final interview was with Weta FX’s head of effects, Jonathan Nixon, who oversaw the 127-person FX team. Their responsibilities included all the simulations for water, fire and plant dynamics. He was joined by VFX supervisor and WetaFX colleague Pavani Boddapati, who supervised the team responsible for the Metkayina Village.

Can you talk about your working relationship, given how intertwined the Metkayina Village is to water and plant life?
Jonathan Nixon: We worked very closely; we started on what was called the “Water Development Project.” This was created to look at the different scenarios where you’re going to have to simulate water and what needs to be done, not only just in FX, but how it works with light, shaders, animation and how the water looks. So we were working together to make sure that all the sequences Pavani was going to deliver had all the technology behind it that she was going to need.

Pavani Boddapati: The movie is called The Way of Water (laughs), so there is some component of water in every shot. I mean, even the jungle has rain and waterfalls.

Jonathan Nixon

What is it like working for the director of The Abyss, a film that basically invented water visual effects?
Nixon: It’s inspiring to have a director that understands what you do. We’ve learned so much from Jim, like what a specific air entrapment should look like, or what happens when you have a scuba mask on and are doing this type of breathing. So our department goes by his direction. He understands what we do, he understands how simulations work and he understands the time it takes.

It’s a once-in-a-lifetime chance to work on a film like this. And I think most of the FX team was here because they wanted to work with Jim and wanted to deliver a movie that has this much emphasis on what we do and on things that we’re interested in. There’s no better director to work for; he knows what he wants and what to expect.

 

Boddapati: I’m obviously a repeat offender since I worked on the first film, the Pandora ride Flight of Passage at Disney and this film, and I’ve signed up for the next one. For me, the world of Pandora is really fascinating. I haven’t been able to get my head out of this work.

As far as Jim goes, he’s amazing and very collaborative. He knows exactly what he wants, but he wants your ideas, and he wants to make it better. All the artists on the show really enjoyed being a part of that process.

What is it like having to jump — forgive my terrible pun — into the deep end on this?
Nixon: We’ve got tons of water puns. “Get your feet wet,” all that. When I watched the first film in 2009, I was just a few years out of college. I remember sitting in that theater in New York watching the film and thinking, “This is why I’m in this industry, because of films like this.”

Pavani Boddapati

Fast forward a decade later, and I not only get to work on the sequel, but I get to be a pretty important part of steering a team of people to generate this work. It’s surreal. There’s no better way to describe getting a chance to work in this universe with a lot of people from the first one, like Pavani, who can help guide you and steer you away from problems they encountered before. It’s also great to have new people with new ideas who have a similar story to mine.

Boddapati: What’s also interesting is we had some artists from Wētā who’ve been working at Lightstorm since the first Avatar — some of whom came over to New Zealand and are now working on production. It’s helpful because they have a history of on-set work that we maybe weren’t exposed to, and that’s pretty awesome.

What were the influences in developing the Metkayina Village?
Boddapati: [Production designer] Dylan Cole was very instrumental, as was Jim himself, who draws, paints and approves all the designs. It takes inspiration from a lot of different cultures around the world. Take something small, like the weaving pattern. There was a lot of attention brought to what people use for materials when they live in places with no access to something like a supermarket. What are these materials made of? How do they weave them? Every single detail in the village was thought of like a working village. There are bottles, gourds, storage, stoves.

There was a huge amount of work that Lightstorm had done before we got involved, and then on our side, we built this thing from the ground up so it feels like a living and breathing place.

What is it like having to manage teams on something this huge when you want to stay creative and also make your schedule?
Boddapati: I’ve been on this movie for about six years, and from the beginning I’ve told every artist that this is a marathon, not a sprint. We aren’t just trying to put something together and get shots out quickly. It’s the principle of measuring twice and cutting once. Plan everything beforehand and pace yourself, because we know how much preparation we need once the short turnovers start happening.

The most important thing for artists coming on is keeping that timeline in mind. Knowing that people are going to be on a show for five years, four years, three years — when an average show could be six months to a year.

Nixon: It’s tough, especially since the FX team at Weta is 160 people, and by the end of this film, we had about 127 of them working on it. As Pavi said, it’s a tricky show because of the length. I said the same thing to artists: We may have short sprints, short targets or short deadlines, but it’s still a marathon. We’d move people onto different teams or environments to give them some diversity of thought and technique. That was really important in keeping our teams happy and healthy.

Can you tell me about the software and hardware you used?
Nixon: The FX team uses Houdini, and our simulation R&D team built a framework called Loki, which is what we’re using for all of our water sims, combustion and fire sims, and plant solvers. Loki is pretty important because of how it interfaces with Houdini.

Houdini, an industry standard, allows us to get a lot of artists into Wētā who can do the work they do at other places, while Loki enhances their work by being able to plug standard processes into it. It allows for things like higher fidelity of water sims or more material-based combustions. You can ask it if it’s a cooking fire or a big explosion, which has a lot of different types of fuels in it. It also allows plants to be moved by the water sims in a way that would be more difficult in off-the-shelf software like Houdini.

How does the film’s 48fps and 3D affect what you do?
Boddapati: A huge amount, with the stereo being the primary one. Almost everything is designed by Jim with stereo in mind, and he tells you that in the turnover. That starts with the little particles in the water, how close and how dense they are to show depth and scale, and extends to the water simulations, where you need lens splashes to look as if there is a dome housing on the camera.

Stereo is a huge component of the design — how close things are, how pleasing they look on the screen. We worked closely with Geoff Burdick and Richie Baneham from Lightstorm to make sure that was realized.

Regarding the 48fps, it’s critical for QC since there are now twice the number of frames, which also means twice the amount of data.

Nixon: That’s what it is for us, especially in FX. We’ve got water simulations that are terabytes per frame. So when you increase that to 48, you’re doubling your footprint. But it also gives you flexibility when Jim decides a shot needs to go from 24 to 48.
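
As a back-of-envelope illustration of that doubling (the figures below are invented for the example, not production numbers):

```python
# Doubling the frame rate doubles the frames per shot, and therefore the
# cache and render footprint, before any simulation data is even counted.
seconds_per_shot = 5      # hypothetical shot length
gb_per_frame = 40         # hypothetical cached data per frame

for fps in (24, 48):
    frames = seconds_per_shot * fps
    print(f"{fps} fps: {frames} frames, ~{frames * gb_per_frame / 1024:.1f} TB per shot")
```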

Since Pandora has its own gravity and atmosphere, does that play into how you approach your water and fire simulations?
Nixon: We had a very big discussion about what gravity is on Pandora. You’ve got these 9-foot-tall creatures and multiple moons, but we just based everything on our reality as the starting point. If you don’t start with what people can recognize, then something that might be mathematically plausible for Pandora won’t be bought into by the audience. That’s why we start with what would look real on Earth and then push or pull where we need, based on Jim’s direction.

Boddapati: This even applies to characters. For example, if you’re looking at 9-foot-tall people and you’re thinking about what the pore detail on the skin should be, we base that on human skin because we know we can capture it. We know we can make a texture of it. We know how it should look and how it should be lit, and we know we can produce that. It’s surprising how smoothly that translates to characters that are much bigger in scale.

How do the water simulations interact with the skin and hair of the characters?
Boddapati: For example, you have underwater shots, above-water shots and shots that transition between the two. That interaction between the water and the skin is critical to making you believe that person is in the water. We rendered those shots as one layer. There was no layer compositing, so when the kids are in the water learning how to swim, that’s one image.

We do have the ability to select and grade components of it, but for all practical purposes, we simulate it in a tank that’s got the characters in it. We make sure water dripping down a character falls into the water and creates ripples. Everything is coupled. Then we pass that data onto creatures, and they’ll make sure the hair and costume moves together. Then we render the whole thing in one go.

Nixon: It’s the coupling of it that matters for us because we tend to do a basic bulk sim, a free surface sim with motion, so a motion we get from stage looks correct. The waves and timing are lapping against the skin properly. Then we work tightly with creatures for hair. If you have long hair, that’s going to affect wave detail.

A lot of our process is coming up with new thin-film simulations, which are like millimeter-scale sims that give you all the components you’d traditionally do in pieces. So you’ve got a rivulet of water that starts somewhere, comes down the side of the skin and then drips off.

Generally, when you do that in any other film, those are separate pieces — someone’s doing the droplet, someone’s doing the path, someone’s doing a separate sim on the drip itself. A lot of what we aimed to do had a process that does all that together so it can be rendered all together with the character, and Loki is what gives us the power to do that coupling.

Boddapati: Building off what Jonathan was saying, we actually take the map of all the displacements on the skin and displace that falling drop to make sure it’s actually going along pores because it would be affected if the skin was rough or if someone had facial hair.


Ben Mehlman, currently the post coordinator on the Apple TV+ show Presumed Innocent, is also a writer/director. His script “Whittier” was featured on the 2021 Annual Black List after Mehlman was selected for the 2020 Black List Feature Lab, where he was mentored by Beau Willimon and Jack Thorne.  

Avatar: The Way of Water Colorist Tashi Trieu on Making the Grade

By Randi Altman

Working in post finishing for 10 years, colorist Tashi Trieu also has an extensive background in compositing as well as digital and film photography. He uses all of these talents while working on feature films (Bombshell), spots (Coke Zero) and episodics (Titans). One of his most recent jobs was as colorist on the long-awaited Avatar follow-up, Avatar: The Way of Water, which has been nominated for a Best Picture Oscar.

Tashi Trieu

We reached out to Trieu, who has a long relationship with director James Cameron’s production company Lightstorm Entertainment, to learn more about how he got involved in the production and his workflow.

We know James Cameron has been working on this for years, but how early did you get involved on the film, and how did that help?
I was loosely involved in preproduction after we finished Alita [produced by Cameron and Jon Landau] in early 2019. I was the DI editor on that film. I looked at early stereo tests with the DP Russell Carpenter [ASC], and I was blown away by the level of precision and specificity of those tests.

Polarized reflections are a real challenge in stereo as they result in different brightnesses and textures between the eyes that degrade the stereo effect. I remember them testing multiple swatches of black paint to find the one that retained the least polarization. I had never been a part of such detailed camera tests before.

What were some initial directions that you got from DP Russell Carpenter and director Cameron? What did they say about how they wanted the look to feel?
Jim impressed on me the importance of everything feeling “real.” The first film was photographic and evoked reality, but this had to truly embody it photorealistically.

Was there a look book? How do you prefer a director or DP to share their looks for films?
They didn’t share a look book with me on this one. By the time I came onboard (October 2021), WetaFX was deep into their work. For any given scene, there is usually a key shot that really shines and perfectly embodies the look and intention Jim’s going for and that often served as my reference. I needed to give everything else that extra little push to elevate to that level.

Did they want to replicate the original or make it slightly different? The first one takes place mostly in the rain forest, but this one is mostly in water. Any particular challenges that went along with this?
Now that the technology has progressed to a point where physically based lighting, subsurface scattering and realistic hair and water simulations are possible on a scale as big as this movie, the attention to photorealism is even more precise. We worked a lot on selling the underwater scenes in color grading. It’s important that the water feel like a realistic volume.

People on Earth haven’t been to Pandora, but a lot of people have put their head underwater here at home. Even in the clearest Caribbean water, there is diffusion, scattering and spectral filtering that occur. We specifically graded deeper water bluer and milked out murkier surface conditions when it felt right to sell that this is a real, living place.

This was done just using basic grading tools, like lift and gamma to give the water a bit of a murky wash.
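
For readers curious what that kind of basic grade looks like numerically, here is a toy lift/gamma adjustment in Python. The values are purely illustrative and are not the film’s actual grade:

```python
import numpy as np

def lift_gamma(rgb, lift, gamma):
    """Toy lift/gamma grade on linear RGB values."""
    graded = np.asarray(rgb, dtype=float) + np.asarray(lift)   # lift raises the shadows
    graded = np.clip(graded, 0.0, None)
    return graded ** (1.0 / np.asarray(gamma))                 # gamma reshapes the midtones

# Nudge a deep-water pixel toward blue and lift its shadows slightly to
# suggest scattering and murk (illustrative values only).
pixel = [0.10, 0.18, 0.22]
print(lift_gamma(pixel, lift=[0.00, 0.01, 0.03], gamma=[1.0, 1.0, 1.1]))
```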

The film was also almost entirely visual effects. How did you work with these shots?
We had a really organized and predictable pipeline for receiving, finalizing and grading every shot in the DI. For as complex and daunting as a film like this can be, it was very homogeneous in process. It had to be, otherwise it could quickly devolve into chaos.

Every VFX shot came with embedded mattes, which was an incredible luxury that allowed me to produce lightning-fast results. I’d often combine character mattes with simple geometric windows and keys to rapidly get to a place that in pure live-action photography would have required much more detailed rotoscoping and tracking, which is only made more difficult in stereo 3D.

Did you create “on-set” LUTs? If so, how similar were those to the final look?
I took WetaFX’s lead on this one. They were much closer to the film early on than I was and spent years developing the pipeline for it. Their LUT was pretty simple, just a matrix from SGamut3.Cine to something just a little wider than P3 to avoid oversaturation, and a simple S-Curve.

Usually that’s all you need, and any scene-specific characteristics can be dialed in through production design, CGI lighting and shaders or grading. I prefer a simpler approach like this for most films — particularly on VFX films, rather than an involved film-emulation process that can work 90% of the time but might feel too restrictive at times.

WetaFX built the base LUT and from there I made several trims and modifications for various 3D light-levels and Dolby Cinema grades.
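
As a rough sketch of that kind of simple display transform, a gamut matrix followed by a tone curve, here is a generic Python illustration. The matrix below is an identity placeholder rather than the real S-Gamut3.Cine conversion, and the curve is a textbook S-curve, not WetaFX’s actual LUT:

```python
import numpy as np

GAMUT_MATRIX = np.eye(3)  # placeholder for the camera-to-display gamut matrix

def s_curve(x, contrast=1.5, pivot=0.18):
    """Simple contrast S-curve: pulls shadows down and rolls highlights off."""
    x = np.clip(x, 1e-6, None)
    y = (x / pivot) ** contrast
    return y / (1.0 + y) * 2.0 * pivot  # preserves mid-gray at the pivot

def apply_look(rgb_linear):
    """Matrix first, then the tone curve per channel (a sketch, not the film's LUT)."""
    rgb = GAMUT_MATRIX @ np.asarray(rgb_linear, dtype=float)
    return s_curve(rgb)

print(apply_look([0.05, 0.18, 0.90]))
```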

Tashi Trieu

Park Road Post

Where were you based while working on the film, and what system did you use? Any tools in that system come in particularly handy on this one?
I’m normally in Los Angeles, but for this project I moved to Wellington, New Zealand for six months. Park Road Post was our home base and they were amazing hosts.

I used Blackmagic DaVinci Resolve 18 for the film. No third-party plugins, just out-of-the-box stuff. Resolve’s built-in ResolveFX tools keep getting more and more powerful, and I used them a lot on this film. Resolve’s Python API was also a big part of our workflow; it streamlined shot ingest and added a lot of little quality-of-life improvements to our specific workflow.
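
For context, scripted shot ingest through Resolve’s Python API might look something like the hypothetical sketch below. The bin name and file paths are invented for illustration, and this is not Trieu’s actual pipeline script; only the API calls shown are part of Resolve’s standard scripting interface:

```python
import DaVinciResolveScript as dvr_script

resolve = dvr_script.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()
media_storage = resolve.GetMediaStorage()

# Create a bin for the incoming VFX turnover and point the media pool at it.
turnover_bin = media_pool.AddSubFolder(media_pool.GetRootFolder(), "VFX_Turnover_001")
media_pool.SetCurrentFolder(turnover_bin)

# Add a batch of turned-over shot folders to the media pool in one call.
new_shots = ["/mnt/turnover/SHOT_0010/", "/mnt/turnover/SHOT_0020/"]  # hypothetical paths
clips = media_storage.AddItemListToMediaPool(new_shots) or []
print(f"Ingested {len(clips)} shots")
```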

How did your workflow differ, if at all, from a traditionally shot film?
Most 3D movies are conversions from 2D sources. In that workflow, you’re spending the majority of your time on the 2D version and then maybe a week at the end doing a trim grade for 3D.

On a natively 3D movie that is 3D in both live-action and visual effects production, the 3D is given the proper level of attention that really makes it shine. When people come out of the theater saying they loved the 3D, or that they “don’t” have a headache from the 3D and they’re surprised by that, it’s because it’s been meticulously designed for years to be that good.

In grading the film, we do it the opposite way the conversion films do. We start in 3D and are in 3D most of the time. Our primary version was Dolby Cinema 3D 14fL in 1.85:1 aspect ratio. That way we’re seeing the biggest image on the brightest screen. Our grading decisions are influenced by the 3D and done completely in that context. Then later, we’d derive 2D versions and make any trims we felt necessary.

This film can be viewed in a few different ways. How did your process work in terms of the variety of deliverables?
We started with a primary grading version, Dolby Cinema 3D 14fL. Once that was dialed in and the bulk of the creative grading work was done, I’d produce a 3.5fL version for general exhibition. That version is challenging, but incredibly important. A lot of theaters out there aren’t that bright, and we still owe those audiences an incredible experience.

As a colorist, it’s always a wonderful luxury to have brilliant dynamic range at your fingertips, but the creative constraint of 3.5fL can be pretty rewarding. It’s tough, but when you make it work it’s a bit of an accomplishment. Once I have those anchors on either end of the spectrum, I can quickly derive intermediate light levels for other formats.

The film was released in both 1.85:1 and 2.39:1, depending on each individual theater’s screen size and shape to give the most impact. On natively cinema-scope screens, we’d give them the 2.39:1 version so they would have the biggest and best image that can be projected in that theater. This meant that from acquisition through VFX and into the DI, multiple aspect ratios had to be kept in mind.

The crew!

Jim composed for both simultaneously while filming virtual cameras as well as live action.

But there’s no one-size-fits-all way to do that, so Jim did a lot of reframing in the DI to optimize each of the two formats for both story and aesthetic composition. Once I had those two key light-levels and framing for the two aspect ratios, I built out the various permutations of the two, ultimately resulting in 11 simultaneous theatrical picture masters that we delivered to distribution to become DCPs.
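
To give a sense of how quickly those permutations multiply, here is a generic enumeration in Python. The interview doesn’t list the exact 11 masters, so the light levels and formats below are placeholders only:

```python
from itertools import product

# Hypothetical version matrix: each light-level grade crossed with each
# aspect ratio yields one theatrical picture master.
light_levels = ["Dolby Cinema 3D 14fL", "3D 3.5fL", "2D 14fL", "2D standard"]
aspect_ratios = ["1.85:1", "2.39:1"]

masters = [f"{grade} / {ratio}" for grade, ratio in product(light_levels, aspect_ratios)]
for master in masters:
    print(master)
print(len(masters), "picture masters in this toy example")
```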

Finally, what was the most memorable part of working on Avatar: The Way of Water from a work perspective?
Grading the teaser trailer back in April and seeing that go live was really incredible. It was like a sleeping giant awoke and announced, “I’m back,” and everybody around the world and on the internet went nuts for it.

It was incredibly rewarding to return to LA and take friends and family to see the movie in packed theaters with excited audiences. It was an amazing way for me to celebrate after a long stint of challenging work and a return to movie theaters post-pandemic.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 

Avatar: The Way of Water: Weta’s Joe Letteri on VFX Workflow

By Iain Blair

In 2009, James Cameron’s Avatar became the highest-grossing film of all time. Now with the first sequel, Avatar: The Way of Water, he has returned to Pandora and the saga of the Sully family — Jake (Sam Worthington), his mind now permanently embedded in his blue, alien Na’vi body; his wife, Neytiri (Zoe Saldaña); and their four children.

Avatar: The Way of Water

Joe Letteri

To create the alien world of The Way of Water, Cameron reteamed with another visionary — Weta’s VFX supervisor, Joe Letteri, whose work on Avatar won him an Oscar. (He’s also won for The Lord of the Rings: The Two Towers, The Lord of the Rings: The Return of the King and King Kong). The film was nominated for Best Picture, Best Sound, Best Production Design and Best Visual Effects for Letteri and his team, which included Richard Baneham, Eric Saindon and Daniel Barrett.

I spoke with Letteri, who did a deep dive — pun intended — into how the team created all the immersive underwater sequences and cutting-edge visual effects.

This has to be the most complex film you’ve ever done. Give us some nuts and bolts about the VFX and what it took to accomplish them.
Yes, this is the largest VFX film Weta FX has ever worked on and the biggest I’ve ever done. I believe there were only two shots in the whole film that didn’t have any VFX in them. We worked on over 4,000 shots, and there were 3,289 shots in the final film. Weta FX alone worked on 3,240 VFX shots, 2,225 of which were water shots. Some of the shots — about 120 — were done by ILM.

It was huge in every way. For instance, the total amount of data stored for this film was 18.5 petabytes, which is 18.5 times the amount used on the original Avatar, and close to 40% of the rendering was completed in the cloud. In terms of all the crew needed to accomplish this, we had a core team of probably around 500, and then we had close to 1,800 artists and people working on it all over the world at any given time.

On a typical film, we might have one or two VFX supervisors and animation supervisors, but for this film, we needed 10 VFX supervisors and nine animation supervisors. We began working on the motion capture way back in 2017 and started on all the VFX work right away because there was so much prep to do — including building the models, doing character studies and so on. I’d say we really ramped up in earnest over the past three years.

What were the big challenges of creating so many characters and VFX for this?
The biggest thing for us was in a way evolutionary. Ever since Avatar and going back to LOTR’s Gollum and King Kong, we’ve been working very hard on character and trying to get the performance and emotion to come through. When we saw what we’d need to create for this film – the sheer number of characters, the scope of the work and scope of the emotions — we decided we needed a better and deeper understanding of just how emotions get conveyed through a character’s performance.

So we spent a lot of time studying that and built a new piece of software called APFS to allow us to do facial performance — either from capture, from animation or a combination — at a really detailed, granular level. What’s going on inside the face when you see a performance, and why does that move you? How do we make that come through from what our actors are giving us? And what do we want to see on our characters? Dealing with all that was probably where we spent most of our time and effort given that we created 30 principal speaking CG characters with over 3,000 facial performances that we tracked and animated for the film.

It sounds like the facial performance software was a breakthrough, but a very long time coming.
It was. To me it felt like something we were on the verge of understanding but couldn’t quite crack, and now we cracked it. Now that’s not to say we’ve perfected it, but we’ve created a new framework for understanding it by moving it all into a neural network. And now that it’s in place, we can take it further over the next few films.

While the first film was largely set in the rain forest, this one is largely set underwater. Can you talk about the challenges of all that water?
Developing the technology we needed for underwater performance capture was a big challenge since so much of the film takes place not just underwater but in the water. You’re under the water or at the waterline, and things happen differently at both places and as you transition, so we put a lot of effort into that.

One of the big pieces was being able to capture the performances underwater. All the actors underwent extensive free-diving training so they could hold their breath for two minutes at a time. We had a volume under the water and a volume above the water, and we worked out all the differences in lighting frequencies and refractions so we could link the two together. That way, characters could pop up and dive below, and we could capture it all in one continuous action. That really helped you feel what it’s like to move underwater.

Dealing with all the water was so crucial that we rebuilt our entire simulation approach and used a global simulation methodology within our new in-house Loki framework. This allowed us to deal not just with all the water but also with textures like skin, cloth, hair and so on. We also developed a new “depth compositing” system that gave us a real-time composite in-camera without using any green- or bluescreen. That let us blend live action and CG elements so we could get a very close version of the final shot even while we were on-set shooting a scene.
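
Depth compositing in general replaces the green-screen key with a per-pixel depth comparison: whichever element is nearer to camera wins. The snippet below is a minimal, textbook sketch of that idea, not Weta FX’s actual system:

```python
import numpy as np

def depth_composite(fg_rgb, fg_depth, bg_rgb, bg_depth):
    """Per pixel, take the foreground element wherever it is nearer to camera."""
    nearer = (fg_depth < bg_depth)[..., None]
    return np.where(nearer, fg_rgb, bg_rgb)

# Two 2x2 test images: the CG element sits in front of the plate in the top row only.
cg      = np.full((2, 2, 3), 0.8)
plate   = np.full((2, 2, 3), 0.2)
cg_z    = np.array([[1.0, 1.0], [5.0, 5.0]])
plate_z = np.array([[3.0, 3.0], [3.0, 3.0]])
print(depth_composite(cg, cg_z, plate, plate_z))
```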

Fair to say all this was another big leap forward?
Yes. On the first film, we had a couple of shots in water, like the one where Jake jumps off a waterfall when he’s being chased and lands in the water below and swims. But we did that with Sam on an office chair being pulled down the hallway (laughs), and that wasn’t going to work for this film. A large part of what we did was study and really understand body movement. We’ve worked so much with performance capture, and now we’ve expanded that capability into working with water. That was the other big breakthrough along with the facial performance-capture element. Those are what made all of this unique.

I was on the set of The Abyss, and Jim told me, “If you ever make a movie, never, ever shoot in water. It’s a total nightmare.” Didn’t he read his own memo?
(Laughs) I guess not, but in a way, this was different because you’re working out the performance. You’re not tied into the thing of, “OK, I’ve got the performance I want, but the shot didn’t work because there were bubbles in front of the camera.” You could decouple that stuff, as we were adding all that in later. We were creating the water environment and then adding the performance into it, so when the actors were working in the tank, most of the focus was on the performances. And that was true for both the performance capture and the live-action scenes that were shot in a tank. They were partial sets, and we weren’t trying to get everything. We were adding most of it later.

What other big advances were made since the first film?
Our rendering technology. We did a global illumination technique for the first Avatar film that was called “spherical harmonic lighting.” It was unique at the time — something we saw they were using in gaming that we adapted for films — and it served us very well. But it was really a stopgap for the full, spectrally accurate path-tracing technique we chose to follow since then.

Avatar: The Way of Water

We ended up writing our own spectral renderer called Manuka, which provides realistic renders of environments and characters. It’s been in action since the third Hobbit film, so it’s not new, but again, it’s that framework that allowed us to build toward what we really needed for this film. And unlike the renderers we’ve worked with in the past, which could only handle primaries — red, green and blue — Manuka allows you to work with light at all wavelengths.

That really helps when you’re doing underwater stuff, as water absorbs light differently at different depths and depending on whether it’s clear or turbid. All that was really critical to getting the look right, and we knew when we built it all those years ago that it would be a big element in making this film. So since about 2014, it’s been our renderer on every project.
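
To see why rendering at all wavelengths matters underwater, consider a toy Beer-Lambert attenuation example: water absorbs red far more strongly than blue, so color shifts with depth. The coefficients below are rough, publicly known values for clear water, not production data:

```python
import math

# Approximate absorption of clear water per meter for broad RGB bands (1/m).
absorption_per_m = {"red": 0.45, "green": 0.07, "blue": 0.02}

def attenuate(intensity, band, depth_m):
    """Beer-Lambert falloff: exponential absorption with distance traveled."""
    return intensity * math.exp(-absorption_per_m[band] * depth_m)

for depth in (1, 5, 10):
    r, g, b = (attenuate(1.0, band, depth) for band in ("red", "green", "blue"))
    print(f"{depth:2d} m: R={r:.2f} G={g:.2f} B={b:.2f}")
```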

What was the hardest VFX sequence to do?
(Laughs) I don’t think there were any easy ones. They all had their own unique set of challenges, but the one where the Sullys first land at Metkayina Village was extremely challenging. We had to introduce this whole new clan and dozens of unique characters who are in almost every shot. We also had to build all the environments, including the huge reef and flexible walkways.

Then all the shots for the tulkun chase were very complex because we also had half a dozen boats in the water. They’re all interacting with each other, and everything’s interacting with the water, and you have the creatures breaching and swimming and diving. So that was a huge amount of water interaction on a vast scale. Those two sequences probably represent the two ends of the whole spectrum we were dealing with.

Have you started work on the next sequels?
Yes, we’ve already shot most of 2 and 3, which we shot simultaneously with this one, and we’re already underway on all the VFX work. We have a deadline in two years for the next one, so we’re rolling right along. Jim is heavily invested in the VFX, and he really understands the state of the art of VFX and where he can take it for the next films.

[Editor’s Note: We will have an Avatar VFX roundtable in an upcoming issue, featuring a variety of artists who worked on the film.]


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.