
Avatar: The Way of Water Visual Effects Roundtable

By Ben Mehlman     

James Cameron’s 2009 film Avatar reinvented what we thought movies could accomplish. Now, after 13 years, the first of four announced sequels, Avatar: The Way of Water, finally hit theaters. Original cast members Sam Worthington, Zoë Saldana, Sigourney Weaver and Stephen Lang are returning, joined by new additions Kate Winslet and Edie Falco.

While postPerspective spoke to Weta FX’s Joe Letteri about his role as VFX supervisor on the film back in January, we felt the Oscar-nominated visual effects were worth a deeper look. During a recent press day at Disney, I was able to sit down with six of the VFX department heads from Weta and Lightstorm Entertainment — including Letteri once more — to discuss how they accomplished the visual effects that appear in Avatar: The Way of Water. I met with them in pairs, so this article is broken down into three sections.

The Jungles of Pandora and Things That Move

The first pair I interviewed included Weta FX senior animation supervisor Dan Barrett, who, in his own words, was “in charge of most things that moved,” such as characters, creatures and vehicles. Joining him was Weta FX VFX supervisor Wayne Stables, who oversaw the team of artists behind the jungle scenes that make up the first third of the film.

Wayne Stables

Where does one even begin tackling something like the jungles, where the details feel literally never-ending?
Wayne Stables: I mean, we’re lucky, right? We had a really strong basis to work from with the template that got turned over to us from the motion capture stage. The good thing about that is, with Jim, he has pretty much composed and choreographed his shots, so you know if there’s a big tree inside the frame, he wants a big tree there because he’s framing for it.

Then you look at what we did in the first film, and we also look to nature and spend an awful lot of time doing that research to dress it out.

There are also the details that make it a Pandoran jungle, like the luminescent plant life. What goes into making those choices?
Stables: That’s about the amount of exotic plants and how many slightly odd color elements, like purple or red, you like in the shot. We got good at knowing that you need a couple of big splashes of color here and there to remind the audience that they’re on Pandora, and Jim also had us put bugs everywhere.

Dan Barrett

Dan Barrett: Our amazing layout team would hand-dress those plants in.

Dan, is this where you would come in to have the wildlife interact with the jungle?
Barrett: Exactly. That’s the department I’ll complain to (laughs). “You can’t put a plant there; something’s supposed to be walking through there.” But yes, we work quite closely with the layout team. That’s the terrain where our characters are going to be climbing a tree or walking across dirt.

When it comes to movement, what makes something feel more realistic?
Barrett: In terms of creatures, there’s a couple of things. Their physiology needs to make sense; it needs to look like something that could’ve evolved. That’s something that the art department at Lightstorm does an amazing job of. We also do a lot of motion tests during design to make sure it can move properly.

And the characters’ faces were a giant focus. Obviously, you want a body to move naturally, and hands are also a big focus for us. But for an audience to connect, you can’t get away with missing even the subtlest detail in a face.

Wayne, when you’re building these environments, are you only building as much as the camera can see, or are you building the entire environment?
Stables: Typically, we’ll build what we call a “master layout,” because that’s how Jim works as well. He decides on the environment he wants to do a scene in, then, on a set, he shoots the performance capture around that location through a number of different setups. Then we break things down shot by shot.

Can you both talk about the software and hardware you used?
Barrett: For years and years, we used the same facial system. We call it the FACS, the Facial Action Coding System, and it worked well. It’s a system where essentially the surface of the face is what moves. This tends to be more expression-based than muscle-based. It’s also a system that, unless you’re very careful, can start breaking things — or what we call “going off model.” That’s when you over-combine shapes, and all of a sudden it doesn’t look like the character you’re supposed to be animating.

For this film we spent a lot of time working out how to do it differently. Now the face has been basically broken down into muscles, meaning the muscles have separated from the skin. So when we get an actor’s performance, we now know what the muscles themselves are doing, and that gets translated to the character. The beauty of this is that we can still go for all of the emotional authenticity while staying much more anatomically plausible.
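To make that distinction concrete, here is a minimal sketch of the two ideas Barrett describes. It is illustrative only: the array names, the least-squares solve and the clamping are assumptions made for the example, not Weta FX’s actual facial pipeline. The first function shows how surface shapes combine directly on the skin (where over-driving weights is what can push a face “off model”), while the second fits muscle activations to a captured performance and transfers those to the character instead.

```python
import numpy as np

# FACS-style blendshape combine: expression shapes add directly on the skin
# surface, so over-driving several shapes at once can push the face "off model".
def combine_blendshapes(neutral, shape_deltas, weights):
    """neutral: (V, 3) rest mesh; shape_deltas: (S, V, 3); weights: (S,)."""
    return neutral + np.tensordot(weights, shape_deltas, axes=1)

# Muscle-based transfer: fit muscle activations that explain the actor's
# captured face, then drive the character's own muscle basis with them,
# keeping activations in a plausible [0, 1] range.
def solve_activations(actor_capture, actor_neutral, actor_muscle_basis):
    target = (actor_capture - actor_neutral).reshape(-1)                  # (V*3,)
    basis = actor_muscle_basis.reshape(len(actor_muscle_basis), -1).T     # (V*3, M)
    activations, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return np.clip(activations, 0.0, 1.0)

def apply_to_character(char_neutral, char_muscle_basis, activations):
    return char_neutral + np.tensordot(activations, char_muscle_basis, axes=1)
```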

How about you, Wayne?
Stables: Our biggest in-house software that drives everything is the renderer we created called Manuka, which is a spectral path-tracing renderer. The reason that’s become a cornerstone for us is it drives all our lighting, camera, shading and surfacing tools. We developed much more physically accurate lighting models, which let our people light shots by adjusting stops and exposure so that everything fits into real-world photography that we understand.
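Since Stables mentions lighting by stops and exposure, here is a small illustration of the standard photographic exposure math that physically based lighting pipelines generally build on. The function names and the calibration constant are conventional choices made for the example, not Manuka internals.

```python
import math

def ev100(aperture_f, shutter_s, iso):
    """Exposure value at ISO 100 from camera settings (standard photographic formula)."""
    return math.log2((aperture_f ** 2) / shutter_s) - math.log2(iso / 100.0)

def exposure_scale(aperture_f, shutter_s, iso, calibration=1.2):
    """Multiplier applied to scene luminance so renders respond to stop changes
    the way a real camera would; 1.2 is a commonly used calibration constant."""
    return 1.0 / (calibration * 2.0 ** ev100(aperture_f, shutter_s, iso))

# Opening up one stop (f/5.6 -> f/4) roughly doubles the exposure, as expected:
print(exposure_scale(4.0, 1 / 48, 800) / exposure_scale(5.6, 1 / 48, 800))   # ~1.96
```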


Barrett: One of the other things, since there’s obviously a lot of water in the film, is a coupled simulation system we’ve been developing where you can put characters into a body of water. These simulations couple the water against the hair, against the clothes. It’s a very powerful tool.

Stables: We create a lot of fire and explosions, so we start with the simple thing first. Like for fire, we started with a candle. That way you start to understand that if you have a candle burning, you’ve got an element that’s generating heat and affecting the gas around it. This causes other effects to come through, like low pressure zones, and it shows the coupling effect.

It’s through that understanding that we were able to couple everything, whether it was water to gas or other simulations. That’s what really got us to where we needed to be for the film. But that’s a pretty big step to take on a film because you can’t just rush into it straight away and say, “What’s our final picture?” We first need to figure out how to get there and what we need to understand. Because if you can’t make a candle work, it’s going to be pretty hard to make an explosion work.
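As a rough illustration of the candle example, here is a toy one-dimensional sketch of that heat-to-motion coupling: a heat source warms the gas at the base, the warm gas becomes buoyant and rises, and the rising flow carries the heat further up. The grid, coefficients and Boussinesq-style buoyancy term are invented for the sketch and bear no relation to Weta FX’s solvers.

```python
import numpy as np

n, dt, dz = 64, 0.02, 0.05
temp = np.zeros(n)                  # temperature above ambient, bottom to top
vel = np.zeros(n)                   # vertical gas velocity per cell
buoyancy, cooling, source = 1.0, 0.1, 5.0

for step in range(200):
    temp[0] += source * dt                       # the candle injects heat at the base
    vel += buoyancy * temp * dt                  # warmer cells are pushed upward (the coupling)
    upwind = np.roll(temp, 1)
    upwind[0] = 0.0
    temp += vel * (upwind - temp) / dz * dt      # heat carried upward by the induced flow
    temp -= cooling * temp * dt                  # heat lost to the surrounding air

print(f"temperature near the top of the plume: {temp[-1]:.3f}")
```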

Dan, the character of Kiri is Grace’s daughter, and they’re both played by Sigourney Weaver. How did you differentiate the characters even though they’re meant to look similar?
Barrett: Once we’re given a character design, the essence of which we’re ultimately going to keep, we start testing it and seeing how the face moves. One of the things we did very early on was to study Sigourney when she was younger. (Sigourney gave us access to family photographs of when she was young.) We also referred to her body of work from early in her career.

The animation team spent many hours with early facial rigs, trying to match what we were seeing in Sigourney’s earliest work to see if we believed it. That meant the model started to evolve from what was given to us at the start so that it moved in ways that felt like a young Sigourney.

All the things we learned there meant we could then take her performance for this film and apply it to the motions we built for the younger character. But it’s still an incredible performance by Sigourney Weaver, who can play a 14-year-old girl like you wouldn’t believe.

Since Pandora is its own planet, does it have its own rules about when the sun sets or how the light hits?
Stables: It’s really driven by Jim. Obviously, things like the eclipse and time of day are all narrative-driven. Sometimes we strongly followed the template. For example, there’s a scene where Neteyam, Jake and Neytiri are landing in the forest during an eclipse, with these beautiful little orange pits of light coming through. When I talked about it with Jim, we both agreed that we liked the template and were going to stick with it.

But then there were other moments, like when Quaritch and his team are going through the jungle, that we broke away from the template because there were other films Jim referenced that he really liked. So he had us do some experiments to see what happens when we give the jungle a different look, even if it’s just for this one scene. I believe the reference he had was Tears of the Sun. So we created a very misty jungle look.

Basically, we stray as much as Jim allows us. Sometimes he lets us experiment a bit more, and other times he lets us know that he very much likes what he worked out.

Speaking of homages, did you work on the Apocalypse Now shot of Jake Sully coming out of the water? I assume this was a conscious homage.
Barrett: I did. Often when an animator submits something, they’ll have picture-in-picture references. So we certainly have versions of that shot of Martin Sheen popping out of the water in the picture, except it’s Sam [Worthington] popping out of the water.

Stables: I think even if it was never explicitly mentioned, everybody knew what that shot was. It’s a beautiful homage.

What’s an individual moment you worked on that you’re most proud of?
Barrett: I look back fondly at the sequence in the tent, when Jake is insisting that they need to leave high camp. We basically took these rigs we already had, threw them away and built a whole new system. So that was a sequence where a lot of development took place, with a lot of iterations of those shots. They were also done really early, and I hadn’t looked at those shots in a couple of years. So seeing how good it looked when we watched the film last night, after having worked on that sequence, is something that’ll stay with me for a long time.

Stables: For me, I really enjoyed the stuff we did with the nighttime attack inside the jungle with the rain. It’s a lot of fun to do big guns in the rain inside a jungle while also blowing stuff up.

The funny thing is, the two parts of the film that are my absolute favorite are ones I had nothing to do with. I just loved the part where Kiri has the anemone attack the ship. I thought that was phenomenal. The other moment toward the end with Jake, Lo’ak, Neytiri, Tuk and Kiri — hands down my favorite part. I wish I’d worked on that because it was just beautiful.

From Template Prep to the Final Image

My second interview was with executive producer and Lightstorm Entertainment VFX supervisor Richie Baneham, who helped prep the movie and produce a template and then worked directly with Weta FX to take the film to completion. He was joined by Weta FX senior VFX supervisor Joe Letteri, who took the templates Baneham handed over to create everything we see on the screen in its final form.

Richie Baneham

Avatar productions feel unique. Can you talk about the workflow and how it may differ from other productions you’ve worked on?
Joe Letteri: It starts with Jim working out the movie in what we call a template form, where he’s working on a stage with minimal props — before actor performance capture — to block it out and virtual cameras to lay the whole thing out. Richie has a big part in that process, working directly with Jim.

Richie Baneham: Yes, it is very different and unique. I’d actually call it a filmmaking paradigm shift. We don’t storyboard. We do what we call “a scout,” where we block scenes with a troupe. Once we stand up the scout — by figuring out if the blocking works and developing the environment — then we look at it from a production design standpoint, and then we bring in our actors.

Once we get the performance capture, we have to down-select to focus on the real performances we want. That is an editorial process, which is different from the norm because we introduce editorial into the pipeline before we have shots. This also includes working with our head of animation, Erik Reynolds, who works under Dan Barrett, to create a blocking pass for every element we would see before we get into shot construction. It’s a very unusual way to make movies.

Joe Letteri

Then we get into shot creation, which is when we start to do proxy lighting. We try to realize as much as possible before we have the editors reintroduced, and once they get involved, it becomes a cut sequence. Then that cut sequence can be turned over to Weta.

Letteri: It’s designed upfront to be as fast and interactive as possible. We want Jim to be able to move things around like he’s moving something on-set. If you want to fly a wall out, no problem. Move a tree? A vehicle? No problem. It’s designed for fast artistic feedback so we can get his ideas out there as quickly as possible… because our part is going to take a lot longer.

We have to work in all the details, like fine-tuning the character moments, translating the actors’ expressions onto their characters and finishing all the lighting and rendering — going from the virtual cinematography to the cinematography you’ll see in the final image. The idea is being able to be as creatively engaged as possible while still giving us the room to add the kind of detail and scope that we need.

So the performance capture allows you to make whatever shots you might want once they’re in the world you’ve created?
Baneham: Correct. There’s no camera on-set in the same way you would have in live action. Our process is about freeing up the actors to give the best possible performance and then protect what they’ve done all the way until the final product.

As far as shot creation is concerned, it’s completely limitless. Think of it as a play. On any given night, one actor could be great, and the next night, the opposing actor is great. We’re able to take all of our takes and combine the best moments so we can see the idealized play. It’s a plus being able to add in a camera that can give exactly what you want to tell the story. That’s the power of the tool. 

How does that kind of limitless potential affect what your relationship looks like?
Letteri: It doesn’t. That’s the whole point of the front part of the process. It’s to work out the best shots, and then we’ll jump in once Richie lets us know they’re close on something. We then try to start working with it as soon as we know nothing needs to go back to Richie and his team.

Baneham: Being down to that frame edit allows for the world to be built. The action can go forward once we know we’re definitely working with these performances, and then Weta can get started. Even after we hand that off, we still evolve some of the camera work at Weta because we may see a shot and realize it would work better, for example, if it were 15 degrees to the right and tilted up slightly, or had a slow push-in. This allows us a second, third or fourth bite at the cherry. As long as the content and environment don’t change, we’re actually really flexible until quite late in the pipeline.

Letteri: That happened a lot with the water FX shots because you can’t do simulations in real time. If you’ve got a camera down low in the water with some big event happening, like a creature jumping up or a ship rolling over, then it’s going to generate a big splash. Suddenly the camera gets swamped by this huge wave, and you realize that’s not going to work. You don’t want to shrink the ship or slow down the creature because that will lessen the drama. So instead, we find a new camera angle.

Can you tell us about the software and hardware you used?
Baneham: One of the great advantages of this show is that we integrated our software with Weta. The first time around, we shot in a stand-alone system that was outside of the Weta pipeline. This time around, we were able to take the virtual toolset Weta employs across all movies and evolve it into a relatively seamless file format that can be transferred between Lightstorm and Weta. So when we were done shooting the proxy elements, they could be opened up at Weta directly.

Letteri: We wrote two renderers. One is called Gazebo, which is a real-time renderer that gets used on the stage. The other is Manuka, which is our path tracer. We wrote them to have visual parity within the limits of what you can do on a GPU. So we know everything Richie is setting up in Gazebo can be translated over to Manuka.

We tend to write a lot of our own software, but for the nuts and bolts, we’ll use Maya, Houdini, Nuke and Katana because you need a good, solid framework to develop on. But there’s so much custom-built for each show, especially this one.

Baneham: We’re inside a DCC, which is MotionBuilder, but it’s a vessel that now holds a version of the Weta software that allows us to do virtual production.

With a movie like this, are you using a traditional nonlinear editing system, or is it a different process entirely?
Baneham: We edit in Avid Media Composer. Jim’s always used Avid. Even when we’re doing a rough camera pass, or when Jim is on the stage, we do a streamed version of it, which is a relatively quick capture. It’s got flexible frame buffering. It isn’t synced to timecode, so it would have to be re-rendered to have true sync, but it gives pretty damn close to a real-time image. We can send the shot to the editors within five minutes, which allows Jim or me to request a cut. It’s a rough edit, but it allows the editors to get involved as early as possible and be as hands-on as possible.

What was your most difficult challenge? What about your proudest moment?
Baneham: One of the more difficult things to do upfront was to evolve the in-water capture system. Ryan Champney and his team did an amazing job with solving that. From a technical standpoint, that was a breakthrough. But ultimately, the sheer volume of shots that we have at any given time is a challenge in and of itself.

As far as most proud, for me, it’s the final swim-out with Jake and Lo’ak. There’s something incredibly touching about the mending of their relationship and Lo’ak becoming Jake’s savior. I also think visually it worked out fantastically well.

Letteri: What Richie is touching on is character, and to me that’s the most important thing. The water simulations were technically, mathematically and physically hard, but the characters are how we live and die on a film like this. It’s those small moments that you may not even be aware of that define who the characters are. Those moments where something changes in their life and you see it in their eyes, that’s what propels the story along.

Metkayina Village and Water Simulations

My final interview was with Weta FX’s head of effects, Jonathan Nixon, who oversaw the 127-person FX team. Their responsibilities included all the simulations for water, fire and plant dynamics. He was joined by VFX supervisor and Weta FX colleague Pavani Boddapati, who supervised the team responsible for the Metkayina Village.

Can you talk about your working relationship, given how intertwined the Metkayina Village is to water and plant life?
Jonathan Nixon: We worked very closely; we started on what was called the “Water Development Project.” This was created to look at the different scenarios where you’re going to have to simulate water and what needs to be done, not just in FX but in how it works with light, shaders and animation, and how the water looks. So we were working together to make sure that all the sequences Pavani was going to deliver had all the technology behind them that she was going to need.

Pavani Boddapati: The movie is called The Way of Water (laughs), so there is some component of water in every shot. I mean, even the jungle has rain and waterfalls.

Jonathan Nixon

What is it like working for the director of The Abyss, a film that basically invented water visual effects?
Nixon: It’s inspiring to have a director that understands what you do. We’ve learned so much from Jim, like what a specific air entrapment should look like, or what happens when you have a scuba mask on and are doing this type of breathing. So our department goes by his direction. He understands what we do, he understands how simulations work and he understands the time it takes.

It’s a once-in-a-lifetime chance to work on a film like this. And I think most of the FX team was here because they wanted to work with Jim and wanted to deliver a movie that has this much emphasis on what we do and things that we’re interested in. There’s no better director to work for: he knows what he wants and what to expect.

 

Boddapati: I’m obviously a repeat offender since I worked on the first film, the Pandora ride Flight of Passage at Disney and this film, and I’ve signed up for the next one. For me, the world of Pandora is really fascinating. I haven’t been able to get my head out of this work.

As far as Jim goes, he’s amazing and very collaborative. He knows exactly what he wants, but he wants your ideas, and he wants to make it better. All the artists on the show really enjoyed being a part of that process.

What is it like having to jump — forgive my terrible pun — into the deep end on this?
Nixon: We’ve got tons of water puns. “Get your feet wet,” all that. When I watched the first film in 2009, I was just a few years out of college. I remember sitting in that theater in New York watching the film and thinking, “This is why I’m in this industry, because of films like this.”

Pavani Boddapati

Fast-forward a decade, and I not only get to work on the sequel, but I get to be a pretty important part of steering a team of people to generate this work. It’s surreal. There’s no better way to describe getting a chance to work in this universe with a lot of people from the first one, like Pavani, who can help guide you and steer you away from problems they encountered before. It’s also great to have new people with new ideas who have a similar story to mine.

Boddapati: What’s also interesting is we had some artists from Weta who’ve been working at Lightstorm since the first Avatar — some of whom came over to New Zealand and are now working on production. It’s helpful because they have a history of on-set work that we maybe weren’t exposed to, and that’s pretty awesome.

What were the influences in developing the Metkayina Village?
Boddapati: [Production designer] Dylan Cole was very instrumental, as was Jim himself, who draws, paints and approves all the designs. It takes inspiration from a lot of different cultures around the world. Take something small, like the weaving pattern. There was a lot of attention brought to what people use for materials when they live in places with no access to something like a supermarket. What are these materials made of? How do they weave them? Every single detail in the village was thought of like a working village. There are bottles, gourds, storage, stoves.

There was a huge amount of work that Lightstorm had done before we got involved, and then on our side, we built this thing from the ground up so it feels like a living and breathing place.

What is it like having to manage teams on something this huge when you want to stay creative and also make your schedule?
Boddapati: I’ve been on this movie for about six years, and from the beginning I’ve told every artist that this is a marathon, not a sprint. We aren’t just trying to put something together and get shots out quickly. It’s the principle of measuring twice and cutting once. Plan everything beforehand and pace yourself, because we know how much preparation we need once the short turnovers start happening.

The most important thing for artists coming on is keeping that timeline in mind. Knowing that people are going to be on a show for five years, four years, three years — when an average show could be six months to a year.

Nixon: It’s tough, especially since the FX team at Weta is 160 people, and by the end of this film, we had about 127 of them working on it. As Pavi said, it’s a tricky show because of the length. I said the same thing to artists: We may have short sprints, short targets or short deadlines, but it’s still a marathon. We’d move people onto different teams or environments to give them some diversity of thought and technique. That was really important in keeping our teams happy and healthy.

Can you tell me about the software and hardware you used?
Nixon: The FX team uses Houdini, and our simulation R&D team built a framework called Loki, which is what we’re using for all of our water sims, combustion and fire sims, and plant solvers. Loki is pretty important because of how it interfaces with Houdini.

Houdini, an industry standard, allows us to bring a lot of artists into Weta who can do the work they do at other places, while Loki enhances their work by letting them plug standard processes into it. It allows for things like higher-fidelity water sims or more material-based combustion. You can tell it whether it’s a cooking fire or a big explosion, which involves a lot of different types of fuel. It also allows plants to be moved by the water sims in a way that would be more difficult in off-the-shelf software like Houdini alone.
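To illustrate what that kind of coupling means in practice, here is a tiny, self-contained sketch of two solvers exchanging forces every substep so each reacts to the other within a single frame. The classes, forces and constants are hypothetical stand-ins for the concept, not the Loki or Houdini API.

```python
class WaterSolver:
    """Crude stand-in for a free-surface water sim: one height and one velocity."""
    def __init__(self):
        self.height = 1.0
        self.velocity = 0.0

    def force_on_plants(self):
        return 0.5 * self.velocity            # drag the moving water exerts on stems

    def step(self, dt, reaction_force):
        self.velocity += (-0.2 * self.height + reaction_force) * dt
        self.height += self.velocity * dt

class PlantSolver:
    """Crude stand-in for a plant dynamics sim: a single stem deflection."""
    def __init__(self):
        self.bend = 0.0

    def force_on_water(self):
        return -0.1 * self.bend               # the bent stem pushes back on the flow

    def step(self, dt, water_drag):
        self.bend += (water_drag - 0.3 * self.bend) * dt

def coupled_step(water, plants, dt, substeps=4):
    """Exchange forces every substep so both systems react to each other within a frame."""
    sub_dt = dt / substeps
    for _ in range(substeps):
        drag = water.force_on_plants()
        reaction = plants.force_on_water()
        water.step(sub_dt, reaction)
        plants.step(sub_dt, drag)

water, plants = WaterSolver(), PlantSolver()
for frame in range(48):                        # one second of animation at 48fps
    coupled_step(water, plants, 1.0 / 48)
print(f"water height: {water.height:.3f}, stem bend: {plants.bend:.3f}")
```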

How does the film’s 48fps and 3D affect what you do?
Boddapati: A huge amount, with the stereo being the primary one. Almost everything is designed by Jim with stereo in mind, and he tells you that in the turnover. It starts with the little particles in the water, how close and dense they are to show depth and scale, and extends to the water simulations, where you need lens splashes to look as if there is a dome housing on the camera.

Stereo is a huge component of the design — how close things are, how pleasing they look on the screen. We worked closely with Geoff Burdick and Richie Baneham from Lightstorm to make sure that was realized.

Regarding the 48fps, it’s critical for QC since there are now twice the number of frames, which also means twice the amount of data.

Nixon: That’s what it is for us, especially in FX. We’ve got water simulations that are terabytes per frame. So when you increase that to 48, you’re doubling your footprint. But it also gives you flexibility when Jim decides a shot needs to go from 24 to 48.
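To put that doubling in concrete terms, here is a back-of-the-envelope calculation. The per-frame cache size and shot length are made-up numbers for illustration, not figures from the production.

```python
def cache_size_tb(seconds, fps, tb_per_frame):
    """Total simulation cache for one shot."""
    return seconds * fps * tb_per_frame

shot_seconds = 8
per_frame_tb = 0.5        # assumed cache size per simulated frame, for illustration only

for fps in (24, 48):
    print(f"{fps}fps: {cache_size_tb(shot_seconds, fps, per_frame_tb):.0f} TB")
# 24fps: 96 TB
# 48fps: 192 TB
```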

Since Pandora has its own gravity and atmosphere, does that play into how you approach your water and fire simulations?
Nixon: We had a very big discussion about what gravity is on Pandora. You’ve got these 9-foot-tall creatures and multiple moons, but we just based everything on our reality as the starting point. If you don’t start with what people can recognize, then something that might be mathematically plausible for Pandora won’t be bought into by the audience. That’s why we start with what would look real on Earth and then push or pull where we need, based on Jim’s direction.

Boddapati: This even applies to characters. For example, if you’re looking at 9-foot-tall people, and you’re thinking about what the pore detail on the skin should be, we base that on human skin because we know we can capture it. We know we can make a texture of it. We know how it should look and light, and we know we can produce that. It’s surprising how smoothly that translates to characters that are much bigger in scale.

How do the water simulations interact with the skin and hair of the characters?
Boddapati: For example, you have underwater shots, above-water shots and shots that transition between the two. That interaction between the water and the skin is critical to making you believe that person is in the water. We rendered those shots as one layer. There was no layer compositing, so when the kids are in the water learning how to swim, that’s one image.

We do have the ability to select and grade components of it, but for all practical purposes, we simulate it in a tank that’s got the characters in it. We make sure water dripping down a character falls into the water and creates ripples. Everything is coupled. Then we pass that data onto creatures, and they’ll make sure the hair and costume moves together. Then we render the whole thing in one go.

Nixon: It’s the coupling of it that matters for us because we tend to do a basic bulk sim, a free-surface sim with motion, so the motion we get from the stage looks correct. The waves and timing are lapping against the skin properly. Then we work tightly with creatures for hair. If you have long hair, that’s going to affect wave detail.

A lot of our process is coming up with new thin-film simulations, which are like millimeter-scale sims that give you all the components you’d traditionally do in pieces. So you’ve got a rivulet of water that starts somewhere, comes down the side of the skin and then drips off.

Generally, when you do that in any other film, those are separate pieces — someone’s doing the droplet, someone’s doing the path, someone’s doing a separate sim on the drip itself. A lot of what we aimed for was a process that does all of that together so it can be rendered all together with the character, and Loki is what gives us the power to do that coupling.

Boddapati: Building off what Jonathan was saying, we actually take the map of all the displacements on the skin and displace that falling drop to make sure it’s actually going along pores because it would be affected if the skin was rough or if someone had facial hair.
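Here is a toy sketch of that rivulet idea: a droplet slides down a parameterized skin patch under gravity while a displacement map (standing in for pores or stubble) nudges its path sideways, and it leaves the surface when it reaches the bottom edge. Everything here, including the resolution, forces and noise map, is invented for illustration and is not how Weta FX’s thin-film solver works.

```python
import numpy as np

rng = np.random.default_rng(7)
disp = rng.normal(0.0, 1.0, (128, 128))       # stand-in for a skin displacement map

def sideways_push(u, v):
    """Deflection from the local displacement gradient at (u, v) on the patch."""
    i, j = int(v * 127), int(u * 127)
    left, right = disp[i, max(j - 1, 0)], disp[i, min(j + 1, 127)]
    return (left - right) * 0.5

def trace_rivulet(u0, dt=0.01, gravity=1.0, deflection=0.5, max_steps=500):
    u, v = u0, 0.0                             # start at the top of the patch
    path = [(u, v)]
    while v < 1.0 and len(path) < max_steps:
        u = float(np.clip(u + sideways_push(u, v) * deflection * dt, 0.0, 1.0))
        v += gravity * dt                      # pulled down the skin by gravity
        path.append((u, v))
    return path                                # the last point is where the drip leaves

path = trace_rivulet(0.5)
print(f"rivulet ran {len(path)} steps and dripped off at u = {path[-1][0]:.3f}")
```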


Ben Mehlman, currently the post coordinator on the Apple TV+ show Presumed Innocent, is also a writer/director. His script “Whittier” was featured on the 2021 Annual Black List after Mehlman was selected for the 2020 Black List Feature Lab, where he was mentored by Beau Willimon and Jack Thorne.  

