
Avatar: The Way of Water Visual Effects Roundtable

By Ben Mehlman     

James Cameron’s 2009 film Avatar reinvented what we thought movies could accomplish. Now, after 13 years, the first of four announced sequels, Avatar: The Way of Water, has finally hit theaters. Original cast members Sam Worthington, Zoe Saldaña, Sigourney Weaver and Stephen Lang return, joined by new additions Kate Winslet and Edie Falco.

While postPerspective spoke to Weta FX’s Joe Letteri about his role as VFX supervisor on the film back in January, we felt the Oscar-nominated visual effects were worth a deeper look. During a recent press day at Disney, I was able to sit down with six of the VFX department heads from Weta and Lightstorm Entertainment — including Letteri once more — to discuss how they accomplished the visual effects that appear in Avatar: The Way of Water. I met with them in pairs, so this article is broken down into three sections.

The Jungles of Pandora and Things That Move

First up was Weta FX senior animation supervisor Dan Barrett, who, in his own words, was “in charge of most things that moved,” such as characters, creatures and vehicles. He was joined by Weta FX VFX supervisor Wayne Stables, who oversaw the team of artists responsible for the jungle scenes that make up the first third of the film.

Wayne Stables

Where does one even begin tackling something like the jungles, where the details feel literally never-ending?
Wayne Stables: I mean, we’re lucky, right? We had a really strong basis to work from with the template that got turned over to us from the motion capture stage. The good thing about that is, with Jim, he has pretty much composed and choreographed his shots, so you know if there’s a big tree inside the frame, he wants a big tree there because he’s framing for it.

Then you look at what we did in the first film, and we also look to nature and spend an awful lot of time doing that research to dress it out.

There are also the details that make it a Pandoran jungle, like the luminescent plant life. What goes into making those choices?
Stables: That’s about the amount of exotic plants and how many slightly odd color elements, like purple or red, you like in the shot. We got good at knowing that you need a couple of big splashes of color here and there to remind the audience that they’re on Pandora, and Jim also had us put bugs everywhere.

Dan Barrett

Dan Barrett: Our amazing layout team would hand-dress those plants in.

Dan, is this where you would come in to have the wildlife interact with the jungle?
Barrett: Exactly. That’s the department I’ll complain to (laugh). “You can’t put a plant there; something’s supposed to be walking through there.” But yes, we work quite closely with the layout team. That’s the terrain where our characters are going to be climbing a tree or walking across dirt.

When it comes to movement, what makes something feel more realistic?
Barrett: In terms of creatures, there’s a couple of things. Their physiology needs to make sense; it needs to look like something that could’ve evolved. That’s something that the art department at Lightstorm does an amazing job of. We also do a lot of motion tests during design to make sure it can move properly.

And the characters’ faces were a giant focus. Obviously, you want a body to move naturally, and hands are also a big focus for us. But for an audience to connect, you can’t get away with missing even the subtlest detail in a face.

Wayne, when you’re building these environments, are you only building as much as the camera can see, or are you building the entire environment?
Stables: Typically, we’ll build what we call a “master layout,” because that’s how Jim works as well. He decides on the environment he wants to do a scene in, then, on a set, he shoots the performance capture around that location through a number of different setups. Then we break things down shot by shot.

Can you both talk about the software and hardware you used?
Barrett: For years and years, we used the same facial system. We call it the FACS, the Facial Action Coding System, and it worked well. It’s a system where essentially the surface of the face is what moves. This tends to be more expression-based than muscle-based. It’s also a system that, unless you’re very careful, can start breaking things — or what we call “going off model.” That’s when you over-combine shapes, and all of a sudden it doesn’t look like the character you’re supposed to be animating.

For this film we spent a lot of time working out how to do it differently. Now the face has basically been broken down into muscles, meaning the muscle motion has been separated from the skin. So when we get an actor’s performance, we now know what the muscles themselves are doing, and that gets translated to the character. The beauty of this is that we can still go for all of the emotional authenticity while staying much more anatomically plausible.
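[Editor’s Note: As a rough illustration of the distinction Barrett describes, here is a toy sketch, not Weta’s system, of why directly combining surface shapes can drift “off model” while a clamped muscle layer stays anatomically bounded. All names and numbers are invented.]

```python
# Toy illustration (not Weta's system) of the two approaches described above.
import numpy as np

# Hypothetical basis: rows are per-vertex offsets for a few expression shapes.
expression_shapes = np.array([
    [0.0, 1.0, 0.2],   # "brow raise" offsets for three example skin vertices
    [0.5, 0.0, 0.8],   # "jaw open"
    [0.3, 0.4, 0.1],   # "lip corner pull"
])
neutral = np.zeros(3)

def surface_blend(weights):
    """FACS-style: shape offsets add directly on the skin; over-combined
    weights can push the face 'off model'."""
    return neutral + weights @ expression_shapes

def muscle_based(activations, muscle_to_skin, max_strain=1.0):
    """Muscle-style: clamp activations to an anatomically plausible range
    first, then map the muscle action onto the skin."""
    activations = np.clip(activations, 0.0, max_strain)
    return neutral + activations @ muscle_to_skin

# An exaggerated performance: naive blending drifts off model,
# while the clamped muscle layer stays bounded.
weights = np.array([1.6, 1.4, 1.8])
print(surface_blend(weights))                    # large, off-model offsets
print(muscle_based(weights, expression_shapes))  # bounded offsets
```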

How about you, Wayne?
Stables: Our biggest in-house software that drives everything is the renderer we created called Manuka, which is a spectral path-tracing renderer. The reason that’s become a cornerstone for us is that it drives all our lighting, camera, shading and surfacing tools. We developed much more physically accurate lighting models, which let our people light shots by adjusting stops and exposure so that everything fits into the real-world photography that we understand.
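[Editor’s Note: Stables mentions lighting shots by adjusting stops and exposure. The arithmetic behind that is simple: each stop doubles or halves the light, so a scene-referred value scales by a power of two. A tiny illustrative snippet:]

```python
# Illustrative only: the arithmetic behind "adjusting stops and exposure."
# One stop doubles (or halves) the light, so scene-referred values scale by 2**stops.
def expose(linear_rgb, stops):
    return [c * (2.0 ** stops) for c in linear_rgb]

pixel = [0.18, 0.18, 0.18]      # middle gray, scene-referred
print(expose(pixel, +1))        # one stop brighter -> [0.36, 0.36, 0.36]
print(expose(pixel, -2))        # two stops darker  -> [0.045, 0.045, 0.045]
```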


Barrett: One of the other things, since there’s obviously a lot of water in the film, is a coupled simulation system we’ve been developing where you can put characters into a body of water. These simulations couple the water against the hair, against the clothes. It’s a very powerful tool.

Stables: We create a lot of fire and explosions, so we start with the simple thing first. Like for fire, we started with a candle. That way you start to understand that if you have a candle burning, you’ve got an element that’s generating heat and affecting the gas around it. This causes other effects to come through, like low pressure zones, and it shows the coupling effect.

It’s through that understanding that we were able to couple everything, whether it was water to gas or other simulations. That’s what really got us to where we needed to be for the film. But that’s a pretty big step to take on a film because you can’t just rush into it straight away and say, “What’s our final picture?” We first need to figure out how to get there and what we need to understand. Because if you can’t make a candle work, it’s going to be pretty hard to make an explosion work.
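[Editor’s Note: A heavily simplified toy of the coupling idea Stables describes: a heat source drives the gas around it, and the resulting flow carries heat away, so each step has to feed the next. None of the numbers correspond to Weta’s solvers.]

```python
# A toy, heavily simplified sketch of the coupling idea described above:
# the flame heats the gas, the temperature drives buoyancy, and the resulting
# flow advects heat away again. Each update feeds the next.
def simulate_candle(steps=5, dt=0.1, heat_rate=2.0, buoyancy=0.5, cooling=0.1):
    temperature, velocity = 0.0, 0.0
    for _ in range(steps):
        temperature += heat_rate * dt            # source term from the flame
        velocity += buoyancy * temperature * dt  # hot gas rises, low pressure fills in behind
        temperature -= cooling * velocity * dt   # rising flow carries heat away
        print(f"T={temperature:.2f}  v={velocity:.2f}")

simulate_candle()
```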

Dan, the character of Kiri is Grace’s daughter, and they’re both played by Sigourney Weaver. How did you differentiate the characters even though they’re meant to look similar?
Barrett: Once we’re given a character design, the essence of which we’re ultimately going to keep, we start testing it and seeing how the face moves. One of the things we did very early on was to study Sigourney when she was younger. (Sigourney gave us access to family photographs of when she was young.) We also referred to her body of work from early in her career.

The animation team spent many hours with early facial rigs, trying to match what we were seeing in Sigourney’s earliest work to see if we believed it. That meant the model started to evolve from what was given to us at the start so that it moved in ways that felt like a young Sigourney.

All the things we learned there meant we could then take her performance for this film and apply it to the motions we built for the younger character. But it’s still an incredible performance by Sigourney Weaver, who can play a 14-year-old girl like you wouldn’t believe.

Since Pandora is its own planet, does it have its own rules about when the sun sets or how the light hits?
Stables: It’s really driven by Jim. Obviously, things like the eclipse and time of day are all narrative-driven. Sometimes we strongly followed the template. For example, there’s a scene where Neteyam, Jake and Neytiri are landing in the forest during an eclipse, with these beautiful little orange bits of light coming through. When I talked about it with Jim, we both agreed that we liked the template and were going to stick with it.

But then there were other moments, like when Quaritch and his team are going through the jungle, that we broke away from the template because there were other films Jim referenced that he really liked. So he had us do some experiments to see what happens when we give the jungle a different look, even if it’s just for this one scene. I believe the reference he had was Tears of the Sun. So we created a very misty jungle look.

Basically, we stray as much as Jim allows us. Sometimes he lets us experiment a bit more, and other times he lets us know that he very much likes what he worked out.

Speaking of homages, did you work on the Apocalypse Now shot of Jake Sully coming out of the water? I assume this was a conscious homage.
Barrett: I did. Often when an animator submits something, they’ll have picture-in-picture references. So we certainly have versions of that shot with Martin Sheen popping out of the water in the reference picture, except it’s Sam [Worthington] popping out of the water.

Stables: I think even if it was never explicitly mentioned, everybody knew what that shot was. It’s a beautiful homage.

What’s an individual moment you worked on that you’re most proud of?
Barrett: I look back fondly at the sequence in the tent, when Jake is insisting that they need to leave high camp. We basically took these rigs we already had, threw them away and built a whole new system. So that was a sequence where a lot of development took place, with a lot of iterations of those shots. They were also done really early, and I hadn’t looked at them in a couple of years. So seeing how good that sequence looked when we watched the film last night is something that will stay with me for a long time.

Stables: For me, I really enjoyed the stuff we did with the nighttime attack inside the jungle with the rain. It’s a lot of fun to do big guns in the rain inside a jungle while also blowing stuff up.

The funny thing is, the two parts of the film that are my absolute favorite are ones I had nothing to do with. I just loved the part where Kiri has the anemone attack the ship. I thought that was phenomenal. The other moment toward the end with Jake, Lo’ak, Neytiri, Tuk and Kiri — hands down my favorite part. I wish I’d worked on that because it was just beautiful.

From Template Prep to the Final Image

My second interview was with executive producer and Lightstorm Entertainment VFX supervisor Richie Baneham, who helped prep the movie and produce a template and then worked directly with Weta FX to take the film to completion. He was joined by Weta FX senior VFX supervisor Joe Letteri, who took the templates Baneham handed over to create everything we see on the screen in its final form.

Richie Baneham

Avatar productions feel unique. Can you talk about the workflow and how it may differ from other productions you’ve worked on?
Joe Letteri: It starts with Jim working out the movie in what we call a template form, where he’s working on a stage with minimal props to block it out — before the actors’ performance capture — and with virtual cameras to lay the whole thing out. Richie has a big part in that process, working directly with Jim.

Richie Baneham: Yes, it is very different and unique. I’d actually call it a filmmaking paradigm shift. We don’t storyboard. We do what we call “a scout,” where we block scenes with a troupe. Once we stand up the scout — by figuring out if the blocking works and developing the environment — then we look at it from a production design standpoint, and then we bring in our actors.

Once we get the performance capture, we have to down-select to focus on the real performances we want. That is an editorial process, which is different from the norm because we introduce editorial into the pipeline before we have shots. This also includes working with our head of animation, Eric Reynolds, who works under Dan Barrett, to create a blocking pass for every element we would see before we get into shot construction. It’s a very unusual way to make movies.

Joe Letteri

Then we get into shot creation, which is when we start to do proxy lighting. We try to realize as much as possible before the editors are reintroduced, and once they get involved, it becomes a cut sequence. Then that cut sequence can be turned over to Weta.

Letteri: It’s designed upfront to be as fast and interactive as possible. We want Jim to be able to move things around like he’s moving something on-set. If you want to fly a wall out, no problem. Move a tree? A vehicle? No problem. It’s designed for fast artistic feedback so we can get his ideas out there as quickly as possible… because our part is going to take a lot longer.

We have to work in all the details, like fine-tuning the character moments, translating the actors’ expressions onto their characters and finishing all the lighting and rendering — going from the virtual cinematography to the cinematography you’ll see in the final image. The idea is to be as creatively engaged as possible while still giving ourselves the room to add the kind of detail and scope that we need.

So the performance capture allows you to make whatever shots you might want once they’re in the world you’ve created?
Baneham: Correct. There’s no camera on-set in the same way you would have in live action. Our process is about freeing up the actors to give the best possible performance and then protect what they’ve done all the way until the final product.

As far as shot creation is concerned, it’s completely limitless. Think of it as a play. On any given night, one actor could be great, and the next night, the opposing actor is great. We’re able to take all of our takes and combine the best moments so we can see the idealized play. It’s a plus being able to add in a camera that can give exactly what you want to tell the story. That’s the power of the tool. 

How does that kind of limitless potential affect what your relationship looks like?
Letteri: It doesn’t. That’s the whole point of the front part of the process. It’s to work out the best shots, and then we’ll jump in once Richie lets us know they’re close on something. We then try to start working with it as soon as we know nothing needs to go back to Richie and his team.

Baneham: Being down to that frame edit allows for the world to be built. The action can go forward once we know we’re definitely working with these performances, and then Weta can get started. Even after we hand that off, we still evolve some of the camera work at Weta because we may see a shot and realize it would work better, for example, if it were 15 degrees to the right and tilted up slightly, or had a slow push-in. This allows us a second, third or fourth bite at the cherry. As long as the content and environment don’t change, we’re actually really flexible until quite late in the pipeline.

Letteri: That happened a lot with the water FX shots because you can’t do simulations in real time. If you’ve got a camera down low in the water with some big event happening, like a creature jumping up or a ship rolling over, then it’s going to generate a big splash. Suddenly the camera gets swamped by this huge wave, and you realize that’s not going to work. You don’t want to shrink the ship or slow down the creature because that will lessen the drama. So instead, we find a new camera angle.

Can you tell us about the software and hardware you used?
Baneham: One of the great advantages of this show is that we integrated our software with Weta’s. The first time around, we shot in a stand-alone system that was outside of the Weta pipeline. This time around, we were able to take the virtual toolset Weta employs across all its movies and evolve it into a relatively seamless file format that can be transferred between Lightstorm and Weta. So when we were done shooting the proxy elements, they could be opened up at Weta directly.

Letteri: We wrote two renderers. One is called Gazebo, which is a real-time renderer that gets used on the stage. The other is Manuka, which is our path tracer. We wrote them to have visual parity within the limits of what you can do on a GPU. So we know everything Richie is setting up in Gazebo can be translated over to Manuka.

We tend to write a lot of our own software, but for the nuts and bolts, we’ll use Maya, Houdini, Nuke and Katana because you need a good, solid framework to develop on. But there’s so much custom-built for each show, especially this one.

Baneham: We’re inside a DCC, MotionBuilder, but it’s a vessel that now holds a version of the Weta software that allows us to do virtual production.

With a movie like this, are you using a traditional nonlinear editing system, or is it a different process entirely?
Baneham: We edit in Avid Media Composer. Jim’s always used Avid. Even when we’re doing a rough camera pass, or when Jim is on the stage, we do a streamed version of it, which is a relatively quick capture. It’s got flexible frame buffering. It isn’t synced to timecode, so it would have to be re-rendered to have true sync, but it gives you something pretty damn close to a real-time image. We can send the shot to the editors within five minutes, which allows Jim or me to request a cut. It’s a rough edit, but it allows the editors to get involved as early as possible and be as hands-on as possible.

What was your most difficult challenge? What about your proudest moment?
Baneham: One of the more difficult things to do upfront was to evolve the in-water capture system. Ryan Champney and his team did an amazing job with solving that. From a technical standpoint, that was a breakthrough. But ultimately, the sheer volume of shots that we have at any given time is a challenge in and of itself.

As far as most proud, for me, it’s the final swim-out with Jake and Lo’ak. There’s something incredibly touching about the mending of their relationship and Lo’ak becoming Jake’s savior. I also think visually it worked out fantastically well.

Letteri: What Richie is touching on is character, and to me that’s the most important thing. The water simulations were technically, mathematically and physically hard, but the characters are how we live and die on a film like this. It’s those small moments that you may not even be aware of that define who the characters are. Those moments where something changes in their life and you see it in their eyes, that’s what propels the story along.

Metkayina Village and Water Simulations

My final interview was with Weta FX’s head of effects, Jonathan Nixon, who oversaw the 127-person FX team. Their responsibilities included all the simulations for water, fire and plant dynamics. He was joined by VFX supervisor and WetaFX colleague Pavani Boddapati, who supervised the team responsible for the Metkayina Village.

Can you talk about your working relationship, given how intertwined the Metkayina Village is with water and plant life?
Jonathan Nixon: We worked very closely; we started on what was called the “Water Development Project.” This was created to look at the different scenarios where you’re going to have to simulate water and what needs to be done, not just in FX but in how the water works with lighting, shaders and animation, and how it looks. So we were working together to make sure that all the sequences Pavani was going to deliver had all the technology behind them that she was going to need.

Pavani Boddapati: The movie is called The Way of Water (laughs), so there is some component of water in every shot. I mean, even the jungle has rain and waterfalls.

Jonathan Nixon

What is it like working for the director of The Abyss, a film that basically invented water visual effects?
Nixon: It’s inspiring to have a director that understands what you do. We’ve learned so much from Jim, like what a specific air entrapment should look like, or what happens when you have a scuba mask on and are doing this type of breathing. So our department goes by his direction. He understands what we do, he understands how simulations work and he understands the time it takes.

It’s a once-in-a-lifetime chance to work on a film like this. And I think most of the FX team was here because they wanted to work with Jim and wanted to deliver a movie that has this much emphasis on what we do and on the things we’re interested in. There’s no better director to work for; he knows what he wants and what to expect.

 

Boddapati: I’m obviously a repeat offender since I worked on the first film, the Pandora ride Flight of Passage at Disney and this film, and I’ve signed up for the next one. For me, the world of Pandora is really fascinating. I haven’t been able to get my head out of this work.

As far as Jim goes, he’s amazing and very collaborative. He knows exactly what he wants, but he wants your ideas, and he wants to make it better. All the artists on the show really enjoyed being a part of that process.

What is it like having to jump — forgive my terrible pun — into the deep end on this?
Nixon: We’ve got tons of water puns. “Get your feet wet,” all that. When I watched the first film in 2009, I was just a few years out of college. I remember sitting in that theater in New York watching the film and thinking, “This is why I’m in this industry, because of films like this.”

Pavani Boddapati

Fast forward a decade later, and I not only get to work on the sequel, but I get to be a pretty important part of steering a team of people to generate this work. It’s surreal. There’s no better way to describe getting a chance to work in this universe with a lot of people from the first one, like Pavani, who can help guide you and steer you away from problems they encountered before. It’s also great to have new people with new ideas who have a similar story to mine.

Boddapati: What’s also interesting is we had some artists from Wētā who’ve been working at Lightstorm since the first Avatar — some of whom came over to New Zealand and are now working on production. It’s helpful because they have a history of on-set work that we maybe weren’t exposed to, and that’s pretty awesome.

What were the influences in developing the Metkayina Village?
Boddapati: [Production designer] Dylan Cole was very instrumental, as was Jim himself, who draws, paints and approves all the designs. It takes inspiration from a lot of different cultures around the world. Take something small, like the weaving pattern. There was a lot of attention paid to what people use for materials when they live in places with no access to something like a supermarket. What are these materials made of? How do they weave them? Every single detail in the village was thought through as in a working village. There are bottles, gourds, storage, stoves.

There was a huge amount of work that Lightstorm had done before we got involved, and then on our side, we built this thing from the ground up so it feels like a living and breathing place.

What is it like having to manage teams on something this huge when you want to stay creative and also make your schedule?
Boddapati: I’ve been on this movie for about six years, and from the beginning I’ve told every artist that this is a marathon, not a sprint. We aren’t just trying to put something together and get shots out quickly. It’s the principle of measuring twice and cutting once. Plan everything beforehand and pace yourself, because we know how much preparation we need when the short turnovers happen.

The most important thing for artists coming on is keeping that timeline in mind. Knowing that people are going to be on a show for five years, four years, three years — when an average show could be six months to a year.

Nixon: It’s tough, especially since the FX team at Weta is 160 people, and by the end of this film, we had about 127 of them working on it. As Pavi said, it’s a tricky show because of the length. I said the same thing to artists: We may have short sprints, short targets or short deadlines, but it’s still a marathon. We’d move people onto different teams or environments to give them some diversity of thought and technique. That was really important in keeping our teams happy and healthy.

Can you tell me about the software and hardware you used?
Nixon: The FX team uses Houdini, and our simulation R&D team built a framework called Loki, which is what we’re using for all of our water sims, combustion and fire sims, and plant solvers. Loki is pretty important because of how it interfaces with Houdini.

Houdini, an industry standard, allows us to bring a lot of artists into Weta who can do the work they do at other places, while Loki enhances their work by letting them plug standard processes into it. It allows for things like higher-fidelity water sims or more material-based combustion. You can tell it whether it’s a cooking fire or a big explosion, which involves a lot of different types of fuel. It also allows plants to be moved by the water sims in a way that would be more difficult in off-the-shelf software like Houdini.

How does the film’s 48fps and 3D affect what you do?
Boddapati: A huge amount, with the stereo being the primary one. Almost everything is designed by Jim with stereo in mind, and he tells you that in the turnover. It starts with the little particles in the water, how close and how dense they are to show depth and scale, and extends to the water simulations, where lens splashes need to look as if there is a dome housing on the camera.

Stereo is a huge component of the design — how close things are, how pleasing they look on the screen. We worked closely with Geoff Burdick and Richie Baneham from Lightstorm to make sure that was realized.

Regarding the 48fps, it’s critical for QC since there are now twice the number of frames, which also means twice the amount of data.

Nixon: That’s what it is for us, especially in FX. We’ve got water simulations that are terabytes per frame. So when you increase that to 48, you’re doubling your footprint. But it also gives you flexibility when Jim decides a shot needs to go from 24 to 48.
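[Editor’s Note: A quick back-of-envelope on the footprint Nixon mentions. The per-frame cache size below is a made-up example, not a production figure.]

```python
# Back-of-envelope only; the per-frame cache size here is a made-up example.
def cache_footprint(shot_seconds, fps, tb_per_frame):
    return shot_seconds * fps * tb_per_frame

shot_seconds, tb_per_frame = 5, 1.0   # hypothetical 5-second shot, 1 TB per frame
print(cache_footprint(shot_seconds, 24, tb_per_frame))  # 120 TB at 24fps
print(cache_footprint(shot_seconds, 48, tb_per_frame))  # 240 TB at 48fps
```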

Since Pandora has its own gravity and atmosphere, does that play into how you approach your water and fire simulations?
Nixon: We had a very big discussion about what gravity is on Pandora. You’ve got these 9-foot-tall creatures and multiple moons, but we just based everything on our reality as the starting point. If you don’t start with what people can recognize, then the audience won’t buy into something that might be mathematically plausible for Pandora. That’s why we start with what would look real on Earth and then push or pull where we need, based on Jim’s direction.

Boddapati: This even applies to characters. For example, if you’re looking at 9-foot-tall people and thinking about what the pore detail on the skin should be, we base that on human skin because we know we can capture it. We know we can make a texture of it. We know how it should look and how it lights, and we know we can produce that. It’s surprising how smoothly that translates to characters that are much bigger in scale.

How do the water simulations interact with the skin and hair of the characters?
Boddapati: For example, you have underwater shots, above-water shots and shots that transition between the two. That interaction between the water and the skin is critical to making you believe that person is in the water. We rendered those shots as one layer. There was no layer compositing, so when the kids are in the water learning how to swim, that’s one image.

We do have the ability to select and grade components of it, but for all practical purposes, we simulate it in a tank that’s got the characters in it. We make sure water dripping down a character falls into the water and creates ripples. Everything is coupled. Then we pass that data on to the creatures department, and they’ll make sure the hair and costume move together. Then we render the whole thing in one go.

Nixon: It’s the coupling of it that matters for us, because we tend to do a basic bulk sim first, a free-surface sim with motion, so that the motion we get from the stage looks correct. The waves and the timing are lapping against the skin properly. Then we work tightly with the creatures department for hair. If you have long hair, that’s going to affect wave detail.

A lot of our process is coming up with new thin-film simulations, which are millimeter-scale sims that give you all the components you’d traditionally do in pieces. So you’ve got a rivulet of water that starts somewhere, comes down the side of the skin and then drips off.

Generally, when you do that in any other film, those are separate pieces: someone’s doing the droplet, someone’s doing the path, someone’s doing a separate sim on the drip itself. A lot of what we aimed for was a process that does all of that together so it can be rendered together with the character, and Loki is what gives us the power to do that coupling.

Boddapati: Building off what Jonathan was saying, we actually take the map of all the displacements on the skin and displace that falling drop to make sure it’s actually going along pores because it would be affected if the skin was rough or if someone had facial hair.


Ben Mehlman, currently the post coordinator on the Apple TV+ show Presumed Innocent, is also a writer/director. His script “Whittier” was featured on the 2021 Annual Black List after Mehlman was selected for the 2020 Black List Feature Lab, where he was mentored by Beau Willimon and Jack Thorne.  


VES Award Winners: Avatar, LOTR: The Rings of Power Take Top Honors

The Visual Effects Society has announced the winners of its 21st Annual VES Awards, the yearly celebration that recognizes outstanding visual effects artistry and innovation in film, animation, television, commercials, video games and special venues. Industry guests gathered at the Beverly Hilton Hotel to celebrate VFX talent in 25 award categories and to recognize special honorees. Avatar: The Way of Water was named the top photoreal feature, garnering nine awards. Guillermo del Toro’s Pinocchio was named top animated film, winning three awards. The Lord of the Rings: The Rings of Power was named best photoreal episode, winning three awards. Frito-Lay: Push It topped the commercial nominees.

Comedian/actor Patton Oswalt returned as host for the 10th time. Academy Award-winning filmmaker James Cameron presented the VES Lifetime Achievement award to producer Gale Anne Hurd (The Terminator, The Abyss).

Here are this year’s winners:

OUTSTANDING VISUAL EFFECTS IN A PHOTOREAL FEATURE

Avatar: The Way of Water

Richard Baneham

Walter Garcia

Joe Letteri

Eric Saindon

JD Schwalm

OUTSTANDING SUPPORTING VISUAL EFFECTS IN A PHOTOREAL FEATURE

Thirteen Lives

Jason Billington

Thomas Horton

Denis Baudin

Michael Harrison

Brian Cox

OUTSTANDING VISUAL EFFECTS IN AN ANIMATED FEATURE

Guillermo del Toro’s Pinocchio

Aaron Weintraub

Jeffrey Schaper

Cameron Carson

Emma Gorbey

OUTSTANDING VISUAL EFFECTS IN A PHOTOREAL EPISODE

The Lord of the Rings: The Rings of Power; Udûn

Jason Smith

Ron Ames

Nigel Sumner

Tom Proctor

Dean Clarke

OUTSTANDING SUPPORTING VISUAL EFFECTS IN A PHOTOREAL EPISODE

Five Days at Memorial; Day Two

Eric Durst

Danny McNair

Matt Whelan

Goran Pavles

John MacGillivray

 

OUTSTANDING VISUAL EFFECTS IN A REAL-TIME PROJECT

The Last of Us Part I

Erick Pangilinan

Evan Wells

Eben Cook

Mary Jane Whiting

OUTSTANDING VISUAL EFFECTS IN A COMMERCIAL

Frito-Lay; Push It

Tom Raynor

Sophie Harrison

Ben Cronin

Martino Madeddu

OUTSTANDING VISUAL EFFECTS IN A SPECIAL VENUE PROJECT

ABBA Voyage

Ben Morris

Edward Randolph

Stephen Aplin

Ian Comley

 

OUTSTANDING ANIMATED CHARACTER IN A PHOTOREAL FEATURE

Avatar: The Way of Water; Kiri

Anneka Fris

Rebecca Louise Leybourne

Guillaume Francois

Jung Rock Hwang

OUTSTANDING ANIMATED CHARACTER IN AN ANIMATED FEATURE

Guillermo del Toro’s Pinocchio; Pinocchio

Oliver Beale

Richard Pickersgill

Brian Leif Hansen

Kim Slate

OUTSTANDING ANIMATED CHARACTER IN AN EPISODE, COMMERCIAL OR REAL-TIME PROJECT

The Umbrella Academy; Pogo

Aidan Martin

Hannah Dockerty

Olivier Beierlein

Miae Kang

OUTSTANDING CREATED ENVIRONMENT IN A PHOTOREAL FEATURE

Avatar: The Way of Water; The Reef

Jessica Cowley

Joe W. Churchill

Justin Stockton

Alex Nowotny

OUTSTANDING CREATED ENVIRONMENT IN AN ANIMATED FEATURE

Guillermo del Toro’s Pinocchio; In the Stomach of a Sea Monster

Warren Lawtey

Anjum Sakharkar

Javier Gonzalez Alonso

Quinn Carvalho

OUTSTANDING CREATED ENVIRONMENT IN AN EPISODE, COMMERCIAL, OR REAL-TIME PROJECT

The Lord of the Rings: The Rings of Power; Adar; Númenor City

Dan Wheaton

Nico Delbecq

Dan Letarte

Julien Gauthier

OUTSTANDING VIRTUAL CINEMATOGRAPHY IN A CG PROJECT

Avatar: The Way of Water

Richard Baneham

Dan Cox

Eric Reynolds

AJ Briones

OUTSTANDING MODEL IN A PHOTOREAL OR ANIMATED PROJECT

Avatar: The Way of Water; The Sea Dragon

Sam Sharplin

Stephan Skorepa

Ian Baker

Guillaume Francois

OUTSTANDING EFFECTS SIMULATIONS IN A PHOTOREAL FEATURE

Avatar: The Way of Water; Water Simulations

Johnathan Nixon

David Moraton

Nicolas James Illingworth

David Caeiro Cebrian

OUTSTANDING EFFECTS SIMULATIONS IN AN ANIMATED FEATURE

Puss in Boots: The Last Wish

Derek Cheung

Michael Losure

Kiem Ching Ong

Jinguang Huang

OUTSTANDING EFFECTS SIMULATIONS IN AN EPISODE, COMMERCIAL, OR REAL-TIME PROJECT

The Lord of the Rings: The Rings of Power; Udûn; Water and Magma

Rick Hankins

Aron Bonar

Branko Grujcic

Laurent Kermel

OUTSTANDING COMPOSITING & LIGHTING IN A FEATURE

Avatar: The Way of Water; Water Integration

Sam Cole

Francois Sugny

Florian Schroeder

Jean Matthews

OUTSTANDING COMPOSITING & LIGHTING IN AN EPISODE

Love, Death and Robots; Night of the Mini Dead

Tim Emeis

José Maximiano

Renaud Tissandié

Nacere Guerouaf

OUTSTANDING COMPOSITING & LIGHTING IN A COMMERCIAL

Ladbrokes; Rocky

Greg Spencer

Theajo Dharan

Georgina Ford

Jonathan Westley

OUTSTANDING SPECIAL (PRACTICAL) EFFECTS IN A PHOTOREAL PROJECT

Avatar: The Way of Water; Current Machine and Wave Pool

JD Schwalm

Richard Schwalm

Nick Rand

Robert Spurlock

OUTSTANDING VISUAL EFFECTS IN A STUDENT PROJECT (AWARD SPONSORED BY AUTODESK)

A Calling. From the Desert. To the Sea

Mario Bertsch

Max Pollmann

Lukas Löffler

Till Sander-Titgemeyer

EMERGING TECHNOLOGY AWARD

Avatar: The Way of Water; Water Toolset

Alexey Stomakhin

Steve Lesser

Sven Joel Wretborn

Douglas McHale

 

In other award activity, the Society’s current and former Board Chairs presented the VES Board of Directors Award to former Executive Director Eric Roth; the group included Lisa Cooke, current VES Chair; Jim Morris, VES, President of Pixar Animation Studios and founding VES Chair; and former Chairs Jeffrey A. Okun, VES; Mike Chambers, VES; Carl Rosendahl, VES; and Jeff Barnes. Award presenters included Academy Award-nominated filmmaker Rian Johnson; Academy Award-winning filmmaker Domee Shi; and actors Tig Notaro, Jay Pharoah, Tyler Posey, Randall Park, Angela Sarafyan, Bashir Salahuddin, Josh McDermitt and Danny Pudi. Dara Treseder, Autodesk’s Chief Marketing Officer, presented the VES-Autodesk Student Award.

 


Avatar: The Way of Water Colorist Tashi Trieu on Making the Grade

By Randi Altman

Working in post finishing for 10 years, colorist Tashi Trieu also has an extensive background in compositing as well as digital and film photography. He uses all of these talents while working on feature films (Bombshell), spots (Coke Zero) and episodics (Titans). One of his most recent jobs was as colorist on the long-awaited Avatar follow-up, Avatar: The Way of Water, which has been nominated for a Best Picture Oscar.

Tashi Trieu

We reached out to Trieu, who has a long relationship with director James Cameron’s production company Lightstorm Entertainment, to learn more about how he got involved in the production and his workflow.

We know James Cameron has been working on this for years, but how early did you get involved on the film, and how did that help?
I was loosely involved in preproduction after we finished Alita [produced by Cameron and Jon Landau] in early 2019. I was the DI editor on that film. I looked at early stereo tests with the DP Russell Carpenter [ASC], and I was blown away by the level of precision and specificity of those tests.

Polarized reflections are a real challenge in stereo as they result in different brightnesses and textures between the eyes that degrade the stereo effect. I remember them testing multiple swatches of black paint to find the one that retained the least polarization. I had never been a part of such detailed camera tests before.

What were some initial directions that you got from DP Russell Carpenter and director Cameron? What did they say about how they wanted the look to feel?
Jim impressed on me the importance of everything feeling “real.” The first film was photographic and evoked reality, but this had to truly embody it photorealistically.

Avatar: The Way of Water

Was there a look book? How do you prefer a director or DP to share their looks for films?
They didn’t share a look book with me on this one. By the time I came onboard (October 2021), WetaFX was deep into their work. For any given scene, there is usually a key shot that really shines and perfectly embodies the look and intention Jim’s going for and that often served as my reference. I needed to give everything else that extra little push to elevate to that level.

Did they want to replicate the original or make it slightly different? The first one takes place mostly in the rain forest, but this one is mostly in water. Any particular challenges that went along with this?
Now that the technology has progressed to a point where physically based lighting, subsurface scattering and realistic hair and water simulations are possible on a scale as big as this movie, the attention to photorealism is even more precise. We worked a lot on selling the underwater scenes in color grading. It’s important that the water feel like a realistic volume.

People on Earth haven’t been to Pandora, but a lot of people have put their head underwater here at home. Even in the clearest Caribbean water, there is diffusion, scattering and spectral filtering that occur. We specifically graded deeper water bluer and milked out murkier surface conditions when it felt right to sell that this is a real, living place.

This was done using just basic grading tools, like lift and gamma, to give the water a bit of a murky wash.
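[Editor’s Note: For readers curious what a lift/gamma adjustment does numerically, here is an illustrative sketch with invented values; it is not the film’s actual grade. Lift pulls the blacks up, giving the milky wash Trieu describes, and gamma reshapes the midtones.]

```python
# Illustration only; invented values, not the film's actual grade.
# Lift raises the blacks (the "milky wash"); gamma reshapes the midtones.
def lift_gamma(rgb, lift=(0.0, 0.0, 0.0), gamma=(1.0, 1.0, 1.0)):
    out = []
    for c, l, g in zip(rgb, lift, gamma):
        c = c + l * (1.0 - c)         # lift: pulls shadows up toward the lift value
        c = max(c, 0.0) ** (1.0 / g)  # gamma: >1 lifts midtones, <1 darkens them
        out.append(min(c, 1.0))
    return out

# A murky, slightly blue underwater wash on a dark pixel:
print(lift_gamma([0.05, 0.08, 0.10], lift=(0.02, 0.03, 0.05), gamma=(0.95, 1.0, 1.05)))
```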

The film was also almost entirely a visual effect. How did you work with these shots?
We had a really organized and predictable pipeline for receiving, finalizing and grading every shot in the DI. For as complex and daunting as a film like this can be, it was very homogeneous in process. It had to be, otherwise it could quickly devolve into chaos.

Every VFX shot came with embedded mattes, which was an incredible luxury that allowed me to produce lightning-fast results. I’d often combine character mattes with simple geometric windows and keys to rapidly get to a place that in pure live-action photography would have required much more detailed rotoscoping and tracking, which is only made more difficult in stereo 3D.
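[Editor’s Note: A sketch of the idea Trieu describes, intersecting a delivered character matte with a soft geometric window so a correction lands only where both overlap. The arrays are invented data, not production mattes.]

```python
# Sketch of the idea only: intersect a delivered character matte with a soft
# window so a grade applies just where both overlap. Arrays are invented data.
import numpy as np

character_matte = np.array([0.0, 1.0, 1.0, 0.0])  # per-pixel alpha embedded in the shot
window          = np.array([0.2, 0.9, 0.5, 0.1])  # soft geometric shape drawn in the grade

selection = character_matte * window               # only where both are "on"
graded    = np.array([0.30, 0.30, 0.30, 0.30])     # base pixel values
lifted    = graded + 0.1                           # the correction we want locally

result = graded * (1.0 - selection) + lifted * selection
print(result)
```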

Did you create “on-set” LUTs? If so, how similar were those to the final look?
I took WetaFX’s lead on this one. They were much closer to the film early on than I was and spent years developing the pipeline for it. Their LUT was pretty simple, just a matrix from SGamut3.Cine to something just a little wider than P3 to avoid oversaturation, and a simple S-Curve.

Usually that’s all you need, and any scene-specific characteristics can be dialed in through production design, CGI lighting and shaders or grading. I prefer a simpler approach like this for most films — particularly on VFX films, rather than an involved film-emulation process that can work 90% of the time but might feel too restrictive at times.

WetaFX built the base LUT and from there I made several trims and modifications for various 3D light-levels and Dolby Cinema grades.
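[Editor’s Note: To make the shape of that base LUT concrete, here is a hedged sketch: a 3x3 gamut matrix followed by a generic S-curve. The matrix values and the curve are placeholders for illustration only, not the actual SGamut3.Cine-to-wide-P3 matrix or Weta’s curve.]

```python
# Placeholder values only; not the actual SGamut3.Cine matrix or Weta's curve.
import numpy as np

gamut_matrix = np.array([          # rows sum to 1.0 so neutral gray stays neutral
    [ 1.10, -0.07, -0.03],
    [-0.05,  1.08, -0.03],
    [-0.02, -0.06,  1.08],
])

def s_curve(x, pivot=0.18, contrast=1.4):
    # Generic sigmoid tone curve; maps middle gray (0.18) to 0.5 on the display.
    xc = np.power(np.maximum(x, 0.0) / pivot, contrast)
    return xc / (xc + 1.0)

def show_lut(scene_rgb):
    wide = gamut_matrix @ np.asarray(scene_rgb)   # gamut remap first
    return s_curve(wide)                          # then the tone curve

print(show_lut([0.18, 0.18, 0.18]))   # -> [0.5, 0.5, 0.5]
```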

Tashi Trieu

Park Road Post

Where were you based while working on the film, and what system did you use? Any tools in that system come in particularly handy on this one?
I’m normally in Los Angeles, but for this project I moved to Wellington, New Zealand for six months. Park Road Post was our home base and they were amazing hosts.

I used Blackmagic DaVinci Resolve 18 for the film. No third-party plugins, just out-of-the-box stuff. Resolve’s built-in ResolveFX tools keep getting more and more powerful, and I used them a lot on this film. Resolve’s Python API was also a big part of our workflow; it streamlined shot ingest and added a lot of little quality-of-life improvements specific to our pipeline.
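[Editor’s Note: Trieu doesn’t detail his scripts, but as a minimal sketch of the kind of shot ingest Resolve’s Python API supports: it assumes Resolve is running with scripting enabled, and the bin name and delivery path are hypothetical.]

```python
# A minimal sketch of the kind of ingest scripting Resolve's API supports;
# not the production pipeline. Assumes Resolve is running with scripting
# enabled; the folder path and bin name are hypothetical.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Drop a new delivery of VFX frames into its own bin.
bin_vfx = media_pool.AddSubFolder(media_pool.GetRootFolder(), "vfx_delivery_001")
media_pool.SetCurrentFolder(bin_vfx)
clips = media_pool.ImportMedia(["/mnt/deliveries/vfx_delivery_001"])  # hypothetical path
print(f"Imported {len(clips)} clips")
```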

How did your workflow differ, if at all, from a traditionally shot film?
Most 3D movies are conversions from 2D sources. In that workflow, you’re spending the majority of your time on the 2D version and then maybe a week at the end doing a trim grade for 3D.

On a natively 3D movie that is 3D in both live-action and visual effects production, the 3D is given the proper level of attention that really makes it shine. When people come out of the theater saying they loved the 3D, or that they “don’t” have a headache from the 3D and they’re surprised by that, it’s because it’s been meticulously designed for years to be that good.

In grading the film, we do it the opposite way the conversion films do. We start in 3D and are in 3D most of the time. Our primary version was Dolby Cinema 3D 14fL in 1.85:1 aspect ratio. That way we’re seeing the biggest image on the brightest screen. Our grading decisions are influenced by the 3D and done completely in that context. Then later, we’d derive 2D versions and make any trims we felt necessary.

Tashi Trieu

This film can be viewed in a few different ways. How did your process work in terms of the variety of deliverables?
We started with a primary grading version, Dolby Cinema 3D 14fL. Once that was dialed in and the bulk of the creative grading work was done, I’d produce a 3.5fL version for general exhibition. That version is challenging, but incredibly important. A lot of theaters out there aren’t that bright, and we still owe those audiences an incredible experience.

As a colorist, it’s always a wonderful luxury to have brilliant dynamic range at your fingertips, but the creative constraint of 3.5fL can be pretty rewarding. It’s tough, but when you make it work it’s a bit of an accomplishment. Once I have those anchors on either end of the spectrum, I can quickly derive intermediate light levels for other formats.
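[Editor’s Note: A quick bit of arithmetic on those two anchors: 14fL and 3.5fL are a factor of four apart, which is exactly two stops, and 1 foot-lambert is roughly 3.426 nits.]

```python
# Quick arithmetic on the two grading anchors mentioned above.
import math

FL_TO_NITS = 3.426            # 1 foot-lambert is approximately 3.426 cd/m^2
dolby_3d, general_3d = 14.0, 3.5

print(dolby_3d * FL_TO_NITS, general_3d * FL_TO_NITS)  # ~47.96 and ~11.99 nits
print(math.log2(dolby_3d / general_3d))                # 2.0 -> exactly two stops apart
```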

The film was released in both 1.85:1 and 2.39:1, depending on each individual theater’s screen size and shape to give the most impact. On natively cinema-scope screens, we’d give them the 2.39:1 version so they would have the biggest and best image that can be projected in that theater. This meant that from acquisition through VFX and into the DI, multiple aspect ratios had to be kept in mind.

The crew!

Jim composed for both simultaneously while filming virtual cameras as well as live action.

But there’s no one-size-fits-all way to do that, so Jim did a lot of reframing in the DI to optimize each of the two formats for both story and aesthetic composition. Once I had those two key light-levels and framing for the two aspect ratios, I built out the various permutations of the two, ultimately resulting in 11 simultaneous theatrical picture masters that we delivered to distribution to become DCPs.

Finally, what was the most memorable part of working on Avatar: The Way of Water from a work perspective?
Grading the teaser trailer back in April and seeing that go live was really incredible. It was like a sleeping giant awoke and announced, “I’m back” and everybody around the world and on the internet went nuts for it.

It was incredibly rewarding to return to LA and take friends and family to see the movie in packed theaters with excited audiences. It was an amazing way for me to celebrate after a long stint of challenging work and a return to movie theaters post-pandemic.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 


Avatar: The Way of Water: Weta’s Joe Letteri on VFX Workflow

By Iain Blair

In 2009, James Cameron’s Avatar became the highest grossing film of all time. Now with the first sequel, Avatar: The Way of Water, he has returned to Pandora and the saga of the Sully family — Jake (Sam Worthington), his mind now permanently embedded in his blue, alien Na’vi body; his wife, Neytiri (Zoe Saldaña); and their four children.

Avatar: The Way of Water

Joe Letteri

To create the alien world of The Way of Water, Cameron reteamed with another visionary — Weta’s VFX supervisor, Joe Letteri, whose work on Avatar won him an Oscar. (He’s also won for The Lord of the Rings: The Two Towers, The Lord of the Rings: The Return of the King and King Kong). The film was nominated for Best Picture, Best Sound, Best Production Design and Best Visual Effects for Letteri and his team, which included Richard Baneham, Eric Saindon and Daniel Barrett.

I spoke with Letteri, who did a deep dive — pun intended — into how the team created all the immersive underwater sequences and cutting-edge visual effects.

This has to be the most complex film you’ve ever done. Give us some nuts and bolts about the VFX and what it took to accomplish them.
Yes, this is the largest VFX film Weta FX has ever worked on and the biggest I’ve ever done. I believe there were only two shots in the whole film that didn’t have any VFX in them. We worked on over 4,000 shots, and there were 3,289 shots in the final film. Weta FX alone worked on 3,240 VFX shots, 2,225 of which were water shots. Some of the shots — about 120 — were done by ILM.

It was huge in every way. For instance, the total amount of data stored for this film was 18.5 petabytes, which is 18.5 times the amount used on the original Avatar, and close to 40% of the rendering was completed in the cloud. In terms of all the crew needed to accomplish this, we had a core team of probably around 500, and then we had close to 1,800 artists and people working on it all over the world at any given time.

On a typical film, we might have one or two VFX supervisors and animation supervisors, but for this film, we needed 10 VFX supervisors and nine animation supervisors. We began working on the motion capture way back in 2017 and started on all the VFX work right away because there was so much prep to do — including building the models, doing character studies and so on. I’d say we really ramped up in earnest over the past three years.

What were the big challenges of creating so many characters and VFX for this?
The biggest thing for us was in a way evolutionary. Ever since Avatar and going back to LOTR’s Gollum and King Kong, we’ve been working very hard on character and trying to get the performance and emotion to come through. When we saw what we’d need to create for this film – the sheer number of characters, the scope of the work and scope of the emotions — we decided we needed a better and deeper understanding of just how emotions get conveyed through a character’s performance.

So we spent a lot of time studying that and built a new piece of software called APFS to allow us to do facial performance — either from capture, from animation or a combination — at a really detailed, granular level. What’s going on inside the face when you see a performance, and why does that move you? How do we make that come through from what our actors are giving us? And what do we want to see on our characters? Dealing with all that was probably where we spent most of our time and effort, given that we created 30 principal speaking CG characters with over 3,000 facial performances that we tracked and animated for the film.

It sounds like the facial performance software was a breakthrough, but a very long time coming.
It was. To me it felt like something we were on the verge of understanding but couldn’t quite crack, and now we cracked it. Now that’s not to say we’ve perfected it, but we’ve created a new framework for understanding it by moving it all into a neural network. And now that it’s in place, we can take it further over the next few films.

Avatar: The Way of Water

While the first film was largely set in the rain forest, this one is largely set underwater. Can you talk about the challenges of all that water?
Developing the technology we needed for underwater performance capture was a big challenge since so much of the film takes place not just underwater but in the water. You’re under the water or at the waterline, and things happen differently at both places and as you transition, so we put a lot of effort into that.

One of the big pieces was being able to capture the performances underwater. All the actors underwent extensive free-diving training so they could hold their breath for two minutes at a time. We had a volume under the water and a volume above the water, and we worked out all the differences in lighting frequencies and refractions so we could link the two together. That way, characters could pop up and dive below, and we could capture it all in one continuous action. That really helped you feel what it’s like to move underwater.

Dealing with all the water was so crucial that we rebuilt our entire simulation approach and used a global simulation methodology within our new in-house Loki framework. This allowed us to deal not just with all the water but also with textures like skin, cloth, hair and so on. We also developed a new “depth compositing” system that gave us a real-time composite in-camera without using any green- or bluescreen. That let us blend live action and CG elements so we could get a very close version of the final shot even while we were on-set shooting a scene.
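[Editor’s Note: Weta’s depth compositing system is far more involved, but the core z-merge idea can be sketched in a few lines: per pixel, keep whichever element is nearer to camera. The arrays below are invented data, not the studio’s implementation.]

```python
# Toy version of the core idea behind depth compositing: per pixel, keep
# whichever element is closer to camera. Not Weta's implementation.
import numpy as np

def depth_merge(fg_rgb, fg_depth, bg_rgb, bg_depth):
    nearer = (fg_depth < bg_depth)[..., None]   # True where the first element is in front
    return np.where(nearer, fg_rgb, bg_rgb)

live_rgb = np.random.rand(4, 4, 3); live_depth = np.full((4, 4), 2.0)  # meters, invented
cg_rgb   = np.random.rand(4, 4, 3); cg_depth   = np.full((4, 4), 1.0)
cg_depth[2:, :] = 5.0                                                  # CG sits behind the actor here

print(depth_merge(cg_rgb, cg_depth, live_rgb, live_depth).shape)
```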

Fair to say all this was another big leap forward?
Yes. On the first film, we had a couple of shots in water, like the one where Jake jumps off a waterfall when he’s being chased and lands in the water below and swims. But we did that with Sam on an office chair being pulled down the hallway (laughs), and that wasn’t going to work for this film. A large part of what we did was study and really understand body movement. We’ve worked so much with performance capture, and now we’ve expanded that capability into working with water. That was the other big breakthrough along with the facial performance-capture element. Those are what made all of this unique.

Avatar: The Way of Water

I was on the set of The Abyss, and Jim told me, “If you ever make a movie, never, ever shoot in water. It’s a total nightmare.” Didn’t he read his own memo?
(Laughs) I guess not, but in a way, this was different because you’re working out the performance. You’re not tied into the thing of, “OK, I’ve got the performance I want, but the shot didn’t work because there were bubbles in front of the camera.” You could decouple that stuff, as we were adding all that in later. We were creating the water environment and then adding the performance into it, so when the actors were working in the tank, most of the focus was on the performances. And that was true for both the performance capture and the live-action scenes that were shot in a tank. They were partial sets, and we weren’t trying to get everything. We were adding most of it later.

What other big advances were made since the first film?
Our rendering technology. We did a global illumination technique for the first Avatar film that was called “spherical harmonic lighting.” It was unique at the time — something we saw being used in gaming that we adapted for films — and it served us very well. But it was really a stopgap for the full, spectrally accurate path-tracing technique we’ve followed since then.

Avatar: The Way of Water

We ended up writing our own spectral renderer called Manuka, which provides realistic renders of environments and characters. It’s been in action since the third Hobbit film, so it’s not new, but again, it’s that framework that allowed us to build toward what we really needed for this film. And unlike the renderers we’ve worked with in the past, which could only handle primaries — red, green and blue — Manuka allows you to work with light at all wavelengths.

That really helps when you’re doing underwater stuff, as water absorbs light differently at different depths and depending on whether it’s clear or turbid. All that was really critical to getting the look right, and we knew when we built it all those years ago that it would be a big element in making this film. So since about 2014, it’s been our renderer on every project.
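[Editor’s Note: The depth-dependent absorption Letteri mentions follows the Beer-Lambert law: transmittance falls off exponentially with distance and at a different rate for each wavelength, which is why water goes blue-green with depth. The coefficients below are rough illustrative numbers, not measured data or Manuka’s values.]

```python
# Illustration of the physics described above: Beer-Lambert absorption, where
# transmittance falls off exponentially with depth and differs per wavelength.
# Coefficients are rough made-up values, not measured data.
import math

absorption_per_meter = {"red": 0.45, "green": 0.07, "blue": 0.03}  # red dies off fastest

def transmittance(depth_m):
    return {band: math.exp(-k * depth_m) for band, k in absorption_per_meter.items()}

for depth in (1, 5, 20):
    print(depth, transmittance(depth))   # deeper water -> increasingly blue-green
```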

What was the hardest VFX sequence to do?
(Laughs) I don’t think there were any easy ones. They all had their own unique set of challenges, but the one where the Sullys first land at Metkayina Village was extremely challenging. We had to introduce this whole new clan and dozens of unique characters who are in almost every shot. We also had to build all the environments, including the huge reef and flexible walkways.

Then all the shots for the tulkun chase were very complex because we also had half a dozen boats in the water. They’re all interacting with each other, and everything’s interacting with the water, and you have the creatures breaching and swimming and diving. So that was a huge amount of water interaction on a vast scale. Those two sequences probably represent the two ends of the whole spectrum we were dealing with.

Have you started work on the next sequels?
Yes, we’ve already shot most of 2 and 3, which we shot simultaneously with this one, and we’re already underway on all the VFX work. We have a deadline in two years for the next one, so we’re rolling right along. Jim is heavily invested in the VFX, and he really understands the state of the art of VFX and where he can take it for the next films.

[Editor’s Note: We will have an Avatar VFX roundtable in an upcoming issue, featuring a variety of artists who worked on the film.]


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Terminator: Dark Fate director Tim Miller

By Iain Blair

He said he’d be back, and he meant it. Thirty-five years after he first arrived to menace the world in the 1984 classic The Terminator, Arnold Schwarzenegger has returned as the implacable killing machine in Terminator: Dark Fate, the latest installment of the long-running franchise.

And he’s not alone in his return. Terminator: Dark Fate also reunites the film’s producer and co-writer James Cameron with original franchise star Linda Hamilton for the first time in 28 years in a new sequel that picks up where Terminator 2: Judgment Day left off.

When the film begins, more than two decades have passed since Sarah Connor (Hamilton) prevented Judgment Day, changed the future and re-wrote the fate of the human race. Now, Dani Ramos (Natalia Reyes) is living a simple life in Mexico City with her brother (Diego Boneta) and father when a highly advanced and deadly new Terminator — a Rev-9 (Gabriel Luna) — travels back through time to hunt and kill her. Dani’s survival depends on her joining forces with two warriors: Grace (Mackenzie Davis), an enhanced super-soldier from the future, and a battle-hardened Sarah Connor. As the Rev-9 ruthlessly destroys everything and everyone in its path on the hunt for Dani, the three are led to a T-800 (Schwarzenegger) from Sarah’s past that might be their last best hope.

To helm all the on-screen mayhem, black humor and visual effects, Cameron handpicked Tim Miller, whose credits include the global blockbuster Deadpool, one of the highest grossing R-rated films of all time (it grossed close to $800 million). Miller then assembled a close-knit team of collaborators that included director of photography Ken Seng (Deadpool, Project X), editor Julian Clarke (Deadpool, District 9) and visual effects supervisor Eric Barba (The Curious Case of Benjamin Button, Oblivion).

Tim Miller on set

I recently talked to Miller about making the film, its cutting-edge VFX, the workflow and his love of editing and post.

How daunting was it when James Cameron picked you to direct this?
I think there’s something wrong with me because I don’t really feel fear as normal people do. It just manifests as a sense of responsibility, and with this I knew I’d never measure up to Jim’s movies but felt I could do a good job. Jim was never going to tell this story, and I wanted to see it, so it just became more about the weight of that sense of responsibility, but not in a debilitating way. I felt pretty confident I could carry this off. But later, the big anxiety was not to let down Linda Hamilton. Before I knew her, it wasn’t a thing, but later, once I got to know her I really felt I couldn’t mess it up (laughs).

This is still Cameron’s baby even though he handed over the directing to you. How hands-on was he?
He was busy with Avatar, but he was there for a lot of the early meetings and was very involved with the writing and ideas, which was very helpful thematically. But he wasn’t overbearing on all that. Then later when we shot, he wanted to write a few of the key scenes, which he did, and then in the edit he was in and out, but he never came into my edit room. He’d give notes and let us get on with it.

What sort of film did you set out to make?
A continuation of Sarah’s story. It was never John’s story to me; it was always about a mother’s love for a son, and I felt like there was a real opportunity here. That story hadn’t been told — partly because the other sequels never had Linda. Once she wanted to come back, it was always the best possible story. No one else could play her character or Arnold’s.

Any surprises working with them?
Before we shot, people were telling me, “You got to be ready, we can’t mess around. When Arnold walks on set you’d better be rolling!” Sure enough, when he walked on he’d go, “And…” (Laughs) He really likes to joke around. With Linda — and the other actors — it was a love-fest. They’re both such nice, down-to-earth people, and I like a collegial atmosphere. I’m not a screamer. I’m very prepared, and I feel if you just show up on time, you’re already ahead of the game as a director.

What were the main technical challenges in pulling it all together?
They were all different for each big action set piece, and fitting it all into a schedule was tough, as we had a crazy amount of VFX. The C-5 plane sequence was far and away the biggest challenge to do and [SFX supervisor] Neil Corbould and his team designed and constructed all the effects rigs for the movie. The C-5 set was incredible, with two revolving sets, one vertical and one horizontal. It was so big you could put a bus in it, and it was able to rotate 360 degrees and tilt in either direction at the same time.

You just can’t simulate that reality of zero gravity on the actors. And then after we got it all in camera, which took weeks, our VFX guy Eric Barba finished it off. The other big one was the whole underwater scene, where the Humvee falls over the top of a dam and goes underwater as it’s swept down a river. For that, we put the Humvee on a giant scissor lift that could take it all the way under, so the water rushes in and fills it up. It’s really safe to do, but it feels frighteningly realistic for the actors.

This is only my second movie, so I’m still learning, but the advantage is I’m really willing to listen to any advice from the smart people around me on set on how best to do all this stuff.

How early on did you start integrating post and all the VFX?
Right from the start. I use previz a lot, as I come from that environment and I’m very comfortable with it, and that becomes the template for all of production to work from. Sometimes it’s too much of a template and treated like a bible, but I’m like, “Please keep thinking. Is there a better idea?” But it’s great to get everyone on the same page, so very early on you see what’s VFX, what’s live-action only, what’s a combination, and you can really plan your shoot. We did over 45 minutes of previz, along with storyboards. We did tons of postviz. My director’s cut had no blue/green at all. It was all postviz for every shot.

Tim Miller and Linda Hamilton

DP Ken Seng, who did Deadpool with you, shot it. Talk about how you collaborated on the look.
We didn’t really have time to plan shot lists that much since we moved so much and packed so much into every day. A lot of it was just instinctive run-and-gun, as the shoot was pretty grueling. We shot in Madrid and [other parts of] Spain, which doubled for Mexico. Then we did studio work in Budapest. The script was in flux a lot, and Jim wrote a few scenes that came in late, and I was constantly re-writing and tweaking dialogue and adjusting to the locations because there’s the location you think you’ll get and then the one you actually get.

Where did you post?
All at Blur, my company where we did Deadpool. The edit bays weren’t big enough for this though, so we spilled over into another building next door. That became Terminator HQ with the main edit bay and several assistant bays, plus all the VFX and compositing post teams. Blur also helped out with postviz and previz.

Do you like the post process?
I love post! I was an animator and VFX guy first, so it’s very natural to me, and I had a lot of the same team from Deadpool, which was great.

Talk about editing with Julian Clarke who cut Deadpool. How did that work?
It was the same setup. He’d be back here in LA cutting while we shot. He’s so fast; he’d be just one day behind me — I’ve never met anyone who works as hard. Then after the shoot, we’d edit all day and then I’d deal with VFX reviews for hours.

Can you talk about how Adobe Creative Cloud helped the post and VFX teams achieve their creative and technical goals?
I’m a big fan, and that started back on Deadpool as David Fincher was working closely with Adobe to make Premiere something that could beat Avid. We’re good friends — we’re doing our animated Netflix show Love, Death & Robots together — and he was like, “Dude, you gotta use this tool,” so we used it on Deadpool. It was still a little rocky on that one, but overall it was a great experience, and we knew we’d use it on this one. Adobe really helped refine it and the workflow, and it was a huge leap.

What were the big editing challenges?
(Laughs) We just shot too much movie. We had many discussions about cutting one or more of the action scenes, but in the end, we just took out some of the action from all of them, instead of cutting a particular set piece. But it’s tricky cutting stuff and still making it seamless, especially in a very heavily choreographed sequence like the C-5.

VFX plays a big role. How many were there?
Over 2,500 — a huge amount. The VFX on this were so huge it became a bit of a problem, to be honest.

L-R: Writer Iain Blair and director Tim Miller

How did you work with VFX supervisor Eric Barba?
He did a great job and oversaw all the vendors, including ILM, who did most of them. We tried to have them do all the character-based stuff, to keep it in one place, but in the end, we also had Digital Domain, Method, Blur, UPP, Cantina, and some others. We also brought on Jeff White from ILM since it was more than Eric could handle.

Talk about the importance of sound and music.
Tom Holkenborg, who scored Deadpool, did another great job. We also reteamed with sound designer and mixer Craig Henighan, and we did the mix at Fox. They’re both crucial in a film like this, but I’m the first to admit music’s not my strength. Luckily, Julian Clarke is excellent with that and very focused. He worked hard at pulling it all together. I love sound design, and we talked about all the spotting, and Julian managed a lot of that too for me because I was so busy with the VFX.

Where did you do the DI and how important is it to you?
It’s huge, and we did it at Company 3 with Tim Stipan, who did Deadpool. I like to do a lot of reframing, adding camera shake and so on. It has a subtle but important effect on the overall film.



Rob Legato to receive HPA’s Lifetime Achievement Award 

The Hollywood Professional Association (HPA) will honor renowned visual effects supervisor and creative Robert Legato with its Lifetime Achievement Award at the HPA Awards at the Skirball Cultural Center in Los Angeles on November 21. Now in its 14th year, the HPA Awards recognize creative artistry, innovation and engineering excellence in the media content industry. The Lifetime Achievement Award honors the recipients’ dedication to the betterment of the industry.

Legato is an iconic figure in the visual effects industry with multiple Oscar, BAFTA and Visual Effects Society nominations and awards to his credit. He is a multi-hyphenate on many of his projects, serving as visual effects supervisor, VFX director of photography and second unit director. From his work with studios and directors and in his roles at Sony Pictures Imageworks and Digital Domain, he has developed a variety of digital workflows.

He has enjoyed collaborations with leading directors including James Cameron, Jon Favreau, Martin Scorsese and Robert Zemeckis. Legato’s career in VFX began in television at Paramount Pictures, where he supervised visual effects on two Star Trek series, which earned him two Emmy awards. He left Paramount to join the newly formed Digital Domain where he worked with founders James Cameron, Stan Winston and Scott Ross. He remained at Digital Domain until he segued to Sony Imageworks.

Legato began his feature VFX career on Neil Jordan’s Interview with the Vampire. He then served as VFX supervisor and DP for the VFX unit on Ron Howard’s Apollo 13, which earned him his first Academy Award nomination and a win at the BAFTAs. His work with James Cameron on Titanic earned him his first Academy Award. Legato continued to work with Cameron, conceiving and creating the virtual cinematography pipeline for Cameron’s visionary Avatar.

Legato has also enjoyed a long collaboration with Martin Scorsese that began with his consultation on Kundun and continued with the multi-award-winning film The Aviator, on which he served as co-second unit director/cameraman and VFX supervisor. Legato’s work on The Aviator won him three VES Awards. He returned to work with the director on the Oscar Best Picture winner The Departed as second unit director/cameraman and VFX supervisor. Legato and Scorsese collaborated once again on Shutter Island, on which he was both VFX supervisor and second unit director/cameraman. He continued on to Scorsese’s 3D film Hugo, which was nominated for 11 Oscars and 11 BAFTAs, including Best Picture and Best Visual Effects. Legato won his second Oscar for Hugo as well as three VES Awards. His collaboration with Scorsese continued with The Wolf of Wall Street as well as with non-theatrical and advertising projects such as the Clio award-winning Freixenet: The Key to Reserva, a 10-minute commercial project, and the Rolling Stones feature documentary, Shine a Light.

Legato worked with director Jon Favreau on Disney’s The Jungle Book (second unit director/cinematographer and VFX supervisor) for which he received his third Academy Award, a British Academy Award, five VES Awards, an HPA Award and the Critics’ Choice Award for Best Visual Effects for 2016. His latest film with Favreau is Disney’s The Lion King, which surpassed $1 billion in box office after fewer than three weeks in theaters.

Legato’s extensive credits include serving as VFX supervisor on Chris Columbus’ Harry Potter and the Sorcerer’s Stone, as well as on two Robert Zemeckis films, What Lies Beneath and Cast Away. He was senior VFX supervisor on Michael Bay’s Bad Boys II, which was nominated for a VES Award for Outstanding Supporting Visual Effects, and for Digital Domain he worked on Bay’s Armageddon.

Legato is a member of ASC, BAFTA, DGA, AMPAS, VES, and the Local 600 and Local 700 unions.

Sony updates Venice to V2 firmware, will add HFR support

At CineGear, Sony introduced new updates and developments for its Venice CineAlta camera system including Version 2 firmware, which will now be available in early July.

Sony also showed the new Venice Extension System, which features expanded flexibility and enhanced ergonomics. Also announced was Sony’s plan for high frame rate support for the Venice system.

Version 2 adds new features and capabilities specifically requested by production pros to deliver more recording capability, customizable looks, exposure tools and greater lens freedom. Highlights include:

- Exposure latitude: With 15+ stops of exposure latitude, Venice will support a high base ISO of 2500 in addition to the existing ISO 500, taking full advantage of Sony’s sensor for superb low-light performance, with dynamic range from +6 stops to -9 stops as measured at 18% middle gray. This increases exposure indexes at higher ISOs for night exteriors, dark interiors, working with slower lenses, or content that needs to be graded in high dynamic range while maintaining maximum shadow detail.
- Select FPS (off speed) in individual frame increments, from 1 to 60.
- New imager modes: V2.0 adds several imager modes, including 25p in 6K full-frame, 25p in 4K 4:3 anamorphic, 6K 17:9, 1.85:1 and 4K 6:5 anamorphic.
- User-uploadable 3D LUTs allow users to customize their own looks and save them directly into the camera.
- Wired LAN remote control allows users to remotely control and change key functions, including camera settings, fps, shutter, EI, iris (Sony E-mount lens), record start/stop and built-in optical ND filters.
- E-mount support allows users to remove the PL mount and use a wide assortment of native E-mount lenses.

The Venice Extension System is a full-frame tethered extension system that allows the camera body to detach from the image sensor block, with no degradation in image quality, at distances of up to 20 feet. The system is the result of Sony’s long-standing collaboration with James Cameron’s Lightstorm Entertainment.

“This new tethering system is a perfect example of listening to our customers, gathering strong and consistent feedback, and then building that input into our product development,” said Peter Crithary, marketing manager for motion picture cameras, Sony. “The Avatar sequels will be among the first feature films to use the new Venice Extension System, but it also has tremendous potential for wider use with handheld stabilizers, drones, gimbals and remote mounting in confined places.”

Also at CineGear, Sony shared the details of a planned optional upgrade to support high frame rate — targeting speeds up to 60fps in 6K, up to 90fps in 4K and up to 120fps in 2K. It will be released in North America in the spring of 2019.