
Creating Titles for Netflix’s Avatar: The Last Airbender

Method Studios collaborated with Netflix on the recently released live-action adaptation of the series, Avatar: The Last Airbender. The series, developed by Albert Kim, follows the adventures of a young Airbender named Aang, and his friends, as they fight to end the Fire Nation’s war and bring balance to the world. Director and executive producer Jabbar Raisani approached Method Studios to create visually striking title cards for each episode — titles that not only nodded to the original animated series, but also lived up to the visuals of the new adaptation.

The team at Method Studios, led by creative director Wes Ebelhar, concepted and pitched several different directions for the title before deciding to move forward with one called Martial Arts.

“We loved the idea of abstracting the movements and ‘bending’ forms of the characters through three-dimensional brushstrokes,” says Ebelhar. “We also wanted to create separate animations to really highlight the differences between the elements of air, earth, fire and water. For example, with ‘Air,’ we created this swirling vortex, while ‘Earth’ was very angular and rigid. The 3D brushstrokes were also a perfect way to incorporate the different elemental glyphs from the opening of the original series.”

Giving life to the different elemental brushstrokes was no easy task. “We created a custom procedural setup in Houdini to generate the brushstrokes, which was vital for giving us the detail and level of control we needed. Once we had that system built, we were able to pipe in our original previz, and they matched the timing and layouts perfectly. The animations were then rendered with Redshift and brought into After Effects for compositing. The compositing ended up being a huge task as well,” explains Ebelhar. “It wasn’t enough to just have different brush animations for each element; we wanted the whole environment to feel unique for each — the Fire title should feel like it’s hanging above a raging bonfire, while Water should feel submerged with caustics playing across its surface.”

Ebelhar says many people were involved in bringing these titles to life and gives “a special shout out to Johnny Likens, David Derwin, Max Strizich, Alejandro Robledo Mejia, Michael Decaprio and our producer Claire Dorwart.”

Foundry Intros Modo 17.0, Bundles With Otoy OctaneRender

Foundry has released Modo 17.0, an update to its 3D software that overhauls internal systems to provide performance increases. These enhancements help artists by providing the interactivity necessary for modern asset creation workflows, with an additional focus on quality-of-life features in multiple areas. Foundry has also bundled Otoy’s Prime version of OctaneRender, which gives artists a speed increase of up to 50x over traditional CPU renderers straight out of the box.

“With 3D asset creation becoming widely adopted, performance is paramount for the future of DCC apps,” says Greg Brown, product manager at Foundry. “Modo 17.0 sets a foundation for increased performance now plus further enhancements well into Modo’s future. Additionally, bundling the Prime version of OctaneRender from Otoy with Modo 17.0 will speed up the entire experience, from modeling to final render, reducing performance barriers for artists.”

Artists working on Apple Silicon machines will see an additional speed increase of 50% on average, thanks to Modo’s new native macOS ARM build.

With overhauled core systems and granular performance updates to individual tools, Modo, says Foundry, is poised to re-envision 3D workflows. The Modo community can expect a return to more frequent releases for Modo in 2024, which will build on the foundation of 17.0 to further accelerate more aspects of Modo. This 3D application is tailored to enhance the capabilities of experts while also making those capabilities easier for novices to use.

Foundry has enhanced several capabilities of Modo’s powerful modeling tools, including:

  • Decal workflow — It’s now faster and easier to use decals and wrap flat images onto complex surfaces with minimal distortion and no UV creation.
  • Primitive Slice — Users can now clone multiple slices of the same shape at once, making it easier to produce complex patterns. A new Corner Radius feature rounds corners on rectangles and squares so artists can make quick adjustments without switching between presets.
  • Mesh Cleanup — With this tool, users can automatically fix broken geometry and gaps so they can stay productive and avoid interrupting the creative flow.
  • Radial Align — Radial Align turns a selection into a flat circle, but artists frequently need a partial arc rather than a complete circle for things like arches. Modo 17.0 ships with the ability to create a partial radial alignment.
  • PolyHaul — PolyHaul combines many of the most used modeling tools into one streamlined tool. This means artists can spend less time jumping between separate tools, helping them to stay in the flow.

“We are thrilled to bundle OctaneRender with Modo 17.0, bringing instant access to the industry’s first and fastest unbiased GPU render engine. Our mission is to democratize high-end 3D content creation, enabling anyone with a modern GPU to create stunning motion graphics and visual effects at a fraction of the cost and time of CPU architectures. We are excited to see how Modo artists integrate OctaneRender’s GPU-accelerated rendering platform into their creative process, including the ability to scale large rendering jobs across near-unlimited decentralized GPU nodes on the Render Network,” says Otoy founder/CEO, Jules Urbach.


Masters of the Air: Directors and DP Talk Shoot, VFX and Grade

By Iain Blair

World War II drama Masters of the Air is a nine-episode Apple TV+ limited series that follows the men of the 100th Bomb Group as they conduct perilous bombing raids over Nazi Germany and grapple with the frigid conditions, the lack of oxygen and the sheer terror of combat at 25,000 feet in the air. Starring Austin Butler and Barry Keoghan, it’s the latest project from Steven Spielberg, Tom Hanks and Gary Goetzman, the producing team behind Band of Brothers and The Pacific.

Anna Boden and Ryan Fleck

Ranging in locations from the fields and villages of southeast England to the harsh deprivations of a German POW camp, Masters of the Air is enormous in both scale and scope. It took many years and an army of creatives to bring it to life, including directors Anna Boden and Ryan Fleck and DP Jac Fitzgerald.

Here, Boden and Fleck (Captain Marvel) talk about the challenges of shooting, editing and posting the ambitious show. In a sidebar, Fitzgerald (True Detective) talks about integrating the extensive VFX and the DI.

After doing Captain Marvel, I guess you guys could handle anything, but this was still a massive project. What were the main challenges?
Anna Boden: We did episodes 5 and 6. I’d say for us, Episode 5 was a big challenge in terms of wrapping our heads around it all. Some of the prep challenges were very big because it’s really a long air battle sequence that takes up almost the entire episode, and we had limited prep and not a ton of time to do previz and work everything out ahead of time. Also, simultaneously, we were prepping Episode 6, which was going to take us on location and to a whole bunch of new spaces that the show had never been to before. Finding those new locations and doing both of those things at once required so much planning, so it was challenging.

How did you handle the big air battle sequence and working with the volume stage?
Boden: You don’t want to show up on the day and wing it. As filmmakers, sometimes it’s really fun to get on-set and block the sequence based on what the actors want to do. But you can’t do that when you’re shooting on a volume stage, where you’re projecting a lot of imagery on the wall around you. You have to plan out so much of what’s going to be there. That was new for us. Even though we’d worked on Captain Marvel and used greenscreen, we’d never used those big-volume LED stages before. It was a really cool learning experience. We learned a lot on the fly and ultimately had fun crafting a pretty exciting sequence.

I assume director Cary Joji Fukunaga and his DP, Adam Arkapaw, set the template in the first four episodes for the look of the whole show, and then you had to carry that across your episodes.
Boden: Yeah. They’d obviously started shooting before us, and so we were studying their dailies and getting a sense of their camera movements and the color palettes and the vibe for the show. It was really helpful. And our DP, Jac Fitzgerald, knows Adam pretty well, so I think that they had a close working relationship. Also, we were able to visit the set while Cary was shooting to get a sense of the vibe. Once we incorporated that, then we were on our own to do our thing. It’s not like we suddenly changed the entire look of the show, but we had the freedom to put our personalities into it.

And one of the great things about the point where we took over is that Episode 5 is its own little capsule episode. We tried to shoot some of the stuff on the base in a similar tone to how they were shooting it. But then, once we got to that monster mission, it became its own thing, and we shot it in our own way. Then, with Episode 6, we were in completely different spaces. It’s a real break from the previous episodes because it’s the midpoint of the season, we’re away from the base, and there’s a big shift in terms of where the story is going. That gave us a little bit of freedom to very consciously shift how we were going to approach the visual language with Jac. It was an organic way to make that change without it feeling like a weird break in the season.

Give us some sense of how integrating all the post and visual effects worked.
Ryan Fleck: We were using the volume stage, so we did have images, and for the aerial battles, we had stuff for the actors to respond to, but it wasn’t dialed in completely. A lot of that happened after the shooting. In fact, most of it did. (Jac can probably help elaborate on that because she’s still involved with the post process for the whole show.) It wasn’t like Mandalorian levels of dialed-in visual effects, where they were almost finished and the actors could see them. In this show, it was more like the actors were responding to previz, but I think that was hugely helpful.

On Captain Marvel, so often actors are just responding to tennis balls and an AD running around the set for eyelines. In this case, it was nice for the actors to see an actual airplane on fire outside their window for their performances to feel fresh.

Did you do a lot of previz?
Fleck: Yeah, we did a lot for those battle sequences in the air, and we worked closely with visual effects supervisor Stephen Rosenbaum, who was integral in pulling all that stuff together.

What did Jac bring to the mix? You hadn’t worked together before, right?
Fleck: No, and we like her energy. She has experience on big movies and small movies, and so do we; we like those sensibilities. But I think she just has a nice, calm energy. She likes to have fun when she’s working, and so do we, but she’s also very focused on executing the plan. She’s an organized and creative brain that we really appreciated.

Boden: I think that we had a lot of the same reference points when we first started talking, like The Cold Blue, an amazing documentary with a lot of footage that was taken up in the planes during World War II. Filmmakers actually were shooting up there with the young men who were on missions in these bomber planes. That was a really important reference point for us in terms of determining where the cameras can be mounted inside one of these planes. We tried as much as possible to keep those very real camera positions on the missions so that it felt as reality-based and as visceral as possible and not like a Marvel movie. We used some of the color palette from that documentary as well.

It was also Jac’s working style to go to the set and think about how to block things in the shot list… not that we need to stick to that. Once we get in there and work it through with the actors, we all become very flexible, and she’s very flexible as well. Our work styles are very similar, and we got on really well. We like our sets to be very calm and happy instead of chaotic, and she has a very calm personality on-set. We immediately hired her to shoot our next feature after this show, so we’re big fans.

Was it a really tough shoot?
Boden: Yeah. We started shooting in July and finished in October. That’s pretty long for two episodes, but COVID slowed it all down.

Fleck: I’ve never shot in London or the UK before, but I loved it. I loved the crews; I loved the locations. We got to spend time in Oxford, and I fell in love with the place. I really loved exploring the locations. But yes, there were challenges. I think the most tedious stuff was the aerial sequences because we had mounted cameras, and it was just slow. We like to get momentum and move as quickly as we can when shooting.

Even though this is TV, you guys were involved in post to some degree, yes? 
Fleck: Yes, we did our director’s cuts, and then Gary kept us involved as the cuts progressed. We were able to get back into the edit room even after we delivered our cuts, and we continued to give our feedback to guide the cuts. Typically, TV directors give over their cuts, and then it’s “Adios.” But because we worked so long on it and we had a good relationship with Gary and the actors, we wanted to see this through to the end. So we stayed involved for much longer than I think is typical for episodic directing.

Typically, on our films, we’re involved in all the other post departments, visual effects and sound, every step of the way. But on this series, we were less involved, although we gave notes. Then Jac did all the grading and the rest of the show. She kind of took over and was very involved. She’ll have a lot of insights into the whole DI process. (See Sidebar)

Anna, I assume you love post, and especially editing, as you edited your first four features.
Boden: I love post because it feels like you’ve made all your compromises, and now all you can do is make it better. Now your only job is to make it the best version of itself. It’s like this puzzle, and you have all the time in the world to do the writing again. I absolutely love editing and the process of putting your writing/editing brain back on. You’re forgetting what happened as a director on-set and rethinking how to shape things.

Give us some idea of how the editing worked. Did you also cut your episodes?
Boden: No, we hired an editor named Spencer Averick, who worked on our director’s cut with us. Every director was able to work on their director’s cut with a specific editor, and then there was Mark Czyzewski, the producer’s editor, who worked on the whole series after that. We worked with him after our director’s cut period. We went back into the room, and he was really awesome. We edited in New York for a couple of weeks on the director’s cut, and then we were editing in LA after that in the Playtone offices in Santa Monica.

What were the big editing challenges for both episodes? Just walk us through it a bit.
Boden: I’d say that one of the biggest challenges, at least in terms of the director’s cut, was finding the rhythm of that Episode 5 mission. When you have a long action sequence like that, the challenge is finding the rhythm so that it has the right pace without feeling like it’s barraging you the whole time. It needs places to breathe and places for emotional and character moments, but it still has to keep moving.

Another challenge is making sure viewers know where they are in every plane and every battle throughout the series. That ends up being a big challenge in the edit. You don’t realize it as much when you’re reading a script, but you realize it a lot when you’re in the edit room.

Then, for Episode 6, it was about connecting the stories because in that episode, we have three main characters — Crosby, Rosenthal and Egan — and they’re in three different places on three very separate journeys, in a way. Egan is in a very dark place, and Rosenthal is in a dark place as well, but he finds himself in this kind of palatial place, trying to have a rest. And then Crosby’s having a much lighter kind of experience with a potential love interest. The intercutting between those stories was challenging, just making sure that the tones were connecting and not colliding with each other, or if they were colliding, colliding in a way that was interesting and intentional.

How hands on were Spielberg and Hanks, or did they let you do your own thing?
Fleck: We mostly interacted with Gary Goetzman, who is Tom Hanks’ partner at Playtone. I think those guys [Spielberg and Hanks] were involved with early days of prep and probably late days of post. But in terms of the day-to-day operations, Gary was really the one that we interacted with the most.

Boden: One of the most wonderful things about working with Gary as a producer — and he really is the producer who oversaw this series — is that he’s worked with so many directors in his career and really loves giving them the freedom and support to do what they do best. He gave us so much trust and support to really make the episodes what we wanted them to be.

Looking back now, how would you sum up the whole experience?
Fleck: All of it was challenging, but I think the biggest challenge for us was shooting during COVID. We kept losing crew members day by day, and it got down to the point where everybody had to test every day and wait for their results. We would have crew members waiting three to four hours before they could join us on-set, so that really cut the amount of shooting time we had every day from 11 hours down to six.

Boden: Some days we’d show up and suddenly find out an hour into the day that we weren’t going to get an actor that we were planning to shoot with, so we’d have to rearrange the day and try to shoot without that actor. That was a big challenge.

Fleck: The great thing for me was how much I learned. Back in history class, you get all the big plot points of World War II, but they don’t tell you about how big these B-17s were, how violent it was up in the air for these guys. You think of the D-Day invasion when you think of the great milestones of World War II, but these aerial battles were unbelievably intense, and they were up there in these tin cans; they were so tight and so cold. I just couldn’t believe that these kids were sent into these situations. It was mind-boggling.

Boden: I also learned a lot through the process of reading the material and the research about the history of these specific people in the stories. But I’d say that one of the things that really sticks with me from the experience was working with this group of actors. That felt very special.

DP Jac Fitzgerald on Shooting Masters of the Air

Jac, integrating all the VFX with visual effects supervisor Stephen Rosenbaum must have been crucial.
Yes. When I started the show, I imagined that the majority of the VFX work would be done on the volume stage. But then I realized that he had a whole World War II airfield to create on location. Obviously, we had the tower structure for the airfield, and we had two planes, one of which was being towed. And it was all so cobbled together from the outside.

Jac Fitzgerald

The planes looked like they were complete, but they weren’t moving by themselves. They didn’t have engines in them or anything. What was interesting to me was the extent of the visual effects that Stephen had to do on the exteriors. We only had two plane bodies, but at any one time when you see the airstrip, there are 12 planes there or more. So there was a huge amount of work for him to do in that exterior world, which was actually as important as the VFX in the volume.

What about the DI? Where did you do all the grading?
It was predominantly in LA at Picture Shop with colorist Steven Bodner, who did the whole show. And because of the enormous amount of VFX, it was obvious early on that things were going to need to be done out of order in the DI.

At first, they thought that my two episodes [5 and 6] would be the first ones through the DI, as Adam Arkapaw was unavailable to do his episodes [1 through 4] because he was working on another film. The plan was to grade my episodes first and use them to start setting the look for episodes 1 through 4 as well. Then it became clear that the DI schedule would have to adjust because of the enormity of the VFX.

Stephen Rosenbaum spent a lot of time making the footage we’d shot and all the VFX worlds collide. I think he had an extraordinary number of people from vendors around the world involved in the project, so there was certainly a lot of cleaning up to do. We all did a lot of work on the look in the DI, trying to make it as seamless as possible. And then again, because episodes 1 through 4 needed so much VFX work, we did my episodes and then we did 7, 8 and 9, and then we went back to 1 through 4. It was certainly a lot of jumping around. I wish that we could have mapped it all from beginning to end, but it wasn’t to be.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

VFX Supervisor Sam O’Hare on Craig Gillespie’s Dumb Money

By Randi Altman

Remember when GameStop, the aging brick-and-mortar video game retailer, caused a stir on Wall Street thanks to a stock price run-up that essentially resulted from a pump-and-dump scheme?

Director Craig Gillespie took on this crazy but true story in Dumb Money, which follows Keith Gill (Paul Dano), a normal guy with a wife and baby who starts it all by sinking his life savings into GameStop stock. His social media posts start blowing up, and he makes millions, angering the tried-and-true Wall Street money guys, who begin to fight back. Needless to say, things get ugly for both sides.

Sam O’Hare

While this type of film, which has an all-star cast, doesn’t scream visual effects movie, there were 500 shots, many of which involved putting things on computer and phone screens and changing seasons. To manage this effort, Gillespie and team called on New York City-based visual effects supervisor Sam O’Hare.

We reached out to O’Hare to talk about his process on the film.

When did you first get involved on Dumb Money?
I had just finished a meeting at the Paramount lot in LA and was sitting on the Forrest Gump bench waiting for an Uber when I got a call about the project. I came back to New York and joined the crew when they started tech scouting.

So, early on in the project?
It wasn’t too early, but it was early enough that I could get a grip on what we’d need to achieve for the film, VFX-wise. I had to get up to speed with everything before the shoot started.

Talk about your role as VFX supervisor on the film. What were you asked to do?
The production folks understood that there was enough VFX on the film that it needed a dedicated supervisor. I was on-set for the majority of the movie, advising and gathering data, and then, after the edit came together, I continued through post. Being on-set means you can communicate with all the other departments to devise the best shoot strategy. It also means you can ensure that the footage you are getting will work as well as possible in post, which helps minimize costs later.

I also acted as VFX producer for the show, so I got the bids from vendors and worked out the budgets with director Craig Gillespie and producer Aaron Ryder. I then distributed and oversaw the shots, aided by my coordinator, Sara Rosenthal. I selected and booked the vendors.

Who were they, and what did they each supply?
Chicken Bone tackled the majority of the bluescreen work, along with some screens and other sequences. Powerhouse covered a lot of the screens, Pete Davidson’s car sequence, the pool in Florida and other elements. Basilic Fly handled the split screens and the majority of the paint and cleanup. HiFi 3D took on the sequences with the trees outside Keith Gill’s house.

I also worked closely with the graphics vendors since much of their work had to be run through a screen look that I designed. Since the budget was tight, I ended up executing around 100 shots myself, mostly the screen looks on the graphics.

There were 500 VFX shots? What was the variety of the VFX work?
The editor, Kirk Baxter, is amazing at timing out scenes to get the most impact from them. To that end we had a lot of split screens to adjust timing on the performances. We shot primarily in New Jersey, with a short stint in LA, but the film was set in Massachusetts and Miami, so there was also a fair amount of paint and environmental work to make that happen. In particular, there was a pool scene that needed some extensive work to make it feel like Florida.

The film took place mostly over the winter, but we shot in the fall, so we had a couple of scenes where we had to replace all of the leafy trees with bare ones. HiFi handled these, placing CG trees using photogrammetry I shot on-set as a layout reference.

There was a fair amount of bluescreen, both in car and plane sequences and to work around actors’ schedules when we couldn’t get them in the right locations at the right times. We shot background plates and then captured the actors later with matched lighting to be assembled afterward.

Screens were a big part of the job. Can you walk us through dealing with those?
We had a variety of approaches to the screens, depending on what we needed to do. The Robinhood app features heavily in the film, and we had to ensure that the actors’ interaction with it was accurate. To that end, I built green layouts with buttons and tap/swipe sequences for them to follow, which mimicked the app accurately at the time.

For the texting sequence, we set up users on the phones, let the actors text one another and used as much of it as possible. Their natural movements and responses to texts were great. All we did was replace the bubbles at the top of the screen to make the text consistent.

For Roaring Kitty, art department graphics artists built his portfolio and the various website layouts, which were on the screens during the shoot. We used these when we could and replaced some for continuity. We also inserted footage that was shot with a GoPro on-set. This footage was then treated with a rough depth matte built in Resolve to give it a lo-fi cutout feel and then laid over the top of the graphics for the YouTube section.

The screen look for the close-ups was built using close-up imagery of LED screens, with different amounts of down-rez and re-up-rez to get the right amount of grid look for different screens and levels of zoom. Artists also added aberration, focus falloff, etc.
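The down-rez/re-up-rez trick O’Hare describes is easy to illustrate outside any compositing package. Below is a minimal NumPy sketch of the idea — box-average the frame down, then blow it back up with nearest-neighbor so each sample reads as a flat square block, like an LED panel photographed up close. The function name and the `factor` parameter are illustrative, not from the production pipeline, which would also layer in the aberration and focus falloff he mentions.

```python
import numpy as np

def screen_look(frame: np.ndarray, factor: int) -> np.ndarray:
    """Approximate an LED-panel pixel grid: box-downsample the frame,
    then re-enlarge with nearest-neighbor so each low-res sample becomes
    a flat square block. `factor` controls how coarse the grid reads."""
    h, w = frame.shape[:2]
    # Crop so dimensions divide evenly, then average each factor x factor block.
    h2, w2 = h - h % factor, w - w % factor
    small = frame[:h2, :w2].reshape(
        h2 // factor, factor, w2 // factor, factor, -1
    ).mean(axis=(1, 3))
    # Nearest-neighbor up-rez: repeat each low-res sample back out to a block.
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# A 4x4 gray ramp downsampled with factor 2 becomes four flat 2x2 blocks.
frame = np.arange(16, dtype=float).reshape(4, 4, 1)
out = screen_look(frame, 2)
print(out[..., 0])
```

Varying `factor` per shot mimics what he describes: a tighter grid for wide shots of a monitor, a coarser one for extreme close-ups and zooms.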

Any other challenging sequences?
We had very limited background plates for the car sequences. Many had sun when we needed overcast light, so getting them to feel consistent, without visible repeats, took a fair bit of editing and juggling. Seamlessly merging the leafless CG trees into the real ones for the scene outside Keith Gill’s house was probably the most time-consuming section, but it came out looking great.

What tools did you use, and how did they help?
On-set, I rely on my Nikon D750 and Z6 for reference, HDRI and photogrammetry work.

I used Blackmagic Resolve for all my reviews. I wrote some Python pipeline scripts to automatically populate the timeline with trimmed plates, renders and references, all in the correct color spaces, from ShotGrid playlists. This sped up the review process a great deal and left me enough time to wrangle the shots I needed to work on.
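O’Hare doesn’t detail how his scripts work internally, but the core of such a tool is deciding the order things land on the timeline and where each clip gets trimmed. Here is a minimal sketch of just that sorting-and-trimming logic; the field names, the `HANDLES` value and the plate/render/reference ranking are assumptions for illustration, and the actual ShotGrid (`shotgun_api3`) and Resolve scripting API calls that would fetch the playlist and load the clips are omitted.

```python
# Sketch of the ordering/trimming logic a review-timeline script might use.
# A real tool would pull versions via shotgun_api3 and append clips through
# DaVinci Resolve's scripting API; only the pure logic is shown here.

HANDLES = 8  # assumed frames of extra media on each side, trimmed for review

def build_review_order(versions):
    """Sort playlist versions into shot order — plates before renders before
    references — and compute in/out points with the handles trimmed off."""
    rank = {"plate": 0, "render": 1, "reference": 2}
    ordered = sorted(versions, key=lambda v: (v["shot"], rank.get(v["kind"], 3)))
    timeline = []
    for v in ordered:
        timeline.append({
            "name": f'{v["shot"]}_{v["kind"]}',
            "in": v["first_frame"] + HANDLES,
            "out": v["last_frame"] - HANDLES,
        })
    return timeline

# Hypothetical playlist data, deliberately out of order.
playlist = [
    {"shot": "010", "kind": "render", "first_frame": 1001, "last_frame": 1100},
    {"shot": "010", "kind": "plate",  "first_frame": 1001, "last_frame": 1100},
    {"shot": "005", "kind": "plate",  "first_frame": 1001, "last_frame": 1050},
]
for entry in build_review_order(playlist):
    print(entry["name"], entry["in"], entry["out"])
```

With the order and trims computed up front like this, the Resolve side reduces to importing each clip and appending it to the timeline in sequence, which is what makes the review pass fast to assemble.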

I did all my compositing in Blackmagic Fusion Studio, but I believe all the vendors worked in Foundry Nuke.

Writer/Director Celine Song Talks Post on Oscar-Nominated Past Lives

By Iain Blair

In her directorial film debut, Past Lives, South Korean-born playwright Celine Song has made a romantic and deceptively simple film that is intensely personal and autobiographical yet universal, with its themes of love, loss and what might have been. Past Lives is broken into three parts spanning countries and decades. First we see Nora as a young girl in South Korea, developing an early bond with her best friend, Hae Sung, before moving with her family to Toronto. Then we see Nora in her early 20s as she reconnects virtually with Hae Sung. Finally, more than a decade later, Hae Sung visits Nora, now a married playwright living in New York. It stars Greta Lee, Teo Yoo and John Magaro.

Celine Song directing Greta Lee

I spoke with Song about the post workflow and making the A24 film, which is Oscar-nominated for Best Picture and Best Original Screenplay. It also just won Best Director and Best Feature at the Independent Spirit Awards.

How did you prep to direct your first film? Did you talk to other directors?
I talked to some amazing directors, but what they all said is that because only I know the film that I’m making, the way it’s going to be prepped is a process that only I can really know. You need really strong producers and department heads, which I was so lucky to have. I was able to draw on their experience and advice every step of the way.

You shot in Seoul and New York. Was it the same sort of experience or was it different going back to Seoul?
The filmmaking culture is very different in both places. In New York, there is a very strong union, and in Korea there isn’t one. Also, the way that you secure locations is different. In New York, if you want to shoot somewhere, the mayor’s office knows about it. Korea is still a little bit like guerrilla filmmaking. You show up to a location and try to get it right. You can’t really get permits for things in Korea.

The story takes place over three separate timeframes. Did you shoot chronologically?
No. We shot everything in New York City, and then we had a set built for the Skype section. Then we went to Korea, prepped it for another month and shot there for 10 days.

You and your DP, Shabier Kirchner, shot 35mm. What led you to that decision?
It was my very first movie, so I didn’t know how hard it was going to be. I don’t have experience shooting on digital or film. I don’t know anything. I think part of it was first-timer bravery. I don’t know enough to be afraid. That’s where the fearlessness came from. But it was also informed by the conversations I was having with my DP. We talked about the story and how the philosophy of shooting on film is connected to the philosophy of the movie, which is that the movie is about time made tangible and time made visible. It just made sense for it to be shot on film.

Celine Song on-set

You come from the theater, where there is obviously no post production. Was that a steep learning curve for you?
Yes, but you do have a preview period in theater, when you see it in front of an audience, and you keep editing in that way. But more importantly, I’m a writer. So part of post is that I don’t think of the movie as just what I see on screen and all the sound design and every piece of it. To me, it is a piece of text. So just as I would edit a piece of my own writing, I feel like I was looking at the editing process very much like editing text.

Then of course in film, it’s not just the writing on the page. It’s also sound, color, visuals, timing… So in that way, I really felt that editing was about composing a piece of music. I think of film as a piece of music, with its own rhythm and its own beat that it has to move through. So in that way, I think that that’s also a part of the work that I would do as a playwright in the theater, create a world that works like a piece of music from beginning to end.

With all that in mind, I honestly felt like I was the most equipped to do post. I had an entire world to learn; I had never done it before. But with post, I was in my domain. The other thing I really love about editing and VFX in film is that you can control a lot. Let’s say there’s a pole in the middle of the theater space. You have to accept that pole. But in film, you can just delete the pole with VFX. It’s amazing.

Did editor Keith Fraase, who is based in New York, come on-set at all in Korea, or did you send him dailies?
We sent dailies. He couldn’t come on-set because of COVID.

What were the biggest editing challenges on this?
I think the film’s not so far from the way I had written it, so the bigger editing choices were already scripted. The harder bits were things that are like shoe leather — the scenes that hold the movie together but are not the center of the emotion or the center of the story.

One example is when Nora is traveling to Montauk, where we know that she’s going to eventually meet Arthur (who becomes her husband). We were dealing with how much time is required and how to convey time so that when we meet Arthur, it seems like it is an organic meeting and not such a jarring one. I had scripted all this shoe-leather stuff that we had shot – every beat of her journey to Montauk. We had a subway beat; we had a bus beat. We had so many pieces of her traveling to Montauk because I was nervous about it, feeling it was not long enough. But then, of course, when we actually got into the edit, we realized we only needed a few pieces. You just realize that again, the rhythm of it dictates that you don’t need all of it.

Where did you do all the sound mix?
We did it all at Goldcrest in New York.

Are you very involved in that?
You have no idea. I think that’s the only place where I needed more time. We went over budget… that’s a nicer way to say it. That’s the only part of the post process where I really was demanding so much. I was so obsessed with it. The sound designer’s nickname for me was Ms. Dog Ears. I know different directors have very different processes around sound, but for me, I was in that room with my sound designer Jacob Ribicoff for 14 hours a day, five days a week, and sometimes overtime, for weeks. I wouldn’t leave.

I would stay there because I just know that sound is one of those things that holds the film together. Also, with this movie, the sound design of the cities and how different they are and how it’s going to play with the compositions — I had such a specific idea of how I wanted those things to move. Because again, I do think of a film as a piece of music. So I was pretty crazy about it. But I don’t want people to notice the sound design. I want people to be able to feel like they’re actually just standing in Madison Square Park. I want them to be fully immersed.

Obviously, it’s not a big effects movie, but you have some. How did that go?
I think it’s a bit of a subjective thing. Actually, looking at it, I’m like, “Well, does that seem good to you?” I’m showing it to my production designer and my DP and I’m like, “This looks OK to me, but I wonder if it can be better. Would you look at it?” So I relied on many eyes.

I give credit to Keith, but also to my assistant editor, Shannon Fitzpatrick, who was a total genius at catching any problems with VFX and having such a detailed eye. I think she’s one of the only people who really noticed things that I didn’t notice in the VFX. I’m like, I think that looks fine, and then she would point to this one thing in the corner that’s not working. There are people at A24 who’re also amazing at catching sound and visuals because that’s their job. They’ll point out what sounds strange or what looks strange. So you have so many people who are part of the process.

Who was the colorist, and how involved were you with the grading?
It was Tom Poole at Company 3, which is where we edited and did color and everything. I love the process because I showed up after Shabier and Tom had already gone through the whole film and graded it. They did amazing, beautiful work. Then I would come in and give notes about certain scenes and then we’d do them. Of course, while they were grading it, they’d send me stills, and I’d give notes on the stills before going into the suite. Also, Shabier and Tom have worked together a lot, so they already kind of had a rhythm for how they wanted to color the film.

What sort of film did you set out to make?
Since this was the first film I’d directed, I felt like the main goal was to discover the language of my movie. It was beyond just trying to tell the story the best way I could, from the script stage to the post. I think that was the goal throughout. But the truth is that I really wanted the language of the film to be my own language, and I wanted to learn and have a revelation for myself of what my movie is.

I know it is partly autobiographical. How much of you is in Nora?
It really was inspired by a true event of sitting between my childhood sweetheart, who had come to visit me from Korea, and my husband who I live with in New York City. So this is very autobiographical, and the feeling that I had in that very personal moment is the inspiration for the whole film. But then once you turn it into a script, which is an objectification process, and then you turn it into a film with hundreds of people — and especially with the cast members who have to play the characters — by that time it has become very much an object. Then with post, it’s about the chiseling. It’s about putting together an object that is to be shared with the world.

A film is so different from writing a play. Was it a big adjustment for you?
I know theater because I was in it for a decade, probably more, so I knew the very fundamental difference between the way a play is made versus how a film is made. For example, I was taught that in theater, time and space is figurative, while time and space in film is literal. So that means there are different kinds of strengths and weaknesses in both mediums when it comes to telling a story that spans decades and continents. And, in this case, because my joke is always that the villains of the story are 24 years and the Pacific Ocean, it actually needs the time and space to be seen literally… because there needs to be a reason why these two lovers are not together. So the children have to be literally there, and Korea and New York City have to feel tangible and literal.

I assume you can’t wait to direct again?
Oh, I can’t wait. I want to wake up and just go to set tomorrow. That’s how I feel. I’m trying to shoot another movie as soon as I can.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Ketama Collective Merges to Form Experiential Studio Bermuda

Ketama Collective, part of the team that won the Grand Prix for Creative Data at Cannes last year, is merging with its two sister companies, Bitgeyser and Pasto, to form one integrated digital creative, production and technology resource known as Bermuda.

The new entity, which has opened a US office in Miami, spans everything from content production for brands to experiential executions and activations, extended realities, metaverse executions, meta-human creations, AI infusions and prototyping, as well as CG animation and design. It is billed by its founders as a creative technology lab that’s focused on offering proficiencies and specializations that global brands are searching for in today’s social media and experience-based landscape.

According to Nico Ferrero, CEO of Bermuda (and formerly MD at Ketama), this move is a natural evolution: Ketama, Bitgeyser and Pasto have frequently collaborated on complex projects for a roster of global clients, he points out. Collectively, their work has been recognized by the industry’s leading awards shows, including a Grand Prix and Gold Lion at Cannes for Stella Artois and GUT, a Silver Lion for LATAM Airlines and McCann, and a Gold Clio for “The Tweeting Pothole” for Medcom and Ogilvy, to name a few.

As it seeks to expand its footprint in the US market beyond its original location in Buenos Aires, Bermuda has lined up a national sales operation. On the East Coast, Bermuda will be represented by Minerva, led by Mary Knox and Shauna Seresin. Bermuda has also signed with Marla Mossberg and N Cahoots for West Coast representation and Isabel Echeverry and Kontakto for the US Hispanic market.

Bermuda is led by a group of bilingual executives from the three merged companies whose backgrounds encompass everything from agency creative, production and software engineering to experience design and fabrication. In addition to Ferrero, the company’s leaders include chief creative director Santiago Maiz, head of production Agustín Mende, regional new business director Matias Berruezo and CFO Juan Riva.

“Bermuda has opened for business backed by a combined 30 years of experience creating digital content,” Ferrero explains. “We now have a unified team of 50 experts all under one roof: digital artists, AI engineers, animators, industrial designers, software and fabrication engineers and creative technologists who specialize in multimedia executions, as well as specialists in augmented, virtual and mixed reality content; metaverse executions; and the use of blockchain.”

The new company was born after a whirlwind 2023: In the US, experiential/digital and fabrication projects staged in New Orleans, Miami, San Diego and Chicago were created for such agencies as Area 23, David and McCann, and for clients such as Google, Mastercard and pharmaceutical company Boehringer Ingelheim. It also marked the debut of a 52-episode, five-minute show, Dino Pops, that was created in hyper-real 3D animation fully executed in Unreal Engine for NBC’s streaming platform Peacock.

As a multi-brand platform, Bermuda has developed unique experiences with personalized content for literally hundreds of products distributed in Tetra Pak packaging. To date the studio has created more than 1,000 digital experiences representing over 150 household brands marketed across 28 countries.

“Our goal is to go even bigger, with more work from the US market, as we flex our muscles across all of our disciplines,” Ferrero states. “Operating as Bermuda will allow us to produce projects on a larger scale while working in different countries at the same time and while handling more complex and challenging projects. And it allows our clients, both on the agency and brand sides, to consolidate the number of entities they have to deal with while making internal collaboration easier and more efficient.” Besides the newly opened base in Miami, Bermuda currently has its HQ in Buenos Aires and offices in L.A. and Colombia to oversee projects throughout the Americas.

As for how they came up with the name, “It’s the idea of the unknown, this mysterious world,” he says, referring obliquely to the legendary Bermuda Triangle. “When you arrive at an idea, it basically comes from a magical place. How well you execute that idea, and the process by which you do it, sums up what Bermuda means to all of us.”

 

Felix Urquiza

AFX Creative Adds Felix Urquiza as ECD

Creative studio AFX Creative has beefed up its VFX team with the addition of executive creative director Felix Urquiza. He joins with nearly 20 years of experience in the field, working at companies like Method Studios, The Mill and Team One, where he headed up the latter’s VFX/CG division TiltShift under the Team One/Publicis Groupe USA umbrella.

In his new role at AFX, Urquiza will lead the creative team and develop new strategies. In addition, he will work closely with the studio’s managing director, Nicole Fina, to introduce new clients to AFX and expand its services beyond what it currently offers.

“My goal is to bring a fresh perspective, something more personal and meaningful that will resonate not only with our internal teams but also our clients,” Urquiza notes. “Our work and capabilities are already there, and I am here to help take it to the next level. However, what’s more important to me is bringing an outside perspective to AFX. This will push our team and clients to a higher level of excitement and commitment, elevating our passion and vision of creativity.”

Throughout his career, having an outside perspective is what has propelled Urquiza from being a go-to VFX artist to a creative director and studio director. “I would describe my visual style as modern, clean-cut and pristine,” he explains. “Throughout my career, I have developed both technical and creative skills, and as a result, have become proficient in several areas, including building decks and treatments, writing and designing my own treatments for pitches, and leading the team.”

Early on, Urquiza was inspired to pursue VFX after seeing two James Cameron films. “When I was around 10 to 12 years old, there were two movies that blew my mind,” he recalls: “The Abyss and Terminator 2: Judgment Day. In The Abyss, there is a moment when a ‘water’ creature appears and forms into a girl’s face. I couldn’t understand how they did that. Ever since then, I have been fascinated by movies and how they bring amazing things to life using computers. In my sophomore year of high school, I took an elective for 3D graphics, and on the very first day of that class, I knew this is what I wanted to do. I started researching and connecting the dots, laid out my plan and moved to California. The rest is history.”

Urquiza has used that inspiration while working on projects for Activision, Nike, Bacardi, Samsung, Apple, Lexus, GM, Toyota and many more. In addition, he’s collaborated with agencies such as Team One, Saatchi & Saatchi, Leo Burnett, BBDO, McCann, Omnicom and Argonaut.

What he considers to be his primary career highlights include working on his first-ever film, Pirates of the Caribbean: At World’s End; doing a shoot with Zack Snyder during the opening weekend of 300; working on the game XCOM: The Bureau; and being nominated twice for a VES award.

“During my time working at places like The Mill and Method, I gained a lot of experience in understanding what it takes to achieve high-quality work and striving to be the best in the industry,” he says. “I also learned the importance of committing to providing a personalized experience for our clients. At TiltShift, I gained valuable insights into the business side of things, such as navigating holding companies and how the decision-making process impacts the overall success of a business. Drawing from these experiences, I am confident in my ability to set high standards for creative output, collaborate effectively with clients and bring strategic ideas to the table on the business end of things.”

 

Why Egress Fees are Holding Back M&E

By James Flores

Hollywood has the reputation of being an industry at the forefront of technology, thanks to the advancements of digital filmmaking, VFX and virtual production. But there’s one area where the media and entertainment industry falls short of other industries: new technology powering how files get shared and distributed.

Instead of simply uploading the digital assets of a shoot to the cloud and working remotely, many production companies are still moving physical hard drives as if the internet had never been invented. This is because of a hidden cost involved with the major cloud providers — egress fees (aka download fees). These fees can quickly spiral out of control when a studio tries to embrace the cloud model for all digital assets. Because studios don’t want to run up expensive bills with cloud providers, they’ve now built an entire ecosystem of workarounds to get video files off of sets and into post.

These ecosystems are draining resources by adding complication and subtracting budget, and they are ultimately just as damaging as paying egress fees. The M&E field is small but produces incredible amounts of data. The status quo cloud business model involving egress fees is holding our industry back from taking full advantage of the cloud and unlocking new innovations.

What Is Cloud Egress?
One reason that major cloud providers generate such massive profits is the number of fees and additional charges that they tack on, oftentimes without transparency. This results in huge surprise bills at the end of the month. Egress fees are the cost to customers whenever they move their data files out of the provider’s platform. The average egress fee is $0.09 per gigabyte transferred from storage, regardless of use case. But specific costs are not always apparent and can be difficult to predict. In fact, there’s an entire subindustry of consultants and service providers that manage cloud costs on an organization’s behalf (collecting their own fee in the process). The various fees and charges that don’t seem like much at first glance — or that are presented as just the cost of doing business — quickly add up within common M&E workflows.

The average file size from shooting a single take of a scene is several gigabytes, meaning that even one day of shooting creates a huge price tag anytime footage gets moved in and out of the cloud for multiple rounds of digital effects and editing. This makes planning expenses in advance extremely difficult, as filmmakers can’t know how much it will cost until they’ve uploaded their work to the cloud and started editing. With this virtual roadblock in place, it’s not surprising that many M&E companies feel that it’s unfeasible to embrace the cloud.
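The arithmetic above can be made concrete with a rough back-of-the-envelope estimate. The per-gigabyte rate is the $0.09 average cited earlier; the take size, takes per day and number of editorial/VFX rounds are purely illustrative assumptions, not figures from any provider’s price sheet:

```python
# Rough egress-cost estimate for pulling raw footage out of cloud storage.
# All shoot-size numbers below are hypothetical assumptions for illustration.

EGRESS_PER_GB = 0.09   # USD per GB -- the average rate cited in the article
GB_PER_TAKE = 5        # assumed average size of a single raw take
TAKES_PER_DAY = 200    # assumed number of takes on a busy shoot day
ROUNDS = 4             # assumed rounds of VFX/editorial downloads

def egress_cost(takes, gb_per_take=GB_PER_TAKE, rounds=ROUNDS, rate=EGRESS_PER_GB):
    """Total egress charge if every take is downloaded once per round."""
    return takes * gb_per_take * rounds * rate

one_day = egress_cost(TAKES_PER_DAY)
print(f"One shoot day of egress: ${one_day:,.2f}")  # 4,000 GB moved -> $360.00
```

Even under these modest assumptions, a single shoot day generates hundreds of dollars in egress charges every time the footage cycles through post, which is why the bills become hard to predict across a full production.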

The Production Company Hard Drive Ecosystem
In the absence of cloud storage, an ecosystem of hand-delivering hard drives has sprung up to move and protect video files, which is not necessarily beneficial to a production company’s finances. Here’s how it works:

A specialized courier industry exists to serve production teams that need to physically send files to the right location. There are a number of issues with this approach. First, it creates a delay between filming and post production that can be anywhere from a few hours to several weeks, depending on the distance between the shooting location and the editing rooms.

Second, this process generates unnecessary costs. What immediately comes to mind are the packaging, courier and other travel fees from carrying those files around. But there are hidden costs as well. Companies will have to purchase multiple hard drives as the devices wear out, and they must keep up to dozens of drives on hand at set locations, depending on the duration of a particular shoot. And if those drives get lost or damaged, then the entire cost of shooting is wasted, and expensive reshoots become necessary.

Finally, those digital assets on hard drives aren’t necessarily safe. Transporting hard drive-stored work means drives can be lost, held up by a foreign country’s customs department or even stolen if the production is high-profile enough. This adds even more cost for security and transportation experts to protect files against each of these threats.

There will always be some need for hard drives on shoots, such as in remote locations without internet connectivity, where temporary storage is required. However, looking at the costs generated by this on-premises, physical-transfer ecosystem, it seems fair to ask what it would look like if that wasn’t the case.

What Could Happen Instead
What’s next is the advent of cloud workflows. Cloud technology has reshaped how most businesses operate. The same is true for the M&E industry. Many different technologies offer the ability to take data (media) directly from a camera’s encoder and move it to the cloud. These camera-to-cloud technologies often create their own data silos; data can only go into the given vendor’s cloud storage, and moving it to other tools invokes costly egress charges. With cheaper cloud egress fees — or even no cloud egress fees at all — production teams could more readily use this cloud workflow, opening up room in studio budgets and speeding up their production time thanks to the elimination of the hard drive ecosystem. This could level the playing field for smaller production companies, as they’d be able to film content much more efficiently.

Companies could focus security investments into digital security, which can be much cheaper than physical methods. Instead of trained guards, companies could rely on encrypted backups and object lock, wherein a user can designate certain objects to be immutable, meaning they cannot be altered or deleted by outsiders and thus are safe from ransomware. They’d also be free to move a lot more post production tools and editing techniques to the cloud, and they could pick and choose where they want to store data or which tools they want to use without worrying about what cloud provider they’d be stuck with.
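To make the object-lock idea concrete, here is a hypothetical sketch of the kind of configuration a studio might apply to a footage bucket. The bucket name, retention period and variable names are all made up for illustration; the dictionary shape follows the pattern of S3-compatible Object Lock APIs, where a default retention rule makes every uploaded object immutable for a set period:

```python
# Hypothetical sketch (not a specific provider's recommended setup):
# a default Object Lock rule so uploaded footage becomes immutable --
# protected from ransomware, tampering and accidental deletion.

RETENTION_DAYS = 90  # assumed retention window for raw footage

def object_lock_config(days):
    """Build an Object Lock rule. COMPLIANCE mode means no one -- not even
    the account owner -- can delete or overwrite an object until its
    retention period expires."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",
                "Days": days,
            }
        },
    }

config = object_lock_config(RETENTION_DAYS)
# With an S3-compatible SDK such as boto3, this dict would be passed along
# the lines of:
#   s3.put_object_lock_configuration(Bucket="dailies-bucket",
#                                    ObjectLockConfiguration=config)
```

The design point is that immutability is enforced by the storage platform itself, so the protection travels with the data rather than depending on guards, couriers or locked cases.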

It’s Time for a Change
With the WGA/SAG-AFTRA strikes and negotiations thankfully behind us, there’s going to be pressure on everyone to get new films and shows finished as soon as possible. These condensed timelines mean it’s now time to talk about why it’s acceptable to waste so much money and time on outdated manual processes. This question is not just for the M&E industry but for the cloud industry as well. By keeping exorbitant egress fees in place, cloud providers hurt their own businesses and limit production companies’ potential. Eliminating them, or even simply reducing them, would be a net benefit for everyone involved.


James Flores has been a working video editor/assistant editor and DIT for over 25 years. He is currently product marketing manager, M&E, at Backblaze.

Sunday Ticket

Creating Sounds for NFL Sunday Ticket Super Bowl Spot

Recreating what a flying football player might sound like as a bird when it lets loose with a caw isn’t your usual Super Bowl spot brief… but that was the heart of what Alt_Mix had to do when coming up with the sound design for Migration, the NFL Sunday Ticket ad that ran right before kickoff of Super Bowl LVIII.

Conceived by YouTube Creative Studio and produced by MJZ, the spot shows what happens when football players take to the skies in their annual, end-of-season migration. YouTube Creative Studio turned to Alt_Mix, a New York-based audio post studio founded by veteran mixer Cory Melious, for the second year in a row to provide complete audio mixing and sound design services for their Super Bowl commercial.

Migration opens with a bird watcher raising binoculars to his eyes. “Beautiful, isn’t it,” he says softly as an orchestral score from music studio Walker rises in the background and we hear the far-off cawing of the flying gridsters. “Each year they must follow the path of migration, but never fear, they’ll be back,” he says as we see the players swooping in to grab a fish from a lake or alighting gently just outside a cabin.

Alt_Mix handled all aspects of the spot’s final audio, including sound design from the ground up, voiceover recording and mix.

The greatest challenge was figuring out what a football-playing “birdman” should sound like. “There was a lot of testing and experimentation in coming up with just the right sound for their calls,” says Melious, who’s something of an amateur birder himself. “The creative team had a really good idea of what they wanted us to achieve, and it was our job to help them articulate that with sound. We did lots of variations, and in the end, we mixed humans making bird sounds with actual bird calls to get just the right pitch and tone.”

The spot features a number of players, such as D’Andre Swift, the running back for the Philadelphia Eagles; Baltimore Ravens tight end Mark Andrews; and Seattle Seahawks wide receiver Tyler Lockett. Also appearing at the end of the spot, watching Sunday Ticket in the cabin scene, are the popular YouTube Creators Deestroying, Pierson Wodzynski and Sean Evans.

There was an interesting interplay between the artists doing the edit (Joint), effects and finishing (Blacksmith) and the soundscape his studio created, Melious adds. “They recognized that the sound had to be strong in order to sell the idea of a football player-sized bird that migrates.”

For instance, they were editing the Tyler Lockett scene with no sound on him. “But once they laid the soundtrack on, it became a laugh-out-loud moment,” says Melious. “For the story to work, we needed to connect the details seen in the visuals to make them believable, so we worked really hard to bring those tiny movements alive with sound, like when the tree branch snapped after a player landed on it, or the dust and debris kicked up when they landed by the cabin. It’s all about elevating the viewers’ experience.”

 

Unsaid Helps Celebrate Losers for M&M Super Bowl Spot

This Super Bowl, M&M’S, BBDO NY and design studio Unsaid Studio have teamed on a spot that celebrates the losers. The brand is consoling — or even trolling — them with the “Almost Champions Ring of Comfort,” which is studded with diamonds made from M&M’S peanut butter. Its sides feature a three-leaf clover and “2>1,” while a glittering M&M proudly flashing two fingers for second place rests in a bed of rubies on the top. Inside, wearers can find a single peanut butter M&M sitting in a mini stadium bezel.

The 30-second M&M spot stars such runners-up as Scarlett Johansson, Dan Marino, Terrell Owens and Bruce Smith, as well as a close-up displayed on M&M’s Jumbotron in Times Square.

Unsaid Studio was responsible for creating the bling for the ring, and they had fun with it, riding the line between epic and cheesy. Because the close-up features the ring alone, floating dramatically in space, the storytelling had to be based purely on the design. The studio used Maxon Cinema 4D and OctaneRender, which helped Unsaid render the crystals and metals in the ring, as well as produce smoke and additional special effects.

To engineer the most hype-filled-yet-sarcastic atmosphere possible for the close-up film, the team used all the tools in the toolbox: lens flares, shining sparks, diamonds refracting rainbows, embers and smoke. To keep the momentum going through the video, Unsaid pushed the camera movement and the pacing of the edit to be as creative as possible. Sweeping shots and animated lighting are paired with an incredibly dramatic track.

Sarofsky Creates Title Sequence for Marvel’s Echo

The first series under the Marvel Studios Spotlight banner, Marvel’s Echo follows Maya Lopez as she faces her past, reconnects with her Native American roots and embraces the meaning of family and community in the hope of moving forward. The series is directed by Sydney Freeland (who also produces) and Catriona McKenzie, with Kevin Feige, Brad Winderbaum, Stephen Broussard and Richie Palmer producing.

The producers called on Chicago’s Sarofsky to create Echo’s main title sequence. Creative director Stefan Draht and producer Kelsey Hynes led the project for Sarofsky, which created a 90-second sequence that is scored with the anthemic track “Burning” from Yeah Yeah Yeahs.

For the main title’s storytelling foundation, Freeland and the series’ producers wanted to establish a strong sense of place, emotionally connecting Tamaha, Oklahoma, and New York City. Next, to introduce Maya, her ancestors and Kingpin, the briefing called for themes of duality, tension, danger and Maya’s deafness and use of American Sign Language (ASL).

“One of the first visual themes we explored was using magical reality to express duality – using imagery that was sometimes consonant and other times dissonant,” explains Draht. “By blending various footage sources into visuals that stand outside of literal reality, we were able to bring a sense of mystery to the images.”

Working with designers and animators, including Ariel Costa, Matthew Nowak, Jens Mebes, Dan Moore, João Vaz Oliveira, Mollie Davis, and Andrei Popa, the Sarofsky team also developed a second visual theme: using hands and shadows in their storytelling. “Hands play an essential role in the series as Maya’s means of communicating using ASL – and in the telling of the creation story of the Choctaw Nation, which is told using shadow puppets in the series,” says Draht. “Developing these visual motifs amplified the core story and characters while allowing us to add meaning and tone. We use shadows to express history, danger and Maya’s ancestral connections.”

In Sarofsky’s contributions to Marvel Studios projects, the design pipeline involves visual effects, color and finishing. For Echo’s main titles, the team used Adobe After Effects with Maxon Cinema 4D.

“Because the meaning and structure of shots was so specific and carefully designed, we leaned quite heavily on intense compositing and reconstruction of images using Adobe After Effects,” says Draht.

With most shots consisting of a combination of show footage, stock and original designs, the team used Cinema 4D to recreate scenes in three dimensions, projecting 2D imagery against CG elements. “This approach aided in building shots with camera motion and a dramatic sense of depth,” explains Draht.

As the final touch, artists used Blackmagic DaVinci Resolve to align the color palette across every shot and apply a signature look to the sequence.

“This is one of my favorite types of projects; it exists somewhere in the middle between pure design and visual effects,” concludes Draht. “This series has been produced with so much attention to detail. Being allowed to explore and create something so fantastical to introduce the project is a great honor.”

 

Oscar-Nominated DP Rodrigo Prieto on Killers of the Flower Moon

By Iain Blair

Martin Scorsese and DP Rodrigo Prieto, ASC, AMC, first teamed up on The Wolf of Wall Street and followed that with Silence and The Irishman. Now they’ve collaborated on Killers of the Flower Moon, an epic Western and crime drama that tells the tragic true story of the infamous Osage murders of the 1920s. When the Osage Native Americans strike oil on their reservation in Oklahoma, a cattle baron (Robert De Niro) plots to murder tribal members and steal their wealth, even while he persuades his nephew (Leonardo DiCaprio) to marry an Osage woman (Lily Gladstone).

Rodrigo Prieto

I spoke with Prieto about shooting the film, which earned 10 Oscar nominations — including Best Picture, Best Director and Best Cinematography  — and how he collaborated with Scorsese on the look.

While this is another epic story from Marty, it’s also quite intimate. How did you collaborate on finding the right looks and tone?
You’re right in that it’s both epic in scope but also very intimate, and it took us quite a while to figure out both the looks and the focus of the story. The script was reworked quite a lot. It was the same with the look. I tested out all sorts of lenses and had different ideas about negatives and looks — even pinhole photography and infrared — all to see what felt right. We didn’t end up using many of those ideas in the movie, but they ultimately influenced other things we did do. The main idea that we ended up embracing was the visual representation of the different stories, which was most obviously manifested in the newsreel footage we shot.

Is it true you even used a vintage camera for those scenes?
Yes, we used a 1917 Bell & Howell camera that Scorsese owns. We oiled it up, got it back into mint working condition, hand-cranked that camera and shot the scenes on black-and-white negative.

Did you do a lot of research?
Yes, and a lot of the rest of the look of the film is based on the idea of how people are photographed and how they remember things. I did a lot of research on the start of color photography, and we created a LUT based on Autochrome photography, which the Lumiere brothers invented around 1903. That was one of the first techniques used to create color photography, and we emulated the feel that I had looking at what are basically glass transparencies that have a very specific feel to the color.

That was the way we represented the descendants of the American immigrants, the white people and characters like Ernest (DiCaprio) and Hale (De Niro). Their part of the story has that look, but for all the Osage scenes, when they’re alone and not with white people, we photographed them on film negative. The look for that was based on 5219 stock and how that film negative looks on Vision film print. It’s a very naturalistic look. The colors are what we perceive as the colors of nature and underscore the Osage people’s connection to the land and nature.

The third look of the film is ENR-based, which we used toward the end, and it begins with the explosion of Mollie’s sister’s home. That’s when things really start unraveling and when Ernest’s guilt starts really kicking in… his confusion gets worse and worse, and she gets sicker and sicker. To illustrate all that, we transition into a much harsher look. I also used the ENR look for the last part of The Irishman. The feel of it is more desaturated in terms of color and higher contrast, and it looks a little nastier as it enhances the film grain even more. That’s the basic arc of the look of the movie.

When did you start working with colorist Yvan Lucas?
We met on Oliver Stone’s Alexander, which we color-timed in Paris at Éclair, and I fell in love with his work. We became good friends. For me it was a revelation the way he did digital color grading, which is really based on photochemical color grading in terms of his process. He basically uses printer lights, which is a very comfortable method for me. Instead of manipulating highlights and lowlights and midtones on every shot, which is essentially creating a new LUT for every shot, we just create a LUT and use printer lights. That’s why LUTs are so important to me because it’s really like your negative, even if you’re shooting digital. Since 99.9% of prints are actually digital DCPs, the LUTs become a crucial part of the feel of a movie.
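Lucas’ printer-lights approach can be pictured with a little arithmetic. The sketch below is purely illustrative — not his actual Baselight setup — and assumes the common approximation that one printer point equals roughly 1/12 of a stop of exposure. The show LUT stays fixed; each shot only carries per-channel point offsets applied before it.

```python
# Illustrative sketch of printer-lights grading (hypothetical, not the
# real pipeline): one fixed "show LUT" plus per-shot point offsets.

def show_lut(rgb):
    """Stand-in for the fixed show LUT -- here just a simple 2.2 gamma."""
    return tuple(min(1.0, max(0.0, c)) ** (1 / 2.2) for c in rgb)

def printer_offset(rgb, points_rgb, stop_per_point=1 / 12):
    """Apply printer-light points as per-channel exposure gains before
    the LUT. One point ~ 1/12 stop is a common rule of thumb."""
    return tuple(c * 2 ** (p * stop_per_point) for c, p in zip(rgb, points_rgb))

# Per-shot trims are just point offsets; the LUT itself never changes.
shot_a = show_lut(printer_offset((0.18, 0.18, 0.18), (2, 0, -1)))  # warmer trim
shot_b = show_lut(printer_offset((0.18, 0.18, 0.18), (0, 0, 0)))   # neutral
```

Because every shot passes through the same LUT, the LUT behaves like the negative: the per-shot decisions reduce to a handful of exposure numbers rather than a new grade each time.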

How did you make all your camera and lens choices?
We shot it 35mm on ARRICAM LTs and STs, with the Sony Venice 2 for the digital scenes at dusk and night. The lenses were Panavision T Series anamorphic, and they were adapted for us by Panavision’s lens guru, Dan Sasaki. He detuned them a bit for us and also added a special coating that made the flares warmer than usual. I thought that was important because sometimes the blue flare, which is a characteristic of many anamorphics, feels too modern to me.

What were the main challenges of shooting this?
The main challenge was learning about the Osage culture, about Oklahoma at that time and the attitudes of whites toward the Osage… and then finding ways to represent all that visually. Scorsese designed a lot of shots to give that sense you mentioned — a big story but also an intimate look at the way the characters are living in the moment. So when we introduce Ernest and he gets off the train, we do this big, swooping shot of the station that starts wide on a crane and then swoops in on Ernest. That’s Scorsese’s grammar, how he expresses himself, and I find it endlessly fascinating, and it’s so enjoyable to execute his ideas.

Basically, he designs the shots to give audiences all the information they need; you see the station and the town name, Fairfax, and then the character he’s introducing. Then there’s a drone shot that starts with the car — we see Ernest driving with Henry Roan (William Belleau). It’s a red car on a green background, the same color contrast that many photographers used with Autochrome, so it’s a very conscious choice and design. Then we pull away and see the landscape with the oil. Again, it’s a way of looking at both the macro story and the intimate one.

What about dealing with all the VFX?
I’ve worked with VFX supervisor Pablo Helman before on other films, and the big challenge here was dealing with all the set extensions. We shot in Fairfax and all around the area, but the main street in Pawhuska, where we also shot, was better because it had more older buildings. We had bluescreen at the end of both streets, and Pablo extended both. That was a challenge in terms of the light and the bluescreen shadows. We also had to do extensions for the drone shots of Fairfax and the surrounding area, but most of the VFX involved clean-up and removing modern things. It helped a lot that not only were we shooting in the real locations, but that many of them hadn’t really changed. We didn’t have to do much work, and we didn’t need tons of crazy, spectacular VFX.

Tell us about the DI.
All the careful work we did with the LUTs in prep was essential, as Scorsese and his editor Thelma Schoonmaker [ACE] spent many months cutting all the material so they could get used to the look. It was crucial that the dailies they were editing with were as close as possible to what I intended. Yvan also supervised the dailies workflow.

We adjusted stuff in the DI, but it wasn’t a big departure from the dailies. We matched all the shots for continuity. The way we work is that Yvan does his pass first to match it all, and then if I want the scene to be darker, there’s an offset for everything since it already matches. That makes the DI work pretty simple. It also gives us time to do a window here, a window there.

For this film I was going for a higher level of contrast than in the others I’ve done with Scorsese. We really wanted to represent the darkness that’s happening in the story. The lighting helped us do that. The chiaroscuro was much stronger than in the other films, especially toward the end. But sometimes I did the opposite, like in the courtroom scene. The set was very light in color, and it was bright, overexposed, harsh light to underscore the inner turmoil. Sometimes you have to use ugly shots and ugly lighting to support the emotions of the story.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Post and VFX Houses Team for CrowdStrike Super Bowl Spot

For the second consecutive year, CrowdStrike is airing a spot during the Super Bowl. This year’s ad features CrowdStrike’s AI-powered cybersecurity heroine Charlotte as she tackles modern cyber adversaries and stops breaches. The Future brings a stylized spin to a classic Western tale to show how CrowdStrike is securing the future of the digital frontier. The ad will run during the two-minute warning in the first half of the game.

Last year’s ad looked back at how the company would have stopped history’s most infamous breach: The battle of Troy. This year’s commercial is set in a futuristic Wild West and tells the story of good versus evil as four notorious nation-state and e-Crime cyber adversaries ride into town looking to cause turmoil and disruption. Armed with the power of the AI-native CrowdStrike Falcon XDR platform, Charlotte rapidly detects the threats and stops them.

The 30-second commercial, shot on the Sony Venice camera, was helmed by director Tarsem Singh of RadicalMedia and produced by CrowdStrike’s newly formed internal creative agency, Redbird, in collaboration with Howdy Sound, Lime Santa Monica, Nice Shoes, RadicalMedia, Union Editorial and Zoic Studios.

“It was great to be reunited with the amazing CrowdStrike team. They come to the table knowing what they want, while giving me room to experiment,” says editor John Bradley of Union Austin, who also cut last year’s Troy. “The offline rough-cut portion of any VFX-driven project requires you to use your imagination, and this was a much more effects-heavy spot than Troy. Every shot had several layers of VFX work to be done.” Bradley cut The Future on Adobe Premiere.

For its part, Zoic used a broad spectrum of software and tools but mainly relied on SideFX Houdini, Autodesk Maya and Foundry Nuke to achieve the majority of the VFX lift.

“We used the practical foreground set buildings and built a full-CG world/environment around them,” says Zoic EP Sabrina Harrison-Gazdik. “Set pieces were enhanced and modified as needed to tie them into the environment. The extensive environment build is implemented into all exterior shots in the spot(s).” The interior saloon shots also required VFX work across the scenes — concepting, building and animating tabletop game holographics, the holographic treatment for the piano player and dancers, the sheriff’s robot arm, the robot bartender — and each shot included one or more VFX elements.

Nothing was captured on a volume or against greenscreen; everything was shot on location. “VFX worked off hero takes and/or clean plates where able to integrate CG into practical locations,” explains Harrison-Gazdik.

“Originally, we were going to grade without the alpha channel mattes for every shot, but as we were grading, there was one particular shot where it was difficult to grade the adversaries and the footage separately,” recalls colorist Gene Curley of Nice Shoes. Curley worked on a FilmLight Baselight and had alpha channel mattes for some shots. “Zoic was able to quickly render these mattes during the color session, and it made the shot much easier to grade.”
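The reason an alpha matte makes that shot easier can be shown with simple per-pixel arithmetic. This is a hypothetical sketch, not the Baselight setup Curley used: with a matte, a correction aimed at the CG adversaries is mixed in only where the alpha is solid, leaving the background plate’s grade untouched.

```python
# Hypothetical sketch: using a VFX alpha matte to confine a grade to
# the CG element, pixel by pixel (one channel shown for clarity).

def grade_with_matte(plate, cg_grade, alpha):
    """Blend the graded value into the plate only where alpha marks CG."""
    return [g * a + p * (1 - a) for p, g, a in zip(plate, cg_grade, alpha)]

plate  = [0.30, 0.30, 0.30, 0.30]   # background pixels, already graded
graded = [0.50, 0.50, 0.50, 0.50]   # same pixels after the adversary grade
alpha  = [1.0, 0.6, 0.0, 0.0]       # matte: 1 = CG, 0 = background

# Only the matted pixels pick up the new grade; the rest stay put.
result = grade_with_matte(plate, graded, alpha)
```

Without the matte, the colorist would have to isolate the adversaries with windows or keys drawn against the footage itself, which is exactly the situation the rendered mattes resolved mid-session.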

When it came to sound design, “The call was for futuristic sci-fi characters in an Old West environment,” says Dusty Albertz of Howdy Sound. “I think we succeeded in crafting a soundscape that is both believable and fun.”

Mixer Zac Fisher of Lime adds, “It was important to find the right blend between the nostalgic undertones of the old-timey sound bed and the futuristic elements of the sound design. CrowdStrike’s collaboration with exceptional composers and sound designers made for an ideal mixing experience. I wanted to make sure I focused on enhancing the comedic elements to capture the audience’s attention. In a spontaneous moment during the final stages of the mix, there was a suggestion to include a whip crack to finish the spot. Everyone in the room ended up loving it, and personally, it was my favorite touch of the project.”

Union Austin EP Vicki Russell says this was one of the smoothest processes she’s ever experienced, “especially in the realm of VFX-heavy Super Bowl spots, where there might be added pressure. It’s been a sheer joy, with all the partners working so well together. CrowdStrike/Redbird consistently provided great feedback and maintained a very inclusive, appreciative vibe.”

You can watch The Future, which features an original score by Douglas Fischer, on YouTube before it airs.

Digital Domain’s VFX for Chafa and More for Marvel’s Echo

For the premiere season of Marvel Studios’ Echo, the visual effects team at Digital Domain helped visualize the origin story of the Choctaw people, a Native American tribe from the southeastern United States. To bring the story and the opening scene to the screen, Digital Domain worked closely with production and the Choctaw Nation to ensure the visuals and storytelling held true to Choctaw culture.

The series opens in a dark cave located inside the Earth, where we see Chafa, the first Choctaw, covered with clay, emerge from a glowing, swirling pool of magical blue water. Chafa exits the pool and drinks the water. As she stares down at her hands, tattoos glow and swirl onto them, and more clay people begin to appear. A biskinik bird then lands in her palm before flying away as an earthquake begins and the cave starts to collapse.

Chafa holds up the cave as the clay people seek safety. The cave collapses and in the next scene, Chafa and her people are seen in a field with trees alongside a grass mound. The clay begins to crack and shed from their bodies, and they are revealed in human form. The scene closes as the mighty Chafa leads her people.

The Cave
To bring this sequence to life, Digital Domain’s team of VFX artists focused on several key areas, including the cave, the pool, the clay people and the biskinik bird. The team tackled the cave environment first. Artists digitally built the cave asset, and because of the lighting and ambient atmosphere in which the cave was shot, they replaced the majority of the set, including the columns. This sequence also required a fair amount of roto work, as the clay people needed to be roto’d out to recreate the cave.

The Pool
To create the pool in the show, the team had to replace and simulate the practical pool. Artists created many iterations and gave the pool a celestial, galaxy-like, swirly design. The sequence with Chafa emerging from the water was shot practically, so the Digital Domain team replaced the actor’s body, excluding her face. The actor could be seen waiting for a cue to emerge, so the team painted her out. Artists also simulated the actor’s body exiting the pool and the water that dripped from her.

Clay Transformations
The VFX team created two digidoubles showing the transformation as the characters shed the clay, revealing their human form. The Digital Domain team worked closely with the production team for this scene because the way the clay dried, cracked and peeled from the skin was art-directed. The close-up shots of the hands were complex due to the layers and lines within the skin. Additionally, the team created the mound, the grass and the trees for the background of the environment.

The VFX team at Digital Domain used Autodesk Maya for animation and layout; Maxon ZBrush and Foundry Mari for texturing and modeling; SideFX Houdini and its Solaris context for simulation and rendering; and Foundry Nuke for compositing.

The Digital Domain team also collaborated with another VFX vendor, ILM, sharing the asset for the biskinik bird. Although the bird appears in only about five shots, creating its feathers was extremely intricate. For Episode 5, “Maya,” Digital Domain’s VFX team animated the last moment of the pow-wow scene.

Rachel Faith Hanson Named EP of VFX at Picture Shop

Picture Shop has added Rachel Faith Hanson as executive producer of visual effects. She will be based at Picture Shop’s facility at Sunset Gower Studios in Hollywood. She brings over 20 years of industry experience in visual effects and post to her new role.

Hanson began her career working in post at studios including Paramount Pictures and New Line Cinema before transitioning to the facility side of the industry. She has worked on all aspects of the feature and television post pipeline. Prior to being named executive producer of visual effects for Picture Shop, Hanson spent nearly seven years with Ingenuity Studios (a Picture Shop sister company) as an EP, where she oversaw projects for top studios including Netflix, HBO and Disney, among others.

She was also responsible for mentoring new producers and coordinators, creating script breakdowns, bidding for new work, and strengthening and maintaining key client relationships. Her recent credits include A Murder at the End of the World, She-Hulk: Attorney at Law and the soon-to-be-released The Spiderwick Chronicles.

Picture Shop’s global finishing team offers Autodesk Flame and beauty services and focuses on efficiency and creative problem-solving for theatrical, broadcast and streaming episodic, and beyond. The team supports the company’s roster of colorists.

Picture Shop’s global finishing team will also support projects that pass through Streamland Media’s visual effects companies, Ghost VFX and Ingenuity Studios.

Picture Shop’s locations include Los Angeles, New York, Toronto and Vancouver, with international locations in London, Manchester, Bristol, Wales and at Pinewood Studios.

 

 

Behind the Title: BlueBolt VFX Supervisor David Scott

David Scott is a visual effects supervisor at London-based BlueBolt, an independent studio that provides VFX for television and film.

“It’s run by a great bunch of industry pros, a lot of whom I’d worked with before in previous companies, like MPC,” explains Scott. “What is nice about being in a smaller company is the scope of work you get to do and the types of films and projects you work on. Your involvement in it is much more than in bigger studios, where things are much more departmentalized. Plus, you get to know almost everyone in the company, which is definitely not the case in bigger ones.”

Let’s hear more from Scott…

What does the role of VFX supervisor entail?
My primary responsibility is to ensure that the director’s vision and expectations are brought to fruition. The process can start during preproduction, where we break down the script, discuss the approach to shooting and identify where VFX may be required. Collaborating closely with the production team, we plan the shoot to capture the necessary elements for the shots.

David Scott

The Great

Once the shoot concludes, my focus shifts to the post phase at BlueBolt. Here, we discuss the specific requirements for each shot and plan our approach. Throughout the VFX process, we maintain regular reviews with the director. Our involvement extends into the digital intermediate stage, ensuring our contribution until the final shot is graded and officially locked. It’s a comprehensive journey from initial concepts to final shots, with constant collaboration to achieve the desired look.

What would surprise people the most about what falls under that title?
The number of meetings and reviews each shot has before it’s presented as final.

How long have you been working in VFX, and in what kind of roles?
I have been working in VFX for 20 years. I’ve worked in different companies throughout my career, mainly in London but also for a number of years in New Zealand. I started in the rotoscoping department, moving into prep and then compositing. Within compositing, I’ve been a lead and a comp supervisor, and for the past three years I’ve been VFX supervising.

The Great

How has the VFX industry changed in the time you’ve been working? The good and the bad.
So many aspects have changed, but the first thing that comes to mind is that the scale and complexity of projects has grown massively throughout my career in VFX. Before, a 300-shot show would book out a whole facility, whereas now the larger VFX houses can handle multiple shows, each with thousands of shots.

The upside is that we’re tackling more ambitious projects, pushing the boundaries of what’s visually possible. However, the downside is that timeframes haven’t kept pace with this expansion. The challenge lies in delivering high-quality work within the same, if not tighter, schedules.

Do you like being on-set for shots? What are the benefits?
There’s a unique energy and immediacy to the on-set environment. Being there allows for instant problem-solving, better collaboration with the production team and an intuitive understanding of the director’s vision. It’s all about soaking it up and ensuring the VFX fits seamlessly into the shots.

What do you see as a big trend that is happening now or maybe is on the verge of happening? Is it AI? If so, what are your thoughts on how it could be used for the good and not the bad in VFX?
Absolutely, AI and machine learning are undeniably making a significant impact on the world of VFX. While headline-grabbing applications like deepfakes and de-aging are understandably in the spotlight, the benefit of AI across the whole VFX workflow will bring massive gains.

David Scott

The Great

As these technologies develop, there’s immense potential for efficiency enhancement, optimizing the day-to-day processes. When integrated thoughtfully, AI has the power to become a valuable ally, boosting productivity and increasing creativity in the VFX industry.

Did a particular film inspire you along this path in entertainment?
There are so many from my childhood, but the standout is Who Framed Roger Rabbit. I remember they promoted it with a lot of behind-the-scenes information about the technology and techniques used, which I found so fascinating.

Where do you find inspiration?
My inspiration comes from everywhere. Reference is key when tackling shots, so I enjoy delving into stock footage sites, exploring YouTube and referencing other movies.

What’s your favorite part of the job?
I love that every show comes with its own set of challenges to solve, both technical and creative. Working with so many talented people, sharing ideas and developing them together is my favorite part.

If you didn’t have this job, what would you be doing instead?
Definitely graphic design. I studied graphic design at college and worked doing that for four years before making the jump into VFX.

David Scott

The Great

Can you name some recent work?
I’m currently working on Nosferatu. Previous work includes The Northman, The Great (Season 3), Avengers: Endgame and James Bond’s No Time to Die.

What tools do you use day to day?
Most of my day is spent in RV reviewing shots and in ShotGrid for everything else show-related. And if I need to work on specific shots, I’ll use Nuke for compositing.

Finally, what do you do to de-stress from it all?
When I’m mid-project, I find it hard to fully switch off, so exercise becomes key to relieve the stress. And if I have free time, the weather is good and the stars align, then I’ll play some golf.