By Iain Blair
The last time a Spider-Man film got any Oscar love in the visual effects category was in 2004 with Spider-Man 2, but all that has changed with the recent Spider-Man: No Way Home.
The Oscar-nominated VFX team on director Jon Watts’ Spider-Man: No Way Home was led by Kelly Port, Chris Waegner, Scott Edelstein and Dan Sudick. Port, Marvel’s VFX supervisor on the film (on loan from Digital Domain, Vancouver), oversaw some 2,500 VFX shots with work done by many vendors, including Sony Pictures Imageworks, Framestore, Digital Domain, Crafty Apes and Luma. These effects included visually stunning set pieces in addition to resurrecting such VFX-driven characters as Doc Ock, Green Goblin, Sandman, Electro and Lizard. And all the technical advances since Spider-Man 2 are on full display.
I recently spoke with Port, who was previously Oscar-nominated for his work on Avengers: Infinity War, about working on the film, the VFX pipeline and the challenges involved.
Give us a sense of working with director Jon Watts. What were the early discussions about, and what was the approach to creating all the visual effects?
From the very start, he was extremely collaborative and had great attention to detail. We began by working with the previz team at Digital Domain at the end of 2019, and that went on all the way through — including postviz — until we delivered. That whole process was really critical to the film.
Jon and I worked very closely with them on formulating what the scenes would look like, getting the pacing right and doing quick edits. There was a lot of pitching ideas back and forth, and a ton of great ideas came out of that. Then we used that as a template for shooting the live action. Once all that was done and cut together, we’d do the postviz and refine the edit over time. Usually, a sequence would come in long, so then we’d make adjustments in postviz or in additional previz.
How hands-on is Jon with all the VFX work?
Very. During that whole process, Jon would give me these very detailed PDFs that he’d make with screen captures and notes about very specific things we might not have called out — little Easter eggs and various other things.
How closely did you work with the rest of the VFX teams?
Very closely. We had 12 different vendors with about 2,500 people in all the different departments working on the show, so it was a small army. During COVID we all worked on Zoom and we’d take their material and works in progress, whether it was animation or effects development or character asset development, and look at it and give our notes.
What’s great with Jon is that he often likes to work with the vendors directly and their supervisors and animation directors, and he goes through the scene in great detail. That really helps make the whole process a lot more efficient, I think, when a director’s willing to do that. And all the vendors certainly appreciate it.
What were the big challenges?
It was such a huge undertaking, and so much work. I’d say that 97% of all the shots in the film had some sort of VFX work; there were only 80 shots we didn’t touch. And all the shots we did create, we wanted them to be amazing, even the invisible ones you don’t notice, and we had so many of those.
For example, if there’s a group of people standing around in the kitchen, not only is there a bluescreen out the window for the New York skyline, but it might be that none of those people were in the same room at the same time because of a scheduling problem, or they were different takes from different days or weeks because of performance reasons. So we’d mix bits of all these different takes.
Then for a good half of the film, Tom Holland’s Spider-Man was wearing a digital suit below the neck. It’s basically Tom’s head in a digital body, so creating that was a huge amount of work, tracking what his body was doing and the folds in the suit he was wearing. Also putting it in the correct lighting environment so that it matched the lighting of the surrounding live-action characters. And then anything that crossed in front of that suit had to be roto’d, so it was very time-consuming and labor-intensive.
How tricky was it doing the transitions from live actors to CG doubles?
Making all those transitions seamless took a lot of work, and often you need to make the transition to the digital character earlier, as you need that overlap. For instance, if Spider-Man grabs MJ and they run and fly into the air, for the live-action bit, they’re harnessed and rigged to do that. But instead of just doing the CG doubles at that point, we’d start earlier so we could get the pacing just right. It has to be spot-on photographically. You can’t just pop over a frame or two.
I assume you did a lot of scanning?
A ton. Basically, we had a whole team set up to scan high res as much as we could during all the live-action shooting – everything from actors to costumes, props and textures. And we had lidar and scanned every single set and all our locations. Most of the time we needed all that stuff, so we made sure we captured it all because it’s a lot more expensive and a bigger deal to scan stuff later after everyone’s gone. Doing it the way we did isn’t cheap, but we needed it because there are so many characters, and having the scans was so important for the people developing the digital versions of those characters. It gives all the modelers and sculptors a great base to start building production models, which can be rigged for animation.
This marks the return of the Sandman from 2007’s Spider-Man 3. The technology’s evolved so much since then. Were you able to use some of the original assets, or did you basically have to start from scratch?
We had to start from scratch. Even with the more sophisticated VFX back then, like Sandman, you can’t just grab assets from before because the software’s evolved so much. The new sand simulation is far more sophisticated than back then, and the processing power is way greater too.
Sandman is a great example of how the VFX pipeline worked on this. With a lot of these characters, we’d typically have a lead VFX studio, and they’d then share some of the models and textures with other vendors – but not the proprietary rigging. So Sandman was actually worked on by three vendors – Digital Domain, Imageworks and Luma. Luma did the “power line corridor” sequence, where Sandman is first introduced in the film along with Electro. There’s a “sand-stormy” version, similar to what you see in the third act, but also a more humanoid version toward the end of that sequence. Then Digital Domain created a humanoid form when he’s talking, and Imageworks did all the big sims for the big battle sequence at the end, when he’s this enormous, swirling sand monster.
How did you create the spectacular Doc Ock bridge attack?
That was done by Digital Domain, and it was one of the earliest sequences we previz’d. We shot it at Trilith Studios in Atlanta and had a 200-foot by 400-foot concrete pad built to create a section of freeway and an off-ramp. It was open at one end, and the other sides had 40-foot bluescreens, and we had all the cars and K-rails. We had to create the whole New York skyline and so on, so there’s a good 4 square miles of fully digital, photoreal, 3D environments that had to attach to all of this — everything from the other end of the bridge to Brooklyn, the river, and all the trees and buildings.
Even though we had real cars, sometimes it was easier to replace them than to deal with any bad reflections, such as the camera crew and bluescreens. This whole sequence is another great example of interdepartmental collaboration to create a very complex visual. You have all the live action, big stunts with Alfred Molina on wires on a moving platform, along with special effects of cars blowing up and hurtling hundreds of feet through the air. There were special rigs, like “the rotisserie,” so we could hang cars upside down, and then all the VFX.
Tell us about creating the Mirrorverse sequence.
That was done by Framestore, and the initial idea was to riff off visuals we’d seen in Doctor Strange, such as rotating buildings sliding in and out. So we did that, but we also wanted to take a step forward and do something that hadn’t been seen before, and I loved the idea of creating a hybrid world. That led to creating this hybrid city/Grand Canyon, as Strange keeps trying to put all these obstacles in Peter’s way, but Spider-Man just keeps adapting to them. Then we added the idea of portals to this, and we had a lot of fun playing with that and the concept of portals within portals. It ended up getting very complicated, but it was very cool to do.
Was this the most challenging VFX job you’ve ever worked on?
Without a doubt. There were so many characters and different environments, and the sheer volume of VFX was so challenging. Then we knew all the fans were so excited about it, and we didn’t want to disappoint them, so it’s a great honor for everyone involved to be nominated.
Main Image Credit: Luma
Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.