By Iain Blair
Tom Hanks enjoys telling stories about World War II. After his Oscar-nominated role in Steven Spielberg's Saving Private Ryan, Hanks and his Playtone producing partner, Gary Goetzman, went on to produce the miniseries Band of Brothers and The Pacific with Spielberg.
His latest World War II project is the naval thriller Greyhound, for which he also wrote the screenplay based on the novel “The Good Shepherd” by C.S. Forester.
Set against the backdrop of the Battle of the Atlantic, the film stars Hanks as Ernest Krause, a longtime US Navy officer with no combat experience who finally receives his first command: leading the destroyer Keeling (code-named Greyhound) and three other escort ships to protect a convoy of 37 merchant vessels carrying supplies and troops to England. It's a dangerous assignment, as German U-boats patrol the waters, brutally enforcing a blockade.
To direct Greyhound, Hanks and Goetzman tapped Aaron Schneider, whose 2009 debut film, Get Low, was honored as Best First Feature Film at the Independent Spirit Awards. A former cinematographer (he shot second unit on James Cameron's Titanic), Schneider began directing after winning an Oscar for his short-film adaptation of William Faulkner's Two Soldiers.
His Greyhound team included director of photography Shelly Johnson, production designer David Crank, VFX supervisor Nathan McGuinness and editors Sidney Wolinsky and Mark Czyzewski.
I recently spoke with Schneider about making the Apple Original film, the workflow and his love of VFX.
Filming in water is notoriously difficult. I was on the set of The Abyss, and after Titanic, Jim Cameron told me, “Whatever you do, never ever shoot on water or at sea.” You obviously paid attention.
I did, and none of this was shot at sea. In fact, there’s close to zero real water in the film. You couldn’t even find the period ships to take out to sea, anyway, so we built it all digitally.
What were the main technical challenges in pulling it all together?
It was a relatively low-budget film considering what we had to do, and we couldn’t afford to build all the sets for different parts of the destroyer, so we shot this mostly on stage and on the USS Kidd. The Kidd is a WWII destroyer docked in a museum, which we used as our touchstone, as our Keeling. And to tell the detailed story of how a destroyer works, our best strategy was to build matching sets. Most of it takes place up in the pilot house, so we matched that to the Kidd and used a giant gimbal, and then we could intercut.
Very early on — back in 2016, when Tom, Gary and I teamed up — I began building an online photo-reference bible with all the research and imagery. I then shot over 10,000 photographs of the Kidd and used photogrammetry to generate a high-resolution 3D model of the ship. That was so important as an asset because I could then open it up with 3D software, which allowed me to explore the ship with a virtual camera and do previz and experiment with camera ideas and concept art. I could see exactly what our production camera would see, play around and discover any potential problems.
I heard you also used an ocean simulator plugin that Nvidia created for game developers, one that floats objects on the water based on the underlying physics of open-ocean waves.
Yes, and there’s been some reporting indicating that’s how we made the movie — how we floated our ships — but that’s not quite accurate in how we used Nvidia in our pipeline. It was more of a look-at tool in that I wanted all of our VFX to feel like we were out in the ocean shooting it.
In preproduction, I was doing some of my own animation and previz to help prepare myself. I'm also a hobbyist VFX artist, and this plugin was very useful in exploring the ocean environment. It allowed me to float a digital camera ship I could look through, and suddenly I had all the chaos and energy of actual ship-to-ship, open-ocean photography. I could figure out how we'd shoot stuff like a ship taking a sharp turn and have the camera ship float in the opposite direction to give the shot energy. We kept the speed of the ships realistic, so the camera wasn't doing anything it couldn't have done in the real world. That grounded the shot concepts and VFX in reality, and a lot of our previz and postviz were generated by this plugin, WaveWorks, which we folded into our Autodesk Maya pipeline.
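To make the "floating a camera on wave physics" idea concrete for readers outside VFX: the core trick is sampling a procedural ocean surface and pinning an object (here, the virtual camera ship) to its height. The sketch below is a toy sum-of-sines ocean in Python, with entirely made-up wave parameters; real tools like Nvidia's WaveWorks use far more sophisticated spectral (FFT-based) wave models, so this is only an illustration of the principle, not how the plugin works internally.

```python
import math

# Toy ocean: each wave is (amplitude_m, wavelength_m, speed_m_per_s, phase).
# These parameters are invented for illustration only.
WAVES = [
    (1.5, 60.0, 8.0, 0.0),   # long, slow swell
    (0.6, 22.0, 5.0, 1.3),   # mid-frequency chop
    (0.25, 7.0, 3.0, 2.1),   # small surface ripple
]

def ocean_height(x: float, t: float) -> float:
    """Water surface height (metres) at position x and time t."""
    h = 0.0
    for amp, wavelength, speed, phase in WAVES:
        k = 2.0 * math.pi / wavelength          # wavenumber
        h += amp * math.sin(k * (x - speed * t) + phase)
    return h

def float_object(x: float, t: float) -> float:
    """Simplest possible 'buoyancy': pin the object to the surface."""
    return ocean_height(x, t)

# A virtual camera ship moving forward at a realistic 5 m/s,
# bobbing on the swell as it goes.
for step in range(3):
    t = float(step)
    x = 5.0 * t
    print(f"t={t:.0f}s  camera height {float_object(x, t):+.2f} m")
```

Because the camera inherits the surface motion instead of being hand-animated, every shot automatically picks up the "chaos and energy" Schneider describes, which is the point of driving previz from a physical ocean model.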
Did you do a lot of previz?
A lot. We did our own four-wall previz, hired artists and rented workstations, and built up our own infrastructure and workflow. The previz fell into two categories. First, we created overhead animation in real time of all the ships in the naval battles. We did this because when you get to set, you need to be able to tell everyone where to look and set eyelines, so when all the digital footage is married to it in post, you have a foundation.
I’d show up on set, gather the actors who’d be engaged in these virtual battles and play back the animation so they could get a good sense of it all — a tactical awareness. Second was classic previz, where you’re trying to meet a VFX budget, get a sense of the shot count and how creative you can be in those parameters. The team came back in post to do postviz so that if we needed a missing shot, they could do it and drop it into the server so we could see if it worked. If we’d had a bigger budget, we’d have previz’d the whole film, like a Pixar or Marvel project.
What did Tom Hanks bring to the project and the lead role? Any surprises?
Beyond all the excitement and terror and suspense of the battles, he wanted it to be a very emotional experience, and Tom acts as the audience’s guide. He’s the human way into the story, and he always saw it as this 90-minute, highly detailed procedural about a world most people know nothing about. He’s the perfect actor for this challenge, as he wrote this somewhat experimental film and counted on his own ability to lead the audience through it. And he didn’t write himself this big acting piece. The drama comes from experiencing it along with his character.
Where did you post?
We did it all at Playtone’s offices in Santa Monica, which was a perfect setup for us.
Talk about your two editors, Sidney Wolinsky and Mark Czyzewski. How did that work?
Sidney was the main editor, and he came on at the start. He hadn’t done a big action film before, but I wanted an editor I could team up with on the story and narrative side. I didn’t want to get lost in all the visceral and visual elements. I wanted an editor who would challenge me and the film to be as narratively cohesive and strong as possible. Then near the end of post, as the burden of dealing with all the VFX and action scenes got heavier, we brought on Mark to help out.
What were the big editing challenges?
The big one was connecting 35 days of production shooting with VFX that didn't exist yet — say, a shot of Tom looking at a submarine. So we had to use some of the previz material, and if that didn't work, we had to put the postviz guys to work and start shaping the film. So you're cutting in plates and slugs, all in a very piecemeal way. And when you watch the rough cut, you have to use your imagination, just like the actors did on the shoot, and you have discussions about shots that aren't even there yet. At the same time, you can't lose sight of the larger context. Do we understand how we got to this point? The tactical dilemma? Why he can't shoot yet? It was like solving a very complex puzzle.
VFX play a huge role. How many were there and what did they entail?
We had over 1,200, and we used just one company — DNeg — to streamline it all. The VFX supervisor Nathan McGuinness and VFX producer Mike Chambers had a great relationship with them because of their own careers there, and they did a great job considering our very tight post schedule and the challenges of making it all photo-real. Every shot was tricky.
Where did you do the DI, and how important was it to you?
At Company 3 with colorist Bryan Smaller, who used Resolve. The DI was crucial because we made the film for Sony, who then sold it to Apple right when we were in the middle of the DI. Then the DP and I had to shift our focus to the Dolby Vision master. I love the DI, as I was a DP before I became a director, and it’s that final chance to improve the image and the whole look, and I’m really happy with the way it all turned out.
Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.