
Oscars: Creating New and Old Sounds for The Creator

By Randi Altman

Director Gareth Edwards’ The Creator takes place in 2055 and tells the story of a war between the human race and artificial intelligence. It follows Joshua Taylor (John David Washington), a former special forces agent who is recruited to hunt down and kill The Creator, who is building an AI super weapon that takes the form of a child.

As you can imagine, the film’s soundscape is lush and helps to tell this futuristic tale — so much so that it was rewarded with an Oscar nomination for its sound team: supervising sound editors/sound designers Erik Aadahl and Ethan Van der Ryn, re-recording mixers Tom Ozanich and Dean Zupancic, and production sound mixer Ian Voigt.

L-R: Ethan Van der Ryn and Erik Aadahl

We reached out to Aadahl to talk about the audio post process on The Creator, which was shot guerrilla-style for a documentary feel.

How did you and Ethan collaborate on this one?
Ethan and I have been creative sound partners now for over 17 years. “Mind meld” is the perfect term for us creatively. I think the reason we work so well together is that we are constantly trying to surprise each other with our ideas.

In a sense, we are a lot harder on ourselves than any director and are happiest when we venture into uncharted creative territory with sound. We’ve joked for years that our thermometer for good sound is whether we get goosebumps in a scene. I love our collaboration that way.

How did you split up the work on this one?
We pretty much divide up our duties equally, and on The Creator, we were blessed with an incredible crew. Malte Bieler was our lead sound designer and came up with so many brilliant ideas. David Bach was the ADR and dialogue supervisor, who was in charge of easily one of the most complex dialogue jobs ever, breaking our own records for number of cues, number of spoken languages (some real, some invented), large exterior group sessions and the complexity of robot vocal processing. Jonathan Klein supervised Foley, and Ryan Rubin was the lead music editor for Hans Zimmer’s gorgeous score.

What did director Gareth Edwards ask for in terms of the sound?
Gareth Edwards wanted a sonic style of “retro-futurism” mixed with documentary realism. In a way, we were trying to combine the styles of Terrence Malick and James Cameron: pure expressive realism with pure science-fiction.

Gareth engaged us long before the script was finished — over six years ago — to discuss our approach to this very different film. Our first step was designing a proof-of-concept piece using location scout footage to get the green light, working with Gareth and ILM.

How would you describe the sound?
The style we adopted was to first embrace the real sounds of nature, which we recorded in Cambodia, Laos, Thailand and Vietnam.

For the sound design, Gareth wanted this retro-futurism for much of it, recalling a nostalgia for classic science fiction using analog sound design techniques like vocoders, which were used in the 1970s for films like THX 1138. That style of science fiction could then contrast with the fully futuristic, high-fidelity robot, vehicle and weapon technology.

Gareth wanted sounds that had never been used before and would often make sounds with his mouth that we would recreate. Gareth’s direction for the NOMAD station, which emits tracking beams from Earth’s orbit onto the Earth’s surface, was “It should sound like you’d get cancer if you put your hand in the beam for too long.” I love that kind of direction; Gareth is the best.

This was an international production. What were the challenges of working on different continents and with so many languages?
The Creator was shot on location in eight countries across Asia, including Thailand, Vietnam, Cambodia, Japan and Nepal. As production began, I was in contact with Ian Voigt, the on-location production mixer. He had to adapt to the guerrilla style of filming, inventing new methods of wireless boom recording and new ways of working with the novel camera technology, in close contact with Oren Soffer and Greig Fraser, the film’s directors of photography.

Languages spoken included Thai, Vietnamese, Hindi and Japanese, and we invented futuristic hybrid languages used by the New Asia AI and the robot characters. The on-location crowds also spoke in multiple languages (some human, some robotic or invented) and required a lived-in sense of reality.

Was that the most challenging part of the job? If not, what was?
The biggest challenge was making an epic movie in a documentary/guerrilla style. Every department had to work at the top of its game.

The first giant challenge had to do with dialogue and ADR. Dialogue supervisor David Bach mentioned frequently that this was the most complex film he’d ever tackled. We broke several of our own records, including the number of principal character languages, the number of ADR cues, the amount and variety of group ADR, and the complexity of dialogue processing.

The Creator

Tom Ozanich

Dialogue and music re-recording mixer Tom Ozanich had more radio communication futzes, all tuned to the unique environments, than we’d ever witnessed. Tom also wrangled more robotic dialogue processing channels of all varieties — from Sony Walkman-style robots to the most advanced AI robots — than we’d ever experienced. Gareth wanted audiences to hear the full range of dialogue treatments, from vintage-style sci-fi voices using vocoders to the most advanced tools we now have.

The second big challenge was fulfilling Gareth’s aesthetic goal: Combine ancient and fully futuristic technologies to create sounds that have never been heard before.

What about the tank battle sequence? Walk us through that process.
The first sequence we ever received from Gareth was the tank battle, shot on a floating village in Thailand. For many months, we designed the sound with zero visual effects. An on-screen label saying “Tank” or “AI Robot” might clue us in to what was happening. Gareth also chose to use no music in the sequence, allowing us to paint a lush sonic tapestry of nature sounds, juxtaposed with the horrors of war.

He credits editors Joe Walker, Hank Corwin and Scott Morris for having the bravery not to use temp music in this sequence and let the visceral reality of pure sound design carry the sequence.

Our goal was to create the most immersive and out-of-the-box soundscape that we possibly could. Ethan and I led an extraordinary team of artists who never settled for “good enough.” As is so often the case in any artform, serendipity can appear, and the feeling is magic.

One example involves the aforementioned tanks. We spent months trying to come up with a powerful, futuristic and unique tank sound, but none of the experiments felt special enough. In one moment of pure serendipity, as I was driving back from a weekend of skiing at Mammoth, my car veered onto the serrated highway median that’s meant to keep drivers from dozing off and driving off the road. The entire car resonated with a monstrous “RAAAAAAAAHHHHHHMMM!!” and I yelled out, “That’s the sound of the tank!” I recorded it, and that’s the sound in the movie. I have the best job in the world.

The incoming missiles needed a haunting quality, and for the shriek of their descent, we used a recording we did of a baboon. The baboon’s trainer told us that if the baboon witnessed a “theft,” he’d be offended and vocalize. So I put my car keys on the ground and pretended not to notice the trainer snatch the keys away from me and shuffle off. The baboon pointed and let out the perfect shriek of injustice.

What about the bridge sequence?
For this sequence, rudimentary, non-AI bomb robots named G-13 and G-14 (à la DARPA) sprint across the floating village bridge to destroy Alfie, an AI superweapon in the form of a young girl (Madeleine Yuna Voyles). We used the bomb robots’ size and weight to convey an imminent death sentence, their footsteps growing in power and ferocity as the danger approached.

Alfie has a special power over technology, and in one of my favorite moments, G-14 kneels before her instead of detonating. Alfie puts her hand to G-14’s head, and during that touch, we took out all of the sound of the surrounding battle. We made the sound of her special power a deep, humming drone. This moment felt quasi-spiritual, so instead of using synthetic sounds, we used the musical drone of a didgeridoo, an Aboriginal instrument with a spiritual undercurrent.

A favorite sonic technique of ours is to blur the lines between organic and synthetic, and this was one of those moments.

What about the Foley process?
Jonathan Klein supervised the Foley, and Foley artists Dan O’Connell and John Cucci brilliantly brought these robots to life. We have many intimate and subtle moments in the film when Foley was critical in realistically grounding our AI and robot characters to the scene.

The lead character, Joshua, has a prosthetic leg and arm, and there, Foley was vital to contrasting the organic to the inorganic. One example is when Joshua is coming out of the pool at the recovery center — his one leg is barefoot, and his other leg is prosthetic and robotic. These Foley details tell Joshua’s story, demonstrating his physical and, by extension, mental complexity.

What studio did you work out of throughout the process?
We did all of the sound design and editing at our facility on the Warner Bros. studio lot in Burbank.

We broke our own record for the number of mixing stages across two continents. Besides working at WB De Lane Lea in London, we used Stages 5 and 6 at Warner Bros. in Burbank. We were in Stages 2 and 4 at Formosa’s Paramount stages and Stage 1 at Signature Post. This doesn’t even include additional predub and nearfield stages.

The sound team with Gareth Edwards on Warner’s Stage 5.

In the mix, both Tom Ozanich and Dean Zupancic beautifully shifted from the most delicate and intimate moments to the most grand and powerful.

Do you enjoy working on VFX-heavy films and sci-fi in particular? Does it give you more freedom in creating sounds that aren’t of this world?
Sound is half of the cinematic experience and is central to the storytelling of The Creator — from sonic natural realism to pure sonic science fiction. We made this combination of the ancient and futuristic for the most unique project I’ve ever had the joy to work on.

Science fiction gives us such latitude, letting us dance between sonic reality and the unreal. And working with amazing visual effects artists allows for a beautiful cross-pollination between sound and picture. It brings out the best in both of our disciplines.

What were some tools you used in your work on The Creator?
The first answer: lots of microphones. Most of the sounds in The Creator are real and organic recordings or manipulated real recordings — from the nature ambiances to the wide range of technologies, from retro to fully futuristic.

Of course, Avid Pro Tools was our sound editing platform, and we used dozens of plugins to make the universe of sound we wanted audiences to hear. We had a special affinity for digital versions of classic analog vocoders, especially for the robot police vocals.

The Oscar-nominated sound team for The Creator pictured with director Gareth Edwards.

Finally, congrats on the nomination. What do you think it was about this film that got the attention of Academy members?
Our credo is “We can never inspire an audience until we inspire ourselves,” and we are so honored and grateful that enough Academy members experienced The Creator and felt inspired to bring us to this moment.

Gareth and our whole team have created a unique cinematic experience. We hope that more of the world not only watches it, but hears it, in the best environment possible.


