NBCUni 9.5.23

Category Archives: Audio Post Production

Zach Robinson on Scoring Netflix’s Wrestlers Docuseries

No one can deny the attraction of “entertainment” wrestling. From WWE to NXT to AEW, there is no shortage of muscular people holding other muscular people above their heads and dropping them to the ground. And there is no shortage of interest in the wrestlers and their journeys to the big leagues.

Zach Robinson

That is just one aspect of Netflix’s docuseries Wrestlers, directed by Greg Whiteley, which follows former WWE wrestler Al Snow as he tries to keep the pro wrestling league Ohio Valley Wrestling (OVW) going while fighting off mounting debt and dealing with new ownership. It also provides a behind-the-scenes look at these athletes’ lives outside of the ring.

For the series’ score, Whiteley called on composer Zach Robinson to give the show its sound. “Wrestlers was a dream come true,” says Robinson. “Coming into the project, I was such a huge fan of Greg Whiteley’s work, from Last Chance U to Cheer. On top of that, I grew up on WWE, so it was so much fun to work with this specific group of people on a subject that I really loved.”

Let’s find out more from Robinson, whose other recent projects include Twisted Metal and Florida Man (along with Leo Birenberg) and the animated horror show Fright Krewe.

What was the direction you were given for the score?
I originally thought that Greg and the rest of the team wanted something similar to what I do on Cobra Kai, but after watching the first couple of episodes and having a few discussions with the team, we wanted to have music that served as a juxtaposition to the burly, muscular, sometimes brutal imagery you were seeing on screen.

Greg wanted something dramatic and beautiful and almost ballet-like. The music ends up working beautifully with the imagery and really complements the sleek cinematography. Like Greg’s other projects, this is a character drama with an amazing group of characters, and we needed the music to support their stories without making fun of them.

What is your process? Is there a particular instrument you start on, or is it dependent on the project?
It often starts with a theme and a palette decision. Simply put: What are the notes I’m writing, and what are the instruments playing those notes? I generally like to start by writing a few larger pieces that cover a lot of ground and gauge the client’s interest.

In the case of Wrestlers, I presented three pieces (not to picture) and shared them with Greg and the team. Luckily for me, those three pieces were very much in the ballpark of what they were looking for, and I think all three made it into the first episode.

Can you walk us through your workflow on Wrestlers?
Sometimes, working on non-fiction can be a lot different than working on a scripted TV show. We would have spotting sessions (meetings where we watch down the episode and discuss the ins and outs of where the score lives), but as the episodes progressed, I ended up creating more of a library for the editors to grab cues from. That became very helpful for me because the turnaround on these episodes from a scoring standpoint was very, very fast.

However, every episode did have large chunks that needed to be scored to picture. I’m thinking of a lot of the fights, which I really had to score as if I was scoring any type of fight in a scripted show. It took a lot of effort and a lot of direction from the creative team to score those bouts, and finding the right tone was always a challenge.

How would you describe the score? What instruments were used? Was there an orchestra, or were you creating it all?
As I mentioned earlier, the score is very light, almost like a ballet. It’s inspired by a lot of Americana music, like that of Aaron Copland, but I was also very inspired by the “vagabond” stylings of someone like Tom Waits, so you’ll hear a lot of trombone, trumpet, bass, flute and drums.

Imagine seeing a small band performing on the street; that’s kind of what was inspiring to me. This is a traveling troupe of performers, and Greg even referred to them as “the Muppets” during one of our first meetings. We also had a lot of heightened moments that used a large, epic orchestra. I’m thinking especially about the last 30 minutes of the season finale, which is incredibly triumphant and epic in scope.

How did you work with the director in terms of feedback? Any examples of notes or direction given?
Greg and producer Adam Leibowitz were dream collaborators and always had incredibly thoughtful notes and gave great direction. I think the feedback I got most frequently was about being careful not to dip into melodrama through the music. The team is very tasteful with how they portray dramatic moments in their projects, and Wrestlers was no exception.

There were a few times I went a bit too far and big in the music, and Greg would tell me to take a step back and let the drama from the reality of the situation speak for itself. This all made a lot of sense to me, especially because I understood that, coming from scoring mostly scripted programming, I would tend to go harder and bigger on my first pass, which wasn’t always appropriate.

More generally, do you write based on the project — spot, game, film, TV — or do you just write?
I enjoy writing music mostly to picture, whether that’s a movie or TV or videogame. I enjoy it much more than writing a piece of music not connected to anything, and I find that when I have to do the latter, it’s incredibly difficult for me.

How did you get into composing? Did you come from a musical family?
I don’t come from a musical family, but I come from a very creative and encouraging family. I knew I wanted to start composing from a very young age, and I was incredibly fortunate to have a family that supported me every step of the way. I studied music in high school and then into college, and then I immediately got a job apprenticing for a composer right after college. I worked my way up and through a lot of odd jobs, and now I’m here.

Any tips for those just starting out?
My biggest piece of advice is to simply be yourself. I know it sounds trite, but don’t try to mold your voice into what you think people want to hear. Even 10 years into the business, I’m still learning that people want to hear unique voices, and there are always great opportunities to try something different.

CAS Awards: The Last of Us, The Bear, Oppenheimer Among Winners

The Cinema Audio Society (CAS) announced the winners of its 60th annual CAS Awards during a recent ceremony at the Beverly Hilton. The sound mixing teams for Oppenheimer, Spider-Man: Across the Spider-Verse and 32 Sounds were honored with feature film wins, while The Last of Us, The Bear, 100 Foot Wave and Weird: The Al Yankovic Story won the top non-theatrical honors.

The Cinema Audio Society recognized excellence in sound mixing across seven award categories, along with two special honorees and the annual student filmmaker award. Sound industry executive Doug Kent presented the CAS Career Achievement Award to sound mixer Joe Earle, CAS (American Horror Story, Six Feet Under, Monster’s Ball), and Jon Favreau presented the CAS Filmmaker Award to writer/director/producer J.J. Abrams.

The winners of the 60th Annual CAS Awards are:

Motion Picture Live Action:

Oppenheimer audio team. Credit: Alex J. Berliner/ABImages

Oppenheimer

Production Mixer – Willie D. Burton, CAS
Re-Recording Mixer – Gary A. Rizzo, CAS
Re-Recording Mixer – Kevin O’Connell, CAS
Scoring Mixer – Chris Fogel, CAS
Foley Mixer – Tavish Grade
Foley Mixer – Jack Cucci
Foley Mixer – Mikel Parraga-Wills

 

Motion Picture — Animated:

 

Spider-Man: Across the Spider-Verse

Original Dialogue Mixer – Brian Smith
Original Dialogue Mixer – Aaron Hasson
Original Dialogue Mixer – Howard London, CAS
Re-Recording Mixer – Michael Semanick, CAS
Re-Recording Mixer – Juan Peralta
Scoring Mixer – Sam Okell
Foley Mixer – Randy K. Singer, CAS

 

Motion Picture — Documentary:

 

32 Sounds

Production Mixer – Laura Cunningham
Re-Recording Mixer – Mark Mangini
Scoring Mixer – Ben Greenberg
ADR Mixer – Bobby Johanson, CAS
Foley Mixer – Blake Collins, CAS

 

Non-Theatrical Motion Pictures or Limited Series:

 

Weird: The Al Yankovic Story

Production Mixer – Richard Bullock, CAS
Re-Recording Mixer – Tony Solis
Scoring Mixer – Phil McGowan, CAS
ADR Mixer – Brian Magrum, CAS
Foley Mixer – Erika Koski, CAS

 
Television Series — One Hour:

 

The Last of Us: S01 E01 When You’re Lost in the Darkness

Production Mixer – Michael Playfair, CAS
Re-Recording Mixer – Marc Fishman, CAS
Re-Recording Mixer – Kevin Roache, CAS
Foley Mixer – Randy Wilson, CAS

 

Television Series — Half Hour:

 

The Bear: S02 E07 Forks

Production Mixer – Scott D. Smith, CAS
Re-Recording Mixer – Steve “Major” Giammaria, CAS
ADR Mixer – Patrick Christensen
Foley Mixer – Ryan Collison
Foley Mixer – Connor Nagy

 

Television Series — Non-Fiction, Variety or Music/Series or Specials:

 

100 Foot Wave: S02 E05 Lost at Sea

Re-Recording Mixer – Keith Hodne

Yushu “Doris” Shen from University of Southern California won the Student Recognition Award, receiving a check for $5,000. The other four student finalists each took home $1,000 from the CAS, along with $10,000 in products and gear.

MPSE Golden Reel Award Winners: Maestro, Oppenheimer and More

The Motion Picture Sound Editors (MPSE) has announced the winners of its 71st annual MPSE Golden Reel Awards. Awards were presented in 19 professional categories, alongside one student award. Categories spanned feature film, television, animation and computer entertainment.

Additionally, Dane A. Davis, MPSE, received the MPSE Career Achievement Award, and Michael Dinner was presented with the MPSE Filmmaker Award.

“What makes this event so special is that we come together from around the world as a sound community to celebrate each other,” says newly elected MPSE president David Barber. “We celebrate each other’s artistry and each other’s achievements. MPSE members are an extraordinarily passionate and giving group of sound enthusiasts who exemplify the meaning of ‘community.’”

Here are the 71st Annual MPSE Golden Reel Award winners:

Outstanding Achievement in Music Editing – Feature Motion Picture

Maestro

Netflix

Supervising Music Editor: Jason Ruder

Music Editor: Victoria Ruggiero

 

Outstanding Achievement in Sound Editing – Feature Dialogue / ADR

Oppenheimer

Universal Pictures

Supervising Sound Editor: Richard King

Supervising Dialogue Editor: David Bach

Dialogue Editors: Russell Farmarco, Albert Gasser MPSE

 

Outstanding Achievement in Sound Editing – Feature Effects / Foley

Oppenheimer

Universal Pictures

Supervising Sound Editor: Richard King

Sound Effects Editor: Michael Mitchell

Sound Designer: Randy Torres

Supervising Foley Editor: Christopher Flick

Foley Artists: Dan O’Connell, John Cucci MPSE

 

Outstanding Achievement in Sound Editing – Broadcast Animation

Star Wars: The Bad Batch: “Faster”

Disney

Supervising Sound Editors: David W. Collins, Matthew Wood

Sound Designer: David W. Collins

Sound Effects Editors: Justin Doyle, Kevin Bolen MPSE, Kimberly Patrick

Supervising Foley Editor: Frank Rinella

Foley Artists: Kimberly Patrick, Margie O’Malley, Andrea Gard

 

Outstanding Achievement in Sound Editing – Non-theatrical Animation

The Monkey King

Netflix Animation

Supervising Sound Editors: David Giammarco, Eric A. Norris MPSE

Dialogue Editor: Sean Massey MPSE

Sound Designers: Jon Title MPSE, Tim Nielsen

Foley Artists: Dan O’Connell, John Cucci MPSE

 

Outstanding Achievement in Sound Editing – Feature Animation

Spider-Man: Across the Spider-Verse

Sony Pictures Animation

Supervising Sound Editor: Geoffrey G. Rubay

Sound Designers: John J. Pospisil, Alec G. Rubay, Kip Smedley

Sound Effects Editors: Cathryn Wang, David Werntz, Bruce Tanis MPSE, Greg ten Bosch MPSE, Daniel McNamara MPSE, Will Digby, Andy Sisul

Supervising Dialogue Editor: James Morioka MPSE

Dialogue Editors: Robert Getty MPSE, Jason W. Freeman, Kai Scheer, Ashley N. Rubay

Foley Supervisor: Colin Lechner MPSE

Foley Artists: Gregg Barbanell MPSE, Jeff Wilhoit MPSE, Dylan Wilhoit

Supervising Music Editor: Katie Greathouse

Music Editor: Barbara McDermott

 

Outstanding Achievement in Sound Editing – Non-theatrical Documentary

Our Planet II: “Chapter 3: The Next Generation”

Netflix

Sound Editor: George Fry

 

Outstanding Achievement in Music Editing – Documentary

Pianoforte

Greenwich Entertainment

Supervising Music Editor: Michal Fojcik MPSE

Music Editor: Joanna Popowicz

 

Outstanding Achievement in Sound Editing – Feature Documentary

32 Sounds

ArKtype

Supervising ADR Editor: Eliza Paley

Supervising Sound Editor: Mark Mangini MPSE

Sound Editor: Robert Kellough MPSE

ADR Editor: Mari Matsuo

Foley Artist: Joanna Fang MPSE

 

Outstanding Achievement in Music Editing – Game Music

Star Wars Jedi: Survivor

Respawn Entertainment

Music Director: Nick Laviers

Music Implementers: Colin Andrew Grant MPSE, Andrew Karboski

 

Outstanding Achievement in Sound Editing – Game Dialogue / ADR

Alan Wake 2

Remedy Entertainment

Audio Director: Richard Lapington

Senior Dialogue Designers: Taneli Suoranta, Arthur Tisseront

 

Outstanding Achievement in Sound Editing – Game Effects / Foley

Marvel’s Spider-Man 2

Insomniac Games

Supervising Sound Editors: Ben Minto MPSE, Chris Sweetman MPSE, Csaba Wagner MPSE, Samuel Justice, Gary Miranda

Supervising Sound Designer: Emile Mika

Senior Director of Sound: Phillip Kovats MPSE

Director, Audio Management: Karen Read

Audio Managers: Daniel Birczynski, Jesse James Allen

Director of Sound Design: Jeremie Voillot MPSE

Senior Audio Directors: Paul Mudra, Jerry Berlongieri, Dwight Okahara

Technical Sound Designers: Ben Pantelis, Sebastian Ruiz, Nick Jackson, Enoch Choi, Cameron Sonju, Gavin Booth

Lead Sound Designer: Blake Johnson

Senior Sound Designers: Eddie Pacheco MPSE, Tyler Cornett, Johannes Hammers MPSE, Zack Bogucki, Alex Previty, Matt Ryan, Juliet Rascon, Andres Herrera, Robert Castro MPSE, Jeff Darby, Beau Anthony Jimenez MPSE, Derrick Espino, Jon Rook

Sound Designers: Tyler Hoffman, Daniele Carli, Bob Kellough MPSE, Bryan Jerden, Eilam Hoffman, Graham Donnelly MPSE, Jason W. Jennings MPSE, Matt Hall, Michael Leaning, Michael Schapiro, Randy Torres, Richard Gould, Stephano Sanchinelli, Tim Walston MPSE, Tobias Poppe, Tom Jaine MPSE, Jeremy Neroes, Adam Sanchez, Brendan Wolf, Roy Lancaster, Rodrigo Robinet, Daniel Barboza, Charlie Ritter, David Goll, Chris Kokkinos MPSE, TJ Schauer

Foley Editors: Blake Collins, Annie Taylor, Austin Creek

Foley Artist: Joanna Fang MPSE

 

Outstanding Achievement in Music Editing – Broadcast Short Form

Dave: “Met Gala”

Hulu

Supervising Music Editor: Amber Funk MPSE

Music Editor: James Sullivan

 

Outstanding Achievement in Sound Editing – Broadcast Short Form

The Mandalorian: “The Return”

Disney

Supervising Sound Editors: Trey Turner, Matthew Wood

Sound Designer: David W. Collins

Sound Effects Editors: Luis Galdames MPSE, Kevin Bolen MPSE

ADR Editors: Brad Semenoff MPSE, Ryan Cota MPSE

Supervising Foley Editor: Frank Rinella

Foley Editors: Joel Raabe, Alyssa Nevarez

Foley Artist: Shelly Roden MPSE

 

Outstanding Achievement in Music Editing – Broadcast Long Form

The Last of Us: “When You’re Lost in the Darkness”

HBO

Music Editor: Maarten Hofmeijer

 

Outstanding Achievement in Sound Editing – Broadcast Long Form Dialogue / ADR

The Marvelous Mrs. Maisel: “Four Minutes”

Amazon Prime

Supervising Sound Editor: Ron Bochar

Dialogue Editor: Sara Stern

ADR Editor: Ruth Hernandez MPSE

 

Outstanding Achievement in Sound Editing – Broadcast Long Form Effects / Foley

All the Light We Cannot See: Episode 4

Netflix

Supervising Sound Editors: Craig Henighan MPSE, Ryan Cole MPSE

Sound Effects Editor: David Grimaldi

Foley Editor: Matt Cloud

Foley Artist: Steve Baine

 

Outstanding Achievement in Sound Editing – Student Film (Verna Fields Award)

Dive

National Film & Television School

Supervising Sound Editor: Simon Panayi

 

Outstanding Achievement in Sound Editing – Non-theatrical Feature

The Last Kingdom: Seven Kings Must Die

Netflix

Supervising Sound Editor: Jack Gillies

Dialogue/ADR Supervisor: Michael Williams

ADR Editor: Steve Berezai

Foley Editor: Neale Ross

Foley Artist: Jason Swanscott

 

Outstanding Achievement in Sound Editing – Foreign Language Feature

Society of the Snow

Netflix

Supervising Sound Editors: Oriol Tarragó, Iosu Martinez, Guillem Giró

Foley Artists: Erik Vidal, Kiku Vidal

Sound Editors: Sarah Romero, Marc Bech, Brendan Golden

Sound Designer: Oriol Tarragó

Music Editor: John Finklea

 

 


Writer/Director Celine Song Talks Post on Oscar-Nominated Past Lives

By Iain Blair

In her directorial film debut, Past Lives, South Korean-born playwright Celine Song has made a romantic and deceptively simple film that is intensely personal and autobiographical yet universal, with its themes of love, loss and what might have been. Past Lives is broken into three parts spanning countries and decades. First we see Nora as a young girl in South Korea, developing an early bond with her best friend, Hae Sung, before moving with her family to Toronto. Then we see Nora in her early 20s as she reconnects virtually with Hae Sung. Finally, more than a decade later, Hae Sung visits Nora, now a married playwright living in New York. It stars Greta Lee, Teo Yoo and John Magaro.

Celine Song directing Greta Lee

I spoke with Song about the post workflow and making the A24 film, which is Oscar-nominated for Best Picture and Best Original Screenplay. It also just won Best Director and Best Feature at the Independent Spirit Awards.

How did you prep to direct your first film? Did you talk to other directors?
I talked to some amazing directors, but what they all said is that because only I know the film that I’m making, the way it’s going to be prepped is a process that only I can really know. You need really strong producers and department heads, which I was so lucky to have. I was able to draw on their experience and advice for every step of the way.

You shot in Seoul and New York. Was it the same sort of experience or was it different going back to Seoul?
The filmmaking culture is very different in both places. In New York, there is a very strong union, and in Korea there isn’t one. Also, the way that you secure locations is different. In New York, if you want to shoot somewhere, the mayor’s office knows about it. Korea is still a little bit like guerrilla filmmaking. You show up to a location and try to get it right. You can’t really get permits for things in Korea.

The story takes place over three separate timeframes. Did you shoot chronologically?
No. We shot everything in New York City, and then we had a set built for the Skype section. Then we went to Korea, prepped it for another month and shot there for 10 days.

You and your DP, Shabier Kirchner, shot on 35mm. What led you to that decision?
It was my very first movie, so I didn’t know how hard it was going to be. I don’t have experience shooting on digital or film. I don’t know anything. I think part of it was first-timer bravery. I don’t know enough to be afraid. That’s where the fearlessness came from. But it was also informed by the conversations I was having with my DP. We talked about the story and how the philosophy of shooting on film is connected to the philosophy of the movie, which is that the movie is about time made tangible and time made visible. It just made sense for it to be shot on film.

Celine Song on-set

You come from the theater, where there is obviously no post production. Was that a steep learning curve for you?
Yes, but you do have a preview period in theater, when you see it in front of an audience, and you keep editing in that way. But more importantly, I’m a writer. So part of post is that I don’t think of the movie as just what I see on screen and all the sound design and every piece of it. To me, it is a piece of text. So just as I would edit a piece of my own writing, I feel like I was looking at the editing process very much like editing text.

Then of course in film, it’s not just the writing on the page. It’s also sound, color, visuals, timing… So in that way, I really felt that editing was about composing a piece of music. I think of film as a piece of music, with its own rhythm and its own beat that it has to move through. So in that way, I think that that’s also a part of the work that I would do as a playwright in the theater, create a world that works like a piece of music from beginning to end.

With all that in mind, I honestly felt like I was the most equipped to do post. I had an entire world to learn; I had never done it before. But with post, I was in my domain. The other thing I really love about editing and VFX in film is that you can control a lot. Let’s say there’s a pole in the middle of the theater space. You have to accept that pole. But in film, you can just delete the pole with VFX. It’s amazing.

Did editor Keith Fraase, who is based in New York, come on-set at all in Korea, or did you send him dailies?
We sent dailies. He couldn’t come on-set because of COVID.

What were the biggest editing challenges on this?
I think the film’s not so far from the way I had written it, so the bigger editing choices were already scripted. The harder bits were things that are like shoe leather — the scenes that hold the movie together but are not the center of the emotion or the center of the story.

One example is when Nora is traveling to Montauk, where we know that she’s going to eventually meet Arthur (who becomes her husband). We were dealing with how much time is required and how to convey time so that when we meet Arthur, it seems like it is an organic meeting and not such a jarring one. I had scripted all this shoe-leather stuff that we had shot – every beat of her journey to Montauk. We had a subway beat; we had a bus beat. We had so many pieces of her traveling to Montauk because I was nervous about it, feeling it was not long enough. But then, of course, when we actually got into the edit, we realized we only needed a few pieces. You just realize that again, the rhythm of it dictates that you don’t need all of it.

Where did you do all the sound mix?
We did it all at Goldcrest in New York.

Are you very involved in that?
You have no idea. I think that’s the only place where I needed more time. We went over budget… that’s a nicer way to say it. That’s the only part of the post process where I really was demanding so much. I was so obsessed with it. The sound designer’s nickname for me was Ms. Dog Ears. I know different directors have very different processes around sound, but for me, I was in that room with my sound designer Jacob Ribicoff for 14 hours a day, five days a week, and sometimes overtime, for weeks. I wouldn’t leave.

I would stay there because I just know that sound is one of those things that holds the film together. Also, with this movie, the sound design of the cities and how different they are and how it’s going to play with the compositions — I had such a specific idea of how I wanted those things to move. Because again, I do think of a film as a piece of music. So I was pretty crazy about it. But I don’t want people to notice the sound design. I want people to be able to feel like they’re actually just standing in Madison Square Park. I want them to be fully immersed.

Obviously, it’s not a big effects movie, but you have some. How did that go?
I think it’s a bit of a subjective thing. Actually, looking at it, I’m like, “Well, does that seem good to you?” I’m showing it to my production designer and my DP and I’m like, “This looks OK to me, but I wonder if it can be better. Would you look at it?” So I relied on many eyes.

I give credit to Keith, but also to my assistant editor, Shannon Fitzpatrick, who was a total genius at catching any problems with VFX and who had such a detailed eye. I think she’s one of the only people who really noticed things that I didn’t notice in the VFX. I’d think a shot looked fine, and then she would point to one thing in the corner that wasn’t working. There are people at A24 who are also amazing at catching sound and visuals because that’s their job. They’ll point out what sounds strange or what looks strange. So you have so many people who are part of the process.

Who was the colorist, and how involved were you with the grading?
It was Tom Poole at Company 3, which is where we edited and did color and everything. I love the process because I showed up after Shabier and Tom had already gone through the whole film and graded it. They did amazing, beautiful work. Then I would come in and give notes about certain scenes and then we’d do them. Of course, while they were grading it, they’d send me stills, and I’d give notes on the stills before going into the suite. Also, Shabier and Tom have worked together a lot, so they already kind of had a rhythm for how they wanted to color the film.

What sort of film did you set out to make?
Since this was the first film I’d directed, I felt like the main goal was to discover the language of my movie. It was beyond just trying to tell the story the best way I could, from the script stage to the post. I think that was the goal throughout. But the truth is that I really wanted the language of the film to be my own language, and I wanted to learn and have a revelation for myself of what my movie is.

I know it is partly autobiographical. How much of you is in Nora?
It really was inspired by a true event of sitting between my childhood sweetheart, who had come to visit me from Korea, and my husband who I live with in New York City. So this is very autobiographical, and the feeling that I had in that very personal moment is the inspiration for the whole film. But then once you turn it into a script, which is an objectification process, and then you turn it into a film with hundreds of people — and especially with the cast members who have to play the characters — by that time it has become very much an object. Then with post, it’s about the chiseling. It’s about putting together an object that is to be shared with the world.

A film is so different from writing a play. Was it a big adjustment for you?
I know theater because I was in it for a decade, probably more, so I knew the very fundamental difference between the way a play is made versus how a film is made. For example, I was taught that in theater, time and space is figurative, while time and space in film is literal. So that means there are different kinds of strengths and weaknesses in both mediums when it comes to telling a story that spans decades and continents. And, in this case, because my joke is always that the villains of the story are 24 years and the Pacific Ocean, it actually needs the time and space to be seen literally… because there needs to be a reason why these two lovers are not together. So the children have to be literally there, and Korea and New York City have to feel tangible and literal.

I assume you can’t wait to direct again?
Oh, I can’t wait. I want to wake up and just go to set tomorrow. That’s how I feel. I’m trying to shoot another movie as soon as I can.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Oscars: Creating New and Old Sounds for The Creator

By Randi Altman

Director Gareth Edwards’ The Creator takes place in 2055 and tells the story of a war between the human race and artificial intelligence. It follows Joshua Taylor (John David Washington), a former special forces agent who is recruited to hunt down and kill The Creator, who is building an AI super weapon that takes the form of a child.

As you can imagine, the film’s soundscape is lush and helps tell this futuristic tale, so much so that it earned an Oscar nomination for its sound team: supervising sound editors/sound designers Erik Aadahl and Ethan Van der Ryn, re-recording mixers Tom Ozanich and Dean Zupancic and production sound mixer Ian Voigt.

L-R: Ethan Van der Ryn and Erik Aadahl

We reached out to Aadahl to talk about the audio post process on The Creator, which was shot guerrilla-style for a documentary feel.

How did you and Ethan collaborate on this one?
Ethan and I have been creative sound partners now for over 17 years. “Mind meld” is the perfect term for us creatively. I think the reason we work so well together is that we are constantly trying to surprise each other with our ideas.

In a sense, we are a lot harder on ourselves than any director and are happiest when we venture into uncharted creative territory with sound. We’ve joked for years that our thermometer for good sound is whether we get goosebumps in a scene. I love our collaboration that way.

How did you split up the work on this one?
We pretty much divide up our duties equally, and on The Creator, we were blessed with an incredible crew. Malte Bieler was our lead sound designer and came up with so many brilliant ideas. David Bach was the ADR and dialogue supervisor, who was in charge of easily one of the most complex dialogue jobs ever, breaking our own records for number of cues, number of spoken languages (some real, some invented), large exterior group sessions and the complexity of robot vocal processing. Jonathan Klein supervised Foley, and Ryan Rubin was the lead music editor for Hans Zimmer’s gorgeous score.

What did director Gareth Edwards ask for in terms of the sound?
Gareth Edwards wanted a sonic style of “retro-futurism” mixed with documentary realism. In a way, we were trying to combine the styles of Terrence Malick and James Cameron: pure expressive realism with pure science-fiction.

Gareth engaged us long before the script was finished — over six years ago — to discuss our approach to this very different film. Our first step was designing a proof-of-concept piece using location scout footage to get the green light, working with Gareth and ILM.

How would you describe the sound?
The style we adopted was to first embrace the real sounds of nature, which we recorded in Cambodia, Laos, Thailand and Vietnam.

For the sound design, Gareth wanted this retro-futurism for much of it, recalling a nostalgia for classic science fiction using analog sound design techniques like vocoders, which were used in the 1970s for films like THX 1138. That style of science fiction could then contrast with the fully futuristic, high-fidelity robot, vehicle and weapon technology.

Gareth wanted sounds that had never been used before and would often make sounds with his mouth that we would recreate. Gareth’s direction for the NOMAD station, which emits tracking beams from Earth’s orbit onto the Earth’s surface, was “It should sound like you’d get cancer if you put your hand in the beam for too long.” I love that kind of direction; Gareth is the best.

This was an international production. What were the challenges of working on different continents and with so many languages?
The Creator was shot on location in eight countries across Asia, including Thailand, Vietnam, Cambodia, Japan and Nepal. As production began, I was in contact with Ian Voigt, the on-location production mixer. He had to adapt to the guerrilla style of filming, inventing new methods of wireless boom recording and new ways of working with the novel camera technology, in close contact with Oren Soffer and Greig Fraser, the film’s directors of photography.

Languages spoken included Thai, Vietnamese, Hindi and Japanese, and we invented futuristic hybrid languages used by the New Asia AI and the robot characters. The on-location crowds also spoke in multiple languages (some human, some robotic or invented) and needed to feel like lived-in reality.

Was that the most challenging part of the job? If not, what was?
The biggest challenge was making an epic movie in a documentary/guerrilla style. Every department had to work at the top of its game.

The first giant challenge had to do with dialogue and ADR. Dialogue supervisor David Bach mentioned frequently that this was the most complex film he’d ever tackled. We broke several of our own records, including the number of principal character languages, the number of ADR cues, the amount and variety of group ADR, and the complexity of dialogue processing.

The Creator

Tom Ozanich

Dialogue and music re-recording mixer Tom Ozanich had more radio communication futzes, all tuned to the unique environments, than we’d ever witnessed. Tom also wrangled more robotic dialogue processing channels of all varieties — from Sony Walkman-style robots to the most advanced AI robots — than we’d ever experienced. Gareth wanted audiences to hear the full range of dialogue treatments, from vintage-style sci-fi voices using vocoders to the most advanced tools we now have.

The second big challenge was fulfilling Gareth’s aesthetic goal: Combine ancient and fully futuristic technologies to create sounds that have never been heard before.

What about the tank battle sequence? Walk us through that process.
The first sequence we ever received from Gareth was the tank battle, shot on a floating village in Thailand. For many months, we designed the sound with zero visual effects. An on-screen caption reading “Tank” or “AI Robot” might be our only clue to what was happening. Gareth also chose to use no music in the sequence, allowing us to paint a lush sonic tapestry of nature sounds, juxtaposed with the horrors of war.

Gareth credits editors Joe Walker, Hank Corwin and Scott Morris for having the bravery not to use temp music there, letting the visceral reality of pure sound design carry the sequence.

Our goal was to create the most immersive and out-of-the-box soundscape we possibly could. With Ethan, I led an extraordinary team of artists who never settled for “good enough.” As so often happens in any art form, serendipity can appear, and the feeling is magic.

One example is for the aforementioned tanks. We spent months trying to come up with a powerful, futuristic and unique tank sound, but none of the experiments felt special enough. In one moment of pure serendipity, as I was driving back from a weekend of skiing at Mammoth, my car veered into the serrated highway median that’s meant to keep drivers from dozing off and driving off the road. The entire car resonated with a monstrous “RAAAAAAAAHHHHHHMMM!!” and I yelled out, “That’s the sound of the tank!” I recorded it, and that’s the sound in the movie. I have the best job in the world.

The incoming missiles needed a haunting quality, and for the shriek of their descent, we used a recording we did of a baboon. The baboon’s trainer told us that if the baboon witnessed a “theft,” he’d be offended and vocalize. So I put my car keys on the ground and pretended not to notice the trainer snatch the keys away from me and shuffle off. The baboon pointed and let out the perfect shriek of injustice.

What about the bridge sequence?
For this sequence, rudimentary, non-AI bomb robots named G-13 and G-14 (à la DARPA) sprint across the floating village bridge to destroy Alfie, an AI superweapon in the form of a young girl (Madeleine Yuna Voyles). We used the bomb robots’ size and weight to convey an imminent death sentence, their footsteps growing in power and ferocity as the danger approached.

Alfie has a special power over technology, and in one of my favorite moments, G-14 kneels before her instead of detonating. Alfie puts her hand to G-14’s head, and during that touch, we took out all of the sound of the surrounding battle. We made the sound of her special power a deep, humming drone. This moment felt quasi-spiritual, so instead of using synthetic sounds, we used the musical drone of a didgeridoo, an Aboriginal instrument with a spiritual undercurrent.

A favorite sonic technique of ours is to blur the lines between organic and synthetic, and this was one of those moments.

What about the Foley process?
Jonathan Klein supervised the Foley, and Foley artists Dan O’Connell and John Cucci brilliantly brought these robots to life. We have many intimate and subtle moments in the film when Foley was critical in realistically grounding our AI and robot characters to the scene.

The lead character, Joshua, has a prosthetic leg and arm, and there, Foley was vital to contrasting the organic to the inorganic. One example is when Joshua is coming out of the pool at the recovery center — his one leg is barefoot, and his other leg is prosthetic and robotic. These Foley details tell Joshua’s story, demonstrating his physical and, by extension, mental complexity.

What studio did you work out of throughout the process?
We did all of the sound design and editing at our facility on the Warner Bros. studio lot in Burbank.

We broke our own record for the number of mixing stages across two continents. Besides working at WB De Lane Lea in London, we used Stages 5 and 6 at Warner Bros. in Burbank. We were in Stages 2 and 4 at Formosa’s Paramount stages and Stage 1 at Signature Post. This doesn’t even include additional predub and nearfield stages.

The sound team with Gareth Edwards on Warner’s Stage 5.

In the mix, both Tom Ozanich and Dean Zupancic shifted beautifully from the most delicate and intimate moments to the most grand and powerful.

Do you enjoy working on VFX-heavy films and sci-fi in particular? Does it give you more freedom in creating sounds that aren’t of this world?
Sound is half of the cinematic experience and is central to the storytelling of The Creator — from sonic natural realism to pure sonic science fiction. We made this combination of the ancient and futuristic for the most unique project I’ve ever had the joy to work on.

Science fiction gives us such latitude, letting us dance between sonic reality and the unreal. And working with amazing visual effects artists allows for a beautiful cross-pollination between sound and picture. It brings out the best in both of our disciplines.

What were some tools you used in your work on The Creator?
The first answer: lots of microphones. Most of the sounds in The Creator are real and organic recordings or manipulated real recordings — from the nature ambiances to the wide range of technologies, from retro to fully futuristic.

Of course, Avid Pro Tools was our sound editing platform, and we used dozens of plugins to make the universe of sound we wanted audiences to hear. We had a special affinity for digital versions of classic analog vocoders, especially for the robot police vocals.
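For readers curious what those vintage robot-voice treatments actually do to a signal: the simplest relative of the vocoder family is ring modulation, which multiplies the voice by a sine carrier to produce the classic metallic sci-fi timbre. As a hypothetical illustration only (not the team’s actual plugin chain), here is a minimal NumPy sketch of the effect:

```python
import numpy as np

def ring_modulate(signal, sample_rate, carrier_hz=30.0):
    """Multiply a mono signal by a sine carrier -- the classic
    metallic 'robot voice' trick used in vintage sci-fi."""
    t = np.arange(len(signal)) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    return signal * carrier

# Demo on a synthetic 440 Hz tone standing in for a voice
sr = 16000
t = np.arange(sr) / sr                      # one second of time stamps
voice = 0.5 * np.sin(2 * np.pi * 440 * t)
robot = ring_modulate(voice, sr)
# Ring modulation replaces the original pitch with sum and
# difference frequencies (here 440 +/- 30 Hz), which is where
# the hollow, metallic character comes from.
print(robot.shape)
```

A true channel vocoder goes further, imposing the voice’s per-band energy envelope onto a synthetic carrier, but the sum-and-difference sidebands above are the core of the retro sound the interview describes.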

The Oscar-nominated sound team for The Creator pictured with director Gareth Edwards.

Finally, congrats on the nomination. What do you think it was about this film that got the attention of Academy members?
Our credo is “We can never inspire an audience until we inspire ourselves,” and we are so honored and grateful that enough Academy members experienced The Creator and felt inspired to bring us to this moment.

Gareth and our whole team have created a unique cinematic experience. We hope that more of the world not only watches it, but hears it, in the best environment possible.

(Check out this behind-the-scenes video of the team working on The Creator.)


Holdovers

The Holdovers Oscar-Nominated Kevin Tent Talks Editing Workflow

By Iain Blair

Editor Kevin Tent, ACE, has had a long and fruitful collaboration with director Alexander Payne. Their first film together was 1995’s Citizen Ruth, and he’s edited all of Payne’s films since, including the Oscar-nominated Sideways. Tent earned his first Academy Award nod for his work on The Descendants.

Kevin Tent, ACE. Photo by Peter Zakhary

Tent was just Oscar-nominated again, this time for his work on Payne’s new film The Holdovers, a bittersweet holiday story about three lonely people marooned at a New England boarding school over winter break in 1970. I spoke with Tent about his workflow and editing the film, which got five Oscar noms, including for Best Picture.

How did the process start with Alexander?
When he first had the idea, he didn’t even have a full script yet with writer David Hemingson. I read the first few drafts, and it was in very good shape, even early on. I would give him my comments on the script and stuff like that, and he’d take them or not. Then when they began shooting, I started cutting right away.

Alexander told me you went to the set on his very early films like Citizen Ruth, but not really since.
Yeah, as usual, I stayed here in LA for this film, while he shot in Boston. I do like to go to the set just for one day to say hello to the actors and everyone, but after you’re there for 20 minutes and have nothing else to do, you’re like, “What the hell am I doing here?” So I don’t spend too much time on-set, and anyway, there’s so much work to be done at the cutting room, as I’m doing an assembly while he shoots.

I assume you’re in constant contact during the shoot.
Yes, we talk every day, at least once a day. I usually send him cut scenes for the weekend if he wants to see them, but on the last couple of movies, he hasn’t wanted to watch cut scenes while he has a weekend off. I think he’s got too many other production issues on his plate to have the time to do that.

He doesn’t watch dailies anymore either, and it works out really well because by the time he comes back to the cutting room, he’s going to look at the dailies with fresh eyes. And I know the footage fairly well by then because I’ve been through it and cut it. So we’re both kind of on an even playing field when we start to cut together.

I know he shoots very precisely, so it’s not like there’s a ton of material you have to wade through and cut?
Right, he was really focused on his coverage on this, which was good. He’s always super-smart about coverage. He doesn’t want to burn out his actors on wide shots and masters and stuff like that. So he gets what he thinks he needs to get us in and out of scenes. Then he spends a lot of time letting the actors find their footing and their characters and give their performances.

Holdovers

He’s a four-to-six takes guy on average, but he allows his actors to take their time and get these great performances. It’s our job when we get to the cutting room to try to condense them and make them efficient… pace ’em up, that kind of thing. We’re getting great raw material, and then our challenge is usually trying to get it all moving and flowing together.

He told me that you’d work on the edit at his home in Omaha for a while and then come back to LA when you had to spend a lot more time here for post?
Yeah, we would go there for a month, come back to LA and then go back again. I was there for Citizen Ruth and About Schmidt. I like it back there, and we had a good time. He’s got an awesome place.

We would cut there using Jump Desktop — we’d log in and do all the editing remotely. It was remarkable; we were able to cut away on the computer in California from his place in Omaha. We would then use Evercast for work sessions with associate editor Mindy Elliott, assistant editor Alyssa Donovan-Browning, music editor Richard Ford and sound supervisor Frank Gaeta. The whole process was efficient and phenomenal.

The whole thing from shooting to finishing was probably nine months. So not overly long. Then we spent a month or so doing the final mix and the DI and all that stuff.

This is Alexander’s first period film, and I loved all the dissolves you used that you don’t see in movies so much anymore.
Yeah, that’s true I think, but we love them. They’re so beautiful. They create emotion, and I was a fan of them even before Alexander and I started working together. I always thought they were amazing, and we’ve been doing them forever, going back to Citizen Ruth. We also did some really long dissolves in Election and in About Schmidt, which has a bunch of really beautiful dissolve sequences when his wife dies. There’s a huge, two-minute dissolve sequence of Warren Schmidt after she passes.

What was the most difficult scene to edit?
There were a couple. It would seem like they were simple, but we spent a lot of time on them. First, the scene where Paul gets fired at the end and takes the hit for the kid. I wouldn’t say we struggled, but we were constantly finessing it, going back and taking things out and putting things back and trying to get it just right. That one took us quite a while.

Then there were scenes that were a little long that needed condensing just to get them right. The first scene, with Mary and Paul watching The Newlywed Game, was a challenge because it had a fair amount of dialogue that we lost. That was a tricky scene because it had a lot of stuff going on in it. It had emotion about her son and her anger with the kids at the school. There were a lot of different transitions and stuff going on characterwise, and that was a challenge in that little area.

I assume you did a fair amount of music temps?
Yes. Mindy Elliott, who’s been our assistant forever but got an associate editor credit this time around, was the first one who imported music from The Swingle Singers, which is the a cappella Christmas music we hear. It was a great call because I was having trouble “hearing” whatever the music would be in the movie. That ended up being a huge element we embraced — using that type of music throughout.

At points it’s ironic and kind of funny, and at other times it’s very poignant, and it became a really important musical element in the movie. We also worked with our music producer/supervisor Richard Ford, and he’s brilliant. He also started bringing in lots of music, including scores from Mark Orton, whom we’d worked with on Nebraska. That became the score sound of the movie. Then we threw in all our fun ‘60s music — that’s just a free-for-all. It worked great, but then you find out it costs $100,000, and you can’t get it. That happens all the time.

Obviously, it’s not a big visual effects movie, but there are some. Were you doing temps for those as well?
Yeah, we had things like comps and fluid morph, but the visual effects were really all about evening out the snow. There were certain scenes that needed it, like when all the people are leaving the church and there’s snow coming down. Believe it or not, that scene was shot on the same day as the scene where all the boys are talking out in front of the truck. It was blue skies in the morning, then it was snowing like crazy, then a couple hours later, it was all blue skies again.

Crafty Apes did the visual effects, adding snow, wetting the road, putting clouds in the sky, adding some snowflakes at points and trying to make it match a little closer to what was shot earlier in the day… those kind of things.

Tell us about the post workflow and the editing gear you used.
We cut on Avid Media Composer 2018, supplied by Atlas Digital aka Runway Edit. While in production, associate editor Mindy Elliott, assistant editor Alyssa Donovan-Browning and I worked from our homes, and we worked with separate projects, which we kept updated via Dropbox. We had separate drives as well.

Our dailies were provided by Harbor in New York and London, and each morning — sometime between 2am and 6am — they would send us a downloadable link. I would check my email around 4am, and if I had a link, I’d start the download and then go back to bed. Around 8am, Mindy would use TeamViewer to log on to my computer and copy the organized dailies bins, etc. onto my local drive. Mindy and Alyssa also had local drives. We communicated constantly using a dedicated Telegram Messenger chat, and whenever I needed anything or there was new media (small amounts), we used TeamViewer and Dropbox to download and import it.

Once Alexander was back in town, we moved into a more traditional cutting room in North Hollywood, and we switched over to a Nexis shared media storage in the same building as our cutting rooms. Once we were finalizing the cut, we moved permanently back to LA to finish, and we mixed in Santa Monica with our longtime mixer, Patrick Cyccone.

Were you involved in the DI at all? Did you go to the sessions?
I didn’t go so much on this one because our DP, Eigil Bryld, was shooting in New York, and Alexander and Eigil did it at Harbor in New York with colorist Joe Gawler. I would see it, and when they were done, I’d go and we’d screen it.

What makes it such an enduring partnership with Alexander?
I think we’re both pretty easygoing guys, and we’re both always looking to enjoy life. We take our work seriously, but we don’t ever let it get ugly. We have a good time when we’re working together, and we work hard, but we keep a positive attitude. I guess we’re just very similar in that respect. And we’re pals after all these years, so it’s not even like going to work when we work together. We basically are doing our job and having fun.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.


Oscar-Nominated Sound Pro Talks The Zone of Interest and Poor Things

Supervising sound editor, sound designer and re-recording mixer Johnnie Burn has been busy working on not one but two of this year’s Oscar-nominated films, The Zone of Interest and Poor Things. One is about the horrors of the Holocaust, and the other is a whimsical tale of rebirth and the love of life.

Poor Things

Johnnie Burn

Burn reunites with director Jonathan Glazer on The Zone of Interest after previously working on Glazer’s first film, Under the Skin. While collaborating again on The Zone of Interest, Burn and Glazer aimed for a super-realistic soundscape to maintain the authenticity of the sounds that would have been heard during the Holocaust. Burn and production sound mixer Tarn Willers were nominated for a Best Sound Oscar for their work on the film.

Clearly adept at developing long-lasting collaborations, Burn also recently reunited with The Favourite and The Lobster director Yorgos Lanthimos on Poor Things. Burn worked to develop a soundscape to match the atmosphere of the film.

We reached out to Burn to find out more about his work on both films. Let’s start with the Holocaust film The Zone of Interest

You have worked with Jonathan Glazer before. When did he approach you about this film? And how does that relationship work in terms of shorthand, etc.?
I’ve known Jon for 27 years, and this was our second film. A decade ago, we made Under the Skin and learned a lot about how we like to use sound for a different kind of cinematic immersion. Over the years, Jon had mentioned this film, The Zone of Interest, but it was only when he gave me the script a couple of years out that I realized just how much it would rely on sound.

We agreed that we didn’t know specifically how we would do it, telling the story of the atrocities through sound. We knew that he was going to go and shoot the film, and in a year, he would return to begin post production. Over that time I needed to become an expert on what Auschwitz sounded like in 1943 — the motorbikes that passed by the road outside the camp, the nationalities of the prisoners and so on — and learn the detail of the events that took place there, which led to murder and mass murder on a daily basis.

In terms of our shorthand, I knew Jon wasn’t going to accept any form of mockup representation in sound. Only a soundscape with immense integrity would sound right over such documentary images.

The film is intense and covers a very serious topic — quite a different tone than Poor Things. Can you talk about approaching this film versus Poor Things?
The Zone of Interest was very intense indeed, and the sound is very much an extraordinary counternarrative to the images you see. Jon and I always thought of it as two films: one being the film you see and the other being the film you hear.

Film 1 is a family drama — an exceedingly immersive journey into a family and its house in 1943. It was filmed with many scenes taking place simultaneously thanks to hidden cameras (Jon rigged multiple cameras around the house, allowing the actors to improvise; they were often unaware the cameras were even rolling). Some takes were an hour long so that the actors could just “be.” We can observe and keep our critical distance, which really allows us to ponder how like us they are.

Film 2 is the sound that comes over the wall from the concentration camp. It is the sound that the occupants of the house ignore, and it is the worst horrorscape imaginable. We created a scientific representation based on substantial research of the atrocities that took place in Poland in 1943.

To be honest, The Zone of Interest was such a difficult immersion mentally that it bore no relation to working on Poor Things whatsoever. For Zone, I had to become an expert on the sound of genocide, and for Poor Things, I had to make imaginary worlds come to life.

What were some of the more challenging scenes in the film from an audio perspective?
I think every scene was challenging, as the whole thing was pretty awful to listen to and awful to work on.

Probably the hardest was the scene where Rudolf stands in his garden in the evening and smokes a cigar whilst we hear the sound of the gas chamber and crematoria in operation. This was something I had researched elaborately. There was much testimony on the terrifying, howling chorus of pain; the banging and scratching at the doors; the revving of motorbikes, which the guards would use to attempt to block out the horror; and then the silence after, and the hum of the ovens. Credibly creating all these sounds whilst not sensationalizing the material and whilst respecting the victims and survivors was a terrific knife edge.

Two very different worlds, even as they are so close together. The home is filled with laughter and the garden with birds, but then the horrifying sounds from the camp.
The home is really about the idyllic lifestyle of Hedwig and Rudolf and their young family. Hedwig finally has the house and garden she has dreamed of. They are finally fulfilling not just their own dream, but the National Socialist dream of heading east and finding their own “living space.”

On the most basic level, they all block out the thing that allows them to be there — the sound that comes over the garden wall of the daily murder by gassing, the occasional gunshot and the torture of the prisoners. As viewers, we hear this. We know that you can block your eyes, but you cannot block your ears, so we wonder why they don’t react. But it is their choice, on some level, to do so.

Poor Things

Talk about your role on Poor Things.
I worked as sound designer/sound supervisor/re-recording mixer. Often a film needs a bunch of supervisors all working individually, who then bring the mix together at the final mixing stage. Instead, I work with a team of first assistants in one iteration of software, and we sculpt the mix as we go.

Who else was on your team, and how did you split up the duties?
First assistant Simon Carroll helps me with so much, and then I have two second assistants. When a film comes in, we watch it and decide what we need to do. We all work in the same software, on the same timeline, all at once. So if I need some more footsteps in a scene, for example, I will throw a marker in the “To Do” marker list. One of the guys who is feeling happy about feet that day will hit it! This extends out to all sorts of sound design and premixing work. My team is exceedingly talented and is adept at many disciplines — dialogue editing and clean-up, Foley editing, design and mix work. Being so diverse keeps it interesting for them too.

How would you describe the soundscape of Poor Things?
The sound design for Poor Things is a sophisticated blend of seemingly real-world authenticity in a rather surreal environment; creative sound manipulation; and a deep understanding of the film’s narrative and emotional context. The work is integral to the overall impact of the film. We created soundscapes that are both beautiful and surreal yet feel “normal” over the extraordinary visuals of the film. It was also designed to allow the actors’ performances to really sing.

You are a frequent collaborator of Yorgos’. When did he approach you about this film, and how does that relationship work?
Yes, this was our fourth film together. I am so lucky! We spoke about the film quite a while back. We have a very good shorthand really. During the making of the post soundscape for The Killing of a Sacred Deer, Yorgos had to leave at the start of the process. The fact that he didn’t walk out when he heard it at the Cannes premiere made for a good level of trust between us!

The film is a comedy/fantasy. How did you use sound to help tell this story? Does it change much from the time Bella (Emma Stone) is brought back to life and is still learning how to live to when she regains some of her freedom/independence?
Absolutely. The sound develops in terms of age characteristic and playfulness, becoming more mature as it goes on. The soundscapes my team and I created help to lend credibility to the extraordinary set builds, and therefore in some way they go toward helping with world-building. Here, great sound design ideally goes unnoticed! But there are some fun, standout bits, like squishy frogs, barking chickens and ships with a heartbeat.

What were some of the more challenging scenes in the film from an audio perspective?
Probably the most challenging were the scenes that show Bella’s extraordinary character growing up. We had to keep the playful tone without undermining a serious message. Also, not one of these places has ever actually existed, nor, given how surreal they are, has anywhere quite like them. So the challenge was finding sounds that were unique and bizarre enough to work but not so much that they attracted attention.

Any scenes that stand out as a favorite?
I really love the opening scene in the hallway, with Bella on her bike. Jerskin Fendrix’s extraordinary score is playing and the chicken-dog is barking. The sound design over the end credits was great fun too.

What kind of notes/direction were you getting from Yorgos?
Yorgos and I have such a shorthand, and he really is the most supportive director ever. He hires you because he knows you understand his filmmaking, and then he creates enormous space for you to work in. He knows that I can get going without much of a brief, and then we meet late in post to see where I am at.

Where did you do the work, and what tools did you use?
My team and I recorded sound in the field and then worked out of Wave Studios in London for editorial and premixing. We final-mixed at Halo Post in London. I have a great Dolby Atmos room at my home in Brighton, on the south coast below London. I work predominantly in a less common software called Nuendo [from Steinberg]. It accommodates thousands of audio tracks, network collaboration with my team, and huge sound design and Dolby Atmos mix opportunities. More people should use it.

What haven’t I asked that’s important? 
Integrating with Jerskin’s amazing Oscar-nominated score! Yorgos had never used a composer before, and previously I would have had to stitch together disparate musical tonalities into a cohesive soundscape. Not anymore! His score is so unusual and singular that it really made me adapt my soundscape to its essence. Plus, he is a lovely guy!

Sunday Ticket

Creating Sounds for NFL Sunday Ticket Super Bowl Spot

Recreating what a football player might sound like as a bird letting loose with a caw isn’t your usual Super Bowl spot brief… but that was the heart of what Alt_Mix had to do when creating the sound design for Migration, the NFL Sunday Ticket ad that ran right before kickoff of Super Bowl LVIII.

Conceived by YouTube Creative Studio and produced by MJZ, the spot shows what happens when football players take to the skies in their annual end-of-season migration. YouTube Creative Studio turned to Alt_Mix, a New York-based audio post studio founded by veteran mixer Cory Melious, for the second year in a row to provide complete audio mixing and sound design services for its Super Bowl commercial.

Migration opens with a bird watcher raising binoculars to his eyes. “Beautiful, isn’t it?” he says softly as an orchestral score from music studio Walker rises in the background and we hear the far-off cawing of the flying gridsters. “Each year they must follow the path of migration, but never fear, they’ll be back,” he says as we see the players swooping in to grab a fish from a lake or alighting gently just outside a cabin.

Alt_Mix handled all aspects of the spot’s final audio, including sound design from the ground up, voiceover recording and mix.

The greatest challenge was figuring out what a football-playing “birdman” should sound like. “There was a lot of testing and experimentation in coming up with just the right sound for their calls,” says Melious, who’s something of an amateur birder himself. “The creative team had a really good idea of what they wanted us to achieve, and it was our job to help them articulate that with sound. We did lots of variations, and in the end, we mixed humans making bird sounds with actual bird calls to get just the right pitch and tone.”

The spot features a number of players, such as D’Andre Swift, the running back for the Philadelphia Eagles; Baltimore Ravens tight end Mark Andrews; and Seattle Seahawks wide receiver Tyler Lockett. Also appearing at the end of the spot, watching Sunday Ticket in the cabin scene, are the popular YouTube Creators Deestroying, Pierson Wodzynski and Sean Evans.

There was an interesting interplay between the artists doing the edit (Joint), the effects and finishing (Blacksmith) and the soundscape his studio created, Melious adds. “They recognized that the sound had to be strong in order to sell the idea of a football player-sized bird that migrates.”

For instance, they were editing the Tyler Lockett scene with no sound on him. “But once they laid the soundtrack on, it became a laugh-out-loud moment,” says Melious. “For the story to work, we needed to connect the details seen in the visuals to make them believable, so we worked really hard to bring those tiny movements alive with sound, like when the tree branch snapped after a player landed on it, or the dust and debris kicked up when they landed by the cabin. It’s all about elevating the viewers’ experience.”

 


Post and VFX Houses Team for CrowdStrike Super Bowl Spot

For the second consecutive year, CrowdStrike is airing a spot during the Super Bowl. This year’s ad features CrowdStrike’s AI-powered cybersecurity heroine Charlotte as she tackles modern cyber adversaries and stops breaches. The Future brings a stylized spin to a classic Western tale to show how CrowdStrike is securing the future of the digital frontier. The ad will air during the two-minute warning in the first half of the game.

Last year’s ad looked back at how the company would have stopped history’s most infamous breach: The battle of Troy. This year’s commercial is set in a futuristic Wild West and tells the story of good versus evil as four notorious nation-state and e-Crime cyber adversaries ride into town looking to cause turmoil and disruption. Armed with the power of the AI-native CrowdStrike Falcon XDR platform, Charlotte rapidly detects the threats and stops them.

The 30-second commercial, shot on the Sony Venice camera, was helmed by director Tarsem Singh of RadicalMedia and produced by CrowdStrike’s newly formed internal creative agency, Redbird, in collaboration with Howdy Sound, Lime Santa Monica, Nice Shoes, RadicalMedia, Union Editorial and Zoic Studios.

“It was great to be reunited with the amazing CrowdStrike team. They come to the table knowing what they want, while giving me room to experiment,” says editor John Bradley of Union Austin, who also cut last year’s Troy. “The offline rough-cut portion of any VFX-driven project requires you to use your imagination, and this was a much more effects-heavy spot than Troy. Every shot had several layers of VFX work to be done.” Bradley cut The Future on Adobe Premiere.

For its part, Zoic used a broad spectrum of software and tools but mainly relied on SideFX Houdini, Autodesk Maya and Foundry Nuke to achieve the majority of the VFX lift.

“We used the practical foreground set buildings and built a full-CG world/environment around them,” says Zoic EP Sabrina Harrison-Gazdik. “Set pieces were enhanced and modified as needed to tie them into the environment. The extensive environment build is implemented into all exterior shots in the spot(s).” The interior saloon shots also required VFX work across the scenes — concepting, building and animating tabletop game holographics, the holographic treatment for the piano player and dancers, the sheriff’s robot arm, the robot bartender — and each shot included one or more VFX elements.

Nothing was captured on a volume or against greenscreen; everything was shot on location. “VFX worked off hero takes and/or clean plates where able to integrate CG into practical locations,” explains Harrison-Gazdik.

“Originally, we were going to grade without the alpha channel mattes for every shot, but as we were grading, there was one particular shot where it was difficult to grade the adversaries and the footage separately,” recalls colorist Gene Curley of Nice Shoes. Curley worked on a FilmLight Baselight and had alpha channel mattes for some shots. “Zoic was able to quickly render these mattes during the color session, and it made the shot much easier to grade.”

When it came to sound design, “The call was for futuristic sci-fi characters in an Old West environment,” says Dusty Albertz of Howdy Sound. “I think we succeeded in crafting a soundscape that is both believable and fun.”

Mixer Zac Fisher of Lime adds, “It was important to find the right blend between the nostalgic undertones of the old-timey sound bed and the futuristic elements of the sound design. CrowdStrike’s collaboration with exceptional composers and sound designers made for an ideal mixing experience. I wanted to make sure I focused on enhancing the comedic elements to capture the audience’s attention. In a spontaneous moment during the final stages of the mix, there was a suggestion to include a whip crack to finish the spot. Everyone in the room ended up loving it, and personally, it was my favorite touch of the project.”

Union Austin EP Vicki Russell says this was one of the smoothest processes she’s ever experienced, “especially in the realm of VFX-heavy Super Bowl spots, where there might be added pressure. It’s been a sheer joy, with all the partners working so well together. CrowdStrike/Redbird consistently provided great feedback and maintained a very inclusive, appreciative vibe.”

You can watch The Future, which features an original score by Douglas Fischer, on YouTube before it airs.


dearVR Pro 2

Dear Reality Launches dearVR Pro 2 Spatializer Plugin

Spatial audio company Dear Reality has launched dearVR Pro 2, the successor to its dearVR Pro spatializer. The upgrade adds a stereo input, including stereo width control, to the plugin and gives users access to new immersive Pro Tools formats. dearVR Pro 2 also features new high-pass and low-pass filters for early reflections and late reverb. In addition, the new Mk II software supports third-party OSC head trackers.

“Simulating distance in spatial audio productions is essential when creating truly authentic 3D spaces. Just one spherical plane with a fixed distance around the listener will not do the trick,” explains Dear Reality co-founder Christian Sander. “dearVR Pro 2’s distance simulation unlocks fully natural depth perception for multi-channel formats and lets the user place the sound seamlessly — even behind the loudspeakers.”

The dearVR Pro 2 output section features 35 multi-channel loudspeaker formats, including the latest 9.0.4, 9.0.6 and 9.1.4 Pro Tools DAW formats. The all-in-one spatializer offers ambisonics (up to third order) and binaural outputs, making dearVR Pro 2 very useful for advanced XR productions.

The software’s new stereo input feature expands this concept to stereo tracks, allowing for direct access to the stereo width and making the spatializer plugin well-suited for stem productions. “Recording audio with stereo miking techniques is still common practice, even for the creation of advanced three-dimensional experiences,” says Dear Reality’s Felix Lau. “dearVR Pro 2 helps the engineer to effortlessly position stereo recordings of instruments and ambiance in a spatial sound field.”
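Stereo width controls like this are typically built on mid/side processing: the stereo pair is decomposed into the content the channels share (mid) and the content that differs (side), and the side component is scaled. A minimal sketch of the general technique, not Dear Reality's actual algorithm:

```python
def adjust_stereo_width(left, right, width):
    """Scale the stereo width of a sample pair via mid/side processing."""
    mid = (left + right) / 2.0   # what both channels share
    side = (left - right) / 2.0  # what differs between channels
    side *= width                # scale the stereo difference
    return mid + side, mid - side
```

A width of 0.0 collapses the image to mono, 1.0 passes it through unchanged, and values above 1.0 widen it (at the risk of mono-compatibility problems).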

In addition to the existing connection to the dearVR Spatial Connect VR controller, dearVR Pro 2 can now also connect to any third-party OSC head tracker using the included Spatial Connect Adapter. This provides extended head-tracking control and a much more natural way to judge immersive productions.

dearVR Pro 2 is regularly priced at $199 but is available for $149 during an introductory period that lasts until January 31. Existing dearVR Pro users can upgrade to the Mk II version for $79.


Ci Media

Sony’s Ci Media Cloud Targets Small- and Medium-Sized Post Houses

Sony’s Ci Media Cloud, a cloud-based media management and collaboration solution that allows users to capture, back up, review, transform and run streamlined post workflows, has updated its offerings to include a plan for small- and medium-sized companies. The update also includes Ci Transfer, a new desktop file transfer app, and enhanced support for Dolby Atmos audio files.

“We’re continually evolving our Ci platform to better address users’ needs and become an even more intrinsic part of their daily workflow,” says David Rosen, VP of cloud apps and services at Sony Electronics. “With the addition of a plan created to address the requirements and budget of small- and medium-sized businesses, a new desktop app experience and Dolby Atmos audio for immersive, authentic sound, we’re making Ci even more accessible [and easier to use] for creatives and organizations in every facet of the industry.”

Ci just launched a new Business Plan as an addition to the Free, Pro and Team online plans, complementing the custom Enterprise and Enterprise+ offerings. Optimized for independent creators or organizations with multiple teams, this online subscription option is a step up from Ci’s more introductory Team subscription. At about $249 a month, businesses can create unlimited Workspaces, access 1TB of active storage, take advantage of 4TB of archive storage and leverage 500GB of monthly data transfers. Featuring some of the more advanced capabilities of Ci’s enterprise-tier plans, including custom branding and usage analytics, the new tier offers valuable benefits at a reasonable price, with simplified onboarding.

Ci Transfer Desktop App and Sidecar Ingest Workflow
Ci Transfer is a desktop app that enables high-speed transfer of files and folders to and from the Ci Media Cloud platform. Available for Mac and PC devices, this new app supports multi-gig speed transfers, retries transfers after interruptions and is optimized to handle hundreds of files. With pause and resume of transfers as well as transfer reports, urgent files can be prioritized for expedited transfers and once complete, validated in transfer logs.

For larger organizations migrating archive libraries, the Ci Transfer app can be paired with a new sidecar ingest workflow that allows users to move media libraries without losing sidecar metadata. By uploading JSON sidecar files along with the media files, Ci will automatically link the metadata to the media files. Ci Transfer is available to download for free for all Ci users. The sidecar ingest workflow is an entitlement feature for enterprise customers.
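The sidecar pattern itself is simple: metadata travels in a JSON file named after its media file, and ingest links the two by basename. A rough sketch of that matching logic (Ci's actual sidecar schema is not described here, so any field names in the metadata are hypothetical):

```python
import json
from pathlib import Path

MEDIA_EXTS = {".mov", ".mxf", ".mp4", ".wav"}

def link_sidecars(folder):
    """Pair each media file with its same-named .json sidecar, if present."""
    linked = {}
    for media in Path(folder).iterdir():
        if media.suffix.lower() not in MEDIA_EXTS:
            continue
        sidecar = media.with_suffix(".json")  # e.g. clip001.mov -> clip001.json
        metadata = json.loads(sidecar.read_text()) if sidecar.exists() else None
        linked[media.name] = metadata
    return linked
```

Clips without a sidecar still ingest; they simply carry no linked metadata.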

Dolby Atmos Support
Video and audio files with Dolby Atmos can now be uploaded and played in Ci’s player. The integration with Dolby.io enables support for ADM (Audio Definition Model) BWF (Broadcast Wave Format) Atmos audio, and users can choose to listen to and download binaural or stereo audio proxies directly in Ci. Users can also see Loudness Unit Full Scale (LUFS) and True Peak levels in Ci’s built-in media analysis tool. Creative and quality assurance (QA) teams can listen to and work with immersive content as it was intended to be heard.
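LUFS and True Peak are both defined by ITU-R BS.1770: LUFS is a K-weighted, gated mean-square measurement, and True Peak estimates inter-sample overshoots via oversampled interpolation. Stripped of the weighting filters and gating, the core of the loudness computation is just a mean-square level in dB, as this deliberately simplified sketch shows (it is not a compliant meter):

```python
import math

def mean_square_loudness_db(samples):
    """Ungated, unweighted mean-square level in dB relative to full scale.

    A compliant LUFS meter (ITU-R BS.1770) would first K-weight the
    signal and gate out quiet 400 ms blocks; this sketch omits both.
    """
    ms = sum(s * s for s in samples) / len(samples)
    return -math.inf if ms == 0 else 10 * math.log10(ms)
```

A full-scale sine measures about -3 dB here; a real meter applies the K-weighting filter pair and drops gated blocks before averaging, and a True Peak reading additionally requires at least 4x oversampling.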

Detroit’s Another Country Ups Joe Philips to Creative Director

Detroit-based Another Country has promoted Joe Philips to creative director after he spent the past decade as creative director of sonic branding and composer for the company.

Philips is known for his ability to pair music and sound with moving images, having worked with clients including Pokémon, Pepsi, GMC, Buick, the Detroit Lions and the Association of Independent Commercial Producers (AICP).

He uses his expertise in music composition, sound design, sonic branding and music supervision to help Another Country’s clients and their projects.

Starting his career as a musician, Philips went on to become a composer, songwriter, audio engineer and record producer. Through his experiences as music supervisor and composer for NYC-based sonic branding agency Made Music Studio, he discovered his passion for strengthening brands through music and sound. Teaching sound design at Detroit’s College for Creative Studies since 2020 has kept him updated on the latest tech and trends.

“Joe is equal parts artist and technician, which is perfect to lead our charge in Detroit,” says managing director Tim Konn. “Joe’s passion for all things audio is inspiring, and his excitement is contagious. He’s also very thoughtful and thorough, always bringing unique insights that elevate whatever he touches.”

“One of Another Country’s greatest strengths is that we’re able to handle just about any audio-related project — big or small,” says Philips. “Between our two locations, we have a deep bench of hard-working audio minds and a strong professional network that spans the globe.”

Another Country is part of Cutters Studios. “We benefit so much from being in the same building with editors, animators, graphic designers,” explains Philips. “It allows us to openly communicate during every step of the process and collaborate extensively, work together to solve problems and ultimately create the best output for our clients.”

 

iZotope AI Voice Enhancement Now Available

Native Instruments has released iZotope VEA, a new AI-powered voice enhancement assistant for content creators and podcasters.

VEA features AI technology that listens first and then enhances audio so creators can feel more confident with their voices and deliver better-sounding content. VEA increases clarity, sets more consistent levels and reduces background noise on any voice recording.

VEA works as a plugin within major digital audio workstations and nonlinear editors. For a list of officially supported hosts, see iZotope’s system requirements.

VEA features three simple controls that are intelligently set by iZotope’s AI technology. Those who are more familiar with editing vocal recordings will find a new way to finish productions quickly by consolidating their effects chains and saving on CPU.

Key features:

  • The Shape control ensures audio sounds professional and audience-ready without having to worry about an EQ. Shape is tailored to each voice and matches the sound of top creators or podcasts with the free iZotope Audiolens tool.
  • The Boost control adds loudness and compression as it’s turned up. Users can easily boost the presence and power of voice recordings without spending time struggling with settings. Boost delivers a smooth and even sound to speech for a more engaging listening experience.
  • The Clean control takes background noise out of the spotlight so every voice can shine. VEA learns the noise in the room automatically and preserves speech for light, transparent noise reduction.
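Of the three controls, Boost is the most conventional under the hood: loudness plus compression is classically a gain computer that attenuates levels above a threshold and then adds make-up gain. A sketch of that general technique (a generic hard-knee compressor, not iZotope's implementation; the default parameter values are arbitrary):

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    """Static gain (dB) applied by a hard-knee downward compressor."""
    if level_db <= threshold_db:
        return makeup_db                      # below threshold: make-up only
    over = level_db - threshold_db            # dB above threshold
    compressed = threshold_db + over / ratio  # output level before make-up
    return compressed - level_db + makeup_db  # net gain change
```

With these defaults, a -6 dB peak is pulled down 9 dB by the 4:1 ratio and then lifted 6 dB of make-up, for a net change of -3 dB, while quiet passages come up 6 dB: the loud and soft parts of a voice end up closer together.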

VEA is available now for $29.


Academy Announces Scientific and Technical Oscar Awards, Plaques

The Academy of Motion Picture Arts and Sciences will celebrate 16 scientific and technical achievements at its annual Scientific and Technical Awards ceremony on February 23 at the Academy Museum of Motion Pictures.

“Each year, a global group of technology practitioners and experts sets out to examine the extraordinary tools and techniques employed in the creation of motion pictures,” reports Barbara Ford Grant, chair of the Academy’s Scientific and Technical Awards Committee, which oversees the vetting of the awards. “This year, we honor 16 technologies for their exceptional contributions to how we craft and enhance the movie experience, from the safe execution of on-set special effects to new levels of image presentation fidelity and immersive sound to open frameworks that enable artists to share their digital creations across different software and studios seamlessly. These remarkable achievements in the arts and sciences of filmmaking have propelled our medium to unprecedented levels of greatness.”

Unlike other Academy Awards to be presented this year, achievements receiving Scientific and Technical Awards need not have been developed and introduced during a specified period. Instead, the achievements must demonstrate a proven record of contributing significant value to the process of making motion pictures.

The Academy Awards for scientific and technical achievements are:

TECHNICAL ACHIEVEMENT AWARDS (ACADEMY CERTIFICATES)

To Bill Beck for his pioneering use of semiconductor lasers for theatrical laser projection systems.

Bill Beck’s advocacy and education to the cinema industry while at Laser Light Engines contributed to the transition to laser projection in theatrical exhibition.

To Gregory T. Niven for his pioneering work in using laser diodes for theatrical laser projection systems.

At Novalux and Necsel, Gregory T. Niven demonstrated and refined specifications for laser light sources for theatrical exhibition, leading the industry’s transition to laser cinema projection technology.

To Yoshitaka Nakatsu, Yoji Nagao, Tsuyoshi Hirao, Tomonori Morizumi and Kazuma Kozuru for their development of laser diodes for theatrical laser projection systems.

Yoshitaka Nakatsu, Yoji Nagao, Tsuyoshi Hirao, Tomonori Morizumi and Kazuma Kozuru collaborated closely with cinema professionals and manufacturers while at Nichia Corporation Laser Diode Division, leading to the development and industry-wide adoption of blue and green laser modules producing wavelengths and power levels matching the specific needs of the cinema market.

To Arnold Peterson and Elia P. Popov for their ongoing design and engineering, and to John Frazier for the initial concept of the Blind Driver Roof Pod.

The roof pod improves the safety, speed and range of stunt driving, extending the options for camera placement while acquiring picture car footage with talent in the vehicle, leading to rapid adoption across the industry.

To Jon G. Belyeu for the design and engineering of Movie Works Cable Cutter devices.

The design of this suite of pyrotechnic cable cutters has made them the preferred method for safe, precise and reliable release of suspension cables for over three decades in motion picture production.

To James Eggleton and Delwyn Holroyd for the design, implementation and integration of the High-Density Encoding (HDE) lossless compression algorithm within the Codex recording toolset.

The HDE codec allows productions to leverage familiar and proven camera raw workflows more efficiently by reducing the storage and bandwidth needed for the increased amounts of data from high-photosite-count cameras.

To Jeff Lait, Dan Bailey and Nick Avramoussis for the continued evolution and expansion of the feature set of OpenVDB.

Core engineering developments contributed by OpenVDB’s open-source community have led to its ongoing success as an enabling platform for representing and manipulating volumetric data for natural phenomena. These additions have helped solidify OpenVDB as an industry standard that drives continued innovation in visual effects.

To Oliver Castle and Marcus Schoo for the design and engineering of Atlas, and to Keith Lackey for the prototype creation and early development of Atlas.

Atlas’s scene description and evaluation framework enables the integration of multiple digital content creation tools into a coherent production pipeline. Its plug-in architecture and efficient evaluation engine provide a consistent representation from virtual production through to lighting.

To Lucas Miller, Christopher Jon Horvath, Steve LaVietes and Joe Ardent for the creation of the Alembic Caching and Interchange system.

Alembic’s algorithms for storing and retrieving baked, time-sampled data enable high-efficiency caching across the digital production pipeline and sharing of scenes between facilities. As an open-source interchange library, Alembic has seen widespread adoption by major software vendors and production studios.

SCIENTIFIC AND ENGINEERING AWARDS (ACADEMY PLAQUES)

To Charles Q. Robinson, Nicolas Tsingos, Christophe Chabanne, Mark Vinton and the team of software, hardware and implementation engineers of the Cinema Audio Group at Dolby Laboratories for the creation of the Dolby Atmos Cinema Sound System.

Dolby Atmos has become an industry standard for object-based cinema audio content creation and presents a premier immersive audio experience for theatrical audiences.

To Steve Read and Barry Silverstein for their contributions to the design and development of the IMAX Prismless Laser Projector.

Utilizing a novel optical mirror system, the IMAX Prismless Laser Projector removes prisms from the laser light path to create the high brightness and contrast required for IMAX theatrical presentation.

To Peter Janssens, Goran Stojmenovik and Wouter D’Oosterlinck for the design and development of the Barco RGB Laser Projector.

The Barco RGB Laser Projector’s novel and modular design with an internally integrated laser light source produces flicker-free uniform image fields with improved contrast and brightness, enabling a widely adopted upgrade path from xenon to laser presentation without the need for alteration to screen or projection booth layout of existing theaters.

To Michael Perkins, Gerwin Damberg, Trevor Davies and Martin J. Richards for the design and development of the Christie E3LH Dolby Vision Cinema Projection System, implemented in collaboration between Dolby Cinema and Christie Digital engineering teams.

The Christie E3LH Dolby Vision Cinema Projection System uses a novel dual modulation technique that employs cascaded DLP chips along with an improved laser optical path, enabling high dynamic range theatrical presentation.

To Ken Museth, Peter Cucka and Mihai Aldén for the creation of OpenVDB and its ongoing impact within the motion picture industry.

For over a decade, OpenVDB’s core voxel data structures, programming interface, file format and rich tools for data manipulation continue to be the standard for efficiently representing complex volumetric effects, such as water, fire and smoke.

To Jaden Oh for the concept and development of the Marvelous Designer clothing creation system.

Marvelous Designer introduced a pattern-based approach to digital costume construction, unifying design and visualization and providing a virtual analogy to physical tailoring. Under Jaden Oh’s guidance, the team of engineers, UX designers and 3D designers at CLO Virtual Fashion has helped to raise the quality of appearance and motion in digital wardrobe creations.

To F. Sebastian Grassia, Alex Mohr, Sunya Boonyatera, Brett Levin and Jeremy Cowles for the design and engineering of Pixar’s Universal Scene Description (USD).

USD is the first open-source scene description framework capable of accommodating the full scope of the production workflow across a variety of studio pipelines. Its robust engineering and mature design are exemplified by its versatile layering system and the highly performant crate file format. USD’s wide adoption has made it a de facto interchange format of 3D scenes, enabling alignment and collaboration across the motion picture industry.

Maggie Norsworthy

Behind the Title: Mr. Bronx’s Maggie Norsworthy

Maggie Norsworthy is an audio post producer at New York City’s Mr. Bronx, an artist-owned and operated audio post studio that creates soundscapes for ad campaigns, feature films, TV series, experiential installations and theme park rides. Mr. Bronx offers an array of services, including mix, sound design, Foley, ADR, VO record and VO casting.

What does your job entail?
I wear a few different hats. I oversee high-level studio management, mid-level production and in-person production. I also track work for social media posting.

The Boy and the Heron

It usually starts with fielding incoming requests and breaking down project needs using creative or sound design briefs to generate estimates. After all that’s done, we get on creative calls to help bring the project to life. I also assist with coordinating sessions in a way that’s optimal for the talent along with the creative, editorial and production teams. As sessions progress, I manage communications between the engineers, talent, clients and everyone else. Finally, I ensure delivery of the final mixes for the client.

Separately, I now double as a casting director when needed. After honing the casting brief with the client, we curate a talent list and then send out our favorite options. Other various items include non-audio-specific tasks, like client outreach and sales.

What would surprise people the most about what falls under the title of audio producer?
The diversity of what “audio producer” means is really funny. Podcast audio producers often have journalism degrees; music audio producers lay down tracks. I’m not an engineer at all — I coordinate and keep the trains running. That always surprises people. I think they get surprised that post studios do castings too. Casting can be a mysterious process to people, and it’s cool that those are done in-house.

Lakota Nation vs. United States

What’s your favorite part of the job?
I love it when a session goes smoothly. It’s great when, once the actual recording or mix session starts, people know the important information and have all the answers to questions, which lets the engineer relax and do their thing. When everyone is comfortable and can collaborate well together, that’s a great feeling.

I also love in-person work, when clients come in and can see our new space. Paired with this, I love going to in-person events, meeting new people and finding ways to collaborate. In many ways, this is the perfect job for an extrovert.

What is your least favorite?
When we have a lot of sessions going on at the same time. It’s great for us — we want our engineers to be booked — but sometimes that means multiple urgent requests come in at once. That’s why I prioritize time management to assess what really needs to get done first because it can be a challenge to juggle many “emergencies” at once.

Black Is King

What is your most productive time of the day?
Probably midmorning. When we don’t have sessions at the very beginning of the day, I have more time to review our documents and calendar. If I can get that high-level perspective at the beginning of the day, it sets me up for a smoother day ahead.

How has your section of the industry changed since COVID? The good and the bad?
It’s definitely changed in lasting ways, including the huge uptick in remote recording. We used to do some remote recording, but now the importance of testing equipment with talent has skyrocketed because we need to provide the highest quality audio possible.

It’s great for talent because it gives them chances to record in places where they’re not local. Before, everything revolved around an in-person session. We can accommodate more sessions in a day now too. But it’s a different beast because it means more emails, fewer calls and handling things remotely that would have been handled in-person before, which skews the balance toward remote work. Hybrid coordination is different than mostly in-person. It’s cool, though. The engineers have home setups and more flexibility, and it’s easier for clients to join via Zoom.

Black Is King

Do you see some of these workflow changes remaining with us going forward?
For sure. Clients still love to come in, and we love to have them, but if they’re suddenly slammed, it’s great that they can join over Zoom and we can continue the work. It adds a layer of convenience that wasn’t there before.

If you didn’t have this job, what would you be doing instead?
I would probably be in politics. I’ve done a lot of things. I’ve been a Capitol Hill intern studying political science. I’ve also worked at a cheese and wine store. I love both cheese and wine, so maybe I’d work in one as well. Ultimately, I like learning about people and making things happen, which is why I studied anthropology and political science.

Why did you choose this profession?
Ever since I found out what an audio producer was, I was interested. I like that it’s client-facing. I think post production is stimulating since you work on many things in a day. It’s great being the trusted partner in problem-solving and getting clients to the finish line. I like looking for ways to make processes more efficient.

Lakota Nation vs. United States

Can you name some recent projects you have worked on?
Mr. Bronx has such high respect for quality and craft, which means we work on some of the coolest stuff ever: most recently the Dolby Atmos mix for Studio Ghibli’s The Boy and the Heron English release. We’ve also done work for Beyoncé projects, including trailers for the Renaissance world tour film and sound design and mix for her Black Is King and Lemonade visual albums.

On the serial side, we also worked on FX’s Welcome to Wrexham. That was my first time seeing my name in the credits of a TV show. Recently we worked on sound for the documentary Lakota Nation vs. United States, which premiered at the Tribeca Film Festival and was purchased by IFC Films.

Ultimately, I love working alongside people with high standards and excellent output.

Name three pieces of technology you can’t live without.
The Google Tasks list, Slack and Google Calendar.

When you can listen to music, what do you have playing?
It depends on what season of life I’m in. Lately, I’ve been listening to this “Bossa in the Background” Spotify playlist, which has bossa nova music that’s mellow enough not to be distracting.

I also have a tried-and-true classical playlist from college that’s very good. Sometimes I’ll listen to regular music, but classical music and bossa nova usually make it easier for me to get in the zone. For when I am feeling emo, I have a specific playlist of my favorite songs from when I sang in choir in college.

What do you do to de-stress from it all?
I’m training for a half marathon, so I’ve been running. I also go to a lot of live music shows and hang out with my friends all the time. I rarely say no to going to do something.

The Boy and the Heron

Finally, would you have done anything different along your path?
I’m a person that needs to get things out of my system. If I hadn’t worked on the Hill, I would have been thinking about it forever. I don’t regret doing that. Same thing with cheese and wine; I’m happy I have that knowledge now. Maybe I would have been more aggressive earlier on about learning about the audio industry. It’s not always easy from the outside looking in to learn more about jobs like audio post producer.

Any tips for others who are just starting?
For those just starting, I’d say networking isn’t as intimidating as you might think. It’s not what you fear it has to be. Everyone wants to meet people they get along with. Everyone wants to make connections, and everyone has an interest in other people’s industries. You don’t have to feel weird talking to people and asking questions. The number of random connections that have emerged before and after I started at Mr. Bronx is so funny to me at this point, but that’s a story for another time…


V-Pan

Rhodes’ New Plugin Adds Analog Warmth to Vari-Pan Effects

Rhodes Music has introduced the V-Pan plugin, expanding on its V Series software that targets producers and musicians. According to Rhodes, the plugin pays homage to the iconic vibrato stereo panning effect introduced in the 1970s and emulates the Vari-Pan section found on the Rhodes MK8 flagship piano, bringing analog warmth to a digital audio workstation.

Integrating the Vari-Pan section from the V-Rack, Rhodes’ multi-effects plugin, the V-Pan is a standalone, dedicated module. Representing the third installment in Rhodes’ line of software, the V-Pan shows how the company is dedicated to its legacy while providing contemporary tools tailored for today’s producers and musicians.

Whether it’s enhancing the depth of a keyboard or synth, creating textures on guitar, infusing ethereal FX sounds or giving vocals a unique twist, the V-Pan plugin, says Rhodes, can help create new possibilities. Its user-friendly interface seamlessly integrates with a user’s preferred DAW, allowing musicians to shape and customize a sonic landscape.

Key Features:

  • Directly modeled from the distinctive Rhodes MK8 hardware Vari-Pan circuit.
  • Equipped with precision left/right panning LED indicators.
  • Deep depth control for nuanced sound manipulation.
  • Provides a clear parameter value readout for precise adjustments.
  • Includes Waveform Slew and Smooth controls, enabling further customization of the panning waveshape.
  • Seamlessly integrates BPM sync functionality for synchronized and rhythmic effects.
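Conceptually, a Vari-Pan-style effect is a low-frequency oscillator driving an equal-power pan law, with BPM sync deriving the LFO rate from the session tempo. The sketch below illustrates those two pieces; it is a generic auto-panner, not Rhodes' modeled circuit:

```python
import math

def bpm_to_hz(bpm, beats_per_cycle=1.0):
    """LFO rate for one full pan sweep per `beats_per_cycle` beats."""
    return bpm / 60.0 / beats_per_cycle

def equal_power_pan(sample, position):
    """Pan a mono sample across stereo with a constant-power sine/cosine law.

    position ranges from -1.0 (hard left) to +1.0 (hard right).
    """
    angle = (position + 1.0) * math.pi / 4.0  # map to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

def auto_pan(samples, sample_rate, bpm, depth=1.0):
    """Sweep a mono signal left-right with a sine LFO synced to BPM."""
    rate = bpm_to_hz(bpm)
    out = []
    for n, s in enumerate(samples):
        position = depth * math.sin(2 * math.pi * rate * n / sample_rate)
        out.append(equal_power_pan(s, position))
    return out
```

The depth parameter plays the role of a depth control, narrowing the sweep for subtler movement, while slew and smoothing controls like V-Pan's would further shape the LFO waveform before it reaches the pan law.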

The V-Pan plugin will be available for free until January 31; fill out the form at the link to get access.