By Patrick Birk
This summer, Warner Bros. resurrected the beloved Space Jam franchise for the 21st century with the Malcolm D. Lee-directed Space Jam: A New Legacy. In addition to Bugs Bunny, Daffy Duck and other classic Looney Tunes characters, it stars a live-action and animated LeBron James, along with Don Cheadle.
Set to release on Blu-ray and DVD next month, this Space Jam iteration tells the story of a fictionalized James, who is trying to rescue his son from the clutches of Cheadle’s Al G. Rhythm, the villain of Warner’s “Server-verse.” To do this, he must play a digitized game of basketball alongside the Looney Tunes gang.
Bringing this universe to life sonically was no small task, but veteran re-recording mixer Tim LeBlanc, who worked out of Warner Bros.’ Stage 9, was up to it. With past credits including Aquaman, Godzilla and The Tomorrow War, LeBlanc shares what it takes to make large, action-packed movies ready to rock a theatrical audience.
The film has three main settings: the real world, the computer/game world and the cartoon world. How did the mix change in each world?
We have LeBron’s home, and that’s mixed like a normal dialogue scene in a house. Then there’s the Server-verse, which is the Don Cheadle side of the wall. Although it may not seem like it, there are some pretty complex treatments to differentiate between worlds, especially when a new one is introduced. But some of that treatment dials back as you get into the new world.
When we meet Don Cheadle for the first time, that room in the Server-verse has a lot of reverb and echo, plus all kinds of interesting, subtle treatment on both the dialogue and sound design. For example, in that world, all the footsteps aren’t practical footstep sounds; the sound designers created a kind of electronic tonal sound for the feet.
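LeBlanc doesn’t say how those tonal footsteps were built, but the general idea is simple synthesis in place of a practical recording. A minimal Python sketch of one hypothetical approach, a pitched blip with a fast decay (every value here is invented for illustration):

```python
import numpy as np

def tonal_footstep(sr=48000, dur=0.18, f_start=220.0, f_end=90.0):
    """Hypothetical stand-in for an 'electronic tonal' footstep:
    a short sine blip with a falling pitch and a percussive decay."""
    t = np.arange(int(sr * dur)) / sr
    freq = f_start * (f_end / f_start) ** (t / dur)  # exponential pitch glide
    phase = 2 * np.pi * np.cumsum(freq) / sr         # integrate freq -> phase
    env = np.exp(-t * 30.0)                          # fast decay envelope
    return np.sin(phase) * env
```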
Then there is the Looney Tunes world, where we first meet Bugs Bunny, and that has a more traditional, throwback treatment: simple, comedic cartoon effects. But there are, of course, transitions between the two to make it all feel seamless. The game is designed to feel as “sports arena” as possible, like you’re at Staples Center and it’s the NBA Finals, from the way the music sounds in the stadium to the PA announcer.
What kind of processing was used to create Al G. Rhythm’s voice?
In that case, E2 did the sound design and effects for the movie, and sound designer Malte Bieler came up with those dialogue treatments.
We had an entirely separate Pro Tools system that played back Don Cheadle’s dialogue, which went through a processing chain to create glitches and some metallic, computerized sounds. I mixed the movie on an AMS Neve DFC. I had master faders from that system with the treated and untreated dialogue, and it was mixed to taste from there. We started out with a lot more of it and wound up paring it back as the mix progressed, per the director’s request.
Initially, he really loved it, and then we played the movie back and he thought maybe it was a little off, clarity-wise, or just overstayed its welcome. So it kind of winds up playing more when you first meet Al G. Rhythm, when he’s Wizard of Oz-esque, and then it hits key phrases during emotional changes.
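The exact chain E2 built isn’t specified, but the glitchy, metallic flavor LeBlanc describes can be approximated with a few classic DSP moves: ring modulation, bit reduction and sample-and-hold. A minimal Python sketch with hypothetical parameters, including a blend function in the spirit of the treated/untreated faders he rode on the console:

```python
import numpy as np

def glitch_voice(dry: np.ndarray, sr: int = 48000,
                 ring_hz: float = 300.0, crush_bits: int = 6,
                 hold: int = 8) -> np.ndarray:
    """Toy 'computerized' voice chain: ring modulation (metallic
    sidebands) -> bit reduction (digital grit) -> sample-and-hold
    (stepped glitch texture). All values are hypothetical."""
    t = np.arange(len(dry)) / sr
    ringed = dry * np.sin(2 * np.pi * ring_hz * t)
    levels = 2 ** crush_bits
    crushed = np.round(ringed * levels) / levels
    return np.repeat(crushed[::hold], hold)[:len(dry)]

def blend(dry: np.ndarray, wet: np.ndarray, amount: float) -> np.ndarray:
    """Mix treated against untreated 'to taste', like riding two faders."""
    return dry * (1.0 - amount) + wet * amount
```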
What were the challenges of working with sounds from so many different film franchises?
The sound designers used some of the original classic sounds, and they also recreated some because the original recordings were noisy or not so great; I believe it was a combination of the two. And there were, of course, actual clips from classic movies in there.
Casablanca was the most problematic one because the piano was tied to dialogue. It’s just a mono mix. Dialogue editor Dave Butler used iZotope RX and was able to isolate the piano and dialogue enough so that we could actually make an M&E in that case. There was still a little bit of leakage, but it was so subtle that it was workable.
But in other cases, like the Mad Max sequence, I don’t think we used anything from the real movie. For The Matrix section, we only used the dialogue stem. For Austin Powers, we used the dialogue and music stems. However, there were cases where the music was tricky, where it cuts in and out between the source film’s score and our composer’s new score. The Wonder Woman and Lola Bunny section moves between the two multiple times in that sequence, and I only had a stereo stem from the source films, whereas I had sixteen 5.1-channel stems for the Space Jam 2 original score.
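Returning to the Casablanca fix: iZotope RX’s algorithms are proprietary, but the underlying idea of spectral isolation, carving a mono mix apart with time-frequency masks, can be sketched. A minimal Python illustration that assumes rough magnitude estimates of each source are available (in practice a tool like RX derives these interactively):

```python
import numpy as np
from scipy.signal import stft, istft

def mask_split(mix, est_a, est_b, sr=48000, nperseg=2048):
    """Split a mono mix into two stems with soft time-frequency masks.
    est_a / est_b are rough magnitude guesses for each source (e.g.
    dialogue vs. piano); the soft-masking idea is the common core of
    spectral-editing tools, not RX's actual implementation."""
    _, _, M = stft(mix, fs=sr, nperseg=nperseg)
    _, _, A = stft(est_a, fs=sr, nperseg=nperseg)
    _, _, B = stft(est_b, fs=sr, nperseg=nperseg)
    mask = np.abs(A) ** 2 / (np.abs(A) ** 2 + np.abs(B) ** 2 + 1e-12)
    _, stem_a = istft(M * mask, fs=sr, nperseg=nperseg)
    _, stem_b = istft(M * (1.0 - mask), fs=sr, nperseg=nperseg)
    return stem_a, stem_b
```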
How did you achieve the balance between the Space Jam 2 original score and the source scores?
That was a little bit tricky. It was just ears and mixing; there were no special tools. We did try a few, but they didn’t help the cause. Fortunately, that sequence is really big on dialogue, so the music isn’t as much at the forefront as it would be in Wonder Woman. So there’s a little bit of latitude to get in between.
In this case, how did you get the stereo mixes working with Atmos?
With music, I don’t use objects very much unless the scene really calls for it — I prefer the way the bed sounds and translates out in the real world. Sometimes I do, but in that instance I didn’t. So it was essentially the 7.1 bed in that section, and it was just delays, reverbs and EQ to match the two together.
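As a rough illustration of folding a stereo stem into a wider bed, here is a minimal Python sketch: the fronts carry the stem, and the surrounds get a delayed, attenuated feed standing in for the delays, reverbs and EQ LeBlanc describes. The channel order and all values are hypothetical:

```python
import numpy as np

def stereo_into_71_bed(left, right, sr=48000,
                       delay_ms=12.0, surround_db=-9.0):
    """Toy placement of a stereo score stem into a 7.1 bed.
    Channel order (assumed): L, R, C, LFE, Lss, Rss, Lrs, Rrs."""
    n = len(left)
    d = int(sr * delay_ms / 1000.0)          # short delay decorrelates
    g = 10 ** (surround_db / 20.0)           # surround attenuation
    def feed(x, extra=1.0):
        return np.concatenate([np.zeros(d), x])[:n] * g * extra
    silent = np.zeros(n)
    return np.stack([left, right, silent, silent,
                     feed(left), feed(right),
                     feed(left, 0.7), feed(right, 0.7)])
```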
What was the most challenging part of the mix?
To be honest, the most challenging part was COVID. A lot of the sources came from non-ideal recordings due to quarantine. Dialogue got recorded remotely on anything from iPhones to systems sent to people’s houses. You name it, it’s in there. So while there are some parts I wish could have been better, in general it was pretty seamless.
I was very happy with the smart people who came up with ways to get it done … and the clients who were understanding, even though it’s not what they’re used to. Wearing a mask and sitting behind a console for 12 hours a day is a little bit of a challenge, but everyone was on the same page, and we got through it.
Recording LeBron’s dialogue remotely was particularly tricky because of the NBA bubble; he was really limited in what he could do. I’m not talking about the earlier parts, because the movie is animated and had been in production for two years; it was the later stuff that was the trickier bit. Some of the voice actors have proper recording setups at home, so their stuff is great.
Warner Bros. engineering came up with this pretty cool system that they sent out in a Pelican case. It has an iPad, a microphone and some isolation, and it all fits on a stand. It phones back home to an ADR stage, so they can run it remotely, and it worked pretty well.
What was the most fun part of the process?
All the people I worked with — like picture editor Bob Ducsay, the sound editorial crew and re-recording mixer Mike Keller — were great. We are all good friends and enjoy working together. Everyone seemed to really have fun on the 2D animation section of the movie because everyone’s laughing — it’s like being a kid again.
The game was the hardest part, particularly the first half of the game, when LeBron and the Tunes are getting beaten down. It’s just a dense part of the movie, and it changes gears a lot. The game is essentially two reels, and one of them was shorter than the other. But the shorter one, which is up until halftime, was by far the hardest part of the movie to mix because of the sheer density of action and dialogue — there’s nonstop talking over action. Keeping the clarity and energy going was quite tricky.
What skills were you drawing from the most for the game?
The music is constantly moving, and EQ was constantly moving on it for that reason. Essentially, we built the reel with everything we had and then figured out what we didn’t need. So it was a subtractive process of getting rid of material that was either not heard or just not necessary, and cutting clutter, which is kind of par for the course for any dense movie. But the game was particularly challenging. If it were a traditional action movie, there would probably be a lot less dialogue during the action, and there wouldn’t be comedic beats that you have to hit. Space Jam has all of that going on. Plus, there are many Tunes, and multiple people talk at the same time.
When you say moving EQ, are you strictly automating EQ cuts and boosts? Do you use things like dynamic EQ and multi-band compression?
I do use those, not that often on music, but I did on some things in this movie. As for moving EQ, let’s say there’s a run that’s nonstop rhythmic music. It’s just pounding through, and there’s basketball going on, and the characters are constantly saying lines … I do it by ear and manually. And you’re just rolling things off that are in the range of whatever the lines are in those spots.
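The plug-in equivalent of that manual ride is a dialogue-keyed dynamic EQ. A minimal Python sketch, with hypothetical band and depth values, that dips the music only in the dialogue’s range and only while lines are playing:

```python
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def duck_dialog_band(music, dialog, sr=48000, band=(1500.0, 4000.0),
                     max_cut_db=6.0, smooth_ms=80.0):
    """Automated take on 'moving EQ': attenuate the music in the band
    the dialogue occupies, driven by the dialogue itself. (LeBlanc does
    this by ear and hand; all values here are hypothetical.)"""
    sos = butter(4, band, btype='bandpass', fs=sr, output='sos')
    m_band = sosfilt(sos, music)          # portion of the music we may cut
    key = np.abs(sosfilt(sos, dialog))    # dialogue energy in the same band
    a = np.exp(-1.0 / (sr * smooth_ms / 1000.0))
    env = lfilter([1.0 - a], [1.0, -a], key)      # smoothed sidechain level
    depth = np.clip(env / (env.max() + 1e-12), 0.0, 1.0)
    gain = 10 ** (-max_cut_db * depth / 20.0)     # per-sample band gain
    # crude band split: swap the original band for the ducked band
    return music - m_band + m_band * gain
```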
I did use some multi-band compression; I use the FabFilter Pro-Q 3 on music when I’m doing that, and I used it primarily on a lot of the hip-hop elements, like hi-hat sounds that were mixed in with kick drums. I used it on a bunch of material. But on the orchestral stuff, I usually do that manually.
Dialogue-wise, I’ve used multi-band dynamics for the past 15 years, primarily on the top end and upper midrange to de-harsh things. I’ll use the top two bands of a Waves C4 so I have a 2kHz-to-4.5kHz band and then 5kHz up, and nothing extreme in terms of gain reduction. I do sometimes use the Pro-Q 3. Then, on reassigns, I use the McDSP SA-2, which runs as a hardware insert on the DFC.
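In the same spirit, a two-band de-harsher can be sketched as gentle downward compression on just those upper bands. The crossover points below come from the interview; the threshold and ratio are hypothetical and kept mild:

```python
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def deharsh(dialog, sr=48000,
            bands=((2000.0, 4500.0), (5000.0, 18000.0)),
            thresh_db=-30.0, ratio=2.0, smooth_ms=20.0):
    """Two-band downward compression on the harsh range only, in the
    spirit of the 'top two bands of a C4' approach (band edges from
    the interview; threshold/ratio invented for illustration)."""
    out = dialog.astype(float)
    a = np.exp(-1.0 / (sr * smooth_ms / 1000.0))
    for lo, hi in bands:
        sos = butter(4, (lo, min(hi, 0.45 * sr)), btype='bandpass',
                     fs=sr, output='sos')
        band = sosfilt(sos, dialog)
        env = lfilter([1.0 - a], [1.0, -a], np.abs(band)) + 1e-12
        over = np.maximum(20 * np.log10(env) - thresh_db, 0.0)
        gain = 10 ** (-over * (1.0 - 1.0 / ratio) / 20.0)  # mild reduction
        out = out + band * (gain - 1.0)  # subtract only the compressed excess
    return out
```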
Speaking of mixing on an AMS Neve DFC, can you talk a bit about that?
The Neve is a large-format digital film console. It’s not a controller, though; it’s a path-dedicated DSP mixing console. We have the newer version, but the platform has been around since the late ’90s. The Matrix was one of the first movies mixed on it. In the US, Skywalker Sound, Fox and Warner Bros. have them. That’s it.
The processing happens on the console. Say I have 12 tracks of production dialogue, and those go out to 12 outputs from Pro Tools to 12 inputs into the Neve. I predubbed all the dialogue for Space Jam 2 in Pro Tools, and then finaled it on the Neve DFC.
Does it have built-in channel strips?
Yes. It has Atmos built in, plus 16-wide stem monitoring and 2,000 paths. It’s a pretty powerful console. And it’s 40-bit float, so to me, it sounds better than mixing in Pro Tools. But the next guy will say that Pro Tools sounds better. It’s all what you’re used to, but Andy Nelson, Paul Massey and I all love it.
You’ve worked on a lot of action films over the course of your career. Was this intentional?
It wasn’t specifically intentional. It’s more of a “one project begets another” type of scenario. I mean, they’re higher-budget, more complicated and harder, so if you have the skill set to handle that, I think filmmakers and studios look at it and know you’re up for the task. It’s sort of a self-fulfilling prophecy.
What is your sonic signature for an action film?
Honestly, my mixes are generally based around having really intelligible yet warm dialogue. That’s always 90% of my focus, to make the dialogue sound great — not harsh, but clear. And in a dense action movie, it’s not about raising the dialogue to get to the level of all the loud stuff. It’s about figuring out how to compromise what’s causing the clarity problems, rather than just pushing through it and piling on. It’s not always apparent what the problem is, and it can be a battle, but I think it’s always worth it.
Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City. He releases original material under the moniker Carmine Vates. Check out his recently released single, Virginia.