By Patrick Birk
Director Jorge R. Gutiérrez’s Maya and the Three is an animated Netflix series that tells the story of Maya (Zoe Saldana), a warrior princess in pre-colonial Mesoamerica. To save her family and humanity, she must fight alongside three legendary warriors to defeat the gods of the underworld.
Audio post veteran Scott Gershin led the sound department at The Sound Lab, a Keywords Studio, which brought Gutiérrez’s epic visual landscape to life. With films like The Book of Life, Pacific Rim and American Beauty under his belt, plus the series Mrs. America, Gershin was more than up to the challenge.
He recently sat down with postPerspective to explain how he and his team brought feature-level audio to this nine-episode limited series.
Maya and the Three has a very colorful, distinctive art style. How did you develop the sound design to match?
The director of the show is Jorge Gutiérrez [El Tigre: The Adventures of Manny Rivera], who I had worked with on The Book of Life, so we had a certain working relationship already established. We have similar tastes in filmmaking styles and how sound can be used to enhance a story. It was a natural fit. Before COVID hit, I read the script, and he showed me concept drawings. This allowed me to understand the characters and start formulating ideas of what things could sound like. Then, little by little, we started looking at different parts of each of the chapters.
As the animation evolved, so did the conversation. And I think this became a passion project, a COVID love project, in a bizarre way, in that we were all stuck at home and fearful of what the future would bring. And here was this wonderful story. And the visuals we started seeing were just amazing. It got us excited to start the creative process.
Because of COVID, we had to work remotely. My crew and I hunkered down and started creating and designing. I would take my work and my crew’s work and, little by little, do small mixes. We started mixing throughout the editorial process, and I would send little QuickTimes to Jorge, asking, “How do you like the approach on how the scene plays? What do you think about this character’s design? How do you like the comedic approach? Here’s an idea on how we can make each weapon unique.” It enabled Jorge to be part of the creative process.
We were totally on the same page. He liked what we were doing, and we loved the feedback he was giving us. It was a good match. Then we just started designing until we got to a point where he really felt we had a good understanding and a vibe for the audio portion to support the visuals.
As you watch the show, the first five chapters set up Maya’s world, with each chapter introducing new characters, new gods and new challenges.
There are climaxes and trials within each chapter, and Chapter 9 is the big climax, when everyone who’s been established in previous chapters ends up in the giant confrontation between Mictlan (the god of war) and all the different gods and demigods. It’s fantasy, it’s science fiction, it’s high action.
At the end of the day, I wanted to use our sonic approach and design to complement the visuals, the story and the acting: to strike a balance between comedy, action, fantasy and sci-fi and really give it a cinematic feel.
Can you talk a bit more about the workflow?
My approach to the show was a little different. Instead of editing an episode and then mixing it, I talked Jorge into editing and designing the first six chapters together first, which took approximately 12-15 weeks straight. Then, after the editorial process, I would start “mixing,” which was more like fine-tuning those six chapters at the same time. I mixed for 18 days straight. Then it was back to editorial for another 6-8 weeks before mixing for 12 days. It allowed us to jump back and forth between chapters, creating a sonic arc for the first three hours of the show.
It also allowed us to lose ourselves in the show — to eat, live and breathe it — rather than making a number of stops and starts. For example, while working on Chapter 6, we could always go back to Chapter 1 with a new or evolved idea.
Who was on your team?
My editorial crew consisted of Chris Richardson, Andrew Vernon and David Barbee. Each member of my team had different passions and things they loved to design, so I played to those strengths. Chris came from working with Trent Reznor and has a great sense of sonic power and bite. Andrew came from working at Pixar and has a great sense of comedy, timing and detail. And David had done a lot of great TV shows that I loved, including his recent work on The Boys. Also part of our team were Dan O’Connell and his Foley studio, One Step Up, supplying us with amazing Foley and great elements to work with. I have worked with Dan on a large number of shows, including The Book of Life.
How was the work divvied up?
With shows like Maya, I like to break it into categories, and each editor gets assigned a category for the run of the series. This helps with consistency and enables the designer to evolve the sounds within that category.
In addition to grabbing a number of categories myself, I would assemble each of the designers’ tracks and do a little embellishing and tweaking to make the scene play a certain way. We designed against the music and pre-dubbed dialogue, which allowed us to make certain choices early on. We constantly shared sessions and tracks via Aspera, Slack and Zoom, since each of us was working offsite.
Did you use plugins for designing and manipulating the sound?
For the designs, we manipulated our collected sounds, combined them, pitch-bent them, enveloped them, saturated them and tweaked them. It was a sonic orgy of plugins. We were always talking about different plugins we found and thought would be cool for any given scene or moment, so when you heard a sound, it was unique to Maya.
It’s great that we all had our own tricks and techniques to manipulate sounds, and as we shared our ideas with each other, it made for an enjoyable creative process. Since Jorge brought a wonderful amount of visual detail to the production, we needed to bring that same level of detail and originality to the design of the soundscape. Of course, a major challenge was how to accomplish that in the time that we had. What priorities, efficiencies, and tricks could we come up with? You do the best that you can with the time you’ve got. It’s about trying to work smarter.
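To make the kind of manipulation Gershin describes a little more concrete, here is a minimal Python sketch (NumPy/SciPy, with a hypothetical file name) that pitches a sound down, shapes it with an envelope and saturates it. It stands in for the idea only; the team worked with commercial plugins, not this code.

```python
# Illustrative only: pitch, envelope and saturate a mono sound the way a
# designer might stack plugins. Assumes a source file named "roar.wav".
import numpy as np
import soundfile as sf
from scipy.signal import resample

data, rate = sf.read("roar.wav")
if data.ndim > 1:
    data = data.mean(axis=1)              # fold to mono for simplicity

# "Pitch-bend" down an octave by stretching the waveform (also doubles its length).
pitched = resample(data, 2 * len(data))

# Envelope: a quick fade-in followed by a long decaying tail shapes the dynamics.
env = (np.minimum(np.linspace(0.0, 4.0, len(pitched)), 1.0)
       * np.exp(-np.linspace(0.0, 3.0, len(pitched))))
shaped = pitched * env

# Saturation: soft-clip with tanh to add harmonics that help the sound cut through a mix.
drive = 4.0
saturated = np.tanh(drive * shaped) / np.tanh(drive)

sf.write("roar_designed.wav", 0.9 * saturated, rate)
```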
How do you create a feeling of hugeness in sound effects?
We wanted to support the depth of the show with sound and design. In some instances, we asked ourselves, “Do we go big here? And if so, how big? Does it rock the room? How does it translate in a streaming environment at home?” I wanted to make sure we didn’t get too intense for the different age ranges. But there were definite levels of intensity, such as with the Stone Golems, which are the giant stone creatures. We wanted to make them kind of cool and fun but with a feel of danger.
The dragon at the end was challenging: how do you make it stand out from the diversity of creatures that came before it? I decided to forgo the noise route and go more tonal instead — like giant horns blasting away, making your knees shiver. If it was a film, I could depend more on the sub, but because it’s playing in your living room or on an iPad, I didn’t have that option. It was a lot of trial and error to find the right vibe, the right feel. I’m glad I had a lot of experience with big creatures, such as on Pacific Rim. That helped.
Within our crew we often discussed what we wanted to accomplish in any given scene or character. What are the milestone moments? And how do we want to lead up to them and get out of them? What are the big characters, and how do they compare and differ from the other characters? Was this a comedic sequence, and if so, how far do we go? How do we want to approach the dark moments? We wanted to capture the full range of emotions for each scene and character. Also I wanted to be able to identify what moments needed to be quiet, letting the actors and the audience have an intimate moment.
How difficult is it to mix for all the different screens people will be watching it on?
Netflix has an average loudness standard of -27 LKFS on the center channel at the frequencies of dialogue; this is a little different from theater and DVDs. Theoretically, we have more headroom, but if you push the other channels too much, you’re going to mask and overshadow dialogue. A lot of the characters were yelling, and the battle scenes had a lot of content in the center speaker. It was a constant battle deciding how big or quiet to get. We don’t want to be so dynamic that people are diving for their remotes, and I didn’t want to use a lot of bus compression.
I used a little bit here and there, but I didn’t squash it because I wanted all the little peaks of detail. Sometimes, if we saturate a sound in a very interesting way, it’ll help cut through the mix, as opposed to relying on EQ alone.
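For readers who want to sanity-check their own mixes against a similar target, here is a minimal sketch using the open-source pyloudnorm library and a hypothetical bounce called mix.wav. It measures full-program integrated loudness per ITU-R BS.1770, which is only a rough stand-in for the dialogue-gated measurement Netflix actually specifies.

```python
# Ballpark loudness check against a -27 LKFS target (illustrative only).
import soundfile as sf
import pyloudnorm as pyln

TARGET_LKFS = -27.0

data, rate = sf.read("mix.wav")            # hypothetical bounce of the mix
meter = pyln.Meter(rate)                   # BS.1770 meter
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LKFS (target {TARGET_LKFS} +/- 2)")

if abs(loudness - TARGET_LKFS) > 2.0:
    # Simple gain trim toward the target; a real delivery would be rebalanced, not just normalized.
    trimmed = pyln.normalize.loudness(data, loudness, TARGET_LKFS)
    sf.write("mix_trimmed.wav", trimmed, rate)
```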
Was this just for effects, or for voices too?
Both. Whether it’s the rock creatures or the giant dragon, there were definitely challenges in the mix. I used a lot of delays and a lot of different types of reverbs to sustain sounds and float them into the surrounds. Chapter 6 and Chapter 9 contained the largest battle scenes. I had to make strategic choices in any given set of shots as to what needed to be heard and, most importantly, what didn’t. I love detail and clarity.
What were some key, unique effects your team had to create? Zatz’s horn blast comes to mind.
I wanted to work similarly to how composers score a show, where each character has a motif, a theme, a signature sound. I tried to take the same approach to the sound design. I looked at each of the four characters and tried to figure out what makes them unique. Rico is more comedic. Chimi’s a little shy, but she grows within her character to become a badass, which is depicted in the evolution of her archery and her weapon. Picchu is already the strongman, and while he’s a big guy, I tailored his design to have a combination of strength as well as fragility.
Zatz was the “bad boy” of the group. At times I gave him spurs as he walked… a cross between a Western outlaw and a rock star. For his horn we found some great libraries of horn design that I thought were powerful and supported his character.
Then you have King and Queen Teca, “The Parents.” I wanted to give King Teca a lot of detail as well as strength. He had a comical side, but when it came to protecting Maya, he was all business, and I needed to show his strength and power with the sound.
For the queen, it’s about elegance and a different kind of strength and approach. I wanted to support her diplomatic side and inner strength and have it contrast with the king. The way she walked and moved, it was more of a floating approach.
I did the same for each of the underworld gods — each one has its own signature and motif. Another advantage of the motifs is that you can hear them offstage and know exactly what character is coming.
What about the creatures?
For the creatures and Chiapa, I used a lot of vocalizations. Since they didn’t say words, I had to find vocal sounds to give them personality. I used a lot of animal sounds and then sweetened them with my voice, in addition to a library of sounds for Chiapa that Dee Baker recorded early on.
For the turkey sounds, Andrew Vernon, one of our sound designers, recorded his wife making these really fun sounds that we thought were great for that character.
How do you avoid creating ear fatigue?
I came from the world of music. I think of everything as music in a way. Pace, rhythm, accents, pitch, cadence, phrasing. It’s like how you would approach bass and drums. I feel like they’re sonic cousins to explosions, weapons and impacts.
There were times we designed an effect, listened to it and then realized it was too much or needed a little more. I constantly evaluated what I wanted to hear here or there. How do we want to stylize a scene? Did I want to “ghost” that effect? In addition to creating cool sounds, I looked for opportunities to stylize a given moment.
A method I use on a lot of my shows is to design and mix the loudest moments/elements first. I’ll need those sounds to cut through music, or they might even be on top of the music. Then I go to the next layer and say, “These are things that are going to play in and out of music.” The layer after that is things that are going to play behind the music. Backgrounds come last. I mix from the loudest elements first and make my way to the quietest, getting rid of any sounds or elements that create masking. I love to maintain all the little details… the Foley and ear candy that enrich a mix.
How did you keep everything organized in a project with so many different sonic elements?
Organization is key. I had two playback units originally, and then halfway through, I upgraded to the new Pro Tools software, which allowed me to play back over the 750-track limit. During my biggest scenes, I was carrying approximately 1,500 tracks. That included all the elements (music, dialogue and effects). Music was broken up into four different splits, and each split had about 16 to 32 stereo breakouts. I used those tracks to create a 5.1 music stem. I think the reason I had so many tracks was that I categorized a lot, using heavily nested folders (folder tracks, in Pro Tools terminology).
So there would be an Effects folder, and within that you’d have subfolders such as Creatures, which would contain subfolders for Chiapa, Fire Saber Tooth and other creatures. Or a Weapons folder that contained subfolders such as Swords and Axes. To help with speed and organization, I used two Stream Decks running SoundFlow. This allowed me to snap to any given category or view. It made the process fast.
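As a rough picture of that hierarchy, here is a hypothetical Python sketch of the nesting Gershin describes; the names come from his examples, not from an actual session.

```python
# Hypothetical sketch of the nested category layout described above; the
# folder and track names are illustrative, not Gershin's actual session.
session = {
    "Dialogue": {},
    "Music": {f"Split {n}": [] for n in range(1, 5)},   # four splits feeding a 5.1 music stem
    "Effects": {
        "Creatures": {"Chiapa": [], "Fire Saber Tooth": [], "Stone Golems": []},
        "Weapons": {"Swords": [], "Axes": []},
        "Foley": {},
        "Backgrounds": {},
    },
}

def list_tracks(node, depth=0):
    """Print the folder tree the way it would read in a session sidebar."""
    for name, child in node.items():
        print("  " * depth + name)
        if isinstance(child, dict):
            list_tracks(child, depth + 1)

list_tracks(session)
```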
What were your go-to EQs, reverbs and dynamic processors?
For the design, each member of our crew had their own set of tricks, and we all shared plugin ideas too. Chris Richardson loved using Neutron; it allowed us to use harmonic distortion and harmonic EQs, which helped cut through the mix. For exteriors, I used Altiverb and some slap delay for dialogue. For interiors, I used Cinematic Rooms and FabFilter Pro-R — they’re wonderful reverbs. Pro-R allows me to control reverb times on a per-frequency basis, and I can get some really interesting sounds out of it.
For music I used Cinematic Rooms and Symphony. On the mix side, for compression and saturation, I used a bunch of UAD plugins — tape simulators, classic compressors and Neve stuff. I used Soothe2 and a lot of FabFilter plugins.
I did a bunch of presets for Waves last year, and there was one plugin I created presets for called the CLA Epic. I love the sound of Epic. It gives me four types of delays and four types of reverbs that can be used all at once or separately. So I used that quite a bit. Since I’d just finished doing presets for it, I knew the plugin well. I used Slapper quite a bit too. Other plugins we used were Waves L-Series compressors, some Massey stuff, the list goes on. I used a lot of Nugen for metering and bus limiting. We were always trying new things.
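To illustrate the per-frequency reverb-time idea Gershin credits to Pro-R, here is a minimal DSP sketch in Python (NumPy/SciPy). It builds a crude impulse response from noise bands, each with its own decay time, and is in no way FabFilter’s algorithm.

```python
# Illustrative frequency-dependent reverb decay: band-limited decaying noise
# forms an impulse response, with a different RT60 per band.
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def band_tail(rate, rt60, band, length_s=3.0, seed=0):
    """Decaying noise limited to band = (low_hz, high_hz); None means open-ended."""
    rng = np.random.default_rng(seed)
    n = int(rate * length_s)
    noise = rng.standard_normal(n)
    lo, hi = band
    if lo and hi:
        sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
    elif hi:
        sos = butter(4, hi, btype="lowpass", fs=rate, output="sos")
    else:
        sos = butter(4, lo, btype="highpass", fs=rate, output="sos")
    decay = 10.0 ** (-3.0 * np.arange(n) / (rt60 * rate))   # reaches -60 dB at rt60 seconds
    return sosfilt(sos, noise) * decay

def per_band_reverb(dry, rate, wet_level=0.3):
    # Longer tail in the lows, shorter up top: one common way to "darken" a space.
    ir = (band_tail(rate, 2.5, (None, 500))
          + band_tail(rate, 1.5, (500, 4000), seed=1)
          + band_tail(rate, 0.8, (4000, None), seed=2))
    ir /= np.max(np.abs(ir))
    wet = fftconvolve(dry, ir)[: len(dry)]
    return dry + wet_level * wet

# Example use: processed = per_band_reverb(dry_signal, 48000)
```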
Finally, what did you take away from this project?
It was a project of love and a lot of fun. Jorge’s got to be one of the nicest guys on the planet, and this project was a sound designer’s dream.
Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City. He releases original material under the moniker Carmine Vates. Check out his recently released single, Virginia.