By Jennifer Walden
Foreign films aren’t just for cinephiles anymore. Streaming platforms are serving up international content to the masses. Incredible series, like Netflix’s Spanish series Money Heist, the Danish series The Rain and the German series Dark, would otherwise have been unknown to American audiences. The same holds true for American content reaching foreign audiences; the Starz series American Gods, for instance, is available in French. Great stories are always worth sharing, and language shouldn’t be the barrier that holds back the flood of global entertainment.
Now I know there are purists who feel a film or show should be experienced in its original language, but admit it, sometimes you just don’t feel like reading subtitles. (Or, if you do, you can certainly watch those aforementioned shows with subtitles and hear the original language.) So you pop on the audio for your preferred language and settle in.
Dubbing used to be a poorly lip-synced affair, with voiceovers that didn’t fit the characters on screen in any capacity. Not so anymore. In fact, dubbing has evolved so much that it’s earned a new moniker: localization. The growing supply of globally produced content has dramatically increased the demand for localization. And as they say, practice makes perfect… or better, anyway.
Two major localization providers, BTI Studios and Iyuno Media Group, have recently joined forces under the Iyuno brand, now headquartered in London. Together, they have 40 studio facilities in 30 countries and support 82 languages, according to Chris Carey, Iyuno’s chief revenue officer and managing director of the Americas.
Those are impressive numbers. But what does this mean for the localization end result?
Iyuno is able to localize audio locally: the language work for a specific market happens in that market. This means the language is current, and the actors aren’t just fluent; they’re native speakers. “Dialects change really fast. Slang changes. Colloquialisms change. These things are changing all the time, and if you’re not in the market with the target audience you can miss a lot of things that a good geographically diverse network of performers can give you,” says Carey.
Language expertise doesn’t end with the actors’ performances. There are also the scripts and subtitles to think about. Localization isn’t a straight translation. In script adaptation, words are chosen based on meaning (of course) but also on syllable count, in order to match lip-sync as closely as possible. It’s a feat that requires language fluency and creativity.
“If you think about the Eastern languages, and the European and Eastern European languages, they use a lot of consonants and syllables to make a simple English word. So we’re rewriting the script to use a different word that means the same thing but will fit better with the actor on-screen. So when the actor says the line in Polish and it comes out of what appears to be the mouth of the American actor on-screen, the lip-sync is better,” explains Carey.
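For the technically curious, the core of that syllable-matching idea can be sketched in a few lines of Python. This is a toy illustration, not Iyuno’s tooling; real adaptation is done by human writers, and the crude syllable counter and the Polish candidate words below are stand-ins.

```python
# Toy illustration of syllable-aware word choice in script adaptation.
# Human adapters do this work; the code only shows the core idea: among
# candidate translations with the same meaning, prefer the one whose
# syllable count best matches the original line's mouth movements.

def syllable_count(word: str, vowels: str = "aeiouyąęó") -> int:
    """Rough estimate: count runs of consecutive vowels."""
    count, in_group = 0, False
    for ch in word.lower():
        if ch in vowels:
            if not in_group:
                count += 1
            in_group = True
        else:
            in_group = False
    return max(count, 1)

def best_fit(source_word: str, candidates: list[str]) -> str:
    """Pick the candidate closest in syllable count to the source word."""
    target = syllable_count(source_word)
    return min(candidates, key=lambda c: abs(syllable_count(c) - target))

# "thanks" is one syllable, so prefer the shorter Polish synonym.
print(best_fit("thanks", ["dziękuję", "dzięki"]))  # -> dzięki
```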
Iyuno doesn’t just do translations (dubbing and subtitles) to and from English. Of the 82 languages it covers, it can translate any one of them into any other; in principle, that’s more than 6,600 directed language pairs. This process requires a network of global linguists and a cloud-based infrastructure that can support tons of video streaming and asset sharing, including the “dubbing script” that’s been adapted into the destination language.
The magic of localization is 49% script adaptation, 49% dialogue editing and 2% processing in Avid Pro Tools, like time shifting and time compression/expansion to finesse the sync. “You’re looking at the actors on screen and watching their lip movement and trying to adjust this different language to come out of their mouth as close as possible,” says Carey. “There isn’t an automated-fit sound tool that would apply for localization. The actor, the director and the engineer are in the studio together working on the sync, adjusting the lines and editing the takes.”
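That 2% of processing is worth a quick illustration. Pro Tools has its own elastic audio tools; as a rough stand-in, the open-source librosa library can perform the same kind of time compression/expansion with a phase vocoder (the file names and durations here are made up):

```python
# Rough illustration of time compression/expansion for dub sync. Pro Tools
# uses its own elastic-audio algorithms; this uses librosa's phase-vocoder
# time stretch to show the underlying operation. File names are made up.
import librosa
import soundfile as sf

audio, sr = librosa.load("dubbed_line.wav", sr=None)

# Say the recorded take runs 2.10 s but the on-screen mouth movement lasts
# 2.00 s: compress the take by the ratio of the two durations.
recorded_dur, target_dur = 2.10, 2.00
fitted = librosa.effects.time_stretch(audio, rate=recorded_dur / target_dur)

sf.write("dubbed_line_fit.wav", fitted, sr)
```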
During the voice record session, “sometimes the actor will suggest a better way to say a line, too, and they’ll do an ‘as recorded script,’” says Carey. “They’ll make red lines and markups to the script, and all of that workflow we have managed into our technology platform, so we can deliver back to the customer the finished dub, the mix, and the ‘as recorded script’ with all of the adaptations and modifications that we had done.”
Iyuno’s technology platform (its cloud-based collaboration infrastructure) is custom-built. It can be modified and updated as needed to improve the workflow. “That backend platform does all the script management and file asset management; we are getting the workflow very efficient. We break all the scripts down into line counts by actor, so he/she can do the entire session’s worth of lines throughout that show. Then we’ll bring in the next actor to do it,” says Carey.
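The platform itself is proprietary, but the line-count breakdown Carey describes is easy to picture. A toy Python sketch, with an invented script format and invented lines, might group a dubbing script by character so each actor can record everything in one session:

```python
# Toy sketch of the "line counts by actor" breakdown Carey describes.
# Iyuno's platform is proprietary; this only illustrates grouping an
# adapted dubbing script by character for per-actor record sessions.
from collections import defaultdict

# (character, adapted line) pairs: a stand-in for a parsed dubbing script.
script = [
    ("SHADOW", "Gdzie jesteśmy?"),
    ("WEDNESDAY", "W drodze."),
    ("SHADOW", "To nie jest odpowiedź."),
]

lines_by_actor: dict[str, list[str]] = defaultdict(list)
for character, line in script:
    lines_by_actor[character].append(line)

# Book the heaviest roles first; each actor records all their lines in one go.
for character, lines in sorted(lines_by_actor.items(),
                               key=lambda kv: len(kv[1]), reverse=True):
    print(f"{character}: {len(lines)} line(s)")
```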
Pro Tools is the de facto DAW for all the studios in the Iyuno Media Group. Having one DAW as the standard makes it easy to share sessions between facilities. When it comes to mic selection, Carey says the studios’ engineers make those choices based on what’s best for each project. He adds, “And then factor in the acoustic space, which can impart a character to the sound in a variety of different ways. We use good studios that we built with great acoustic properties and use great miking techniques to create a sound that is natural and sounds like the original production.”
Iyuno is looking to improve the localization process even further by building up a searchable database of actors’ voices. “We’re looking at a bit more sophisticated science around waveform analysis. You can do a Fourier transform on the audio to get a spectral analysis of somebody’s voice. We’re looking at how to do that to build a sound-alike library so that when we have a show, we can listen to the actor we are trying to replace and find actors in our database that have a voice match for that. Then we can pull those actors in to do a casting test,” says Carey.
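A minimal version of that voice-matching idea can be sketched with off-the-shelf tools. The fingerprint below is just an averaged magnitude spectrum from a short-time Fourier transform; a production sound-alike search would use far richer features (speaker embeddings, for example), and the file names are invented:

```python
# Sketch of the spectral voice-matching idea: Fourier-transform each voice
# sample into an average magnitude spectrum, then rank database voices by
# cosine similarity to the original actor. A real system would use richer
# features (e.g. speaker embeddings). File names are invented.
import numpy as np
import librosa

def spectral_fingerprint(path: str, sr: int = 16000) -> np.ndarray:
    """Average magnitude spectrum of a voice sample, unit-normalized."""
    audio, _ = librosa.load(path, sr=sr)
    spectrum = np.abs(librosa.stft(audio)).mean(axis=1)
    return spectrum / np.linalg.norm(spectrum)

target = spectral_fingerprint("original_actor.wav")
candidates = {name: spectral_fingerprint(f"{name}.wav")
              for name in ("voice_a", "voice_b", "voice_c")}

# Rank sound-alike candidates for a casting test, best match first.
# Fingerprints are unit vectors, so the dot product is cosine similarity.
ranked = sorted(candidates.items(),
                key=lambda kv: float(np.dot(target, kv[1])), reverse=True)
for name, fingerprint in ranked:
    print(f"{name}: {np.dot(target, fingerprint):.3f}")
```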
Subtitles
As for subtitles, Iyuno is moving toward a machine-assisted workflow. According to Carey, Iyuno is inputting data on language pairs (source and destination) into software that trains on that combination. Once it “learns” how to do those translations, the software will provide a first pass “in a pretty automated fashion, quite faster than a human would have done that. Then a human QCs it to make sure the words are right, makes some corrections, corrects intentions that weren’t literal and need to be adjusted,” he says. “So we’re bringing a lot of advancement in with AI and machine learning to the subtitling world. We will expect that to continue to move pretty dramatically toward an all-machine-based workflow.”
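Stripped to its skeleton, that machine-first, human-second pipeline looks something like the sketch below. Both functions are placeholders, not Iyuno’s software; the point is the order of operations, with the trained model producing the draft and a person signing off:

```python
# Skeleton of the machine-assisted subtitling flow: a model trained on a
# (source, destination) language pair drafts the translation, then a human
# QC pass corrects wording and non-literal intent. Placeholders throughout.

def machine_first_pass(line: str, source: str, dest: str) -> str:
    """Stand-in for an MT model trained on one language pair."""
    return f"[{source}->{dest} machine draft of: {line!r}]"

def human_qc(draft: str) -> str:
    """Stand-in for the human reviewer who corrects the draft."""
    # In practice a subtitler edits the text here before delivery.
    return draft

for line in ["Where are we?", "On the road."]:
    print(human_qc(machine_first_pass(line, "en", "pl")))
```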
But will machines eventually replace human actors on the performance side? Carey asks, “When were you moved by Google Assistant, Alexa or Siri talking to you? I reckon we have another few turns of the technology crank before we can have a machine produce a really good emotional performance with a synthesized voice. It’s not there yet. We’re not going to have that too soon, but I think it’ll come eventually.”
Main Image: Starz’s American Gods – a localization client.
Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.