
Barbie Editor Nick Houy Talks Workflow and VFX 

By Iain Blair

Helmed by Greta Gerwig, co-written by Noah Baumbach (Marriage Story) and starring Margot Robbie and Ryan Gosling, Barbie is a celebration of girl power that effortlessly manages to combine romance, sharp satire, stylish musical numbers, wacky car chases and warm-hearted comedy – all tied up with a big pink bow.

Editor Nick Houy

This surprise blockbuster also showcases skillful editing by Nick Houy, ACE, who seamlessly blends all the disparate elements into a coherent whole. I spoke with Houy, who has cut all of Gerwig’s projects, starting with Lady Bird and including Little Women, about the challenges and workflow.

What were the main challenges of editing Barbie?
(Laughs) Talk about shifts in tone. I can’t think of any other film, especially with this level of success, that comes close. First off, it’s this huge comedy, and comedies aren’t usually the biggest film of the year. Then we’re constantly shifting gears and showing you what the film’s really about, and constantly pulling the rug out from under the big comedy and trying to tell the story of what it is to become human – and suddenly it’s got much bigger philosophical ideas. So finding the right tones and balancing them all was the biggest challenge.

This is not your typical Gerwig project, and because of all the VFX, this had to be very carefully planned out. How did you two collaborate early on?
First, when I read the original script she’d co-written with Noah, I knew it was going to be something special and have a lot of tonal changes. Then, obviously, I knew we’d have to deal with all the VFX, which we hadn’t worked with in the past.

Luckily, the studio, VFX supervisor Glen Pratt, VFX producer Nick King and the teams were so good about letting us be super-creative in the cutting room. It was about finding the tone of the whole movie, even if it meant losing a scene one day and putting it back the next and then trying it a different way.

Most VFX teams would have been pulling their hair out because we were making serious changes in every reel, every day as we tried tons of different things — just as you would if you were writing a draft of a script. But they were game, as was Warners, and they all supported us through that whole process of finding the movie. It took a while, but it was such a fun process.

It must have been a very steep learning curve dealing with tons of VFX?
Yes, but the great thing was that they held our hands the whole way through. I’ve never seen a VFX team that was so cool, and it was fun. We’d discuss stuff like, should the merman’s tail come out of the water this way or that? Should this house be an old A-frame-style Barbie house or a Frank Lloyd Wright-style house? We worked really hard, but it felt like play all the time — so enjoyable.

In the end, I spent about 14 months, including all the prep and the shoot, and we had a lot of previz as well. In fact, not long ago we finished cutting some extras, including finding some old, deleted scenes, which we put together in a 6-minute montage to go at the end of the IMAX release. So if people go to the IMAX release they’ll see the latest stuff we’ve done. I hope to keep working on as much Barbie stuff as possible, because it’s been such a fun ride… but I think that’s the last of it.

Given all the VFX work, were you on set?
I try not to go on set at all. I feel that an editor should be like the audience, seeing it fresh and being totally objective, without having anything you’ve seen on set influencing you. That’s important, but I’m always in constant communication with Greta, talking and texting while she’s shooting. I have to know what she’s thinking at all times and give her information that I’m finding. I was in New York while they were shooting in London and LA, so it was a crazy schedule. I’d be up at 5am texting, and it was a really long shoot.

Did all the previz, postviz and techviz impact your work at all?
It didn’t. It just brought extra depth to it. Whenever we had a beautiful shot of Barbie waving and looking over Barbie Land, we worked on that shot for the whole time we were in post, adding little details and opening up your eyeline to the horizon. If you consider doing that for 100 key shots, and then everything else is being filled in based on the geography that you set in these key shots — and you do all that properly — then you really feel that you’re in this world.

That’s a rare feeling, I think. You recognize it, but it’s so detailed that you just want to get lost in it. It’s nostalgic for a lot of people and so beautiful, and the set design is just gorgeous. It also has this great Wizard of Oz look with all the 2D set paintings. So it all feels really tactile and made by real people, which is really cool.

What was the most difficult scene to cut and why?
Everything was a challenge. When you have a Tati-esque, Marx Brothers-style, crazy chase scene immediately followed by a really long, quiet scene full of emotional dialogue — where the main characters meet their creator — and then you immediately go into another big set piece with a wild car chase, that’s a unique challenge as an editor. We also have these long dance sequences. I don’t know anyone else who’s had to deal with that. You have to use all your learned skills as an editor, as it’s a very tricky line to walk.

I assume you must have used a lot of temp sound?
Oh yeah! I always use a lot of temp music, trying to get the tone of the temp score and all of the temp sound that tells the story. That way, when we’re doing temp screenings, I don’t get taken aback because the sound or music isn’t right. I don’t think that’s a crutch; you have to make it the very best it can be so that you know you’re testing what the film actually is.

There were around 1,600 VFX shots, which is a lot. Did you use any temp VFX shots?
Absolutely, and our team was really good at temping in, so never once was there a single bluescreen shot in any of the temp screenings we did. It was an amazing accomplishment.

Tell us about the workflow and the editing gear you used.
To me, it doesn’t really matter what software we use since it’s all about telling the story, but we used Avid Version 2021.12. One of the most interesting tech details was that we edited in UHD, so it was very high-quality. When we’d do test screenings, it looked beautiful, even on huge screens straight out of the Avid. That was so cool.

We rented all the equipment and stored all the footage with Company 3, where we did all the editing and the DI. I think we were the first to do all this in UHD and just cut our offline in high resolution. Our primary camera was the ARRI Alexa 65, shooting ARRIRAW at 6560×3100. The color space was ARRI Log C/Wide Gamut.

In terms of project information, it was 3840×2160, 16×9 aspect ratio using a 2×1 mask, 24fps, YCbCr DCI-P3 color space. DNxHR LB MXF media was in P3 D65 color space. As far as Nexis storage goes, we had a ton available since we were set up on our own Nexis, separate from the rest of Company 3. We used around 25TB to 30TB. Lastly, we were all using Mac Pros, the newer “Cheese Grater” generation.
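
A rough sense-check on how 25TB to 30TB squares with a UHD offline: the sketch below estimates media volume from an assumed average DNxHR LB bitrate of about 140 Mbit/s at UHD 24p — a ballpark figure, not one from the article; check Avid’s published data-rate tables before relying on it.

```python
# Back-of-the-envelope storage estimate for a UHD offline edit.
# ASSUMPTION: ~140 Mbit/s for DNxHR LB at 3840x2160/24p -- a ballpark,
# not a figure from the article; consult Avid's data-rate tables.

DNXHR_LB_UHD_24P_MBPS = 140  # assumed average bitrate, Mbit/s

def terabytes_for_footage(hours: float, mbps: float = DNXHR_LB_UHD_24P_MBPS) -> float:
    """Return approximate storage in TB for `hours` of media at `mbps`."""
    seconds = hours * 3600
    bits = mbps * 1e6 * seconds
    return bits / 8 / 1e12  # bits -> bytes -> decimal terabytes

if __name__ == "__main__":
    for hours in (100, 200, 400):
        print(f"{hours:>4} h of DNxHR LB UHD media ~ {terabytes_for_footage(hours):.1f} TB")
```

At that assumed rate, the quoted 25TB to 30TB works out to roughly 400 to 475 hours of media.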

Finally, how would you sum up the whole experience?
We all knew we had something special, and everyone was operating at the top of their game. It was such a fun, satisfying experience, but it’s still a shock to see how big it’s become and how it resonates with people.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

HPA Awards: Winners and Kim Waugh Celebrated

The HPA Awards returned to the Hollywood Legion Theater to celebrate “exceptional achievement in color grading, sound, editing and visual effects.” The sold-out event also honored industry veteran Kim Waugh with its Lifetime Achievement Award and gave out additional honors for engineering excellence and outstanding achievement in restoration.

The winners of the 2022 HPA Awards Creative Categories are:

Outstanding Color Grading – Theatrical Feature

WINNER: “The Batman”

David Cole // FotoKem

“Top Gun: Maverick”

Stefan Sonnenfeld, Adam Nazarenko // Company 3

 

“Dune”

David Cole // FotoKem

 

“Nightmare Alley”

Stefan Sonnenfeld, Adam Nazarenko // Company 3

 

“No Time to Die”

Matt Wallach // Company 3

 

 

Outstanding Color Grading – Episode or Non-Theatrical Feature

WINNER: “The Marvelous Mrs. Maisel – How Do You Get to Carnegie Hall?”

Steven Bodner // Picture Shop

 

“1883 – 1883”

Mitch Paulson // Company 3

 

“Better Call Saul – Carrot and Stick”

Keith Shaw // Keep Me Posted

 

“Yellowstone – Half the Money”

Bob Festa // Company 3

 

“Billy the Kid – The Rattler”

Mark Kueper // Picture Shop

 

Outstanding Color Grading – Documentary/Nonfiction

 

WINNER: “Our Great National Parks – A World of Wonder”

Dan Gill // Picture Shop

 

“The Rescue”

Stefan Sonnenfeld, Andrew Geary // Company 3

 

“DIO: Dreamers Never Die”

Frederik Bokkenheuser // Picture Shop

 

“Harry Potter 20th Anniversary: Return to Hogwarts”

John Persichetti // Picture Shop

 

“Billie Eilish: Happier Than Ever”

Natasha Leonnet // Company 3

 

 

Outstanding Color Grading – Commercial

WINNER: Qantas – “I Still Call Australia Home”

Mark Gethin // TRAFIK

 

Zara – “Man Spring/Summer 2022”

Tim Masick // Company 3

 

Amazon – “Kindness, the greatest gift”

Damien Vandercruyssen // Harbor

 

Mango – “A Mediterranean Dream”

Joseph Bicknell // Company 3

 

O2 – “The Everyhome”

Fernando Lui // Marla Colour Grading

 

Outstanding Editing – Theatrical Feature

WINNER: “Tick, Tick… Boom!”

Myron Kerstein, ACE, Andy Weisblum, ACE

 

“Belfast”

Úna Ní Dhonghaíle, ACE

 

“No Time to Die”

Tom Cross, ACE, Elliot Graham, ACE

 

“Top Gun: Maverick”

Eddie Hamilton, ACE

 

“Encanto”

Jeremy Milton, ACE

 

Outstanding Editing – Documentary/Nonfiction – Theatrical Feature

 

WINNER: “Summer of Soul”

Joshua L. Pearson

 

“Navalny”

Langdon Page, Maya Daisy Hawke

 

“The Janes”

Kristen Huntley

 

“Flee”

Janus Billeskov Jansen

 

“Lucy And Desi”

Robert A. Martinez

 

“How to Survive a Pandemic”

Tyler H. Walk, Adam Evans

 

Outstanding Editing – Episode or Non-Theatrical Feature (30 Minutes and Under)

 

WINNER: “Barry – starting now”

Ali Greer

 

“Fast Foodies – Nikki Glaser”

Jason Le, Nathan Belt

 

“Before Dawn, Kabul Time”

Shannon Albrink

 

“Hacks – The One, The Only”

John Daigle

 

“Barry – 710N”

Franky Guttman

 

Outstanding Editing – Episode or Non-Theatrical Feature (Over 30 Minutes) TIE

WINNER: “Severance – The We We Are”

Geoffrey Richman, ACE

WINNER: “Succession – The Disruption”

Brian Kates

 

“Black Bird – The Place I Lie”

Rob Bonz

 

“Stranger Things – Chapter Four: Dear Billy”

Dean Zimmerman, ACE, Casey Cichocki

 

“Moon Knight – The Goldfish Problem”

Cedric Nairn-Smith

 

Outstanding Editing – Documentary/Nonfiction – Episode or Non-Theatrical Feature

WINNER: “Dean Martin: King of Cool”

Tom Donahue 

 

“Spring Awakening: Those You’ve Known”

Joshua L. Pearson

 

“Harry Potter 20th Anniversary: Return to Hogwarts”

Simon Bryant, Jim Clark, Will Gilbey, Jacob Proctor, Pablo Noé, Lior Linevitz-Matthews, James Collett, Bill DeRonde, Asaf Eisenberg, Tim Perniciaro

“Prehistoric Planet – Coasts”

Richard Ketteridge, Andy Hague

 

“Selena + Chef – Kwame Onwuachi”

James Ciccarello, Blake Maddox

 

Outstanding Sound – Theatrical Feature 

WINNER: “Dune”

Mark Mangini, Theo Green, Doug Hemphill, Ron Bartlett, Mac Ruth // Formosa Group

“The Batman”

William Files, Douglas Murray, Lee Gilmore, Chris Terhune // Pacific Standard Sound

Andy Nelson // Warner Bros. Post Production Services

 

“Encanto”

Shannon Mills, Nia Hansen, David E. Fluhr, CAS, Gabriel Guy, CAS, Paul McGrath, CAS // Walt Disney Animation Studios

 

“Elvis”

Wayne Pashley, Jamieson Shaw, David Lee // Big Bang Sound Design

Andy Nelson, Michael Keller // Warner Bros. Post Production Services

 

“The Matrix Resurrections”

Dane Davis, Stephanie L. Flack, Lars Ginzel, Matthias Lempert, Frank Kruse, Barry O’Sullivan // Warner Bros. Post Production Services

 

 

Outstanding Sound – Episode or Non-Theatrical Feature

 

WINNER: “Barry – 710N”

Sean Heissinger, Matthew E. Taylor, Rickley Dumm // Warner Bros. Post Production Services

Elmo Ponsdomenech, Teddy Salas // Sony Pictures Entertainment

 

“Euphoria – Stand Still Like the Hummingbird”

Wylie Stateman, Anne Jimkes-Root, Austin Roth, Beso Kacharava, Bryant Fuhrmann // 247SND

 

“The Sandman – 24/7”

Aaron Glascock, Christopher S. Aud, Curt Schulkey, Albert Gasser, Walter Spencer, Mike Horton // Warner Bros. Post Production Services

“Candy – The Fight”

Mark Binder, Elliot Hartley, Trevor Cress // IMN Creative

 

“Baymax! – Kiko”

Shannon Mills, Cameron Barker, David E. Fluhr, CAS, Paul McGrath, CAS, Kendall Demarest // Walt Disney Animation Studios

 

Outstanding Sound – Documentary/Nonfiction

WINNER: “The Biggest Little Farm: The Return”

Sue Gamsaragan Cahill, Keith Rogers, Steve Bucino, Johanna Turner, Jane Boegel-Koch // NBCUniversal StudioPost

 

“Endangered”

Lewis Goldstein, Bennett Kerr, Jerrell Suelto, Linzy Elliott, Alfred DeGrand // Parabolic

 

“Becoming Cousteau”

Tony Volante, Daniel Timmons // Harbor

 

“Prehistoric Planet – Freshwater”

Richard Lambert, Jonny Crew, Tim Owens, Tim Mercer, Paul Ackerman // Films at 59

 

“The Princess”

Andrew Stirk, Jack Cheetham, Simon Gershon, Mike Grimes, Jonathan Smith // The Project Post Ltd

 

 

Outstanding Visual Effects – Theatrical or Non-Theatrical Feature – TIE

WINNER: “Doctor Strange in the Multiverse of Madness”

Julian Foddy, Jan Maroske, Koen Hofmeester, Sally Wilson, John Seru // Industrial Light & Magic

WINNER: “Encanto”

Scott Kersavage, Erin V. Ramos, David Hutchins, Christopher Hendryx // Walt Disney Animation Studios

 

“No Time to Die”

Mark Bakowski, Bruno Baron, Rob Shears, Steve Ellis, Denis Scolan // Industrial Light & Magic

 

“Doctor Strange in the Multiverse of Madness”
Joel Behrens, Alex Millet, Kazuki Takahashi, Juan Pablo Allgeier, Bryan Smeall // Digital Domain

 

“The Batman”

Dan Lemmon, Russell Earl, Anthony Smith, Malcolm Humphreys, Michael James Allen // Industrial Light & Magic

 

Outstanding Visual Effects – Episode or Series Season

 

WINNER: “The Book of Boba Fett – Complete Season”

Richard Bluff, Abbigail Keller, Paul Kavanagh, Peter Demarest // Industrial Light & Magic

Bobo Skipper // Important Looking Pirates VFX

Abbigail Keller of ILM accepting the trophy for VFX for The Book of Boba Fett from Paul Debevec.

 

“Foundation – Season One”
Danilo Ivanisevic, Anthony Mimoz, Matthieu Bidault, Maxime Laroche, Lee Brunet // Rodeo FX

 

“Prehistoric Planet – Deserts”

Elliot Newman, Kirstin Hall, Seng Lau, Matt Marshall, Andy Hargreaves // MPC

 

“Stranger Things – Season Four”

Julien Hery, Antoine Sitruk, François Couette, Karim El-Masry, Philip Harris-Genois // Rodeo FX

“Foundation – Season One”
Adica Manis, Chris MacLean, Mike Enriquez // Apple TV+

Chris Keller, Jess Brown // DNEG

 

“Obi-Wan Kenobi – Complete Season”
Patrick Tubach, Pablo Helman, Eddie Pasquarello, David Shirk, Christopher Balog // Industrial Light & Magic

 

Outstanding Supporting Visual Effects – Episode or Series Season

WINNER: “See – Rock-a-Bye”

Chris Wright, Parker Chehak, Scott Riopelle
Javier Roca // El Ranchito

Tristan Zerafa // Pixomondo

 

“Severance – Season One”
Vadim Turchin, Nicole Melius, David Piombino, David Rouxel, Sean R. Findley // The Mill

 

“The Morning Show – My Least Favorite Year”
Zsolt Poczos, Gary Romey, Jeremy Renteria, Sean Roth, Chris Stark // FuseFX

 

“Love, Death & Robots – Night of the Mini Dead”

Thomas Hullin, Josianne Côté, Tim Emeis, José Maximiano // Rodeo FX

 

“The Terminal List – Transience”

Jon Massey, Anthony Ceccomancini, Tom Reeder // Amazon Studios

Josephine Noh // FuseFX

Lawson Deming // Barnstorm VFX

The HPA Lifetime Achievement Award was presented to Kim Waugh (Warner Bros. EVP of Worldwide Post Production Creative Services) by Warner Bros. Discovery president of Worldwide Studio Operations Jeff Nagler.

Ignite Films took home the inaugural Jury Award for Outstanding Restoration for its work on Invaders from Mars.

Winners of the coveted Engineering Excellence Award, announced previously, include Color in the Cloud by Amazon Web Services, ARRI for Reveal color science, LG for UltraFine Pro OLED HDR monitor and Mo-Sys Engineering for LED Key.


Kim Waugh

Warner Bros.’ Kim Waugh to Get HPA Lifetime Achievement Award

The Hollywood Professional Association (HPA) Awards Committee has selected studio post exec Kim Waugh to receive this year’s HPA Lifetime Achievement Award. Waugh is EVP of worldwide post production creative services at Warner Bros. Studio Operations (WBSO), part of Warner Bros. Discovery. He will receive the award on November 17 when the HPA Awards gala returns to the Hollywood Legion Theater in Los Angeles. The HPA Awards honor technical and creative excellence in content creation.

Waugh immigrated to the US in the early 1980s, leaving behind a position with the Civil Aviation Authority of New Zealand, where he worked alongside key aeronautical engineers, to begin his post career in Los Angeles. His introduction to the post production business came at sound editorial company Soundelux, headed by industry veterans Lon Bender and Wylie Stateman. Waugh started out as a sound recordist before moving through the creative ranks into facility management, eventually taking on a partnership role at Soundelux. His field recording work landed him credits on films including Glory, Braveheart, JFK and Home Alone.

Waugh (along with Bender and Stateman) received a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences in 1994 for work on developing the ADE (Advanced Data Encoding) System, or “Kiwi 9000.” The system creates an encoded timecode track and database during the initial transfer of the production sound dailies, allowing for a bridge between linear and nonlinear editorial.
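
The ADE system itself was proprietary, but the bridge it provided rests on ordinary SMPTE timecode arithmetic. A minimal sketch of that arithmetic — non-drop-frame at a 24fps film rate, and in no way the actual ADE/“Kiwi 9000” encoding:

```python
# Minimal non-drop-frame SMPTE timecode <-> frame-count conversion.
# Generic timecode arithmetic only, not the proprietary ADE/"Kiwi 9000"
# encoding itself.

def tc_to_frames(tc: str, fps: int = 24) -> int:
    """'HH:MM:SS:FF' -> absolute frame count (non-drop-frame)."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int = 24) -> str:
    """Absolute frame count -> 'HH:MM:SS:FF' (non-drop-frame)."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

assert frames_to_tc(tc_to_frames("01:00:00:12")) == "01:00:00:12"
```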

In 1995, Waugh oversaw the acquisition and rebuild of Motown Hitsville Hollywood (Signet Sound Hollywood) and Soundelux Vine Street (Ryder Sound), managing and directing both facilities. After Soundelux Entertainment Group was sold to Liberty Media/Ascent Media in 2001, Ascent Media elevated Waugh to SVP of operations and business development, Creative Group.

Waugh joined WBSO in 2004 as VP of post production services (PPS), where he focused on building creative relationships, overseeing facility infrastructure and managing talent recruitment. In 2007, he became senior vice president of PPS, during which time he initiated and oversaw the asset purchase of the De Lane Lea post facility in the Soho neighborhood of London in 2012. Waugh then rolled Warner Bros. Tech Ops’ Motion Picture Imaging group into PPS in 2015. A year later, he oversaw the purchase of Digital Cinema New York, known as WB Sound New York since 2016.

In his current role at WBSO, Waugh oversees creative teams, facilities and operations in Burbank, New York and London, including the company’s mastering, localization, archive and preservation business units. He is currently supervising the build-out of the company’s new flagship post production facility in Soho, London, to further support the growth and workflow from Warner Bros. Studios Leavesden.

“The Lifetime Achievement Award recognizes individuals who make our industry a better place to work on every level, fostering innovation and collegiality as well as accomplishment,” says HPA president Seth Hallen. “Kim is special because he possesses that rare combination of talent and personality that earns both respect and affection within his team, his organization and in the wider industry. One of the best aspects of the HPA Awards is having an opportunity to recognize remarkable people accomplishing amazing things. That’s especially true this year. Kim is respected, accomplished and well-loved, and it is our sincere honor to present this award to him.”

 

 

Cinematographer Greig Fraser

DP Greig Fraser on Dune’s Digital/Film Process and Look

By Iain Blair

Australian cinematographer Greig Fraser (ACS, ASC), whose film credits include Zero Dark Thirty, Bright Star and Rogue One: A Star Wars Story, enjoyed the challenges that came with his latest film, Dune — he was nominated for a Best Cinematography Oscar for his work. Directed by Denis Villeneuve and based on Frank Herbert’s book of the same name, Dune features inhospitable alien worlds, monsters and wars set thousands of years in the future as it charts a hero’s dangerous journey.

Cinematographer Greig Fraser

Dune tells the story of Paul Atreides (Timothée Chalamet), a young man propelled into an intergalactic power struggle that pulls him to the sands of the remote planet Arrakis, home to an indigenous human civilization called the Fremen. In this hostile environment, humanity fights for control of the Spice, a rare and mind-expanding natural resource upon which space travel, knowledge, commerce and human existence all rely.

In addition to Fraser, Villeneuve (read our conversation with him here) assembled a lineup of Academy Award-winning and nominated artisans, reteaming with two-time Oscar-nominated production designer Patrice Vermette, two-time Oscar-nominated editor Joe Walker and two-time Oscar-winning visual effects supervisor Paul Lambert.

Here, Fraser, whose credits also include Lion (for which he earned Oscar and BAFTA noms), Mary Magdalene, Vice and Foxcatcher, talks about the challenges of making the ambitious epic, his unique analog/digital approach to the cinematography and working with VFX.

Cinematographer Greig Fraser

You shot this digitally. Was that the plan from the start?
No. We were unsure, so we went and shot a ton of tests — everything from 35mm film to the large-format ARRI Alexa 65 and IMAX, and we shot anamorphic, spherical — it basically ran the gamut. It was a test of how the film would feel. Then we projected it at the IMAX theater in Playa Vista and compared all the looks. It was funny to see Denis’ reaction. Film did not impress Denis as much as we thought it might. I thought we’d shoot film, but Denis felt it had a nostalgic quality which, despite being beautiful, wasn’t what he envisioned Dune to be. But on the other hand, digital didn’t feel organic enough.

The last time we spoke, you’d been developing a process and look that combines digital with the warmth of analog.
It’s something I’d been working on for a few years before this came along, so I suggested we try this technique as the next step. In theory and in simple testing, it works like this: You basically shoot the movie digitally, give it a quick grade, output it to film and then grade the scan of that. This gives you the best out of digital and the best out of film, and we found it to be a really interesting process.

Is this the first time you’ve used it on a feature film?
Yes, and every filmmaker I’ve told about this since then has had this sort of light bulb moment. We all remember what it was like to work on film — all the bad things and problems with the lab, but also all the great things — the beautiful, emotional images. I still have a very strong love of an emulsion, and the big question is, “Where does the emulsion come into the process?” Does it come in at acquisition, or afterwards? I’m sure there are very film-centric filmmakers out there who’ll have my head on a platter for saying this, but I felt that for this film, putting an emulsion in the process after the fact was the right approach. You get that analog film look just like in the old days, when you could sculpt a look depending on what stock you went with — Kodak, Fuji, etc. Whether you underexposed and overdeveloped it or overexposed and underdeveloped it, or flashed the film, you had all these opportunities to give it a certain feel and look.

When we went digital, and when film stocks got reduced to a core number, we lost options. But with this process, you can shoot with any of the leading cameras you want, whichever one suits the project, and then the world opens up again. You can choose whatever stock you want to print it onto — negative stocks, print stocks, 35mm, 65mm. So you effectively go back to film. Now some may say it’s creating a faux grain, along with film problems like a bit of gate weave and noise, but it also creates this analog feel that film lovers have always loved about film.
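
Fraser’s photochemical pass can’t be reduced to a few lines of code, but the “faux” artifacts he contrasts it with — synthetic grain and gate weave — are easy to sketch. A toy illustration, assuming frames arrive as float numpy arrays in [0, 1]; the parameter values are invented:

```python
import numpy as np

def fake_film_pass(frame: np.ndarray, frame_idx: int,
                   grain_strength: float = 0.02,
                   weave_pixels: float = 1.5,
                   seed: int = 0) -> np.ndarray:
    """Toy digital stand-in for a film pass: additive grain plus gate weave.

    `frame` is an HxWx3 float array in [0, 1]; the parameter values are
    invented. Fraser's approach bakes these artifacts in photochemically
    rather than synthesizing them.
    """
    rng = np.random.default_rng(seed + frame_idx)  # fresh grain every frame
    # Grain: per-pixel noise weighted toward the midtones, where grain is
    # most visible (the weight peaks at 0.5 and vanishes at 0 and 1).
    noise = rng.normal(0.0, grain_strength, frame.shape)
    midtone_weight = 4.0 * frame * (1.0 - frame)
    grained = frame + noise * midtone_weight
    # Gate weave: a small random horizontal shift that changes per frame.
    shift = int(round(rng.uniform(-weave_pixels, weave_pixels)))
    weaved = np.roll(grained, shift, axis=1)
    return np.clip(weaved, 0.0, 1.0)
```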

What cameras and lenses did you use?
We used the ARRI Alexa LF and Mini LF with Panavision H-series and Ultra Vista lenses. The first part of the film has a more formal look, then we decided to shoot IMAX for all the desert sequences and went for a much looser, handheld style.

Cinematographer Greig Fraser

We deliberately went for an unsaturated look. Our skies aren’t blue, our rocks aren’t red, our sand isn’t golden, and we designed our LUT to take away the blues of the sky and so on. We used different LUTs, and colorist Dave Cole and lab FotoKem were able to combine the elements of highlights and shadows to create a LUT that worked for us. They were very much partners in creating the look, partly because they have a lab at their disposal.
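
The production LUT itself isn’t published, but the kind of targeted shaping Fraser describes — pulling the blue out of the sky — can be sketched directly. A toy numpy version, with an invented blueness heuristic and strength; in practice this shaping would be baked into a 3D LUT rather than computed per pixel like this:

```python
import numpy as np

def desaturate_blues(rgb: np.ndarray, amount: float = 0.6) -> np.ndarray:
    """Pull saturation out of blue-dominant pixels, leaving the rest alone.

    `rgb` is an HxWx3 float array in [0, 1]. `amount` and the blueness
    heuristic below are invented for illustration.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b        # Rec. 709 weights
    # Crude "how blue is this pixel" mask: blue excess over the other channels.
    blueness = np.clip(b - np.maximum(r, g), 0.0, 1.0)
    # Blend blue pixels toward their own luma, i.e. desaturate them.
    k = (amount * blueness)[..., None]
    return rgb * (1.0 - k) + luma[..., None] * k
```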

We shot all the rock desert scenes in Jordan and all the desert sand dune scenes in Abu Dhabi. My DIT, Dan Carling, was on set, and we worked very closely because it was so crucial that we got the right images and graded the right way.

What about dealing with all the VFX? How involved were you in that side of the project?
I was fairly involved. I believe we had well over 2,000 VFX shots. Denis and I shot as much in-camera as possible, but even that stuff always has some VFX work on it. I visited with VFX supervisor Paul Lambert and his team a few times during the process, and they did ask my opinion about a number of things, but after the shoot wrapped, I had to move on to another film, so that limited it.

Cinematographer Greig Fraser

But one of the great things about this partnership between Denis and me and Patrice and Paul was that we built a lot of the sets for real, and we built what we could using a very simple technique — scaffolding wrapped in the materials and colors of what would have been the real stone.

By using that method, all the lighting behaved the way it should, because if you’re only able to build 12 feet and you use blue- or greenscreen to fill in the other 18 feet to the top of the stage, that doesn’t help the lighting. If you need a shaft of light, I’d have to create that, so why not do it in conjunction with the art department, where it’s actually built out of the material? That gives you the best chance of succeeding in making it look as real as possible. So for the most part, that’s exactly what we did, and all the set extensions that Paul added using VFX could only really look a certain way, as they were lit to be correct. There was a lot of coordination between the camera department and VFX, as well as all the other departments, and that was key to the production pipeline.

As you mentioned earlier, you did the DI at FotoKem. How important is this part of the process to you?
It’s very important. I did a color bible and worked on it before I had to start on another film. So I went into editorial and did a little cut — a sort of DP cut of the movie that was in no way an edit. It was a color bible, literally, of every scene that we knew would be in the movie at that point. Obviously, scenes got added and cut during post, but at least fundamentally we knew what the film would look like.

I graded all that with Dave [who used Blackmagic’s DaVinci Resolve], and then that color bible became what VFX used as a reference. Then Denis came in and looked at it, and he had some notes, and we made some adjustments. Of course, things change as the film gets cut and becomes what it is, and Denis and Dave worked very closely together on the final look and fulfilling that original vision we all had. This is what I love so much about filmmaking and collaboration — it’s not just my grade, my lighting and so on. To me, the grade is as much Denis’ and the colorist’s and the production designer’s. It’s a communal effort. The grade is the movie’s grade, and I trust Denis’ and Dave’s opinions implicitly. (Editor’s note: Watch this space for our upcoming interview with Cole.)

This was your first time working with Denis. How did you get involved in this?
We’d actually met a long time ago at a barbecue at Roger Deakins’ home. They’d just finished shooting Sicario, and I’d seen his previous films and loved his work, but I’d never met him before. We ended up having this great conversation, and over the years we stayed in touch and saw each other at awards shows. Then he called and asked if I’d be interested in meeting to talk about Dune. He’s a master filmmaker, so I jumped at the chance.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

 

 

The Underground Railroad's Sound

Emmy-Nominated Sound for Barry Jenkins’ The Underground Railroad

By M. Louis Gordon

This May, Amazon subscribers were treated to a wondrous and brutal odyssey crafted out of 19th-century escapes from slavery in the Antebellum South. Writer/director Barry Jenkins adapted The Underground Railroad, the historical fantasy novel by Colson Whitehead, into a 10-part series. It follows Cora, a young woman who escapes slavery on a Georgia plantation by way of a literal subterranean railroad, making stops throughout the Southern states that each embody a parable of racism in America.

The Underground Railroad's Sound

Onnalee Blank

The Underground Railroad was nominated for seven Emmy Awards, including Outstanding Sound Editing and Outstanding Sound Mixing for a Limited or Anthology Series. Warner Bros.’ supervising sound editor and re-recording mixer Onnalee Blank, of Game of Thrones fame, was nominated for her work in both categories.

We reached out to Blank, who mixed at Universal Dub Stage 6, to discuss the soundscape of this haunting, magical-historical show. She had worked on Jenkins’ two previous films, Moonlight and If Beale Street Could Talk.

What drew you to The Underground Railroad?
Anything that Barry Jenkins does is an art piece, and I was very excited to be involved. I’d only heard about the book, so once I knew Barry was adapting it into a miniseries, I read it and was blown away. Whatever I imagined Barry would bring to the table was just a fraction of what he actually did.

I sent my ideas and my notes to Barry and his picture editor Joi McMillon, and we got some ideas flowing really early. Barry is so collaborative, kind and willing to try many ideas. He embraces that. You don’t feel scared or judged to try anything, even if you think it’s crazy, weird or different, which is liberating.

How was that process working with the team from pre-production through post?
When dailies were being shot, I would look at them the next day and make a list of requests that I would want the production sound mixer Joseph White to record if he had time. Any day that he had off, he would record. For instance, there was this very particular typewriter that they used. He recorded that, as well as baby carriages, ambiances, anything weird.

Watching the dailies, I start making a library and categorizing it, and I send that over to the cutting room so they can use sounds I’ve been pulling. I’m always begging to start working on [scenes that Joi is assembling], and she’s like, “I haven’t even cut a scene yet.” And I’m like, “Just send it.” She sends me very early cuts, just scenes. It could be a minute, two minutes, whatever. And I start coming up with a palette for the scenes. Whether it’s the right palette or the wrong palette, it just gets me into the headspace of this world.

The show gets very heavy at times. Was there a particular scene that was difficult for you to work on from an emotional standpoint?
Episode 1 in Georgia was the hardest one to work on. Anthony, when he’s trying to escape the plantation — he’s running and then gets dragged back into a cage. The day-in, day-out meanness of the slave life they were living… We mixed that episode last, so we had found the show by that time. When we got to Episode 1, we brought back a lot of motifs and ideas that we had been creating through the other episodes as the show evolved.

The Underground Railroad's Sound

It’s such a harsh and striking episode. There’s a moment when Cora and a little boy are being lashed at the whipping post and the whip cracks sound enormous.
At first, they were a little tamer. Then Harry Cohen, one of our sound designers, had some ideas about making them big and super sharp. Then we went bigger and hyper-real, but the final version is toned down. In that particular scene, we hear some crowd people crying a little bit — it’s really just about everybody watching what’s happening here. It’s not because they want to watch, it’s because Master Randall is making them watch. Everything should be pretty quiet except for the whips, and Cora’s and the little boy’s breathing and screaming. There are a lot of panning tricks, a lot of slap.

I love reverb. I love echo. I love delay. So does effects mixer Mathew Waters. Mathew and I make sure we use the same reverbs and pan exactly the same percentage. We really want all our sounds to meld together. We don’t want an audience to think or hear, “Oh, that’s dialogue. Oh, that’s an effect.” We really want it to just be one moment, almost like that’s how it was recorded on set.

Onnalee Blank

How about designing the underground steam engine?
It’s nice to have a bit of escapism from reality, a bit of fantasy. The train could be whatever we wanted it to be. Field recordist Watson Wu put microphones all over an 1831 steam train that was being moved from one museum to another. It took three conductors to run the train because it was so heavy. He lined the track with microphones as well. A few of the recordings were very distorted, but some of them were gems that we morphed and changed. Those recordings are everywhere. All the elevators were made out of train sounds. There are tones of wind that we morphed that are actually train sounds. Anything that we could do to make it weird.

In Episode 8, Cora goes down to a big, marbled train terminal run and used only by Black people. It’s bustling, but also very ghostly. There are sporadic, unintelligible vocals, babies crying and screams, all in a wash of reverb until she gets to the ticket counter.
This scene was fun to do. We initially wanted to play the scene straight, but I was like, “We can’t play it completely straight. We’ve got to make it slightly weird.” I wanted it to feel as if you’re at a strange party and you’ve had one too many drinks or something, and you don’t know anybody. You just hear all these conversations, and you feel like you’re sticking out like a sore thumb. How do we portray that feeling, that uncomfortableness, in her own psyche and her own dream?

Harry at one point said, “I’m giving up. I don’t know what to do there.” And our composer Nicholas Britell told me that the music for this scene is beautiful. The score has a wondering quality to it, so we tried to create the opposite of that. The scary, creepy vibe. So having those two different, almost dissonant pieces on top of each other, it’s beauty and not, at the same time. I love that sequence.

When the ticket teller is looking up Cora in her records, those are everybody’s slave names in those books. So when her hand goes over the names, you hear distant screaming and laughs, and when she closes the book, the sound gets sucked out. It’s all the souls that have made it onto the train — they’re alive in the books.

You mentioned motifs are everywhere in the series. Can you talk about those?
What made this whole series so challenging on the sound front was that every chapter takes place in a different state — Georgia, Tennessee, Indiana Autumn, Indiana Winter — and those all have to sound different. Tennessee was slave catcher Arnold Ridgeway’s very character-driven stretch of episodes. I told Jay Jennings, one of our sound designers, to “give me your own sound here, try anything that comes to you.” Then we realized that this whole series is about Ridgeway’s time running out. So Jay started creating this ticking-clock theme with a blacksmith anvil. Then that morphed into all different kinds of pocket watches and stopwatches, and you really hear it in Episode 9. They get faster and faster, and they change. It’s a big payoff, at least for us on the sound front.

In Episode 2, Caesar leads Cora down the Railroad and assures her he won’t leave her side. He delivers his dialogue directly into camera, and it sounds tremendously present.
Everything in that scene is so reverb-y in this cave, so the question was: How do we make him sound different? At first, we added more delay and tried to pan it around the room, but it just took us out. Then I put his dialogue in our Dolby Atmos speakers [above the audience], so it was very dry and very clear, right in your face.

It worked well in the down-mix too.
Thank you. I really don’t like to have dialogue just sit at the same level. “Can we feel their performances? Can we get close to the screen? Can we have the audience lean back?” Sometimes that’s hard to portray on TV, so it’s a very fine line because you don’t want people at home to be like, “What? What are they saying?” It’s trying to find a balance. We did that with Arnold Ridgeway in Episode 1, when he’s looking into camera. His dialogue spreads to every front speaker.

How about your approach to mixing dialogue for the bulk of the show?
There are some good tried-and-true EQs out there, but I feel that less is more on the EQ front with dialogue — and I try not to over-de-noise it. The dialogue session is very wide because I have every microphone in sync and in phase at all times. You can do a lot with perspective using just microphone balance.

Did you get to use new tools or techniques?
I’m always trying new stuff, and I like outboard reverbs. I use a lot of them. People make fun of me, but they sound great. I like to be bold with music and mix music in object tracks. A lot of people are scared to do that or think the fold-down will be weird. But one thing about mixing music and panning it is that it can add an element of almost sound design-ness that can merge with stuff that I’m doing. Composer Nicholas Britell is great with that. We would talk about, “What key is he writing in? Are you doing something to cover the whole scene? Okay, I’m going to start my design elements here.”

There was a lot of back and forth. Mathew Waters had a lot of percussive, almost musical sound design, and he did a lot of cool tricks with delay to make sure the repeats were on the beat, in the same tempo. We tried different EQs and used some UAD [analog emulation] plugins. I like to try almost every different kind of reverb. I even made a joke, like, “Let’s bring in an EMT spring reverb,” and you should have seen the look on the engineer’s face.
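
Plugins like EchoBoy handle tempo sync internally, but the arithmetic behind putting delay repeats “on the beat” is simple: a quarter note at a given BPM is 60,000 ms divided by the tempo. A quick sketch, with 90 BPM as an assumed example tempo:

```python
def delay_ms(bpm: float, beats: float = 1.0) -> float:
    """Delay time in milliseconds for `beats` quarter notes at `bpm`."""
    return 60_000.0 / bpm * beats

# At an assumed 90 BPM: quarter = 666.7 ms, eighth = 333.3 ms,
# dotted eighth = 500.0 ms.
for name, beats in [("quarter", 1.0), ("eighth", 0.5), ("dotted eighth", 0.75)]:
    print(f"{name:>13}: {delay_ms(90, beats):.1f} ms")
```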

L-R: Mathew Waters and Onnalee Blank

You’ve worked on all sorts of film and television projects. What does it mean to work on a series like this?
Exhaustion? We all worked on it for a very long time. It was pretty heavy. I worked on Game of Thrones for so long as just a re-recording mixer, and that show really molded me and made me become the mixer that I am today. I can’t thank that show enough for one, not firing me, and two, giving me the opportunity to work on so many different kinds of battles and naturalistic sounds. I really got my headspace into a different zone of detail and creativity on that show.

It was interesting to work on Underground Railroad, because I was trying to take everything I learned from Game of Thrones and then just heighten that by 100. How can I make this different, but great and big?


M. Louis Gordon is a sound editor, designer and location sound mixer at Silver Sound NYC. His credits include Sundance 2016’s Equity, The CW’s Tough Mudder: The Challenge Within miniseries and Tribeca 2021’s limited series In the Cards. You can follow him on Instagram @mlouisgordon.

Post Sound: Skip Lievsay and Rich Bologna Talk Collaboration, Judas

By Patrick Birk

Re-recording mixer Skip Lievsay and supervising sound editor Rich Bologna are highly regarded members of the audio post world. Bologna has worked on many notable projects, including The Hunt, Marriage Story and Fahrenheit 451. Lievsay, who has an Oscar for his work on Gravity, counts Roma, Ma Rainey’s Black Bottom, Uncut Gems, Lady Bird and No Country for Old Men among his many impressive credits. The two teamed up to bring director Shaka King’s vision to life and tell the story of Fred Hampton (Daniel Kaluuya) in Judas and the Black Messiah, which was released on DVD and Blu-ray this month.

Skip Lievsay and his furry friend

Judas and the Black Messiah tells the story of the FBI’s infiltration of the Illinois chapter of the Black Panther Party. Hampton becomes a target of the US government, and the FBI co-opts car thief Bill O’Neal (LaKeith Stanfield) to assist them in destroying the Panthers from the inside.

Here, Lievsay and Bologna talk about the collaboration — which took place on dub Stage A at Warner Bros. Post Production Services in New York City — the film and more…

Rich, can you talk about your collaboration with Skip?
Bologna: Skip mixed the dialog, ADR and music, and I mixed the sound effects, Foley, backgrounds and group ADR.

Before mixing Judas and the Black Messiah, I edited and supervised the sound editorial crew in New York, working mostly remotely with director Shaka King and picture editor Kristan Sprague for several months. Skip and I then spent a week or so predubbing the film separately. Skip came at things with fresh eyes and ears, as well as vast experience and insight, having worked on pivotal films like Malcolm X and Do the Right Thing. He provided instrumental guidance on which scenes required special attention and where we should focus our energy. The collaboration was a true joy.

In a story that is so dependent on the speeches of Fred Hampton, how did you go about treating the dialogue?
Lievsay: Throughout the movie, we had high-quality recordings for every line. We had a little bit of ADR in some places, but in general, we had a boom and a lav track on every line. In some of the bigger scenes, like the “I Am a Revolutionary” event, we had a lot of tracks, with several microphones for Daniel and several microphones for the audience.

Rich Bologna

For the most part that speech was done live. He spoke directly to the audience, and the audience responded live in the recording. So we had this really fantastic interplay between the character and a real audience. We just had to add reverb or EQ to make it match better.

Then we had the digging-in part of the job, which was finding all the good crowd reactions and filling out a complete set. We just had to do some sorting, since some of the different angles didn’t match; sometimes the crowd didn’t respond the same way, not loud enough or too loud. So we had to make a consistent audience track for that scene. Those things together are a fantastic document of that event.

Bologna: There were a couple of instances when Skip put some cool microphone-type futzes on certain speeches. And I think Shaka was very sensitive about that stuff. He ended up pulling away from that very realistic approach because I think it may have stepped on the sheer power of Daniel Kaluuya’s performance. I do think Shaka took an unadulterated approach to some of those Fred Hampton speeches because he wanted it to come through really clear and powerfully.

Would you say you broke the fourth wall a little bit by not processing speech realistically?
Lievsay: Dialogue presented in many dramatic films portrays the talking very directly without regard to whether the performer is in an extreme close-up or across the room. It’s not differentiated in terms of perspective and reverberation. The track is consistent by choice. This approach is illustrated well by British dramas, such as The Crown.

On this film, and most films that I have mixed, we chose a presentation that places the performances in a space. Distance from the camera perspective is accentuated by adding reverb or drying it up. This presentation is more consistent with the reality of reverberation in each location design of the production.

We didn’t go wild on the “I Am a Revolutionary” scene, where it was crucial to keep Fred Hampton’s sound consistent throughout the speech so the audience in a movie theater could believe that they’re in the auditorium with it as well. In that case, you don’t want to have that kind of documentary, across-the-room change of perspective, because then you become a watcher instead of a participant. I think you grant the audience access when it’s closer and dryer. At the same time, audiences who are inclined to a certain realism get pushed back when they don’t feel changes of perspective. My particular approach is to have a dry sound and a reverberant sound, which I use two types of reverb for. Then I change the perspective by changing the balance between the dry sound and the reverberant sound.
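
As a minimal sketch of that two-stem idea — not Lievsay’s actual routing — the dry/reverberant balance can be written as an equal-power crossfade driven by a notional distance control, with the reverb return standing in as a pre-rendered stem:

```python
import numpy as np

def perspective_mix(dry: np.ndarray, wet: np.ndarray, distance: float) -> np.ndarray:
    """Blend a dry dialogue stem with its reverb return by apparent distance.

    `dry` and `wet` are mono sample arrays of equal length; `distance` runs
    from 0.0 (extreme close-up, nearly all dry) to 1.0 (across the room,
    nearly all reverb). Keeping the two as separate stems is the point:
    the balance can be re-ridden at any moment of the mix.
    """
    w = float(np.clip(distance, 0.0, 1.0))
    # Equal-power crossfade so perceived loudness stays roughly constant.
    return np.cos(w * np.pi / 2.0) * dry + np.sin(w * np.pi / 2.0) * wet
```

The equal-power curve keeps the apparent level steady as the balance moves, which is what lets the shift read as distance rather than as a volume change.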

You have to have taste, which I hope I do, but also you really have to be careful to choose the reverb sounds wisely so that it doesn’t become a distraction or make the dialogue harder to hear or comprehend.

A lot of times that’s a problem with lower levels of discussion. It’s not a problem when people are speaking loudly and clearly, but it’s a problem when actors are having a discourse; sometimes there’s a mumbling thing that happens. Then a reverb can just be an added problem. That’s why I like to keep it separate, so that during the mix we can change that balance at any point.

Do you have a specific method for dealing with soft lines of dialogue?
Lievsay: I usually start by looking for a hard line, like a plant or a boom microphone. Most production mixers like to record a boom when they can, as long as the shot’s not too wide. In this movie, I believe we had a boom and lavs for every line. Once you sync them up with VocAlign or something like that, you can play them together without having phase cancellation.

From my experience, the boom mic will have a little more reverb and sound a little more natural. The lav is generally very bass-y and off-axis, and oftentimes it’s also covered by costumes. So you can filter the lav mic so it sounds like the boom mic. That’s my general approach: Try to get it to sound like a boom, and that way you can mix and match. So if the boom mic becomes noisy or muddy, then you can substitute the lav for that. But, in general, it’s a mix.
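
VocAlign performs time-varying alignment; the simplest version of the same idea is a single static offset found by cross-correlation. A sketch of that simplification, numpy only — real dialogue would need per-phrase alignment plus the lav filtering described above:

```python
import numpy as np

def best_lag(boom: np.ndarray, lav: np.ndarray, max_lag: int) -> int:
    """Find the lag (in samples) that best lines the lav up with the boom."""
    n = min(len(boom), len(lav))
    b, l = boom[:n], lav[:n]
    scores = {}
    for k in range(-max_lag, max_lag + 1):  # brute force; fine for a sketch
        if k >= 0:
            scores[k] = float(np.dot(b[k:], l[:n - k]))
        else:
            scores[k] = float(np.dot(b[:n + k], l[-k:]))
    return max(scores, key=scores.get)

def align_and_blend(boom: np.ndarray, lav: np.ndarray,
                    max_lag: int = 4800, lav_gain: float = 0.5) -> np.ndarray:
    """Shift the lav by a single static offset, then sum it under the boom.

    4800 samples = 100 ms at 48kHz. The static shift avoids gross phase
    cancellation when the two mics are played together; tools like VocAlign
    go further and warp the lav continuously against the boom.
    """
    k = best_lag(boom, lav, max_lag)
    aligned = np.roll(lav, k)  # wraps at the ends; pad instead for real use
    n = min(len(boom), len(aligned))
    return boom[:n] + lav_gain * aligned[:n]
```

With the offset found, the two mics sum without the comb-filtering that comes from playing misaligned tracks together.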

What are the reverbs you like to blend for perspective shifts?
Lievsay: I like to use Altiverb — I make a recording, then I’ll take a copy of the track and find what I think is a suitable reverb setting in Altiverb. Then I AudioSuite a copy, so I make a mono, reverb-only copy. Then I take a traditional send and return and use that as an on-the-fly reverb for everything. So I can add a little bit of room sweetener, like a stereo reverb, to all the talking or all the sound that’s happening to give it a little more high-fidelity sensation. I use ReVibe for that one, generally, and pan in the stereo to glue the voice to the center of the action.

I always tell people, if you have something beautiful happening that you’ve invested a lot of time in, print it. Put that in your session, put it next to the units. If something happens, like the director doesn’t like one component, you can reactivate the plugins and follow your trail back. But if you have a printed version, 99% of the time filmmakers say, “That sounds great. Can you make it a little louder?” That’s your fix. Changing the volume of a printed version is really simple.

Bologna: I did that in the refinery shootout, with a Soundtoys EchoBoy flammy delay that I would put on the guns because they’re surrounded by metal, and I wanted to get a cool slappy echo. But I know that line of plugins is notoriously weird with automation. So I would just print all the echoed, delayed responses and then just have them as separate stereo audio files.

Lievsay: I even use EchoBoy for my exterior patches. Exterior reverbs are the hardest thing to come by these days, and EchoBoy is pretty good for that.

Were there any scenes with more complex reverb setups? If so, how did you organize them?
Lievsay: I only did that in a couple of scenes where I had a foreground and a background, like in the shootout at the Black Panthers’ headquarters. We had an exterior street sound and an up-in-building sound. Then for the “I Am a Revolutionary” scene, I had several reverbs going, which were a combination of Altiverb, stereo Altiverbs for the whole audience and then ReVibe. I think I had two ReVibe pairs there for the audience and then for Fred Hampton’s speech as well.

How did you make the violence in this film feel so visceral?
Bologna: I’m glad that you give it that adjective. I wanted it to feel like a big movie, which it is, but also not be like the super-stylized, over-the-top violence that I think we’re all used to seeing from big Hollywood movies. The clearest example, and I think where we were most successful in giving it that kind of terrifying real-world impact, is the last shootout, where Fred is assassinated. The other shootouts have music and, I think, rightfully so. They’re cinematic approaches. But the last one has no music, and it really does scare the shit out of you because you’re in that room, and it’s happening to you. And you don’t know where the task force is. I wanted it to feel kind of chaotic, to put you into the Panthers’ mindset of, “What is going on? We are being ripped apart.”

Shaka didn’t want to lead by the hand, and music sometimes can do that, where you’re emotionally directed by the music — instead, it’s just pretty stark. The hardest thing for me to see in this movie is Deb’s face while she’s hearing Fred being murdered behind her. That did have some music at some point, and it had a stylized thing, and we just stripped everything out. It’s incredibly powerful, and it’s terrifying. We didn’t want to go crazy, but we wanted it to feel immersive and big. I think we got there in the end.

What was it like collaborating with Shaka King?
Bologna: I knew Shaka from a couple of Sundances ago, and we had kept in touch and hung out. I was thrilled that he was making this movie, and I came on pretty early in the process. In fact, I came on the week the lockdown happened.

I ended up working from home. It was cumbersome in the sense that we had to improvise and figure out the pipeline and workflows, but I think we benefited from the fact that we got a lot more time, which I love. The movie was originally supposed to come out in August of last year instead of in 2021.

Shaka had a laundry list of things that he sent me initially from the picture edit. Just workaday-type things I was expecting, like, “Hey, can you give us big gun sounds for the shootouts?” But it was really just little things that they couldn’t really pull together in the picture edit that were bugging him.

I had my initial list and knocked those out, and then, to Shaka’s credit, he let us run with it. Because he has a strong appreciation for sound and music, he really trusted us to do our thing. For the most part, he was totally on board with a lot of the decisions. And because we started early enough, I was feeding the picture department bounces, so they were quickly incorporating those elements into the edit, which is always great because if they get used to our sound without temp effects and temp music, it’s just the way the movie is, and then we can really fine-tune stuff.

Shaka was very open, and he has a very specific sensibility. Since we started early, we learned what this movie’s sound design was naturally. For example, early on we tried some really stylized things — initially we had a lot more of this inner turmoil thing with Bill’s character. That had a naturally stylized sound design element to it. I think it just fought with the way the movie should be. So we ended up, both on the picture and the sound side, naturally kind of pulling back from any of that super-stylized stuff.

Shaka is a really smart and collaborative guy, and he would always be interested in my two cents on whatever edit iteration we were at. It felt like we were all part of the team. By the time we got to Skip and re-dubbing, things were pretty ironed out on the sound side; there weren’t any big surprises, and we just focused on fine-tuning and making it sound great.

Lievsay: It was a joy to have my two wingmen, Rich and Shaka, deliver such a fully fleshed-out soundtrack before we started, then to be able to go through the movie with both of them and work on the things that made the movie as good as it could be. Which, after all, should be the primary mission at all times — getting the filmmaker to their happy place, so they can relax and enjoy their own film. Usually that involves solving a handful of problems particular to the film, which nobody wants to talk about.

In this film, the shootout at the Black Panther headquarters and the “I Am a Revolutionary” scenes were problems in that a lot of elements had to be organized, then set up and mixed properly to everyone’s satisfaction. Until that got done, there were still going to be problems. I’ve had a lot of luck in my career as an editor/mixer going head-on at those things, really embracing the hard stuff right at the beginning, because the rest of the process can’t really flow until those problems are dealt with.

If you can save all of your time and your energy for making stylistic choices with the director, then you’re really doing a great service to the project. So you really are getting into that zone where the things that you’re spending your time on are making the film better. Sometimes it’s a delicate thing: It might not seem like the film is better, but the filmmaker feels that it’s more like what they want. I really believe you have to let go and let that be the thing that’s important.

I heard this quote from a press secretary the other day, “The main thing is to keep the main thing the main thing.” I love that idea. What else is there? Why wouldn’t we want to be doing that?

Bologna: That completely applies to this film. The technical challenges of the “I Am A Revolutionary” scene were pretty ironed out, to the point that Skip will joke and say the only note that Shaka had for us was to say, “Wow.” He could just kick back and watch it.

I will say that Shaka is even into the nitpicky granular stuff that we do, like he would be fine sitting behind Skip predubbing and EQing dialogue and me getting the Foley ready. I think he’s jazzed by the whole process. It’s great to work with guys like that.

Lievsay: It is a pleasure to work on a movie that’s about something that’s very important, particularly at this time. Sadly, it’s been important as long as I’ve been alive. You could say that for everyone alive right now. Yet amazingly, not a whole lot of progress is being made. I was born in 1953. A lot has changed since then, but it doesn’t feel like we’re getting that much closer to the ultimate goal.

Bologna: It almost seems like this movie is instructive, like people are gravitating toward it. People need some sort of a roadmap, and I think Fred Hampton was a very powerful, enlightened guy. His message and what he was talking about is super-relevant to the moment that we’re living through. I’m very proud that I got to be a part of that and to work with such great people like Skip and Shaka.

At one point, Skip and I were outside on a break, and Skip was basically like, “I hope this is the first of many gigs that we’re working for somebody besides just a white guy.” I hope it’s a trend that continues because there’s a lot of power behind these messages, and people like Shaka are well-suited to bring them to the fore.


Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City. He releases original material under the moniker Carmine Vates. Check out his recently released single, Virginia.

Judas and the Black Messiah Director Shaka King

By Iain Blair

Director Shaka King has been getting a lot of attention for his timely studio feature debut, Judas and the Black Messiah. In fact, he picked up two Oscar nominations for his efforts, one as producer (Best Picture) and one for writing (Best Original Screenplay).

Director Shaka King

Set in the late 1960s and inspired by true events, Judas and the Black Messiah tells the story of Fred Hampton (Daniel Kaluuya), a Black college student whose community activism brought him to the attention of the FBI’s J. Edgar Hoover. When Hampton became the chairman of the Illinois chapter of the Black Panther Party in 1968, the FBI placed him on its “Agitator” index. Simultaneously, the FBI planted an informant, William O’Neal (LaKeith Stanfield), into the ranks of the Black Panthers to track Hampton’s messaging and movements. In 1969, the FBI assassinated him. He was 21.

The film’s behind-the-scenes creative team includes DP Sean Bobbitt, BSC, editor Kristan Sprague, supervising sound editor Rich Bologna and re-recording mixer Skip Lievsay, CAS.

I spoke with King, whose credits include Random Acts of Flyness for HBO and People of Earth for TBS, about his work on the film and his self-declared love of post.

How did you prep for your studio movie directorial debut? Did you get advice from directors you know?
Not really. I’ve done a lot of TV directing, and I did the small indie film, Newlyweeds, back in 2013, so I was pretty comfortable with that side of it. But the whole development process on this was all new to me, and I asked (producer) Ryan Coogler for a ton of advice on that throughout the process. Basically, you have to find your own way through it all.

Why did you shoot in Cleveland instead of Chicago, and how tough was the shoot?
Tax credits. It was the only way we could make the film with the budget we had, and we only had 41 days for the shoot, which was almost all practical locations. But the shoot was pretty great. We had incredible locations, everyone was so hospitable, and we had a great cast and a great crew. We all had a lot of fun.

What did DP Sean Bobbitt, a frequent collaborator with director Steve McQueen, bring to the mix?
He brought so much — an incredible eye, great attention to detail and specificity, and an incredible understanding of light and color, as well as of motion and performance. He’s a storyteller in his own right, and we complemented each other in many ways, both in our aesthetic approach and temperament and in not being wasteful, but economical and purposeful, in our decision-making.

We shot on the ARRI Alexa LS with the LS lenses, and we all did a lot of scouting and archival research to help with the shot list.

Tell us about post. Where did you do it?
All at Company 3 in New York, except for the sound, which we did at the Warner Bros. stage, also in New York, where I’m based. I love every part of post, especially as it’s your final rewrite, and you get to cherry-pick the best moments from the shoot. You also get to try stuff you’d never have thought of at the moment and reshape everything, from the overall narrative to a tiny moment in a scene.

Editor Kristan Sprague cut this. How did you work together, and what were the main editing challenges?
We went to college together, and he’s cut all my projects except for one. The pandemic was a big challenge, as we had to do it remotely after the first cut, and then you’re dealing with all the usual elements — tone, pacing, performance, rhythm and so on.

But the biggest challenge was dealing with all the music, which is so important in this and to me. I just love pairing imagery and music, and I actually love music more than any other art form. Our original plan was to get composers Craig Harris and Mark Isham in a room with a bunch of jazz musicians, record some improvised sessions and then notate them. But when COVID hit, we couldn’t do that, so Craig and Mark were working remotely.

Ultimately, I also hired music supervisor Zach Cowie, who sent me all these great jazz temp pieces that worked beautifully. I also brought in Quelle Chris, a musician friend, who came up with an amazing percussive click track for a scene. Then Mark structured some more music around it, and eventually we arrived at a score that really nailed it for me. We recorded it at Manhattan Center Studios and Reservoir and then mixed at Valhalla. Then it was delivered to mixer Skip Lievsay, who did the final mix of the film.

There are a few VFX by Zoic and Powerhouse VFX. What was entailed?
All the cleanup and removal stuff you need to do for any period piece. Then the whole Hoover sequence had a ton of VFX and prosthetic makeup cleanup, and the stuff playing on the screen in the background had to be replaced. Jeremy Newmark was our VFX supervisor, and he did a great job, as it’s all invisible work.

What about the DI at Company 3 and working with colorist Tom Poole?
Tom is amazing and he and our DP Sean have been working together a long time. We all based the whole look on these vintage photos I had of Chicago in the late ’60s and early ’70s, and that was our reference point. So when I came to the grade after they’d done the first pass, I was like, this is perfect. They’d nailed it, and the few changes we made were very minimal.

What sort of film did you set out to make?
It was a chance to present Fred’s ideas, politics and beliefs to a wide audience in the shape of a genre movie — a drama about power and politics. But I didn’t want to make a straightforward biopic. I also wanted to frame it in the bigger picture of the causes he was fighting for.

Director Shaka King on set with LaKeith Stanfield

Did it turn out the way you first envisioned it?
No, not at all. It got radically reworked, and the end result is far better than I’d ever imagined.

Awards season is treating you kindly. How important is that for a film like this?
It’s very important, especially this year, as we don’t have the usual wide release and theater box office to determine its success. So awards really count. And you also have to recognize that movies like this don’t get made without a huge global hit like Black Panther and someone like Ryan Coogler, who helped develop this and produced it.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Wonder Woman and Tenet Sound Designer Richard King

By Patrick Birk

We’ve all needed to find ways to escape reality since the pandemic began, and movies have given many of us that much-needed transport. They allow us to get lost in a different time and a different place. Two films that were just what 2020 needed were Patty Jenkins’ Wonder Woman 1984 and Christopher Nolan’s Tenet. These two blockbusters each tell the story of a hero who must fight to save the world, but they have more than that in common: supervising sound editor and sound designer Richard King.

Richard King

This four-time Oscar winner’s credits include Inception, Master and Commander, War of the Worlds, Dunkirk, Interstellar and the Dark Knight films.

Here he shares his general philosophy on sound design and how he applied it to create these excellent yet vastly different sonic experiences.

How did you manage the large teams on Wonder Woman 1984 and Tenet?
Wonder Woman and Tenet were very different in their preparation. Wonder Woman 1984, which we finished before Tenet, actually comprised two sound crews. I co-supervised the film with Jimmy Boyle, a UK-based sound designer and sound supervisor.

I began the process at Warner Bros. Burbank with several effects editors prepping for a temp mix in anticipation of Patty Jenkins’ screening for the studio. We put a lot into the temp mix editorially, here in Los Angeles with most of my regular crew. Then we segued to WB’s De Lane Lea studios in London, and Jimmy’s crew took over from there, with me and Andrew Bock continuing to work here in LA.

We had slightly overlapping work hours — they’d go home at roughly 11 in the morning or noon our time. So we had some hours in the morning when we could catch up and trade ideas and chat. Jimmy and his team are great to work with, really dedicated and creative.

Wonder Woman 1984

Andrew Bock (assistant sound editor, WW1984 and Tenet), who’s worked with me for ages and is great at organization, devised a system using a shared server and a Google Whiteboard as a synchronous check-out board so it was easy to see if the Avid Pro Tools session for a reel was up on anyone’s system.
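
For readers curious what such a check-out board boils down to, here is a minimal, purely hypothetical Python sketch in which lock files on a shared server stand in for the whiteboard; the paths, names and workflow details are invented, not Bock’s actual system.

```python
# Hypothetical check-out board: a lock file marks a reel's Pro Tools
# session as "open on someone's system" (illustrative only).
from datetime import datetime
from pathlib import Path

SHARED = Path("/Volumes/shared/sessions")  # invented mount point

def check_out(reel: str, editor: str) -> bool:
    """Claim a reel's session; returns False if another editor has it open."""
    lock = SHARED / f"{reel}.lock"
    try:
        with open(lock, "x") as f:  # mode "x" fails atomically if the lock exists
            f.write(f"{editor} @ {datetime.now():%Y-%m-%d %H:%M}")
        return True
    except FileExistsError:
        print(f"{reel} is already checked out by {lock.read_text().strip()}")
        return False

def check_in(reel: str) -> None:
    """Release the reel so another editor can open the session."""
    (SHARED / f"{reel}.lock").unlink(missing_ok=True)

if check_out("reel_03", "andrew"):
    print("Safe to open the session.")
```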

The editors were assigned a reel or a project within a reel or scene. Eventually, when they got to a place where they were satisfied, I’d open that session and almost always do a little editorial or mixing work. When I’m supervising solo, I’m sort of the conduit to the stage. In the case of Wonder Woman, Jimmy and I were both conduits, and he was listening to material and tweaking on his end as well. He and I were communicating a lot and rapidly got in sync with each other aesthetically. So it was kind of a synthesis. I had a cutting room at WB De Lane Lea as well.

On Tenet I was back to my usual way of working, with my regular team, again at WB Burbank. Over the years we’ve developed a working methodology whereby everybody gets their hands into everything at one point or another. When somebody has an idea about a scene that someone else is working on, then that person has a go at it, too. It’s a cumulative process, and everyone’s involved creatively. I premix the sound effects and add and recut and subtract things. I then give it back to the editors, and they tweak some more. If we have enough time, we do this trading back and forth and just keep polishing and refining.

Our temp mix process was somewhat different on Tenet, as the pandemic struck halfway through our first temp. For subsequent temps, after the editors were done, (re-recording mixer) Gary Rizzo laid down new dialogue and music stems in his home theater up in San Rafael, while I fixed the effects and Foley stems at home here in LA. Then Gary made a new sound master; Chris Nolan heard it in his home theater and gave us notes for the following week’s turnover.

Tenet

Both films feature massive battle sequences, but the tone of those sequences is vastly different. How did you develop those sounds?
Patty designed Wonder Woman 1984 to develop in stages. After the dramatic opening on Themyscira, we land in a fun, lighthearted film that becomes quite dark by the third act. Jimmy and I followed our instincts and Patty’s direction.

Tenet and Wonder Woman both had huge scope and scale, with a lot of sound and action, but tonally, they’re very different.

On Tenet we did a lot of experiments to figure out how to make the inverted sequences sound different without simply reversing the sound. We quickly realized that playing all the sounds for the inverted sequences in reverse wasn’t going to work; it sounded a little silly and comical. We ended up constructing all the inverted sequences from primarily forward-playing sounds, recording all the Foley forward, for instance, and just adding the odd sweetener here and there to explain a movement that wouldn’t be possible in normal life. We worked out a philosophy of staying as true as possible to the physics of the situation in the inverted portions of the film, thinking through how different sounds might logically transfer from an inverted character’s perception to a forward-moving character’s perception. Of course, it was all a guess, but it made for some interesting conversations. We developed strict rules with Chris and implemented them throughout to keep a logical, consistent approach to those sequences.

So it’s really about getting into the spirit of the film. I try to do that with every movie and avoid adhering to any kind of style or familiar approach. I start each film as if it’s the first project I’ve done, so it’s a revelatory experience. You obviously want to take some of that experience with you, like learning from past mistakes. But I just try to get a strong sense of what the director is looking for in every movie — what kind of vibe they want — and then feel my way into it. That’s what all sound editors and designers do. The worst thing you can do is copy yourself; it’s much more fun to launch yourself into a movie and rethink everything.

Patty asked us to come up with a lot of new sounds, but one sound that she liked the foundation of was the Lasso of Truth from the first film — only she wanted to elaborate on it. So we stuck with the same general feeling that the Lasso of Truth had in the first Wonder Woman but gave it a bit more articulation because visually it looks a bit different as well. At times its sonic movement had an electrical quality, like high-voltage buzz, to give it some power. Basically, it’s a reflection of what she’s feeling, so it changes a bit from sequence to sequence depending upon what’s happening.

As for the Golden Eagle Armor, Patty wanted that to be massive — that metal is meant to be as hard as a manhole cover; it’s practically impregnable. So we ended up going bigger and bigger with the metal hits that we used for its movements, such as when Diana takes it off or puts it on, and when the Cheetah is banging against it. We started out with a little ring to it, but that ring became less and less until it was full-on solid metal without a whole lot of resonance. Patty wanted a lot of size and dynamics, especially in that sequence. It looks super-impressive and needed to sound as impressive as it looks. The character Barbara, as Cheetah, is pretty bad-ass herself, and she can wreck this impregnable armor.

That sequence definitely contains some supersized sounds, but even during moments like that, which are clearly impossible in our real world, you want to make it feel as if it’s really happening. To me, that makes it much more impressive.

Wonder Woman 1984

Imagine something like that happening in the world that we’re accustomed to — it’s much more aggressive than seeing something fantastic on a screen and hearing a big generic movie sound effect that doesn’t relate to the space they’re in. It’s a fine line, and you can only make it sound so real, but you don’t want to give the audience a moment to think that it seems somehow overblown or false. You want them to feel like, “Oh shit, that really happened.” This is a subtle distinction, but I think it’s a recognizable distinction.

What are some of your favorite tools for getting EQ and reverb right, or for creating new effects?
We use the usual wide array of plugins. Altiverb is such a great tool; we use that a lot for convincing reverb. PhoenixVerb as well. The way a sound activates a space is as important as the sound itself in conveying reality. It imparts so much information about where you are, how far away you are from the sound, what direction it’s coming from and what the space you’re in is made of.

I recently discovered Radium, Soundminer’s built-in sampler. They’ve focused on ease of use, so I’ve been using that quite a bit.

I encountered an interesting situation when I was working at home on Tenet. Two of my three dogs really don’t like loud noises like fireworks and thunder. The third one’s oblivious — he doesn’t care. I was downstairs working in my studio, playing machine guns at full movie volume, and I noticed that they just slept through it. I couldn’t figure out why they didn’t react to my guns. I found that reinforcing lower frequencies in the guns got a little more of a reaction, though still not like the real thing. It’s probably body resonance that’s missing.

Since our bodies are vessels mostly filled with fluid, they resonate. The sensation we call sound is a mixture of aural perception and bodily perception. We perceive those ultra-low frequencies and react to them.

Tenet

We also did a lot of old-fashioned analog worldizing on Tenet, because we had made these weird sounds, but they didn’t sound real and organic enough. So we recorded them in very specific environments that would impart the kind of room reverb or sense of distance we wanted. For example, sound effects editors Randy Torres and Joseph Fraioli set up several oscillators and modular synthesizers and played back various low-end warbles and punches over speakers and subwoofers on a large sound stage at Warner Bros.

The sounds were massive, especially reverbed out in this vast empty space. We recorded with a lot of different mics — a pair of SASS-P MK II mics, Neumann 191s, Schoeps mics, DPAs. Each mic was placed according to its strengths.

How much did quarantine affect the post production of either film?
Wonder Woman was mostly finished before quarantine. There were some mixing tweaks done in March, but I finished my work on the film in November of 2019 and then moved right on to Tenet. We began our first temp mix at Warner Bros. Stage 9 in Burbank at the beginning of March. We had blocked out eight days, but by day two, it became evident things were about to go pear-shaped. We managed to get the temp mix done in four days, hours before the lockdown.

Warner’s engineering department, led by Kevin Collier, did a fantastic job of getting everybody the equipment they needed to work from home. It was a crazy time, but we figured out the logistics of how to do it. So we all spent five weeks at home — cutting. And we did a couple of temp mix updates from home. I brought enough gear home to set up a 5.1 studio in the movie room at my house. Gary Rizzo has a home theater, so he simply brought in his Wacom tablet and Pro Tools, and he was ready to go.

We didn’t deviate from the schedule by a day, and we finished Tenet in mid-June in time for its original release date. It really didn’t affect the course of the film creatively at all. When we began mixing, we were the only people on the lot besides security. The day that we wrapped the mix in June, the commissary at Warner Bros. opened for take-out for the first time since lockdown. It was very strange being alone on this giant movie lot where normally, on any given day, there are 10,000 to 15,000 people working. It was a ghost town. But in a way, I think it focused all of us, because I don’t think any of us had much going on outside work.

Did the equipment Warner Bros. provided during quarantine include mobile ADR or loop group rigs?
Warners does have mobile ADR rigs that they can send out, but we had done all the loop group before quarantine. We did all the Foley before, too, so we’d gotten those big jobs out of the way — not out of foresight, but simply because I wanted to get it done before the temp mix.

Tenet

Chris generally does very little looping for his films. I think there were half a dozen looped lines in Tenet at the most, and I don’t think there would’ve been many more even if COVID hadn’t been an issue. Chris recorded those lines himself in his own home theater.

Face coverings such as masks and helmets were prevalent in Tenet. Were you just working with production audio for scenes that featured those costumes?
The production sound mixer, Willie Burton, did a really good job of dealing with the masks. He experimented with fitting DPAs and other tiny mics inside the masks, and he ran tests to figure out the best positioning to capture the voice in addition to the boom. It worked out great. Gary Rizzo did a fantastic job of sharpening the dialogue so it didn’t sound too muffled; he made it sound appropriate to what we were seeing and made it more intelligible. Chris Nolan loves putting his characters in masks — Dunkirk, Bane in The Dark Knight Rises — so Gary’s gotten very good with actors in masks! There was also a lot of hunting for syllables and words, a lot of really forensic dialogue editing by Dave Bach and Russ Farmarco.

Did you approach the mix of Tenet differently at all, given how limited theatrical releases have been this past year?
No, Tenet was going to go out into the theaters. There was no question about that. Toward the end of the mix, no one really knew what was going to happen. The entire film was shot on IMAX film and 65mm film, so there was no way it was going to get released strictly online. That’s very important to Chris. His movies really need to be experienced on a big screen, and we all work with that in mind. Ultimately, it did get its theatrical release, briefly. When this is all over, I hope it gets a real rollout in IMAX.


Patrick Birk is a musician, sound engineer and post pro at Silver Sound, a boutique sound house based in New York City. He releases original material under the moniker Carmine Vates. Check out his recent single, Virginia.

Conan’s Editing Team Cuts DIY Episode While Working From Home

By Randi Altman

Throughout this pandemic, we’ve seen creativity and innovation at their best. In order to keep producing regular content, late-night talk show hosts found ways to keep shooting from their homes while their tech teams created new workflows to keep things moving.

Maybe it was audiences’ ability to accept seeing their favorite shows presented in different ways that led to “DIY Conan.”

What is “DIY Conan”? Over the summer, Conan O’Brien asked his viewers to get creative and make their own segments for an episode of his TBS show. And, boy, did they come through. The “DIY Conan” episode, which aired on September 21, featured live-action footage, puppetry, animation, stop motion and more.

We recently spoke to Conan’s edit team to find out about the challenges of editing and posting a do-it-yourself episode, with all its varying formats and video and audio qualities.

Rob Ashe

In addition to those challenges, the team was working remotely due to COVID. Let’s find out more from lead editor Rob Ashe and fellow editors Chris Heller and Matt Shaw, all of whom told me how lucky they feel to have been able to continue working through this pandemic… even from home.

How did this all work?
Chris Heller: Our web team took our full edited episode and posted it online to Team Coco’s website, broken into roughly 30-second pieces. Fans were able to download and recreate as many chunks as they wanted and then upload their homemade videos when they were done.

Our clips department (Alex Wallachy) then downloaded all the submissions and uploaded them to Frame.io so we could access and download them to our shared storage in Burbank.
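
The article doesn’t name the tool the web team used for the chunking step, but the operation itself is simple to sketch. Here is a hypothetical version using ffmpeg’s segment muxer from Python; the filenames are invented, and this is a stand-in, not Team Coco’s actual pipeline.

```python
# Split an episode into numbered ~30-second MP4 chunks for fans to recreate.
import subprocess

def split_into_chunks(episode: str, seconds: int = 30) -> None:
    subprocess.run([
        "ffmpeg", "-i", episode,
        "-c", "copy",                   # stream copy: fast, no re-encode
        "-f", "segment",
        "-segment_time", str(seconds),  # with stream copy, cuts land on keyframes
        "-reset_timestamps", "1",
        "chunk_%03d.mp4",
    ], check=True)

split_into_chunks("diy_conan_template.mp4")
```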

Chris Heller and his fantastic COVID beard.

How did people send in their submissions? Did you ask for a certain format?
Heller: They mostly uploaded 1080p MP4s, but there were some pretty small clips that were SD, so we just took whatever they had. Surprisingly, most of my clips were shot horizontally, and the ones that were vertical were easy to fit into a split screen, so that worked out well.

How many submissions did you get, and what was the process of choosing the ones that would be featured?
Rob Ashe: The editors were asked to first assemble an editor’s cut with what we thought was the best balance of quality and flow, and then producer Mike Sweeney worked with us to make adjustments for final assembly.

Heller: There were hundreds of submissions. I just began laying in and syncing clips on a timeline on top of the original Conan show. Then I would see which clips I liked best for a certain section and start switching between them. Some clips really matched what Conan did in the actual show, so we used those for split screens showing Conan’s original video alongside the fan’s.

Then each of the editors sent Mike Sweeney their three favorite submissions from their acts. These were later voted on by everyone.

Can you talk about the challenges of making sure all these very varied pieces of content flowed creatively?
Shaw: We had already constructed these “template” cuts with all the Conan clips that were used as reference for the submission recreations, so I began marking different beats throughout my template cut. Once we began getting submissions, I already had a rhythm established to begin a rough cut.

My act had 130 or so submissions, so sorting and pulling selects was a bit of an undertaking, with many, many passes. From there it became a puzzle of picking which select fit best for each beat, with lots of trial and error, one section at a time until it was done.

Matt Shaw

Heller: We discovered last time with “Occupy Conan” [a contest from 1/31/2013 that asked fans to recreate and film one of 79 clips available on the Team Coco website and send the finished product into the show] that it’s more cohesive to use the audio from the original show most of the time and sprinkle the fans’ audio in here and there. It makes it smoother jumping around to different clips when the audio is continuous. Also, the fans’ audio was so different from clip to clip that it was quite disorienting.

Of course, some fans did things that needed to be heard such as cool voices or singing. For some sections we used both Conan and the fan’s audio together.

Ashe: For me, the audio was probably the most challenging aspect that I had to deal with, but only due to the plethora of various sources. In terms of flow, it was based on a mix of presenting the most creative choices while using split screens to highlight the craftsmanship and attention to detail of Conan’s fans.

Editing was done in Adobe Premiere? How was the job split up among the editors?
Ashe: Yes, in Premiere. Chris edited the first segment with the opening titles, monologue and live sketches. Matt tackled the Jordan Schlansky remote, and I edited the interview segment and helped assemble the Snickers’ Comic-Con segment.

Act 1 timeline

Heller: I picked Act 1 because I did that for the “Occupy Conan” episode and really wanted to do the top of the show again. “Occupy” was an hour-long episode, and DIY was a half-hour that breaks up into four different segments. Rob did Segments 3 and 4, and Matt Shaw did Segment 2, which ended up having the most submissions by far.

What other tools were used? Color grading, plugins?
Heller: I used Adobe Lumetri for color, then rendered and replaced the audio with Adobe Audition, applied its speech volume leveler and saved back to Premiere. That way all the fans’ audio sat in the same range, and we got rid of the MP4 audio, which takes longer to export in the end.
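
Audition’s speech volume leveler is a proprietary tool, but the underlying idea, pulling every fan clip toward one consistent dialogue loudness, can be sketched with ffmpeg’s loudnorm filter. The targets and filenames below are assumptions for illustration, not the show’s actual specs.

```python
# One-pass loudness normalization as a stand-in for Audition's speech leveler.
import subprocess

def level_audio(src: str, dst: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-af", "loudnorm=I=-16:TP=-1.5:LRA=11",  # integrated loudness / true peak / range
        "-c:v", "copy",                           # leave the picture untouched
        dst,
    ], check=True)

level_audio("fan_clip.mp4", "fan_clip_leveled.mp4")
```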

You were all working remotely for this episode. Can you talk about setting up for this and some of the challenges?
Shaw: Yes, we’ve all been working from home since the shutdown began in March. The first challenges were figuring out how to edit via screen-sharing, how to get media fast enough to edit with for a daily show, and how fast we could upload to send to the network from home. Everyone had to max out internet speeds for smooth enough screen sharing using Evercast and, most importantly, uploads via Aspera.

At first, we were shooting the show entirely on iPhones and Zoom, so it wasn’t terrible to pass around media with Frame.io as the hub.

Eventually, we moved from working off our home systems to using virtual computers and then to remote-logging into our edit systems at Warner Bros. A small crew films Conan at the Largo Theater, where we tape his monologues, using Panasonic XDCAM cameras that transmit through LiveU to the cloud and on to our WB systems, plus Zoom interviews featuring guests, all with a skeleton crew.

I think I was the biggest challenge for the tech team because cable internet isn’t available where I live. We tried a few different virtual machines at first; then our IT engineer, Rob Gage, got me a better cellular router so we could work remotely off our systems at Warner Bros. This was preferable because it meant we could use our Avid ISIS shared storage and deliver to TBS from Burbank.

Rob Gage: The main challenge we faced was dealing with large media files on a short turnaround. Bandwidth was only an issue when our editors had to upload media; downloads rarely impacted our workflow. Before we configured everything for our editors to work remotely from their edit bays on the lot using the LiveU equipment, our goal was to minimize file movement and to keep what movement there was to downloads as much as possible.

The main tool in our WFH workflow is Frame.io, which acts as our central hub for media. It was the easiest for us to adopt since our editors, writers and producers were already familiar with it. Originally, we used Frame.io to distribute media to editors; they would work from their local systems while I synced the media from Frame.io to an AWS FSx share. The editors would relink, export and upload the final acts to TBS via HBO’s Aspera.

We’ve gone through many changes as we’ve added equipment, because that original workflow wouldn’t reliably support a day-of-show turnaround. We’ve added LiveU packs to our cameras at Largo, which we use for the comedy portion of the show as well as a single of Conan for the guest interviews.

That footage is captured to the cloud, and we download it to our shared storage at the WB lot. Our editors remote into their edit bays on the lot to cut the show, using Evercast to work with writers and producers on the show as well as pretaped comedy pieces. Once the final segments are completed, they are exported to EVS, and our playback operator plays out the show to TBS. Our workflow is pretty much the same as it was when we were on site, with the exception of how we get the media, since we are using LiveU.

Are you all still working remotely? If so, how is that going?
Heller: I love working from home, but it does introduce varying levels of distraction, depending on whatever is going on at home at the time.

Shaw: It seems like we’ll be working from home at least until the end of the year. I’ve always been a homebody, so working from home isn’t terrible. I get to hang out with my wife and dogs and get my own bathroom.

My biggest challenge is separating work from home when it’s all the same now. During a workday, I’ll stay in the office for the bulk of the day and not go back in when finished to keep a routine and separation.

I also just really miss seeing my team in person!

What is everyone’s favorite submission and why?
Shaw: I had more of a favorite section than a single submission: the part where producer Jordan Schlansky describes his morning routine of experimenting with oats and whey. We got quite a few very strong animations for it, each elaborating in styles you don’t see very often in popular animation.

Heller: I really enjoyed all the fans’ creativity, but the animations were my favorite.

Ashe: Personally, my favorite submission was the “Home Invader” during the interview segment. I thought it showed amazing imagination.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years.

How CBS’ All Rise Went Remote for Last Season’s Finale

By Daniel Restuccio

When the coronavirus forced just about everything to shut down back in mid-March, many broadcast television series had no choice but to make their last-shot episodes their season finales. Others got creative.

Producer Dantonio Alvarez. Photo Credit: Alen Kajtezovic

While NBC’s The Blacklist opted for a CG/live-action hybrid to end its season, CBS’ courtroom drama, All Rise, chose to address the shutdown head-on with a show that was shot remotely. When CBS/Warner Bros. shut down production on All Rise, EPs Michael M. Robin and Len Goldstein — along with EP/co-showrunners Greg Spottiswood and Dee Harris-Lawrence — began brainstorming the idea of creating an episode that reflected the current pandemic crisis applied to the justice system.

Co-producer Dantonio Alvarez was deep into remote post on the already-shot Episodes 19 and 20 when Robin called him. Robin and consultant Gil Garcetti had looked into how the court system was handling the pandemic and decided to pitch an idea to Warner Bros.: a remote episode of All Rise done via a Zoom-like setup. Alvarez was relieved; it meant much of the crew — 50 of the usual 90-person team — could keep working.

In a week’s time, Spottiswood and co-executive producer Greg Nelson wrote the 64-page script that focused on the complications around a virtual bench trial and the virus-jammed court system.

The Logistics
Producer Ronnie Chong reached out to Jargon Entertainment’s Lucas Solomon to see how he could help. Jargon, which provides on-set playback and computer graphics, had been working with network solutions company Straight Up Technologies (SUT) on other projects. Solomon brought SUT into the mix. “We figured out a way to do everything online and to get it to a point where Mike Robin could be at home directing everybody,” he explains.

Straight Up Technologies offers a secure and proprietary broadband network with a broadcast-quality ISP backbone that can accommodate up to 200 simultaneous video feeds at 1920×1080 at 30fps and do 4K (3840×2160 or 4096×2160). For All Rise to record at 1920×1080, each actor needed a network upload speed of 5Mb/s for no lag or packet loss. If the producers had decided to go 4K, it would have needed to be triple that.
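
The arithmetic behind those figures is worth spelling out. A quick back-of-the-envelope check in Python, using only the numbers quoted above:

```python
# Illustrative arithmetic only, from the figures quoted in the article.
PER_FEED_1080_MBPS = 5                       # upload per actor, 1920x1080 @ 30fps
PER_FEED_4K_MBPS = 3 * PER_FEED_1080_MBPS    # "triple that" had they gone 4K
MAX_FEEDS = 200                              # simultaneous feeds the backbone supports

print(f"Aggregate at 1080p: {MAX_FEEDS * PER_FEED_1080_MBPS} Mb/s")  # 1000 Mb/s
print(f"Aggregate at 4K:    {MAX_FEEDS * PER_FEED_4K_MBPS} Mb/s")    # 3000 Mb/s
```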

Prep started the week of April 10, with Solomon, Alvarez, DP David Harp, Robin and the SUT IT team doing Zoom or WebEx scouts of the actors’ homes for suitable locations. They also evaluated each home’s bandwidth, making a list of what computers and mobile devices everyone had.

“You’re only as good as the connection out of your house and the traffic around your house,” explains SUT’s John Grindley. They used what was in the actors’ houses and enhanced the connection to their network when necessary. This included upgrading the basic download/upload data plan, going from 4G to 5G, putting in signal boosters, adding hard lines to computers and installing “cradle points” — high-end Wi-Fi hotspots — if needed.

The cast got small battery-powered ring lights for their devices.

Cinematographer Harp set out to find which areas of the cast’s houses helped tell the story. He asked things like, “What was the architecture? What kind of lights did they have in the room? Were they on dimmers? Where were the windows, and what are the window treatments like?” The answers to those questions determined Harp’s lighting package. He sent small battery-powered ring lights to the cast along with tripods for their iPhones, but mostly they worked with what they had. “We decided that we’re not going to get cameras out to anybody,” explains Alvarez. “We were going to use people’s phones and their home computers for capture.”

As a result, all 22 cast members became camera operators, grips and essentially one-person guerrilla film crews. Their gear was MacBook Pros, MacBook Airs, iPhones and Cisco DX70s. Harp controlled exposure on the computers by moving lights around and positioning the actors.

Solomon set up his video assist system, QTake, at his shop in Valencia, with 400Mb/s download and 20Mb/s upload bandwidth to record all the feeds. “We set up two other recording locations — one in Hollywood and one in Chatsworth — as redundancy.”

Production Begins
On Friday, April 17, day one of the six-day shoot, a five-person engineering crew at the COVID-safe SUT offices in San Francisco, Seattle and El Segundo fired up the network, checked the call sheet and connected to the crew.

Actors Jessica Camacho (Emily Lopez) and Lindsay Mendez (Sara Castillo) logged into join.sutvideo.com on a MacBook Pro laptop and an iPhone, respectively. Their signal strength was good, so they shot their scene.

According to Straight Up Technologies CTO Reinier Nissen, the engineers set up virtual spaces, or “talent rooms,” for each actor and a “main stage” room where “talent rooms” were nested and scenes were played out. Every actor’s camera and mic feeds were married and recorded as individual signals. The “main stage” could be configured into a split-screen “Zoom-like” grid with inputs from any of the actors’ feeds. Some of the virtual spaces were control rooms, like a video village, where crew and IT could see all the actors, give technical and creative direction, monitor the signals, manage network traffic and control whose video and audio were on or muted.
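
As a rough mental model of that routing (purely illustrative, since SUT’s system is proprietary), think of each talent room as a single-feed container nested into the main stage, with control rooms able to mute any feed:

```python
# Toy model of nested "talent rooms" feeding a "main stage" (invented names).
from dataclasses import dataclass, field

@dataclass
class Feed:
    actor: str
    muted: bool = False

@dataclass
class Room:
    name: str
    feeds: list = field(default_factory=list)

main_stage = Room("main stage")
for actor in ["Camacho", "Mendez"]:
    talent = Room(f"talent: {actor}", [Feed(actor)])  # one feed per talent room
    main_stage.feeds.extend(talent.feeds)             # nested into the main stage

main_stage.feeds[1].muted = True  # a control room mutes an actor between takes
```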

The Cisco DX70s natively output 1920×1080 at 30fps. The MacBook Pro and Air 1280×720 camera feeds were upscaled in the sutvideo.com system to 1920×1080 at 30fps. The iPhones, though 4K-capable, were set to 1920×1080 at 30fps. Solomon recorded both the split-screen main stage and the individual talent room streams to his QTake system in QuickTime ProRes at 1920×1080, conformed the frame rate to 23.976 and added timecode.
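
QTake handled that conform internally, but the operation described, taking a 30fps ProRes recording to 23.976 with timecode stamped on, can be approximated with ffmpeg. This sketch resamples by dropping or duplicating frames; the filenames and start timecode are invented.

```python
# Approximate the recording conform: resample to 23.976 and add timecode.
import subprocess

def conform(src: str, dst: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", src,
        "-r", "24000/1001",                       # resample to 23.976 fps
        "-c:v", "prores_ks", "-profile:v", "2",   # ProRes 422
        "-timecode", "01:00:00:00",               # hypothetical start timecode
        dst,
    ], check=True)

conform("talent_room_feed.mov", "talent_room_feed_2398.mov")
```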

DP David Harp

Each take was slated just like a normal shoot. From his LA home, director Robin could see everyone in the scene on the main stage and decide how to arrange them in the grid, set their eyelines and even pop into the grid during rehearsal and between takes to give notes.

Staging the scene, you would think that the actor should look straight at the camera so you could see their eyes. However, they noticed that there was “less of a connection when looking at the lens,” says Harp. “When they’re looking around the screen, you can feel a connection because they’re looking at each other.”

In addition to the virtual first unit footage, Harp shot eight days of second unit footage of Los Angeles streets during COVID. With four suction cups, he attached his Sony A7 to the roof of his son’s car and drove around for four or five hours a day shooting essentially a stock library of Los Angeles during a pandemic.

Post Production
Alvarez used the remote post infrastructure he set up for Episodes 19 and 20 for the new show. All of the editors, assistant editors, visual effects artists and audio team were working from home on their own systems or ones provided by Warner Bros. Since there was no Avid Unity shared storage, they did old-school shuttling of drives from location to location.

“We had three teams tackling this thing because our schedule was ridiculously short,” says Alvarez. “Every single day, feeding everybody material, we were able to get everyone cutting. We’d send live feeds or links to producers to get their eyes on editorial approvals on scenes in real time. We just moved.”

MTI Film EP Barbara Marshall reports that all the footage was ingested into the post house’s Signiant server system. From those masters, they made DNxHD 36 dailies using the MTI Cortex v5 software and sent them to the editors and assistant editors.
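
MTI’s Cortex settings aren’t spelled out here, but the shape of a DNxHD 36 dailies render is easy to illustrate with ffmpeg. The filenames are hypothetical, and the sketch assumes 23.976fps masters, per the conform described earlier.

```python
# Render an editorial proxy: 1920x1080 DNxHD 36 with uncompressed audio.
import subprocess

def make_daily(master: str, daily: str) -> None:
    subprocess.run([
        "ffmpeg", "-i", master,
        "-vf", "scale=1920:1080",
        "-c:v", "dnxhd", "-b:v", "36M",  # DNxHD 36 expects 1080p at 23.976/24/25
        "-pix_fmt", "yuv422p",
        "-c:a", "pcm_s16le",             # uncompressed audio for the cut
        daily,
    ], check=True)

make_daily("scene_12_master.mxf", "scene_12_daily.mxf")
```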

The edit team included Craig Bench, Leah Breuer and Chetin Chabuk, who worked with three assistants: Bradford Obie, Diana Santana and Douglas Staffield. They edited from home on six Avid Media Composers. They worked 13-hour days for 14 days in a row, says Bench.

Everyone on the editorial team got the same pool of dailies and started editing Saturday morning, April 18. Once they reviewed the footage, the team decided to rebuild the split-screen grids from scratch to get the pace of the show right. They wanted to retain, as much as possible, both the cadence of the dialog and the syncopated cutting style that Spottiswood and Bench had set in the pilot.

Rebuilding the grids, explains Bench, “gave us the freedom to treat everyone’s coverage separately. Even though the grid appears to be one take, it’s really not. We were creating our own world.” Rough cuts were sent every night to Robin.

During the first couple of production days, all three teams would jump on cutting the dailies as well as working through the previous day’s notes. As the show came together, Bench worked on the teaser and Act 1, Chabuk did Acts 2 and 3, and Breuer did Act 4 and the party scene at the end.

“There was a lot of experimenting,” explains Bench. “In the grid, should the actors be side by side or one on top of the other? There was also a lot of back and forth about grid background colors and textures.”

The assistants had their bins full setting up grid templates. This would allow them to drop an iso shot on a track so it would go to that spot on the grid and keep it consistent. They also built all the sound effects of the frames animating on and off.
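
The math inside such a grid template is simple: each video track index maps to a fixed tile on the canvas. A toy version, with dimensions and names invented for illustration:

```python
# Map a video track index to its tile position in a split-screen grid.
def tile_rect(track: int, cols: int, rows: int, w: int = 1920, h: int = 1080):
    """Return (x, y, width, height) of the tile for a given track index."""
    tw, th = w // cols, h // rows
    col, row = track % cols, track // cols
    return (col * tw, row * th, tw, th)

# A 3x2 grid for six actors on a 1080p canvas:
for t in range(6):
    print(t, tile_rect(t, cols=3, rows=2))
```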

Editorial gave MTI online editor Andrew Miller a “soft lock” of the episode early on April 30. Miller got the Avid project file that was “a big stack of split screens” and a reference video from Bench.

MTI colorist Greg Strait

Miller worked over the weekend with post supervisor Cat Crimins putting the episode together remotely. They replaced all the proxies with the high-res masters in the timeline and made necessary last-minute adjustments.

MTI colorist Greg Strait got a baked, uncompressed 10-bit MXF mixdown of the Avid timeline from Miller. Strait, who graded virtually the entire season of All Rise in Digital Vision’s Nucoda, had a good idea where the look was going. “I tried to keep it as familiar as possible to the other 20 episodes,” he says. “Sharpening some things, adding contrast and putting a lot of power windows around things had the best result.”

After laying in the audio stems, post was wrapped Sunday night at 11pm. Alvarez did a quality-control review of the episode. On Monday, May 4, they output XDCAM as the network deliverable.

Despite the tight time crunch, things went pretty smoothly, which MTI Film’s Marshall attributes to the trust and longtime relationship MTI has with Robin and the show. “That’s the cool thing about Mike. He definitely likes to push the envelope,” she says.

All Rise has been renewed for Season 2, and the team promises the innovations will continue.

Destin Daniel Cretton Talks Directing Warner’s Just Mercy

By Iain Blair

An emotionally powerful and thought-provoking true story, Just Mercy is the latest film from award-winning filmmaker Destin Daniel Cretton (The Glass Castle, Short Term 12), who directed the film from a screenplay he co-wrote. Based on famed lawyer and activist Bryan Stevenson’s memoir, “Just Mercy: A Story of Justice and Redemption,” which details his crusade to defend, among others, wrongly accused prisoners on death row, it stars Michael B. Jordan and Oscar winners Jamie Foxx and Brie Larson.

The story starts when, after graduating from Harvard, Stevenson (Jordan) — who had his pick of lucrative jobs — instead heads to Alabama to defend those wrongly condemned or who were not afforded proper representation, with the support of local advocate Eva Ansley (Larson).

One of his first cases is that of Walter McMillian (Foxx), who, in 1987, was sentenced to die for the murder of an 18-year-old girl, despite evidence proving his innocence. In the years that follow, Stevenson becomes embroiled in a labyrinth of legal and political maneuverings as well as overt racism as he fights for Walter, and others like him, with the odds — and the system — stacked against them.

This case becomes the main focus of the film, whose cast also includes Rob Morgan as Herbert Richardson, a fellow prisoner who also sits on death row; Tim Blake Nelson as Ralph Myers, whose pivotal testimony against Walter McMillian is called into question; Rafe Spall as Tommy Chapman, the DA who is fighting to uphold Walter’s conviction and sentence; O’Shea Jackson Jr. as Anthony Ray Hinton, another wrongly convicted death row inmate whose cause is taken up by Stevenson; and Karan Kendrick as Walter’s wife, Minnie McMillian.

Cretton’s behind-the-scenes creative team included DP Brett Pawlak, co-writer Andrew Lanham, production designer Sharon Seymour, editor Nat Sanders and composer Joel P. West, all of whom previously collaborated with the director on The Glass Castle.

Destin Daniel Cretton

I spoke with the director about making the film, his workflow and his love of post.

When you read Bryan’s book, did you feel compelled to take this on?
I did. It was his voice and the way he tells the story about these characters, who seem so easy to judge at first. Then he starts peeling back all the layers, and the way he uses humor in certain areas and devastation in others somehow still makes you feel hopeful and inspired to do something about all the injustice. All of it just hit me so hard, and I felt I had to be involved in some way.

Did you work very closely with him on the film?
I did. Before we even began writing a word, we went to meet him in Montgomery, and he introduced us to the real Anthony Ray Hinton and a bunch of lawyers working on cases. Bryan was with us through the whole writing process, filling in the blanks and helping us piece the story together. We did a lot of research, and we had the book, but it obviously couldn’t include everything. Bryan gave us all the transcripts of all the hearings, and a lot of the lines were taken directly from those.

This is different from most other courtroom dramas, as the trial’s already happened when the movie begins. What sort of film did you set out to make?
We set out to make the book in as compelling a way as possible. And it’s a story about this young lawyer who’s trying to convince the system and state they made a terrible mistake, with all the ups and downs, and just how long it takes him to succeed. That’s the drama.

What were the main challenges in pulling it all together?
Telling a very intense, true story about people, many of whom are still alive and still doing the work they were doing then. So accuracy was a huge thing, and we all really felt the burden and responsibility to get it right. I felt it more than on any film I’ve ever done because I respect Bryan’s work so much. We’re also telling stories about people who were very vulnerable.

Trying to figure out how to tell a narrative that still moved at the right pace and gave you an emotional ride, but that stayed completely accurate to the facts and to a legal process that moves incredibly slowly, was very challenging. A big moment for me was when Bryan first saw the film and gave me a big hug and a thank-you; he told me it was not for how he was portrayed but for how we took care of his clients. That was his big concern.

What did Jamie and Michael bring to their roles?
They’ve been friends for a long time, so they already had this great natural chemistry, and they were able to play through scenes like two jazz musicians and bring a lot of stuff that wasn’t there on the page.

I heard you actually shot in the south. How tough was the shoot?
Filming in some of the real locations really helped. We were able to shoot in Montgomery — the scenes where Bryan’s doing his morning jogs, the Baptist church where MLK Jr. was the pastor, and the cotton fields and places where Walter and his family actually lived. Being there and feeling the weight of history was very important to the whole experience. Then we shot the rest of the film in Atlanta.

Where did you post?
All in LA on the Warner lot.

Do you like the post process?
I love post and I hate it (laughs). And it depends on whether you’re finding a solution to a problem or you’re realizing you have a big problem. Post, of course, is where you make the film and where all the problems are exposed… the problems with all the choices I made on set. Sometimes things are working great, but usually it’s the problems you’re having to face. But working with a good post team is so fulfilling, and you’re doing the final rewrite, and we solved so many things in post on this.

Talk about editing with your go-to Nat Sanders, who got an Oscar nom for his work (with co-editor Joi McMillon) on Moonlight and also cut If Beale Street Could Talk.
Nat wasn’t on set. He began cutting material here in LA while we shot on location in Atlanta and Alabama, and we talked a lot on the phone. He did the first assembly, which was just over three hours long. All the elements were there, but shaping the material and fine-tuning it took nearly a year as we went through every scene, talking them out.

Finding the correct emotional ride and balance was a big challenge, as this film has so many emotional highs and lows, and you can easily tire an audience out. We had to cut some storylines that were working because they sent the audience on another emotional down when it needed something lighter. The other part of it was performance, and you can craft so much of that in the edit; our leads gave us so many takes and options to play with. Dealing with that is one of Nat’s big strengths. Both of us are meticulous, and we did a lot of test screenings and kept making adjustments.

Writer Iain Blair (left) and director Destin Daniel Cretton.

Nat and I both felt the hardest scene to cut and get right was Herb’s execution scene, because of the specific tone needed. If you went too far in one direction, it felt too much, but if you went too far the other way, it didn’t quite hit the emotional beat it needed. So that took a lot of time, playing around with all the cross-cutting and the music and sound to create the right balance.

All period films need VFX. What was entailed?
Crafty Apes did them, and we did a lot of fixes, added period stuff and did a lot of wig fixes — more than you’d think (laughs). We weren’t allowed to shoot at the real prison, so we had to create all the backdrops and set extensions for the death row sequences.

Can you talk about the importance of sound and music?
It’s always huge for me, and I’ve worked with my composer, Joel, and supervising sound editor/re-recording mixer Onnalee Blank, who was half of the sound team, since the start. For both of them, it was all about finding the right tone to create just the right amount of emotion that doesn’t overdo it, and Joel wrote the score in a very stripped-down way and then got all these jazz musicians to improvise along with the score.

Where did you do the DI and how important is it to you?
That’s huge too, and we did it at Light Iron with colorist Ian Vertovec. He’s worked with my DP on almost every project I’ve done, and he’s so good at grading and giving you a very subtle palette.

What’s next?
We’re currently in preproduction on Shang-Chi and the Legend of the Ten Rings, featuring Marvel’s first Asian superhero. It’s definitely a change of pace after this.

MPI Restores The Wizard of Oz in 4K HDR

By Barry Goch

The classic Victor Fleming-directed film The Wizard of Oz, which was released by MGM in 1939 and won two of its six Academy Award nominations, has been beautifully restored by Burbank’s Warner Bros. Motion Picture Imaging (MPI).

Bob Bailey

To share its workflow on the film, MPI invited a group of journalists to learn about the 4K UHD HDR restoration of this classic film. The tour guide for our high-tech restoration journey was MPI’s VP of operations and sales Bob Bailey, who walked us through the entire restoration process — from the original camera negative to final color.

The Wizard of Oz, which starred Judy Garland, was shot on a Technicolor three-strip camera system. According to Bailey, it ran three black and white negatives simultaneously. “That is why it is known as three-strip Technicolor. The magazine on top of the camera was triple the width of a normal black and white camera because it contained each roll of negative to capture your red, green and blue records,” explained Bailey.

“When shooting in Technicolor, you weren’t just getting the camera. You would rent a package that included the camera, a camera crew with three assistants, the film, the processing and a Technicolor color consultant.”

George Feltenstein, SVP of theatrical catalog marketing for Warner Bros. Home Entertainment, spoke about why the film was chosen for restoration. “The Wizard of Oz is among the crown jewels that we hold,” he said. “We wanted to embrace the new 4K HDR technology, but nobody’s ever released a film that old using this technology. HDR, or high dynamic range, has a color range that is wider than anything that’s come before it. There are colors [in The Wizard of Oz] that were never reproducible before, so what better a film to represent that color?”

Feltenstein went on to explain that this is the oldest film to get released in the 4K format. He hopes that this is just the beginning and that many of the films in Warner Bros.’ classic library will also be released on 4K HDR and worked on at MPI under Bailey’s direction.

The Process
MPI rescanned each of the three-strip Technicolor nitrate film negatives at 8K 16-bit on the Lasergraphics Director 10K scanner, composited them together and then applied a new color grade. “We have just under 15 petabytes of storage here,” said Bailey. “That’s working storage, because we’re working on 8K movies since [some places in the world] are now broadcasting 8K.”
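
Conceptually, the composite step rebuilds color by stacking the three black-and-white records. A bare-bones numpy illustration follows; it ignores the registration, shrinkage and per-strip density work a real restoration requires, and assumes the negative scans have already been inverted to positives.

```python
# Rebuild an RGB frame from three aligned grayscale record scans.
import numpy as np

def composite_three_strip(red, green, blue):
    """Stack three aligned H x W scans (e.g. 16-bit) into one H x W x 3 frame."""
    assert red.shape == green.shape == blue.shape
    return np.stack([red, green, blue], axis=-1)

def balance_records(*records):
    """Crudely equalize the records' exposure; the per-frame flicker described
    later comes from the three strips receiving light differently."""
    target = np.mean([r.mean() for r in records])
    return [r * (target / r.mean()) for r in records]

# e.g. frame = composite_three_strip(*balance_records(scan_r, scan_g, scan_b))
```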

Steven Anastasi

Our first stop was to look at the Lasergraphics Director. We then moved on to MPI’s climate-controlled vault, where we were introduced to Steven Anastasi, VP of technical operations at Warner Bros. Anastasi explained that the original negative vault has climate-controlled conditions with 25% humidity at 35 degrees Fahrenheit, which is the combination required for keeping these precious assets safe for future generations. He said there are 2 million assets in the building, including picture and sound.

It was amazing to see film reels for 2001: A Space Odyssey sitting on a shelf right in front of me. In addition to the feature reels, MPI also stores millions of negatives captured throughout the years by Warner productions. “We also have a very large library,” reported Anastasi. “So the original negatives from the set, a lot of unit photography, head shots in some cases and so forth. There are 10 million of these.”

Finally, we were led into the color bay to view the film. Janet Wilson, senior digital colorist at MPI, has overseen every remaster of The Wizard of Oz for the past 20 years. Wilson used a FilmLight Baselight X system for the color grade. The grading suite housed multiple screens: a Dolby Pulsar for the Dolby Vision pass, a Sony X300 and a Panasonic EZ1000 OLED 4K HDR.

“We have every 4K monitor manufactured, and we run the film through all of them,” said Bailey. “We painstakingly go through the process from a post perspective to make sure that our consumers get the best quality product that’s available out in the marketplace.”

“We want the consumer experience on all monitors to be something that’s taken into account,” added Feltenstein. “So we’ve changed our workflow by having a consumer or prosumer monitor in these color correction suites so the colorist has an idea of what people are going to see at home, and that’s helped us make a better product.”

Our first view of the feature was a side-by-side comparison of the black and white scanned negative and the sepia color corrected footage. The first part of the film, which takes place in Kansas, was shot in black and white, and then a sepia look was applied to it. The reveal scene, when Dorothy passes through the door going into Oz, was originally shot in color. For this new release, the team generated a matte so Wilson could add this sepia area to the inside of the house as Dorothy transitioned into Oz.

“So this is an example of some of the stuff that we could do in this version of the restoration,” explained Wilson. “With this version, you can see that the part of the image where she’s supposed to be in the monochrome house is not actually black and white. It was really a color image. So the trick was always to get the interior of the house to look sepia and the exterior to look like all of the colors that it’s supposed to. Our visual effects team here at MPI — Mike Moser and Richie Hiltzik — was able to draw a matte for me so that I could color inside of the house independently of the exterior and make them look right, which was always a really tricky thing to do.”

Wilson referred back to the Technicolor three-strip process, explaining that because you’ve got three different pieces of film — the different records — they receive the light in different ways. “So sometimes one will be a little brighter than the other. One will be a little darker than the other, which means that the Technicolor is not a consistent color. It goes a little red, and then it goes a little green, and then it goes a little blue, and then it goes a little red again. So if you stop on any given frame, it’s going to look a little different than the frames around it, which is one of the tricky parts of color correcting Technicolor. When that’s being projected by a film projector, it’s less noticeable than when you’re looking at it on a video monitor, so it takes a lot of little individual corrections to smooth those kinds of things out.”

Wilson reported seeing new things with the 8K scan and 4K display. “The amount of detail that went into this film really shows up.” She said that one of the most remarkable things about the restoration was the amazing detail visible on the characters. For the first time in many generations, maybe ever, you can actually see the freckles on Dorothy’s face.

In terms of leveraging the expanded dynamic range of HDR, I asked Wilson whether she tried to map the HDR into a sweet spot, so that it’s spectacular without being overpowering.

“I ended up isolating the very brightest parts of the picture,” she replied. “In this case, it’s mostly the sparkles on their shoes and curving those off so I could run those in, because this movie is not supposed to have modern-day animation levels of brightness. It’s supposed to be much more contained. I wanted to take advantage of brightness and the ability to show the contrast we get from this format, because you can really see the darker parts of the picture. You can really see detail within the Wicked Witch’s dress. I don’t want it to look like it’s not the same film. I want it to replicate that experience of the way this film should look if it was projected on a good print on a good projector.”
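Wilson’s “curving off” of the brightest values is, in effect, a highlight shoulder. As a hedged illustration only — the knee and peak numbers here are placeholders, not the film’s actual targets — such a rolloff can be expressed like this in Python:

```python
import numpy as np

def rolloff_highlights(nits, knee=200.0, peak=1000.0):
    """Pass values below `knee` through unchanged; ease values above it
    toward `peak` so sparkles stay bright without a hard clip."""
    x = np.asarray(nits, dtype=float)
    over = np.maximum(x - knee, 0.0)
    span = peak - knee
    shoulder = knee + span * (1.0 - np.exp(-over / span))  # smooth shoulder
    return np.where(x <= knee, x, shoulder)

print(rolloff_highlights([50, 200, 600, 2000]))
# -> [50, 200, ~515, ~916]: midtones untouched, extremes eased under 1000
```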

Dorothy’s ruby slippers also presented a challenge to Wilson. “They are so red and so bright. They’re so light-reflective, but there were times when they were just a little too distracting. So I had to isolate the slippers and bring them down a little bit so that they weren’t the first and only thing you saw in the image.”

If you are wondering if audio was part of this most recent restoration, the answer is no, but it had been remastered for a previous version. “As early as 1929, MGM began recording its film music using multiple microphones. Those microphone angles allowed the mixer to get the most balanced monophonic mix, and they were preserved,” explained Feltenstein. “Twenty years ago, we created a 5.1 surround mix that was organically made from the original elements that were created in 1939. It is full-frequency, lossless audio, and a beautiful restoration job was made to create that track so you can improve upon what I consider to be close to perfection without anything that would be disingenuous to the production.”

In all, it was an amazing experience to go behind the scenes and see how the wizards of MPI created a new version of this masterpiece for today and preserved it for future generations.

This restored version of The Wizard of Oz is a must-see visual extravaganza, and there is no better way to see it than in UHD, HDR, Dolby Vision or HDR10+. What I saw in person took my breath away, and I hope every movie fan out there can have the opportunity to see this classic film in its never-before-seen glory.

The 4K version of The Wizard of Oz is currently available via an Ultra HD Blu-ray Combo Pack and digital.


Barry Goch is a finishing artist at LA’s The Foundation as well as a UCLA Extension Instructor, Post Production. You can follow him on Twitter at @Gochya

MovieLabs, Film Studios Release ‘Future of Media Creation’ White Paper

MovieLabs (Motion Pictures Laboratories), a nonprofit technology research lab that works jointly with member studios Sony, Warner Bros., Disney, Universal and Paramount, has published a new white paper presenting an industry vision for the future of media creation technology by 2030.

The paper, co-authored by MovieLabs and technologists from Hollywood studios, paints a bold picture of future technology and discusses the need for the industry to work together now on innovative new software, hardware and production workflows to support and enable new ways to create content over the next 10 years. The white paper is available today for free download on the MovieLabs website.

The 2030 Vision paper lays out key principles that will form the foundation of this technological future, with examples and a discussion of the broader implications of each. The key principles envision a future in which:

1. All assets are created or ingested straight to the cloud and do not need to move.
2. Applications come to the media.
3. Propagation and distribution of assets is a “publish” function.
4. Archives are deep libraries with access policies matching speed, availability and security to the economics of the cloud.
5. Preservation of digital assets includes the future means to access and edit them.
6. Every individual on a project is identified and verified and their access permissions are efficiently and consistently managed.
7. All media creation happens in a highly secure environment that adapts rapidly to changing threats.
8. Individual media elements are referenced, tracked, interrelated and accessed using a universal linking system.
9. Media workflows are non-destructive and dynamically created using common interfaces, underlying data formats and metadata.
10. Workflows are designed around realtime iteration and feedback.

Rich Berger

“The next 10 years will bring significant opportunities, but there are still major challenges and inherent inefficiencies in our production and distribution workflows that threaten to limit our future ability to innovate,” says Richard Berger, CEO of MovieLabs. “We have been working closely with studio technology leaders and strategizing how to integrate new technologies that empower filmmakers to create ever more compelling content with more speed and efficiency. By laying out these principles publicly, we hope to catalyze an industry dialog and fuel innovation, encouraging companies and organizations to help us deliver on these ideas.”

The publication of the paper will be supported with a panel discussion at the IBC Conference in Amsterdam. The panel, “Hollywood’s Vision for the Future of Production in 2030,” will include senior technology leaders from the five major Hollywood motion picture studios. It will take place on Sunday, September 15 at 2:15pm in the Forum room of the RAI. postPerspective’s Randi Altman will moderate the panel, made up of Sony’s Bill Baggelaar, Disney’s Shadi Almassizadeh, Universal’s Michael Wise and Paramount’s Anthony Guarino.

“Sony Pictures Entertainment has a deep appreciation for the role that current and future technologies play in content creation,” says Sony Pictures CTO Don Eklund. “As a subsidiary of a technology-focused company, we benefit from the power of Sony R&D and Sony’s product groups. The MovieLabs 2030 document represents the contribution of multiple studios to forecast and embrace the impact that cloud, machine learning and a range of hardware and software will have on our industry. We consider this a living document that will evolve over time and provide appreciated insight.”

According to Wise, SVP/CTO at Universal Pictures, “With film production experiencing unprecedented growth, and new innovative forms of storytelling capturing our audiences’ attention, we’re proud to be collaborating across the industry to envision new technological paradigms for our filmmakers so we can efficiently deliver worldwide audiences compelling entertainment.”

For those not familiar with MovieLabs, its stated goal is “to enable member studios to work together to evaluate new technologies and improve quality and security, helping the industry deliver next-generation experiences for consumers, reduce costs and improve efficiency through industry automation, and derive and share the appropriate data necessary to protect and market the creative assets that are the core capital of our industry.”

Wonder Park’s whimsical sound

By Jennifer Walden

The imagination of a young girl comes to life in the animated feature Wonder Park. A Paramount Animation and Nickelodeon Movies film, the story follows June (Brianna Denski) and her mother (Jennifer Garner) as they build a pretend amusement park in June’s bedroom. There are rides that defy the laws of physics — like a merry-go-round with flying fish that can leave the carousel and travel all over the park; a Zero-G-Land where there’s no gravity; a waterfall made of firework sparks; a super tube slide made from bendy straws; and other wild creations.

But when her mom gets sick and leaves for treatment, June’s creative spark fizzles out. She disassembles the park and packs it away. Then one day as June heads home through the woods, she stumbles onto a real-life Wonderland that mirrors her make-believe one. Only this Wonderland is falling apart and being consumed by the mysterious Darkness. June and the park’s mascots work together to restore Wonderland by stopping the Darkness.

Even in its more tense moments — like June and her friend Banky (Oev Michael Urbas) riding a homemade rollercoaster cart down their suburban street and narrowly missing an oncoming truck — the sound isn’t intense. The cart doesn’t feel rickety or squeaky, like it’s about to fly apart (even though the brake handle breaks off). There’s a sense of danger that could result in non-serious injury, but never death. And that’s perfect for the target audience of this film — young children. Wonder Park is meant to be sweet and fun, and supervising sound editor John Marquis captures that masterfully.

Marquis and his core team — sound effects editor Diego Perez, sound assistant Emma Present, dialogue/ADR editor Michele Perrone and Foley supervisor Jonathan Klein — handled sound design, sound editorial and pre-mixing at E² Sound on the Warner Bros. lot in Burbank.

Marquis was first introduced to Wonder Park back in 2013, but the team’s real work began in January 2017. The animated sequences steadily poured in for 17 months. “We had a really long time to work the track, to get some of the conceptual sounds nailed down before going into the first preview. We had two previews with temp score and then two more with mockups of composer Steven Price’s score. It was a real luxury to spend that much time massaging and nitpicking the track before getting to the dub stage. This made the final mix fun; we were having fun mixing and not making editorial choices at that point.”

The final mix was done at Technicolor’s Stage 1, with re-recording mixers Anna Behlmer (effects) and Terry Porter (dialogue/music).

Here, Marquis shares insight on how he created the whimsical sound of Wonder Park, from the adorable yet naughty chimpanzombies to the tonally pleasing, rhythmic and resonant bendy-straw slide.

The film’s sound never felt intense even in tense situations. That approach felt perfectly in-tune with the sensibilities of the intended audience. Was that the initial overall goal for this soundtrack?
When something was intense, we didn’t want it to be painful. We were always in search of having a nice round sound that had the power to communicate the energy and intensity we wanted without having the pointy, sharp edges that hurt. This film is geared toward a younger audience and we were supersensitive about that right out of the gate, even without having that direction from anyone outside of ourselves.

I have two kids — one 10 and one five. Often, they will pop by the studio and listen to what we’re doing. I can get a pretty good gauge right off the bat if we’re doing something that is not resonating with them. Then, we can redirect more toward the intended audience. I pretty much previewed every scene for my kids, and they were having a blast. I bounced ideas off of them so the soundtrack evolved easily toward their demographic. They were at the forefront of our thoughts when designing these sequences.

John Marquis recording the bendy straw sound.

There were numerous opportunities to create fun, unique palettes of sound for this park and these rides that stem from this little girl’s imagination. If I’m a little kid and I’m playing with a toy fish and I’m zipping it around the room, what kind of sound am I making? What kind of sounds am I imagining it making?

This film reminded me of being a kid and playing with toys. So, for the merry-go-round sequence with the flying fish, I asked my kids, “What do you think that would sound like?” And they’d make some sound with their mouths and start playing, and I’d just riff off of that.

I loved the sound of the bendy-straw slide — from the sound of it being built, to the characters traveling through it, and even the reverb on their voices while inside of it. How did you create those sounds?
Before that scene came to us, before we talked about it or saw it, I had the perfect sound for it. We had been having a lot of rain, so I needed to get an expandable gutter for my house. It starts at about one foot long but can be pulled out to three feet if needed. It works exactly like a bendy straw, but it’s huge. So when I saw the scene in the film, I knew I had the exact, perfect sound for it.

We mic’d it with a Sanken CO-100k, inside and out. We pulled the tube apart and closed it, and got this great, ribbed, rippling, zuzzy sound. We also captured impulse responses inside the tube so we could create custom reverbs. It was one of those magical things that I didn’t even have to think about or go hunting for. This one just fell in my lap. It’s a really fun and tonal sound. It’s musical and has a rhythm to it. You can really play with the Doppler effect to create interesting pass-bys for the building sequences.
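Convolving a dry sound with a captured impulse response is what places it “inside” the measured space. Here is a minimal Python/SciPy sketch of that step — a stand-in for what a convolution reverb plug-in does, with a decaying-noise IR as a toy substitute for a real measurement:

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_ir(dry, ir, wet=0.5):
    """Convolve a dry recording with a measured impulse response and
    blend it back in, placing the dry sound 'inside' the space."""
    wet_sig = fftconvolve(dry, ir)[: len(dry)]
    # Match the wet level to the dry level before blending.
    wet_sig *= np.max(np.abs(dry)) / (np.max(np.abs(wet_sig)) + 1e-12)
    return (1.0 - wet) * dry + wet * wet_sig

# Toy usage: a click through a decaying-noise stand-in for the tube IR.
sr = 48000
dry = np.zeros(sr); dry[0] = 1.0
ir = np.random.randn(sr // 4) * np.exp(-np.linspace(0.0, 8.0, sr // 4))
processed = apply_ir(dry, ir, wet=0.7)
```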

Another fun sequence for sound was inside Zero-G-Land. How did you come up with those sounds?
That’s a huge, open space. Our first instinct was to go with a very reverberant sound to showcase the size of the space and the fact that June is in there alone. But as we discussed it further, we came to the conclusion that since this is a zero-gravity environment, there would be no air for the sound waves to travel through. So, we decided to treat it like space. That approach really worked out because in the scene preceding Zero-G-Land, June is walking through a chasm and there are huge echoes. So the contrast between that and the airless Zero-G-Land worked out perfectly.

Inside Zero-G-Land’s tight, quiet environment we have the sound of these giant balls that June is bouncing off of. They look like balloons so we had balloon bounce sounds, but it wasn’t whimsical enough. It was too predictable. This is a land of imagination, so we were looking for another sound to use.

John Marquis with the Wind Wand.

My friend has an instrument called a Wind Wand, which combines the sound of a didgeridoo with a bullroarer. The Wind Wand is about three feet long and has a gigantic rubber band that goes around it. When you swing the instrument around in the air, the rubber band vibrates. It almost sounds like an organic lightsaber. I had been playing around with that for another film and thought the rubbery, resonant quality of its vibration could work for these gigantic ball bounces. So we recorded it and applied mild processing to get some shape and movement. It was just a bit of pitching and Doppler effect; we didn’t have to do much to it because the actual sound itself was so expressive and rich and it just fell into place. Once we heard it in the cut, we knew it was the right sound.

How did you approach the sound of the chimpanzombies? Again, this could have been an intense sound, but it was cute! How did you create their sounds?
The key was to make them sound exciting and mischievous instead of scary. It can’t ever feel like June is going to die. There is danger. There is confusion. But there is never a fear of death.

The chimpanzombies are actually these Wonder Chimp dolls gone crazy. So they were all supposed to have the same voice — this pre-recorded voice that is in every Wonder Chimp doll. So, you see this horde of chimpanzombies coming toward you and you think something really threatening is happening, but then you start to hear them and all they are saying is, “Welcome to Wonderland!” or something sweet like that. It’s all in a big cacophony of high-pitched voices, and they have these little squeaky dog-toy feet. So there’s this contrast between what you anticipate will be scary and what turns out to be super-cute.

The big challenge was that they were all supposed to sound the same, just this one pre-recorded voice that’s in each one of these dolls. I was afraid it was going to sound like a wall of noise that was indecipherable, and a big, looping mess. There’s a software program that I ended up using a lot on this film. It’s called Sound Particles. It’s really cool, and I’ve been finding a reason to use it on every movie now. So, I loaded this pre-recorded snippet from the Wonder Chimp doll into Sound Particles and then changed different parameters — I wanted a crowd of 20 dolls that could vary in pitch by 10%, and they’re going to walk by at a medium pace.

Changing the parameters will change the results, and I was able to make a mass of different voices based off of this one, individual audio file. It worked perfectly once I came up with a recipe for it. What would have taken me a day or more — to individually pitch a copy of a file numerous times to create a crowd of unique voices — only took me a few minutes. I just did a bunch of varieties of that, with smaller groups and bigger groups, and I did that with their feet as well. The key was that the chimpanzombies were all one thing, but in the context of music and dialogue, you had to be able to discern the individuality of each little one.
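The recipe Marquis describes — one source file fanned out into many randomized copies — can be approximated very simply. The sketch below is plain Python/NumPy, not the Sound Particles engine: it layers 20 copies of a single “voice,” each resampled to a random pitch within ±10% and offset by a random start time, echoing the parameters he mentions.

```python
import numpy as np

def make_crowd(voice, sr, n=20, pitch_spread=0.10, max_delay=2.0, seed=1):
    """Layer one recorded line into a crowd of `n` voices, each with a
    random pitch within +/-pitch_spread (via resampling) and a random
    start time within `max_delay` seconds."""
    rng = np.random.default_rng(seed)
    out = np.zeros(int(max_delay * sr) + 2 * len(voice))
    t = np.arange(len(voice))
    for _ in range(n):
        ratio = 1.0 + rng.uniform(-pitch_spread, pitch_spread)
        resampled = np.interp(np.arange(0, len(voice) - 1, ratio), t, voice)
        start = rng.integers(0, int(max_delay * sr))
        out[start:start + len(resampled)] += resampled
    return out / np.max(np.abs(out))      # normalize the summed crowd

# Toy usage: a 0.5s chirp standing in for "Welcome to Wonderland!"
sr = 48000
tt = np.linspace(0.0, 0.5, sr // 2)
voice = np.sin(2 * np.pi * (300 + 200 * tt) * tt)
crowd = make_crowd(voice, sr)
```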

There’s a fun scene where the chimpanzombies are using little pickaxes and hitting the underside of the glass walkway that June and the Wonderland mascots are traversing. How did you make that?
That was for Fireworks Falls, one of the big scenes we had waited a long time for. We weren’t really sure how that was going to look — if the waterfall would be more fiery or more sparkly.

The little pickaxes were a blacksmith’s hammer beating an iron bar on an anvil. Those “tink” sounds were pitched up and resonated just a little bit to give it a glass feel. The key with that, again, was to try to make it cute. You have these mischievous chimpanzombies all pecking away at the glass. It had to sound like they were being naughty, not malicious.

When the glass shatters and they all fall down, we had these little pinball bell sounds that would pop in from time to time. It kept the scene feeling mildly whimsical as the debris is falling and hitting the patio umbrellas and tables in the background.

Here again, it could have sounded intense as June makes her escape using the patio umbrella, but it didn’t. It sounded fun!
I grew up in the Midwest and every July 4th we would shoot off fireworks on the front lawn and on the sidewalk. I was thinking about the fun fireworks that I remembered, like sparklers, and these whistling spinning fireworks that had a fun acceleration sound. Then there were bottle rockets. When I hear those sounds now I remember the fun time of being a kid on July 4th.

So, for the Fireworks Falls, I wanted to use those sounds as the fun details, the top notes that poke through. There are rocket crackles and whistles that support the low-end, powerful portion of the rapids. As June is escaping, she’s saying, “This is so amazing! This is so cool!” She’s a kid exploring something really amazing and realizing that this is all of the stuff that she was imagining and is now experiencing for real. We didn’t want her to feel scared, but rather to be overtaken by the joy and awesomeness of what she’s experiencing.

The most ominous element in the park is the Darkness. What was your approach to the sound in there?
It needed to be something that was more mysterious than ominous. It’s only scary because of the unknown factor. At first, we played around with storm elements, but that wasn’t right. So I played around with a recording of my son as a baby; he’s cooing. I pitched that sound down a ton, so it has this natural, organic, undulating, human spine to it. I mixed in some dissonant windchimes. I have a nice set of windchimes at home and I arranged them so they wouldn’t hit in a pleasing way. I pitched those way down, and it added a magical/mystical feel to the sound. It’s almost enticing June to come and check it out.

The Darkness is the thing that is eating up June’s creativity and imagination. It’s eating up all of the joy. It’s never entirely clear what it is, though. When June gets inside the Darkness, everything is silent. The things in there get picked up and rearranged and dropped. As with the Zero-G-Land moment, we bring everything to a head. We go from a full-spectrum sound, with the score and June yelling and the sound design, to a quiet moment where we only hear her breathing. From there, it opens up and blossoms with the pulse of her creativity and her memories returning. It’s a very subjective moment that’s hard to put into words.

When June whispers into Peanut’s ear, his marker comes alive again. How did you make the sound of Peanut’s marker? And how did you give it movement?
The sound was primarily this ceramic, water-based bird whistle, which gave it a whimsical element. It reminded me of a show I watched when I was little where the host would draw with his marker and it would make a little whistling, musical sound. So anytime the marker was moving, it would make this really fun sound. This marker needed to feel like something you would pick up and wave around. It had to feel like something that would inspire you to draw and create with it.

To get the movement, it was partially performance based and partially done by adding in a Doppler effect. I used variations in the Waves Doppler plug-in. This was another sound that I also used Sound Particles for, but I didn’t use it to generate particles. I used it to generate varied movement for a single source, to give it shape and speed.
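A Doppler treatment like this amounts to a time-varying delay: samples from an approaching source arrive compressed in time (pitch rises), then stretch out as it recedes (pitch falls), while level follows distance. Below is a generic fly-by sketch in Python/NumPy — a textbook model, not the Waves plug-in’s algorithm, with made-up speed and distance values:

```python
import numpy as np

def doppler_passby(signal, sr, speed=15.0, closest=2.0, c=343.0):
    """Fly-by effect: a source moving past the listener at `speed` m/s,
    passing within `closest` metres at the clip's midpoint."""
    n = len(signal)
    t = np.arange(n) / sr
    x = speed * (t - t[-1] / 2.0)            # position along the path
    dist = np.sqrt(x**2 + closest**2)        # distance to listener
    arrival = t + dist / c                   # arrival time of each sample
    grid = np.linspace(arrival[0], arrival[-1], n)
    out = np.interp(grid, arrival, signal)   # resample onto uniform time
    return out / (dist + 1.0)                # rough 1/distance level falloff

# Toy usage: a 2s, 440Hz tone swooshing past.
sr = 48000
tone = np.sin(2 * np.pi * 440.0 * np.arange(2 * sr) / sr)
whoosh = doppler_passby(tone, sr)
```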

Did you use Sound Particles on the paper flying sound too? That one also had a lot of movement, with lots of twists and turns.
No, that one was an old-fashioned fader move. What gave that sound its interesting quality — this soft, almost ethereal and inviting feel — was the practical element we used to create the sound. It was a piece of paper bag that was super-crumpled up, so it felt fluttery and soft. Then, every time it moved, it had a vocal whoosh element that gave it personality. So once we got that practical element nailed down, the key was to accentuate it with a little wispy whoosh to make it feel like the paper was whispering to June, saying, “Come follow me!”

Wonder Park is in theaters now. Go see it!


Jennifer Walden is a New Jersey-based audio engineer and writer. Follow her on Twitter @audiojeney.

A Star is Born: Live vocals, real crowds and venues

By Jennifer Walden

Warner Bros. Pictures’ remake of A Star is Born stars Bradley Cooper as Jackson Maine, a famous musician with a serious drinking problem who stumbles onto singer/songwriter Ally (Lady Gaga) at a drag bar where she’s giving a performance. Jackson is taken by her raw talent, and their chance meeting turns into something more. With Jackson’s help, Ally becomes a star, but her fame is ultimately bittersweet.

Jason Ruder

Aside from Lady Gaga and Bradley Cooper (who also directed and co-wrote the screenplay), the other big star of this film is the music. Songwriting started over two years ago. Cooper and Gaga collaborated with several other songwriters along the way, like Lukas Nelson (son of Willie Nelson), Mark Ronson, Hillary Lindsey and DJ White Shadow.

According to supervising music editor/re-recording mixer Jason Ruder from 2 Pop Music — who was involved with the film from pre-production through post — the lyrics, tempo and key signatures were even changing right up to the day of the shoot. “The songwriting went to the 11th hour. Gaga sort of works in that fashion,” says Ruder, who witnessed her process first-hand during a sound check at Coachella. (2 Pop Music is located on the Warner Bros. lot in Burbank.)

Before each shoot, Ruder would split out the pre-recorded instrumental tracks and reference vocals and have them ready for playback, but there were days when he would get a call from Gaga’s manager as he was driving to the set. “I was told that she had gone into the studio in the middle of the night and made changes, so there were all new pre-records for the day. I guess she could be called a bit of a perfectionist, always trying to make it better.

“On the final number, for instance, it was only a couple hours before the shoot and I got a message from her saying that the song wasn’t final yet and that she wanted to try it in three different keys and three different tempos just to make sure,” continues Ruder. “So there were a lot of moving parts going into each day. Everyone that she works with has to be able to adapt very quickly.”

Since the music is so important to the story, here’s what Cooper and Gaga didn’t want — they start singing and the music suddenly switches over to a slick, studio-produced track. That concern was the driving force behind the production and post teams’ approach to the on-camera performances.

Recording Live Vocals
All the vocals in A Star is Born were recorded live on-set, and those live vocals are the ones used in the film’s final mix. To pull this off, Ruder and the production sound team did a stage test at Warner Bros. to see if it was possible. They had a pre-recorded track of the band, which they played back on the stage. First, Cooper and Gaga sang live vocals. Then they tried the song again, with Cooper and Gaga miming along to pre-recorded vocals. Ruder took the material back to his cutting room and built a quick version of both. The comparison solidified their decision. “Once we got through that test, everyone was more confident about doing the live vocals. We felt good about it,” he says.

Their first shoot for the film was at Coachella, on a weekday since there were no performances. They were shooting a big, important concert scene for the film and only had one day to get it done. “We knew that it all had to go right,” says Ruder. It was their first shot at live vocals on-set.

Neither the music nor the vocals were amplified through the stage’s speaker system since song security was a concern — they didn’t want the songs leaked before the film’s release. So everything was done through headphone mixes. This way, even those in the crowd closest to the stage couldn’t hear the melodies or lyrics. Gaga is a seasoned concert performer, comfortable with performing at concert volume. She wasn’t used to having the band muted and the vocals live (though not amplified), so some adjustments needed to be made. “We ended up bringing her in-ear monitor mixer in to help consult,” explains Ruder. “We had to bring some of her touring people into our world to help get her perfectly comfortable so she could focus on acting and singing. It worked really well, especially later for Arizona Sky, where she had to play the piano and sing. Getting the right balance in her ear was important.”

As for Jackson Maine’s band on-screen, those were all real musicians and not actors — it was Lukas Nelson’s band. “They’re used to touring together. They’re very tight and they’re seasoned musicians,” says Ruder. “Everyone was playing and we were recording their direct feeds. So we had all the material that the musicians were playing. For the drums, those had to be muted because we didn’t want them bleeding into the live vocals. We were on-set making sure we were getting clean vocals on every take.”

Real Venues, Real Reverbs
Since the goal from the beginning was to create realistic-sounding concerts, Ruder decided to capture impulse responses at every performance location — from big stages like Coachella to much smaller venues — and use those to create reverbs in Audio Ease’s Altiverb.

The challenge wasn’t capturing the IRs, but rather, trying to convince the assistant director on-set that they needed to be captured. “We needed to quiet the whole set for five or 10 minutes so we could put up some mics and shoot these tones through the spaces. This all had to be done on the production clock, and they’re just not used to that. They didn’t understand what it was for and why it was important — it’s not cheap to do that during production,” explains Ruder.
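The article doesn’t specify which test signal was shot through the venues, but a common method for this kind of capture is the exponential sine sweep: play the sweep, record the room, then convolve the recording with the sweep’s inverse filter to recover the impulse response. A hedged sketch of that procedure in Python/SciPy:

```python
import numpy as np
from scipy.signal import fftconvolve

def ess_and_inverse(sr=48000, dur=10.0, f1=20.0, f2=20000.0):
    """Exponential sine sweep plus its inverse filter (Farina method).
    Play `sweep` in the venue, record it, convolve the recording with
    `inv`, and the result contains the room's impulse response."""
    t = np.arange(int(sr * dur)) / sr
    r = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * dur / r * (np.exp(t * r / dur) - 1.0))
    # Time-reversed sweep with a decaying envelope to whiten the spectrum.
    inv = sweep[::-1] * np.exp(-t * r / dur)
    return sweep, inv

sr = 48000
sweep, inv = ess_and_inverse(sr)
recording = sweep          # stand-in for what the venue mics would capture
ir = fftconvolve(recording, inv)[len(sweep) - 1:]   # keep the causal part
```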

Those IRs were like gold during post. They allowed the team to recreate spaces like the main stage at Coachella, the Greek Theatre and the Shrine Auditorium. “We were able to manufacture our own reverbs that were pretty much exactly what you would hear if you were standing there. For Coachella, because it’s so massive, we weren’t sure if they were going to come out, but it worked. All the reverbs you hear in the film are completely authentic to the space.”

Live Crowds
Oscar-winning supervising sound editor Alan Murray at Warner Bros. Sound was also capturing sound at the concert performances, but his attention was turned away from the stage and toward the crowd. “We had about 300 to 500 people at the concerts, and I was able to get clean reactions from them since I wasn’t picking up any music. So that approach of not amplifying the music worked for the crowd sounds too,” he says.

Production sound mixer Steven Morrow had set up mics in and around the crowd and recorded those to a multitrack recorder while Murray had his own mic and recorder that he could walk around with, even capturing the crowds from backstage. They did multiple recordings for the crowds and then layered those in Avid Pro Tools in post.

Alan Murray

“For Coachella and Glastonbury, we ended up enhancing those with stadium crowds just to get the appropriate size and excitement we needed,” explains Murray. They also got crowd recordings from one of Gaga’s concerts. “There was a point in the Arizona Sky scene where we needed the crowd to yell, ‘Ally!’ Gaga was performing at Fenway Park in Boston and so Bradley’s assistant called there and asked Gaga’s people to have the crowd do an ‘Ally’ chant for us.”

Ruder adds, “That’s not something you can get on an ADR stage. It needed to have that stadium feel to it. So we were lucky to get that from Boston that night and we were able to incorporate it into the mix.”

Building Blocks
According to Ruder, they wanted to make sure the right building blocks were in place when they went into post. Those blocks — the custom recorded impulse responses, the custom crowds, the live vocals, the band’s on-set performances, and the band’s unprocessed studio tracks that were recorded at The Village — gave Ruder and the re-recording mixers ultimate flexibility during the edit and mix to craft on-scene performances that felt like big, live concerts or intimate songwriting sessions.

Even with all those bases covered, Ruder was still worried about it working. “I’ve seen it go wrong before. You get tracks that just aren’t usable, vocals that are distorted or noisy. Or you get shots that don’t work with the music. There were those guitar playing shots…”

A few weeks after filming, while Ruder was piecing all the music together in post, he realized that they got it all. “Fortunately, it all worked. We had a great DP on the film and it was clear that he was capturing the right shots. Once we got to that point in post, once we knew we had the right pieces, it was a huge relief.”

Relief gave way to excitement when Ruder reached the dub stage — Warner Bros. Stage 10. “It was amazing to walk into the final mix knowing that we had the material and the flexibility to pull this off,” he says.

In addition to using Altiverb for the reverbs, Ruder used Waves plug-ins, such as the Waves API Collection, to give the vocals and instrumental tracks a live concert sound. “I tend to use plug-ins that emulate more of a tube sound to get punchier drums and that sort of thing. We used different 5.1 spreaders to put the music in a 5.1 environment. We changed the sound to match the picture, so we dried up the vocals on close-ups so they felt more intimate. We had tons and tons of flexibility because we had clean vocals and raw guitars and drum tracks.”

All the hard work paid off. In the film, Ally joins Jackson Maine on stage to sing a song she wrote called “Shallow.” For Murray and Ruder, this scene portrays everything they wanted to achieve for the performances in A Star is Born. The scene begins outside the concert, as Ally and her friend get out of the car and head toward the stage. The distant crowd and music reverberate through the stairwell as they’re led up to the backstage area. As they get closer, the sound subtly changes to match their proximity to the band. On stage, the music and crowd are deafening. Jackson begins to play guitar and sing solo before Ally finds the courage to join in. They sing “Shallow” together and the crowd goes crazy.

“The whole sequence was timed out perfectly, and the emotion we got out of them was great. The mix there was great. You felt like you were there with them. From a mix perspective, that was probably the most successful moment in the film,” concludes Ruder.


Jennifer Walden is a New Jersey-based writer and audio engineer. You can follow her on Twitter at @audiojeney

Optical Art DI colorist Ronney Afortu on In the Fade

Chicago-born, Germany-raised Ronney Afortu has been enjoying a storied career at Hamburg-based studio Optical Art. This veteran senior DI colorist has an impressive resume, having worked on the Oscar-nominated film Mongol, with Oscar-winning director Bille August on Night Train to Lisbon, as well as the recent Golden Globe-winning movie In the Fade (Aus dem Nichts), a crime drama starring Diane Kruger and Denis Moschitto.


Ronney Afortu (Photo Credit: Theresa Josuttis)

Afortu believes that HDR and a wider color gamut are the technologies to watch in the future. He says, “It has had a big impact on DPs in how they set up a shot, how they light it.”

Let’s find out more about his path to colorist, his workflow on In the Fade and the trends he is seeing.

What led you to become a colorist?
After school, I started studying media engineering. But I also worked with a production company specializing in advertising. Having been on the shoot of a Coca-Cola commercial, I was invited to join the director for the telecine. I knew right away that was what I wanted to do.

My first experience of color grading for cinema — on a Thomson Specter with a Pandora Pogle controller — was at VCC in Hamburg, the former parent company of Optical Art. I asked them if there were any opportunities to train as a colorist with them, and that was it.

What sort of projects do you work on?
At the time I joined them, Optical Art was a pioneer in digital intermediate. So from the start I have worked a lot on movies, and that is still what I do the most. But I have also graded television features.

The boundaries between the two have become much more fluid in recent years. Television has become much more sophisticated. You meet the same DPs and directors on movies and television. The only difference is that in television you will have less time!

You currently work on FilmLight Baselight?
Yes. When I started out as a colorist, the Specter/Pogle combination was seen as state-of-the-art for 2K grading work, but it also represented a challenge in DI for movies. It was difficult to manage color spaces when writing back to film.

Frank Hellmann, the DI supervisor at Optical Art, learned about an outfit in London called Computer Film Company. They had developed a system that allowed you to communicate with the lab in printer lights. It transformed the way we worked — we were convinced that this was the right way to go.
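Printer lights are the lab’s additive red, green and blue exposure steps. Under the commonly cited convention that one point equals 0.025 log10 exposure (so 12 points is roughly one stop), the translation to linear gain is a one-liner — the sketch below is illustrative, not CFC’s actual system:

```python
import numpy as np

# One printer-light point is commonly taken as 0.025 log10 exposure,
# so 12 points amount to roughly one stop (0.30 log10 = 2x exposure).
POINT_LOG_E = 0.025

def printer_lights_to_gain(delta_points_rgb):
    """Convert R/G/B printer-light offsets (points away from the lab's
    reference light) into linear exposure gains."""
    return 10.0 ** (POINT_LOG_E * np.asarray(delta_points_rgb, dtype=float))

# Toy usage: +4 red, 0 green, -2 blue relative to the reference light.
print(printer_lights_to_gain([4, 0, -2]))   # ~[1.26, 1.0, 0.89]
```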

That system developed by Computer Film Company was spun out into a new company, FilmLight, and the grading platform became Baselight. Optical Art decided to buy a Baselight system, and we became beta testers very early on. We still keep that serial number 0001 on one of our machines, though it has been upgraded a few times to the latest hardware.

Though I started in telecine, today we rarely see film because most of the labs in Europe have gone. Film meant many days of struggling to get a perfect print. So in that way I don’t miss it. In digital, you get a new [sensor] chip every couple of months. Kodak and Fuji would produce a new stock every few years. So we have constant improvement and new opportunities.

Can you tell us more about In the Fade?
I had worked with director Fatih Akin and DP Rainer Klausmann on a couple of movies previously, so the working relationship was very close right from the start.

In the Fade is a complex and dark movie. Each of its three acts has a distinctly different feel to it, and it was important for everyone to set these looks before the first day of shooting. This was one of those rare projects when the production company talked to us early to determine how best to do it. Rainer is a true DP — he lights really well. We ran six to eight tests to get the right kit, which allowed us to agree on how to get the looks in each section of the movie. But both Rainer and Fatih are quite “analog” thinkers. They believe that if you can do it on set, you should do so.

The tests went all the way to make-up. The director wanted lead actor Diane Kruger to look “not so good” in some of the more harrowing sequences. They wanted to ensure that every detail of the performance was captured.

What was the workflow for the movie?
In the Fade was shot using Arri Alexa cameras with a wide gamut, which allowed for a high-quality DCP finish. Because of the way that Fatih and Rainer work, I was able to handle the dailies as well as the final grade. I used FilmLight’s Daylight system. This has the same grading toolkit as Baselight and allows grades to be exported as BLG metadata so nothing is lost.

Fatih and Rainer prefer to watch dailies in the editing room — the old-fashioned way. On set they liked to concentrate on shooting, having faith in everyone else on the team. Daylight suits this workflow really well, creating graded dailies for the editing department, which was also located at Optical Art, as well as giving me the same starting point in the final Baselight grade.

Did you run into any challenges on the film?
Given that a lot of the “effects” were done in-camera, and we had seen everything in the dailies, by the time of the final grade we were pretty much on top of everything.

An interesting part of the movie is the big scenes in the rain. Most of the tension was created with lighting, but Fatih and Rainer encouraged me to enhance it. They wanted the audience to really feel getting drenched by the rain.

What about HDR, 4K and other trends in technology?
When I sit in the cinema, I don’t usually see pixels. So more resolution is not important to me. HDR and wider color gamut is what is exciting — provided we can get that all the way to the big screen.

That has had the biggest impact of anything I have seen over the last couple of years. You cannot compare it to film, but it has a big impact on DPs, in how they set up a shot, how they light it. Say the script says the villain moves out of a bar. Normally you would cut from interior to exterior. In HDR, you could simply follow the villain. Or the camera could stay inside and still see what is happening outside. This is a big shift for writers as well as for directors and DPs.

What do you do when you are not grading?
I love to be outside, because I spend my working time in the dark. I do a lot of sport, but most of all I spend time with my daughter.


Film Stills Photo Credit: Gordon Timpen