

Oppenheimer VFX Supervisors Talk Explosive Visual Effects

By Iain Blair

Directed by Christopher Nolan and based on the biography “American Prometheus,” Oppenheimer tells the story of theoretical physicist J. Robert Oppenheimer and follows the work of his team of scientists during the Manhattan Project, leading to the development of the atomic bomb. It stars Cillian Murphy as Oppenheimer.

It also features VFX depicting everything from the first atomic test in the New Mexico desert to physical phenomena ranging from subatomic particles to exploding stars and black holes. As the film’s sole VFX partner, DNeg delivered over 100 shots, crafted from more than 400 practically shot elements, to help create some of the film’s most important and explosive sequences. Oscar-winning production VFX supervisor Andrew Jackson and DNeg VFX supervisor Giacomo Mineo led the team. Here, they talk about how they created the VFX.


Andrew Jackson

What were the big challenges of creating the VFX for this?
Andrew Jackson: One of the biggest challenges, which was also one of the most rewarding aspects of the work, was the set of creative rules that we imposed on the project. We wanted all of the images on the screen to be generated from real photography, shot on film and preferably in IMAX. The process involved shooting an extensive library of elements. The final shots ranged from using the raw elements as shot, through to complex composites of multiple filmed elements. This process of constraining the creative process forces you to dig deeper to find solutions that are often more interesting than if there were no limits.


Giacomo Mineo

Giacomo Mineo: The movie presented two significant challenges. First, the recreation of the Trinity test and second, the fascinating task of immersing ourselves in Oppenheimer’s mind and figuring out how to capture his ideas and imagination, considering the limited knowledge and visual references available during that era. Concepts like modern physics or the Earth seen from space were relatively new at the time. To truly portray Oppenheimer’s mindset, we had to let go of our modern understanding and delve into his world.

How closely did you work with Chris Nolan? What guidance and input did you get?
Jackson: This is my third film with Chris, and during that time, I have developed a strong understanding of his filmmaking philosophy. His approach to effects is very similar to mine in that we don’t see a clear divide between VFX and SFX and believe that if something can be filmed, it will always bring more richness and depth to the work.

I feel he has a level of trust in my approach to the work, and I really appreciate the freedom he gives me to experiment with ideas and the collaborative approach we take as we refine solutions for the individual shots. As well as having the creative experience from years of working with Chris, Dneg also has a huge benefit when it comes to solving the technical challenges of working with IMAX resolution in a largely optical production pipeline.

Mineo: Chris Nolan was well aware that, without the use of CG, we had a limited set of options. He was flexible and receptive throughout the entire process of exploration and image creation. Whenever he discovered intriguing elements within our tests, he was swift to integrate them into the film and see if he could make them work. This was a really positive and rewarding part of the collaboration.

Also, Andrew played a pivotal role by working closely alongside us, providing the right framework and great creative guidance and bringing his experience and vision to the team.

Is it true Nolan didn’t want to use any CGI? Was that liberating or constraining?
Mineo: This was the challenge set forth by Christopher Nolan, and we embraced it. His vision was that every on-screen image should originate from authentic photography captured on film, preferably in the IMAX format. In pursuit of this goal, we employed traditional techniques such as miniatures, a range of explosions (from massive to micro), thermite fire, long-exposure shots and many more.

The majority of our VFX work revolved around these tangible elements, intentionally avoiding CGI and primarily relying on compositional treatments. This unconventional approach, characterized by self-imposed limitations, had a profound influence on the image-creation process. These constraints compelled us to think innovatively, leading us to creative outcomes that were both distinct and captivating while remaining undeniably rooted in reality.

How many VFX shots are there?
Mineo: There are around 100 VFX shots in the film, plus around another 100 shots that were directly extracted from the vast IMAX library of elements created by Andrew.

How did you recreate the nuclear tests and show the scale of an atomic blast? Break it down for us.
Mineo: For the Trinity test sequence, the goal was to craft an authentically real and awe-inspiring depiction. To achieve this, Andrew and SFX supervisor Scott Fisher embarked on an extensive shoot, capturing a wide spectrum of explosions using IMAX technology. The range included grand-scale detonations featuring various lenses as well as smaller-scale and even underwater detonations. Notably, the billowing dust from the ground and the shockwaves were achieved using small or macro-scale elements.

At DNeg, fully aware of the significance of the task, we began exploring various options right from the first day. We maintained an ongoing dialogue, frequently presenting our preliminary tests to Andrew and Chris Nolan. While archived footage served as inspiration, we allowed for a degree of interpretation, focusing on capturing the essence of the event rather than an exact recreation.

One example is the plasma ball atomic test featured in the high-speed archive footage. To achieve that, we used underwater micro explosions combined with a massive explosion. Subsequently, extensive compositing work was undertaken to seamlessly integrate the elements. Special credit goes to Jay Murray for recreating this iconic moment.

How did you create the crackly rings of fire that Oppenheimer kept visualizing?
Jackson: I built a contraption with multiple spheres, each spinning on different vibrating arcs. These were shot with very long exposures to create the curved, wavy lines.
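
For the curious, the way a long exposure turns motion into unbroken lines can be mimicked digitally by accumulating many short frames of a moving light. Here is a small, purely illustrative Python sketch of that accumulation idea (the production effect itself was achieved in-camera on film, and every parameter below is invented):

```python
import numpy as np

def long_exposure(num_frames=2000, size=512):
    """Accumulate frames of an orbiting, vibrating point of light,
    mimicking one long exposure of a bright sphere on a moving arc."""
    img = np.zeros((size, size), np.float32)
    t = np.linspace(0.0, 20.0 * np.pi, num_frames)
    # circular orbit whose radius "vibrates" at a higher frequency,
    # so the accumulated trail comes out as a curved, wavy ring
    r = size * 0.35 * (1.0 + 0.05 * np.sin(9.0 * t))
    xs = (size / 2 + r * np.cos(t)).astype(int)
    ys = (size / 2 + r * np.sin(t)).astype(int)
    for x, y in zip(xs, ys):
        img[y, x] += 1.0              # light adds up over the exposure
    return img / img.max()            # normalize for display
```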

How did you create elements such as subatomic particles, exploding stars and black holes forming?
Mineo: In the preliminary phase, Andrew dedicated months solely to experimentation at Scott Fisher’s workshop in LA. Armed with his digital camera, he captured a range of tests and presented them to Chris Nolan for review. These tests encompassed a mix of old-style techniques, including miniatures, minute explosions, thermite fire, spinning beads and much more.

Upon Chris Nolan’s approval, the production transitioned to filming in IMAX format. The outcome was a compilation of hundreds of distinct elements. While some seamlessly aligned with the script’s narrative and found their way into the edit, many others contributed to a vast library of elements. Subsequently, for portions of the script still awaiting attention, we embarked on exploring these recorded elements, aiming to complete the work exclusively with this material.

Throughout this process, we discovered that simplicity often yielded the most effective results. However, for instances like the chain reaction or implosion/explosion shots, we employed a diverse assortment of elements, always mindful of preserving the raw authenticity of the footage. Our goal was to maintain the sensation of genuine photography captured on film.

What was the toughest sequence to deal with and why?
Mineo: Without a doubt, the Trinity test was one of the most significant challenges. This was arguably the most complex task of the show, demanding great attention to detail in terms of compositing. It encompassed a range of elements, from massive explosions made in collaboration with Scott Fisher to relatively small and macro elements shot at the highest frame rate IMAX allows.

Examples include tiny, sandy shockwaves and underwater churning dust, to name just a couple. It is worth noting that in the film, the depiction of the explosion from various perspectives was realized through a combination of techniques, including the retiming of practical large-scale explosions and the intricate process of compositing extensive practical elements together. The “wall of fire” shot, designed by Peter Howlett, is an example of that brilliant type of work.
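
Retiming, in its simplest form, means resampling the clip’s time axis and blending neighboring frames. The sketch below is a toy illustration of that resampling (production retimes use optical-flow interpolation rather than this plain cross-blend, and the function is hypothetical):

```python
import numpy as np

def retime(frames, speed):
    """Resample a clip to a new speed with linear frame blending.

    frames: sequence of (H, W, 3) float arrays; speed=0.25 stretches
    the clip to 4x its length (slow motion), speed=2.0 halves it.
    """
    n_out = int(len(frames) / speed)
    out = []
    for i in range(n_out):
        t = i * speed                        # source time of output frame i
        lo = min(int(t), len(frames) - 1)
        hi = min(lo + 1, len(frames) - 1)
        w = t - int(t)                       # blend weight between neighbors
        out.append((1.0 - w) * frames[lo] + w * frames[hi])
    return out

# Invented example: a 10-frame clip slowed to quarter speed (40 frames).
clip = [np.full((4, 4, 3), i, np.float32) for i in range(10)]
slow = retime(clip, speed=0.25)
```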

What gear did you use? Any new or cutting-edge systems and methods?
Jackson: We used old-style practical effects methods and techniques, perhaps “cutting-edge” in their era.

You’ve both worked on a lot of huge projects. Where does this rate in terms of complexity and challenges?
Jackson: The approach for this film was so unique that it’s difficult to compare with other projects. Some of the biggest challenges for me were during the shoot. We had a very small IMAX film unit working alongside the main unit in New Mexico, in the winter, in a tent. We needed to move the whole setup every few days, and the weather conditions were less than ideal — we were dealing with snow, freezing water tanks, mud, wind and rain.

Mineo: Undoubtedly, this project stands as the most distinct and extraordinary endeavor I’ve been a part of. We were in full creative mode from the beginning to the end. All the work relied on us embracing the set of rules imposed from the beginning and thinking out of the box.

Our days were really spent looking at the elements, in constant exploration, trying to find something interesting that could be utilized in the movie. The process of continually experimenting with ideas involved creating numerous images, with some eventually making the final cut and others not. A notable illustration of this creative approach is the “birth of the star” shot, designed by Ashley Mohabir. In this instance, we combined a thermite fire element, slowed it down and merged it with a starfield look element coming from the underwater metal particles shoot. The outcome was a striking image that resembled stars and the cosmos.

Images courtesy of DNeg © Universal Pictures.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

Windows Port Available for Open-Source xStudio Review App

Epic Games, creator of Fortnite and Unreal Engine, has teamed up with innovation studio Ahead, creator of Cezanne Studio, to build the open-source Windows port of xStudio, available now on GitHub.

Designed by visual effects company DNeg, xStudio is a modern, flexible and feature-rich playback and review application. DNeg contributed xStudio to the Academy Software Foundation as an open-source project in early 2023 to meet the needs of content creators throughout the production process. Initially open-sourced for Linux, this release marks the first step to make xStudio a cross-platform solution for the open-source community and to extend xStudio as a candidate for deployment as a playback and review solution across workflows.

xStudio has been carefully engineered to meet the diverse needs of a broad range of review scenarios, whether on-set, in the office, in remote reviews or across teams of connected users. Heavily customizable, xStudio can be used as a stand-alone player or can be integrated into any pipeline via plugins and Python scripting APIs.

The collaboration between Epic and Ahead on the Windows port signals the ongoing evolution and broader adoption of xStudio, which is a part of the Academy Software Foundation’s Open Review Initiative, an umbrella project working to build a unified open-source toolset for playback, review and approval of motion picture and related professional media. Both Epic and DNeg are founding members of the Academy Software Foundation, while Ahead is an active contributor to open source and released the OpenAnnotationIO project in 2020.

“Accelerating the development of open standards and investing in open-source software is foundational to building the open metaverse,” says Sebastien Miglio, VP of product management and engineering at Epic Games. “We are thrilled to contribute the Windows port of xStudio to the open-source community and, by extension, to all creators and storytellers.”

“We are grateful for the opportunity Epic has given us to actively contribute to the expansion of a unified open-source playback and review workflow,” says Alex Santo, co-founder of Ahead. “Our mission is to lend a hand to transform xStudio into an even more feature-rich powerhouse, allowing creatives across different communities to share their ideas, annotations and reviews. We’re looking forward to collaborating with other studios to build new functionalities. We’re thrilled to work toward reaching this objective of the Academy Software Foundation’s Open Review Initiative.”

Chas Jarrett, creative director at DNeg, says the vision for xStudio from the outset was to create a tool that enables seamless collaboration for everyone working in the media creation industries. “Following DNeg’s release of xStudio as an open-source project, the open-source Windows port from Epic Games and Ahead is the next step in sharing this modern, flexible and feature-rich application with a wider community of content creators. I am delighted to see the increased adoption of xStudio as a common playback and review toolset across industries, and its success is testament to the hard work and dedication of the talented developers in DNeg’s technology team.”

The open source code for xStudio is available now on GitHub, and information about the application’s features and capabilities is available at dneg.com/xstudio.


Guillaume Rocheron

Oscar Winner Guillaume Rocheron Joins DNeg as VFX Supervisor

DNeg, a VFX and animation studio working in feature film, television and multiplatform content, has added Guillaume Rocheron as visual effects supervisor. The multiple Oscar- and BAFTA-winning supervisor joins DNeg with more than two decades of experience. He is based at the company’s Los Angeles studio.

Rocheron’s list of production VFX supervisor credits includes Jordan Peele’s Nope, Sam Mendes’ 1917, Michael Dougherty’s Godzilla: King of the Monsters and Rupert Sanders’ Ghost in the Shell. More recently, he has overseen the visual effects work for Alejandro G. Iñárritu’s Bardo: False Chronicle of a Handful of Truths, which will release on Netflix on December 16.

In 2020, Rocheron was honored with an Oscar and a BAFTA for his work on 1917, for which he and his teams created long, seamless shots that maintained the illusion of the whole two-hour movie being filmed in one continuous take. He had previously taken home an Oscar, a BAFTA and a Visual Effects Society (VES) Award for his work on Ang Lee’s Life of Pi.

“As a widely respected and highly admired supervisor, Guillaume has produced some extraordinary work over the course of his career — he is a supervisor who always pushes the envelope and knows how to get the very best from his teams,” says Namit Malhotra, DNeg chairman/CEO. “2023 will be a thrilling year for DNeg, with some big opportunities on the horizon, and I am excited to have Guillaume onboard as part of our senior creative team to help chart the course for DNeg through next year and beyond.”

“It’s been incredible to see how DNeg has evolved over the last few years, and to see how Namit and his team have been transforming the company into a home for filmmakers to create amazing visual effects,” says Rocheron. “I am excited to embark on this collaboration with DNeg’s outstanding artists, engineers and technicians and to join a global team that includes so many outstanding supervisors. DNeg’s filmmaker-oriented mentality, focus on innovation and commitment to its people are all very appealing to me.”


DNeg’s Nonstop VFX for Bullet Train

By Ben Mehlman

Bullet Train is a neon-infused, violent and hilarious Agatha Christie-style romp starring Brad Pitt and a who’s-who ensemble that includes Aaron Taylor-Johnson and Sandra Bullock. The film follows a plethora of assassins whose different missions all coalesce around a common thread: White Death, the leader of a Yakuza-like crime syndicate. It all takes place on a bullet train speeding from Tokyo to Kyoto.

Stephen James

The film is directed by David Leitch (Deadpool 2, Atomic Blonde, Hobbs & Shaw) and based on a 2010 Japanese novel of the same name. At the center of this action-packed blockbuster are eye-popping visual effects. I recently spoke with DNeg VFX supervisor Stephen James (Dune, Deadpool 2, Blade Runner 2049) about the film, its challenges and the 1,015 shots the studio oversaw.

Tell us about your working relationship with David Leitch. What were your initial conversations like?
I worked with David on Deadpool 2 and Hobbs & Shaw, under Michael Brazelton, who was the VFX supervisor on those shows. Conversations started as I was finishing up work on Dune: Part 1, so I was able to jump on as they were wrapping up the LED pre-production phase of the film.

Can you elaborate on what was built during the LED pre-production phase?
We used LED content on-set for hundreds of shots. Since it takes place on a train, mostly at night, with a lot of repeating environments passing by, we decided this was the perfect use for LEDs. Due to the film’s tight schedule, we came at it from a few different approaches.

They filmed an array of footage traveling down Japanese highway systems for practical backdrops for the LED content. This allowed a realistic range in detail for the environments, but since you can’t attach cameras to a real bullet train, we had to heavily stabilize the highway system footage to make it feel like a smooth and fast ride.

Since we’re traveling so fast, we created long stretches of prerendered CG environments using Clarisse and Unreal. We built city blocks, parks, track structures, overhead structures, everything close to the track — things to give you that sense of speed. Then everything that was filmed, as well as the prerendered environments, was rendered out at between 12K and 20K resolution and split across massive LED panels running around the on-set cars. On-set, they could pick the content they wanted to use, swap it out, speed it up or slow it down. In real time, Michael Brazelton could add or remove layers — like a train passing or a tunnel to go through — if it was necessary for things like action or lighting cues.
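
To make the “split across massive LED panels” step concrete, here is a toy Python sketch of carving one oversized frame into per-panel tiles. This is not the production’s actual Disguise configuration; the frame and panel sizes are invented for illustration:

```python
import numpy as np

def split_into_panels(frame, panel_w, panel_h):
    """Slice one oversized rendered frame into per-LED-panel tiles.

    Assumes the frame height/width are exact multiples of the panel size
    (real LED processors also handle padding, overlap and calibration).
    Returns {(row, col): tile} so each panel can be fed individually.
    """
    h, w, _ = frame.shape
    tiles = {}
    for row in range(h // panel_h):
        for col in range(w // panel_w):
            tiles[(row, col)] = frame[row * panel_h:(row + 1) * panel_h,
                                      col * panel_w:(col + 1) * panel_w]
    return tiles

# Invented numbers: a 2K-tall, 8K-wide strip feeding a wall of 512x512 panels.
frame = np.zeros((2048, 8192, 3), np.uint8)
tiles = split_into_panels(frame, panel_w=512, panel_h=512)  # 4 x 16 tiles
```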

Were any greenscreens or bluescreens used?
At night it’s all LED, with an occasional shot if they couldn’t quite get it in-camera. Then they would use the LED screens as a clean, beautiful bluescreen. Once the sun rose, we switched to LED bluescreens and greenscreens. We picked that point because LED screen brightness isn’t quite there yet for daylight, and once the train leaves Kyoto, things get really crazy.

What software and hardware were you using? Did you use Unreal?
The postviz team used Unreal to fly cameras around and try out different ideas so David could give quick feedback and have it rendered in a way where you understand what the shot’s going to look like. As far as hardware, we used Disguise to handle the massive resolution and allow us the flexibility for playback and adding layers to the environments. For anything pre-rendered at DNeg, we use Clarisse, which allows us to have massive environments where we can scatter thousands of our tree, building and street assets, which are necessary when travelling across vast distances.

Tell us about creating the train collision.
We called the moment when our train collided with the oncoming train “The Can Opener.” That sequence was fun because it’s so over the top. We treated the physics of the train peeling over our hero train almost like a wave, which plays into the movie’s theme of bad luck versus good luck. We built our train with this destruction in mind, accounting for the various cars and interior contents like chairs, snack trays, etc. We also decided what materials to use to build the train, taking into account how the elements would warp, shred, split or explode. A lot of time went into building all those assets for this destruction and then making sure all the materials behaved realistically.

One of the tricky parts is that trains in Japan are super-clean, and they travel on tracks extremely smoothly. Secondary motion and dirt can be useful for us; they help add complexity. So since the trains are so clean, you have to do a lot more to make them realistic and focus on the finer details, like giving every panel individual screws, surface scuffs and subtle bends across the surface.

How did you create the shots of Brad Pitt flying through the train in slow motion?
There was a lot of previz — led by DNeg head of previz Alex Cannon — to make sure what they filmed was going to work with VFX in post. They shot Brad, The Elder (Hiroyuki Sanada), his son and some chairs in front of a bluescreen. Everything else was CG. At that point we had completely taken over the train interior because we wanted to have pieces of it shearing away, and it’s way too dangerous to film anything like that with an actor. So they built this rig for Brad so he could fly in the air and be moved in any direction needed while hitting the necessary comedic beats. For example, they knew they wanted him flying out of the driver car, getting smacked in the face with the coffee pot and safely crashing into the Momonga costume. Even the costume ended up being fully CG.

Speaking of comedy, quite a few VFX moments are super-funny. Can you elaborate on creating VFX that are intended to get a laugh?
It can be difficult. You have to find the right artist who understands that kind of humor. Already having worked with David on Deadpool 2 helped. I think they even make fun of the VFX in the narration of those films (laughs). So we had a good head start on that stuff.

What was the most difficult VFX sequence to crack?
I’d say the third act, when things were very CG-heavy, especially the train crash outside Kyoto. Normally, a lot of the environments would’ve been filmed, but because of pandemic restrictions, no one was able to go to Japan due to the strict lockdown. We worked with a Japanese company, Jaid Productions, whose team talked remotely with our set supervisor, Dan Kunz; he taught them to use shoot kits we sent to Japan. Jaid then went out and captured the varied environments that we had to build, such as rice fields, the mountains surrounding Kyoto and more. We were guiding them remotely on exactly what to shoot. That ended up being a fun experience.

Do you have a favorite VFX moment?
It’s funny because I love and am proud of all the big stuff, but I think it’s some of the smaller, more intimate sequences. There’s one where Brad Pitt and The Elder are talking and we’re traveling through rice fields as the sun’s rising, which was our CG environment. It’s a nice moment in this beautiful environment and it worked really well with the lighting of the scene.


Ben Mehlman is a writer/director. His script Whittier was featured on the 2021 Annual Black List after being selected for the 2020 Black List Feature Lab, where he was mentored by Beau Willimon and Jack Thorne. He has interviewed Oscar nominee Peter Sciberras, Emmy nominee Amy Duddleston, Emmy nominee Nona Khodai and many others.

VES Announces 2021 Board, Lisa Cooke Named Chair

The Visual Effects Society (VES) has named the officers on its 2021 board of directors. They include newly named board chair Lisa Cooke, the first woman to hold this role in the history of the VES. The five officers embody the global makeup of the VES, coming from Sections in the United States, New Zealand and London.

The 2021 officers of the VES board of directors are:

  • Chair: Lisa Cooke
  • 1st Vice Chair: Emma Clifton Perry
  • 2nd Vice Chair: David H. Tanaka
  • Treasurer: Jeffrey A. Okun, VES
  • Secretary: Gavin Graham

Lisa Cooke has spent several decades in the entertainment industry as an animation/FX producer, story consultant, screenwriter and actor. She is the founder of Green Ray Media, and for the past 10-plus years, she has been producing animation and VFX to create scientific, medical and environmental media. Entertainment clients include Lucasfilm, Fox, Nickelodeon Films, ABC, CBS, Paramount and Universal. In the VFX and animation industry, she has worked for companies including Pixar, Glasgow-based Digital Animations Group and Tippett Studio. As senior producer at Rearden Studios, she helped bring Mova Contour Reality Capture to the film industry.

Emma Clifton Perry has more than 15 years of experience across feature films, longform/TV series, commercials and advertising. Clifton Perry has lived everywhere from Saudi Arabia to Canada and has worked both in-house with Fox and at VFX facilities worldwide, including Weta Digital, Framestore, MPC, Rising Sun Pictures, Method Studios and The Mill, in both artist and leadership roles. She is currently based in Wellington, New Zealand, providing remote compositing and VFX consulting/lecturing services worldwide.

This is Clifton Perry’s second consecutive term on the Executive Committee, serving as 2nd vice chair in 2020. Clifton Perry has served for four consecutive years on the board of directors and eight cumulative years on the New Zealand board of managers. She was the second-ever New Zealand Section chair, serving as chair for three years and as secretary/treasurer for a year.

David H. Tanaka has worked in visual effects and animation for over 25 years. For 15 years he served at ILM in VFX editorial on such films as Jurassic Park, Forrest Gump and Star Wars. He went on to work on Cars, Ratatouille, WALL-E, Up and Toy Story 3 as a special projects editor at Pixar Animation Studios over the next 10 years.

In 2015, in addition to freelance editing for studios, corporations and private clientele, Tanaka was editor, VFX supervisor, post supervisor and co-executive producer on the independent feature film Guitar Man. Tanaka is also an adjunct professor for the Academy of Art University, San Francisco, specializing in editing, producing and cinema history. 

Jeffrey A. Okun, VES, is known for creating invisible effects as well as VFX that blend seamlessly into the storytelling aspect of the project. He has won the VES Award for Outstanding Supporting Visual Effects for his work on The Last Samurai. Okun has also delivered wide-ranging effects in films such as Alpha, Blood Diamond, Stargate, Sphere, Red Planet, Deep Blue Sea, Lolita and The Last Starfighter and television programs such as Cosmos: Possible Worlds for National Geographic and Fox TV.

A VES fellow and recipient of the Founder’s Award, he created and co-edited the VES Handbook of Visual Effects. As VES chair for seven years, Okun focused attention on bringing business and creative education to artists, facilities and studios and guided the VES to help create a worldwide software anti-piracy alliance with the US government to ensure that all facilities have a fair and level playing field from which to bid. He created visual effects tracking and bidding software in wide use within the industry as well as the revolutionary visual effects techniques dubbed the “PeriWinkle Effect” and the “Pencil Effect.”

Gavin Graham is the GM of DNeg Montréal, where he has spent almost 20 years of his 21-year career, having also worked for MPC and Cinesite. Originally an FX artist with a background in computer science, he has also created tools and worked in development in the area of simulation and rendering. He has FX and CG supervisor credits on projects such as Stardust, 2012, multiple films in the Harry Potter franchise and Captain America: The First Avenger.

In 2011 he moved into a management position with DNeg, initially as the London head of 3D and then as global head of CG before moving to Montréal in 2019 to take his current role.

Graham joined the VES board in 2019 and previously served as secretary during his six-year tenure on the London Section board.


Jim Geduldick to Lead Virtual Production/XR for Dimension

Cinematographer, technologist and post veteran Jim Geduldick has joined Dimension as SVP, virtual production supervisor and head of North America. The virtual-entertainment production studio is expanding its team as it gears up to meet increased demand in virtual production for film, drama and sport around the world. The team is pioneering virtual, volumetric, mixed reality and LED production techniques, providing more efficient and creative solutions for film and TV content creators.

Based in Portland, Oregon, Geduldick is known for his work as a cinematographer, director and visual effects artist on projects for Epic Games, Intel, Google, GoPro, Red Bull, Nike, Adidas, Nat Geo, Hulu and more. He has worked on features, broadcast TV, music videos, docs, virtual reality and action sports films.

Over the years, Geduldick has also been a frequent contributor at postPerspective, advising on future technology and current trends.

Geduldick will lead Dimension’s North American presence and spearhead the company’s virtual production and XR ambitions in the US. He will be supervising productions for clients and partners and working closely with film studios and filmmakers to drive craft in the rapidly developing discipline of virtual production.

Dimension has a track record for pushing the artistic possibilities with Unreal Engine and has ramped up its efforts in virtual production. The studio recently teamed up with DNeg, Unreal Engine (Epic Games), ARRI, ROE LED, Mo-Sys, 80six, Brompton Technology and Malcolm Ryan Studios on LED stage production tests.

“Virtual production is revolutionizing the creative process for filmmakers, broadcasters and storytellers, and Dimension is at the forefront of this movement,” Geduldick says. “It’s terrific to be heading the US mission and working with the team behind groundbreaking productions including the Madonna holograms at the Billboard Music Awards, The War of the Worlds, Notting Hill Carnival and recent DNeg collaborations. My priority is to help drive further innovation in virtual and XR entertainment production.”


Greyhound: Director Aaron Schneider on the Tom Hanks WWII Film

By Iain Blair

Tom Hanks enjoys telling stories about World War II. After his Oscar-nominated role in Steven Spielberg’s Saving Private Ryan, Hanks — together with his Playtone producing partner Gary Goetzman — and Spielberg produced the miniseries Band of Brothers and The Pacific.

Aaron Schneider

His latest World War II project is the naval thriller Greyhound, for which he also wrote the screenplay based on the novel “The Good Shepherd” by C.S. Forester.

Set against the backdrop of the Battle of the Atlantic, the film stars Hanks as Ernest Krause, a longtime US Navy officer with no combat experience who finally receives his first command: leading the destroyer Keeling (code-named Greyhound) and three other escort ships to protect a convoy of 37 merchant vessels carrying supplies and troops to England. It’s a dangerous assignment as German submarines patrol the waters, brutally enforcing a German blockade.

To direct Greyhound, Hanks and Goetzman tapped Aaron Schneider, whose 2009 debut film, Get Low, was honored as Best First Feature at the Independent Spirit Awards. A former cinematographer who shot second unit for James Cameron’s Titanic, Schneider began his directing career after winning an Oscar for his short-film adaptation of William Faulkner’s Two Soldiers.

His Greyhound team included director of photography Shelly Johnson, production designer David Crank, VFX supervisor Nathan McGuinness and editors Sidney Wolinsky and Mark Czyzewski.

I recently spoke with Schneider about making the Apple Original film, the workflow and his love of VFX.

Filming in water is notoriously difficult. I was on the set of The Abyss, and after Titanic, Jim Cameron told me, “Whatever you do, never ever shoot on water or at sea.” You obviously paid attention.
I did, and none of this was shot at sea. In fact, there’s close to zero real water in the film. You couldn’t even find the period ships to take out to sea, anyway, so we built it all digitally.

What were the main technical challenges in pulling it all together?
It was a relatively low-budget film considering what we had to do, and we couldn’t afford to build all the sets for different parts of the destroyer, so we shot this mostly on stage and on the USS Kidd. The Kidd is a WWII destroyer docked in a museum, which we used as our touchstone, as our Keeling. And to tell the detailed story of how a destroyer works, our best strategy was to build matching sets. Most of it takes place up in the pilot house, so we matched that to the Kidd and used a giant gimbal, and then we could intercut.

Very early on — back in 2016, when Tom, Gary and I teamed up — I began building an online photo-reference bible with all the research and imagery. I then shot over 10,000 photographs of the Kidd and used photogrammetry to generate a high-resolution 3D model of the ship. That was so important as an asset because I could then open it up with 3D software, which allowed me to explore the ship with a virtual camera and do previz and experiment with camera ideas and concept art. I could see exactly what our production camera would see, play around and discover any potential problems.

Tom Hanks and Aaron Schneider

I heard you also used an ocean simulator plugin that Nvidia created for game developers, one that floats objects on the water based on the underlying physics of open-ocean waves?
Yes, and there’s been some reporting indicating that’s how we made the movie — how we floated our ships — but that’s not quite accurate in how we used Nvidia in our pipeline. It was more of a look-at tool in that I wanted all of our VFX to feel like we were out in the ocean shooting it.

In preproduction, I was doing some of my own animation and previz to help prepare myself. I’m also a hobbyist VFX artist, and this plugin was very useful in exploring the ocean environment. It allowed me to float a digital camera ship I could look through, and suddenly I had all the chaos and energy of actual ship-to-ship, open-ocean photography. I could figure out how we’d shoot stuff like a ship taking a sharp turn and have the camera ship float in the opposite direction to give the shot energy. We kept the speed of the ships realistic, so the camera wasn’t doing anything it couldn’t have done in the real world. That grounded the shot concepts and VFX in reality, and a lot of our previz and postviz were generated by this plugin, WaveWorks, which we folded into our Autodesk Maya workflow.
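
For readers curious what “the underlying physics of open-ocean waves” looks like in practice, ocean tools of this kind typically sum Gerstner (trochoidal) waves and sample the surface to float objects. The Python sketch below shows that textbook math only; it is not Nvidia’s WaveWorks API, and the swell parameters are invented:

```python
import numpy as np

G = 9.81  # gravity (m/s^2)

def wave_height(p, t, waves):
    """Approximate ocean height at horizontal position p = (x, y), time t.

    Each wave is (direction, amplitude, wavelength); direction is a unit
    2-vector. Sampling only the vertical Gerstner term is a common
    simplification when all you need is to float an object.
    """
    h = 0.0
    for d, a, length in waves:
        k = 2.0 * np.pi / length        # wavenumber
        omega = np.sqrt(G * k)          # deep-water dispersion relation
        h += a * np.sin(k * np.dot(d, p) - omega * t)
    return h

def float_pose(p, t, waves, dx=0.5):
    """Height plus a finite-difference surface normal for orienting a
    floating camera boat at horizontal position p."""
    h = wave_height(p, t, waves)
    hx = wave_height(p + np.array([dx, 0.0]), t, waves)
    hy = wave_height(p + np.array([0.0, dx]), t, waves)
    n = np.array([-(hx - h) / dx, -(hy - h) / dx, 1.0])
    return h, n / np.linalg.norm(n)

# Invented swell: one long 60m wave train plus shorter 18m wind chop.
waves = [(np.array([1.0, 0.0]), 0.8, 60.0),
         (np.array([0.6, 0.8]), 0.3, 18.0)]
print(float_pose(np.zeros(2), t=2.0, waves=waves))
```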

Did you do a lot of previz?
A lot. We did our own four-wall previz, hired artists and rented workstations, and built up our own infrastructure and workflow. The previz fell into two categories. First, we created overhead animation in real time of all the ships in the naval battles. We do this because when you get to set, you need to be able to tell everyone where to look and set eyelines, so when all the digital footage is married to it in post, you have a foundation.

I’d show up on set, gather the actors who’d be engaged in these virtual battles and play back the animation so they could get a good sense of it all — a tactical awareness. Second was classic previz, where you’re trying to meet a VFX budget, get a sense of the shot count and how creative you can be in those parameters. The team came back in post to do postviz so that if we needed a missing shot, they could do it and drop it into the server so we could see if it worked. If we’d had a bigger budget, we’d have previz’d the whole film, like a Pixar or Marvel project.

What did Tom Hanks bring to the project and the lead role? Any surprises?
Beyond all the excitement and terror and suspense of the battles, he wanted it to be a very emotional experience, and Tom acts as the audience’s guide. He’s the human way into the story, and he always saw it as this 90-minute, highly detailed procedural about a world most people know nothing about. He’s the perfect actor for this challenge, as he wrote this somewhat experimental film and counted on his own ability to lead the audience through it. And he didn’t write himself this big acting piece. The drama comes from experiencing it along with his character.

Aaron Schneider directing Hanks

Where did you post?
We did it all at Playtone’s offices in Santa Monica, which was a perfect setup for us.

Talk about your two editors, Sidney Wolinsky and Mark Czyzewski. How did that work?
Sidney was the main editor, and he came on at the start. He hadn’t done a big action film before, but I wanted an editor I could team up with on the story and narrative side. I didn’t want to get lost in all the visceral and visual elements. I wanted an editor who would challenge me and the film to be as narratively cohesive and strong as possible. Then near the end of post, as the burden of dealing with all the VFX and action scenes got heavier, we brought on Mark to help out.

What were the big editing challenges?
The big one was connecting 35 days of production shooting with VFX stuff that doesn’t exist yet, like, say, a shot of Tom looking at a submarine, which I don’t have yet. So we had to use some of the previz material, and if that didn’t work, we had to put the postviz guys to work and start shaping the film. So you’re cutting in plates, slugs, all in a very piecemeal way. And when you watch the rough cut, you have to use your imagination, just like the actors did on the shoot, and you have discussions about shots that aren’t even there yet. And at the same time, you can’t lose sight of the larger context. Do we understand how we got to this point? The tactical dilemma? Why he can’t shoot yet? It was like solving a very complex puzzle.

VFX play a huge role. How many were there and what did they entail?
We had over 1,200, and we used just one company — DNeg — to streamline it all. The VFX supervisor Nathan McGuinness and VFX producer Mike Chambers had a great relationship with them because of their own careers there, and they did a great job considering our very tight post schedule and the challenges of making it all photo-real. Every shot was tricky.

Where did you do the DI, and how important was it to you?
At Company 3 with colorist Bryan Smaller, who used Resolve. The DI was crucial because we made the film for Sony, who then sold it to Apple right when we were in the middle of the DI. Then the DP and I had to shift our focus to the Dolby Vision master. I love the DI, as I was a DP before I became a director, and it’s that final chance to improve the image and the whole look, and I’m really happy with the way it all turned out.


Industry insider Iain Blair has been interviewing the biggest directors in Hollywood and around the world for years. He is a regular contributor to Variety and has written for such outlets as Reuters, The Chicago Tribune, The Los Angeles Times and the Boston Globe.

VFX in Features: Hobbs & Shaw, Sextuplets

By Karen Moltenbrey

What a difference a year makes. Then again, what a difference 30 years make. That’s about the time when the feature film The Abyss included photoreal CGI integrated with live action, setting a trend that continues to this day. Since that milestone, VFX wizards have tackled a plethora of complicated problems, including realistic hair and skin (making believable digital humans possible) as well as convincing water, fire and other elements. With each new blockbuster VFX film, digital artists continually raise the bar, challenging the status quo and themselves to elevate the art even further.

The visual effects in today’s feature films run the gamut from in-your-face imagery that can put you on the edge of your seat through heightened action to the kind that can make you laugh by amping up the comedic action. As detailed here, Fast & Furious Presents: Hobbs & Shaw takes the former approach, helping to carry out amazing stunts that are bigger and “badder” than ever. Opposite that is Sextuplets, which uses VFX to carry out a gag central to the film in a way that also pushes the envelope.

Fast & Furious Presents: Hobbs & Shaw

The Fast and the Furious film franchise, which has included eight features that collectively have amassed more than $5 billion worldwide since first hitting the road in 2001, is known for its high-octane action and visual effects. The latest installment, Fast & Furious Presents: Hobbs & Shaw, continues that tradition.

At the core of the franchise are next-level underground street racers who become reluctant fugitives pulling off big heists. Hobbs & Shaw, the first stand-alone vehicle, has Dwayne Johnson and Jason Statham reprising their roles as loyal Diplomatic Security Service lawman Luke Hobbs and lawless former British operative Deckard Shaw, respectively. This comes after facing off in Furious 7 (2015) and then playing cat and mouse as Shaw tries to escape from prison and Hobbs tries to stop him in 2017’s The Fate of the Furious. (Hobbs first appeared in 2011’s Fast Five and became an ally to the gang. Shaw’s first foray was in 2013’s Fast & Furious 6.)

Now, in the latest installment, the pair are forced to join forces to hunt down anarchist Brixton Lorr (Idris Elba), who has control of a bio weapon. The trackers are hired separately to find Hattie, a rogue MI6 agent (who is also Shaw’s sister, a fact that initially eludes Hobbs) after she injects herself with the bio agent and is on the run, searching for a cure.

The Universal Pictures film is directed by David Leitch (Deadpool 2, Atomic Blonde). Jonathan Sela (Deadpool 2, John Wick) is DP, and visual effects supervisor is Dan Glass (Deadpool 2, Jupiter Ascending). A number of VFX facilities worked on the film, including key vendor DNeg along with other contributors such as Framestore.

DNeg delivered 1,000-plus shots for the film, including a range of vehicle-based action sequences set in different global locations. The work involved the creation of full digi-doubles and digi-vehicle duplicates for the death-defying stunts, jumps and crashes, as well as complex effects simulations and extensive digital environments. Naturally, all the work had to fit seamlessly alongside live-action stunts and photography from a director with a stunt coordinator pedigree and a keen eye for authentic action sequences. In all, the studio worked on 26 sequences divided among the Vancouver, London and Mumbai locations. Vancouver handled mostly the Chernobyl break-in and escape sequences, as well as the Samoa chase. London did the McLaren chase and the cave fight, as well as London chase sequences. The Mumbai team assisted its colleagues in Vancouver and London.

When you think of Fast & Furious, the first thing that comes to mind is an intense car chase, and according to Chris Downs, CG supervisor at DNeg Vancouver, the Chernobyl beat delivers exactly that: essentially one long, giant car-and-motorcycle pursuit, or, as he describes it, “a pretty epic car chase.”

“We essentially have Brixton chasing Shaw and Hattie, and then Shaw and Hattie are trying to catch up to a truck that’s being driven by Hobbs, and they end up on these utility ramps and pipes, using them almost as a roadway to get up and into the turbine rooms, onto the rooftops and then jump between buildings,” he says. “All the while, everyone is getting chased by these drones that Brixton is controlling.”

The Chernobyl sequences — the break-in and the escape — were the most challenging work on the film for DNeg Vancouver. The villain, Brixton, is using the Chernobyl nuclear power plant in Ukraine as the site of his hideaway, leading Hobbs and Shaw to covertly break into his secret lab underneath Chernobyl to locate a device Brixton keeps there — and then not-so-secretly break out.

The break-in was filmed on location in England at the decommissioned Eggborough coal-fired plant, which served as a backdrop. To transform the locale into Chernobyl, DNeg augmented the site with cooling towers and other digital structures. In addition, the artists built an entire CG version of the site for the more extreme action, using photos of the actual Chernobyl as reference for their work. “It was a very intense build. We had artistic liberty, but it was based off of Chernobyl, and a lot of the buildings match the reference photography. It definitely maintained the feeling of a nuclear power plant,” says Downs.

Not only did the construction involve all the exteriors of the industrial complex around Chernobyl, but also an interior build of an “insanely complicated” turbine hall that the characters race through at one point.

The sequence required other environment work, too, as well as effects, digi-doubles and cloth sims for the characters’ flight suits and parachutes as they drop into the setting.

Following the break-in, Hobbs and Shaw are captured and tortured and then manage to escape from the lab just in time as the site begins to explode. For this escape sequence, the crew built a CG Chernobyl reactor and power station, automated drones, a digital chimney, an epic collapse of buildings, complex pyrotechnic clouds and burning material.

“The scope of the work, the amount of buildings and pipes, and the number of shots made this sequence our most difficult,” says Downs. “We were blowing it up, so all the buildings had to be effects-friendly as we’re crashing things through them.” Hobbs and Shaw commandeer vehicles as they try to outrun Brixton and the explosion, but Brixton and his henchmen give chase in a range of vehicles, including trucks, Range Rovers, motorcycles and more — a mix of CGI and practical with expert stunt drivers behind the wheel.

As expected for a Fast & Furious film, there’s a big variety of custom-built vehicles. Yet, for this scene and especially in Samoa, DNeg Vancouver crafted a range of CG vehicles, including motorcycles, SUVs, transport trucks, a flatbed truck, drones and a helicopter — 10 in all.

According to Downs, maintaining the appropriate wear and tear on the vehicles as the sequences progressed was not always easy. “Some are getting shot up, or something is blown up next to them, and you want to maintain the dirt and grime on an appropriate level,” he says. “And, we had to think of that wear and tear in advance because you need to build it into the model and the texture as you progress.”

The CG vehicles are mostly used for complex stunts, “which are definitely an 11 on the scale,” says Downs. Along with the CG vehicles, digi-doubles of the actors were also used for the various stunt work. “They are fairly straightforward, though we had a couple shots where we got close to the digi-doubles, so they needed to be at a high level of quality,” he adds. The Hattie digi-double proved the most difficult due to the hair simulation, which had to match the action on set, and the cloth simulation, which had to replicate the flow of her clothing.

“She has a loose sweater on during the Chernobyl sequence, which required some simulation to match the plate,” Downs adds, noting that the artists built the digi-doubles from scratch, using scans of the actors provided by production for quality checks.

The final beat of the Chernobyl escape comes with the chimney collapse. As the chase through Chernobyl progresses, Shaw tries to get Hattie to Hobbs, and Brixton tries to grab Hattie from Shaw. In the process, charges are detonated around the site, leading to the collapse of the main chimney, which just misses obliterating the vehicle they are all in as it travels down a narrow alleyway.

DNeg did a full environment build of the area for this scene, which included the entire alleyway and the chimney, and simulated the destruction of the chimney along with an explosive concussive force from the detonation. “There’s a large fireball at the beginning of the explosion that turns into a large volumetric cloud of dust that’s getting kicked up as the chimney is collapsing, and all that had to interact with itself,” Downs says of the scene. “Then, as the chimney is collapsing toward the end of the sequence, we had the huge chunks ripping through the volumetrics and kicking up more pyrotechnic-style explosions. As it is collapsing, it is taking out buildings along the way, so we had those blowing up and collapsing and interacting with our dust cloud, as well. It’s quite a VFX extravaganza.”

Adding to the chaos: The sequence was reshot. “We got new plates for the end of that escape sequence that we had to turn around in a month, so that was definitely a white-knuckle ride,” says Downs. “Thankfully we had already been working on a lot of the chimney collapse and had the Chernobyl build mostly filled in when word came in about the reshoot. But, just the amount of effects that went into it — the volumetrics, the debris and then the full CG environment in the background — was a staggering amount of very complex work.”

The action later turns from London at the start of the film, to Ukraine for the Chernobyl sequences, and then in the third act, to Samoa, home of the Hobbs family, as the main characters seek refuge on the island while trying to escape from Brixton. But Brixton soon catches up to them, and the last showdown begins amid the island’s tranquil setting of shimmering blue ocean and lush green mountains. Some of the landscape is natural, some is man-made (sets) and some is CGI. To aid in the digital build of the Samoan environment, Glass traveled to the Hawaiian island of Kauai, where the filming took place, and shot a good amount of reference footage.

For a daring chase in Samoa, the artists built out the cliff’s edge and sent a CG helicopter tumbling down the steep incline in the final battle with Brixton. In addition to creating the fully digital Samoan roadside, CG cliff and 3D Black Hawk, the artists completed complex VFX simulations and destruction and crafted high-tech combat drones and more for the sequence.

The helicopter proved to be the most challenging of all the vehicles, as it had a couple of hero moments when certain sections were fairly close to the camera. “We had to have a lot of model and texture detail,” Downs notes. “And then with it falling down the cliff and crash-landing onto the beach area, the destruction was quite tricky. We had to plan out which parts would be damaged the most and keep that consistent across the shots, and then go back in and do another pass of textures to support the scratches, dents and so forth.”

Meanwhile, DNeg London and Mumbai handled a number of sequences, among them the compelling McLaren chase, the CIA building descent and the final cave fight in Samoa. There were also a number of smaller sequences, for a total of approximately 750 shots.

One of the scenes in the film’s trailer that immediately caught fans’ attention was the McLaren escape/motorcycle transformation sequence, during which Hobbs, Shaw and Hattie are being chased by Brixton baddies on motorcycles through the streets of London. Shaw, behind the wheel of a McLaren 720S, tries to evade the motorbikes by maneuvering the prized vehicle underneath two crossing tractor trailer rigs, squeezing through with barely an inch to spare. The bad news for the trio: Brixton pulls an even more daring move, hopping off the bike while grabbing onto the back of it and then sliding parallel inches above the pavement as the bike zips under the road hazard practically on its side; once cleared, he pulls himself back onto the motorbike (in a memorable slow-motion stunt) and continues the pursuit thanks to his cybernetically altered body.

Chris Downs

According to Stuart Lashley, DNeg VFX supervisor, this sequence contained a lot of bluescreen car comps in which the actors were shot on stage in a McLaren rigged on a mechanical turntable. The backgrounds were shot alongside the stunt work in Glasgow (playing as London). In addition, there were a number of CG cars added throughout the sequence. “The main VFX set pieces were Hobbs grabbing the biker off his bike, the McLaren and Brixton’s transforming bike sliding under the semis, and Brixton flying through the double-decker bus,” he says. “These beats contained full-CG vehicles and characters for the most part. There was some background DMP [digital matte-painting] work to help the location look more like London. There were also a few shots of motion graphics where we see Brixton’s digital HUD through his helmet visor.”

As Lashley notes, it was important for the CG work to blend in with the surrounding practical stunt photography. “The McLaren itself had to hold up very close to the camera; it has a very distinctive look to its coating, which had to match perfectly,” he adds. “The bike transformation was a welcome challenge. There was a period of experimentation to figure out the mechanics of all the small moving parts while achieving something that looked cool at the same time.”

As exciting and complex as the McLaren scene is, Lashley believes the cave fight sequence following the helicopter/tractor trailer crash was perhaps even more of a difficult undertaking, as it had a particular VFX challenge in terms of the super slow-motion punches. The action takes place at a rock-filled waterfall location — a multi-story set on a 30,000-square-foot soundstage — where the three main characters battle it out. The film’s final sequence is a seamless blend of CG and live footage.

Stuart Lashley

“David [Leitch] had the idea that this epic final fight should be underscored by these very stylized, powerful impact moments, where you see all this water explode in very graphic ways,” explains Lashley. “The challenge came in finding the right balance between physics-based water simulation and creative stylization. We went through a lot of iterations of different looks before landing on something David and Dan [Glass] felt struck the right balance.”

The DNeg teams used a unified pipeline for their work, which includes Autodesk’s Maya for modeling, animation and the majority of cloth and hair sims; Foundry’s Mari for texturing; Isotropix’s Clarisse for lighting and rendering; Foundry’s Nuke for compositing; and SideFX’s Houdini for effects work, such as explosions, dust clouds, particulates and fire.

With expectations running high for Hobbs & Shaw, filmmakers and VFX artists once more delivered, putting audiences on the edge of their seats with jaw-dropping VFX work that shifted the franchise’s action into overdrive yet again. “We hope people have as much fun watching the result as we had making it. This was really an exercise in pushing everything to the max,” says Lashley, “often putting the physics book to one side for a bit and picking up the Fast & Furious manual instead.”

Sextuplets

When actor/comedian/screenwriter/film producer Marlon Wayans signed on to play the lead in the Netflix original movie Sextuplets, he was committing to a role requiring an extensive acting range. That’s because he was filling not one but seven different lead roles in the same film.

In Sextuplets, directed by Michael Tiddes, Wayans plays soon-to-be father Alan, who hopes to uncover information about his family history before his child’s arrival and sets out to locate his birth mother. Imagine Alan’s surprise when he finds out that he is part of “identical” sextuplets! Nevertheless, his siblings are about as unique as they come.

There’s Russell, the nerdy, overweight introvert and the only sibling not given up by their mother, with whom he lived until her recent passing. Ethan, meanwhile, is the embodiment of a 1970s pimp. Dawn is an exotic dancer who is in jail. Baby Pete is on his deathbed and needs a kidney. Jaspar is a villain reminiscent of Austin Powers’ Dr. Evil. Okay, that is six characters, all played by Wayans. Who is the seventh? (Spoiler alert: Wayans also plays their mother, who was simply on vacation and not actually dead as Russell had claimed.)

There are over 1,100 VFX shots in the movie. None, really, involved the transformation of the actor into the various characters — that was done using prosthetics, makeup, wigs and so forth, with slight digital touch-ups as needed. Instead, the majority of the effects work resulted from shooting with a motion-controlled camera and then compositing two (or more) of the siblings together in a shot. For Baby Pete, the artists also had to do a head replacement, comp’ing Wayans onto the body of a much smaller actor.

“We used quite a few visual effects techniques to pull off the movie. At the heart was motion control, [which enables precise control and repetition of camera movement] and allowed us to put multiple characters played by Marlon together in the scenes,” says Tiddes, who has worked with Wayans on multiple projects in the past, including A Haunted House.

The majority of shots involving the siblings were done on stage, filmed on bluescreen with a TechnoDolly for the motion control, as it was impractical to fit the large rig inside an actual house for filming. “The goal was to find locations that had the exterior I liked [for those scenes] and then build the interior on set,” says Tiddes. “This gave me the versatility to move walls and use the TechnoDolly to create multiple layers so we could then add multiple characters into the same scene and interact together.”

According to Tiddes, the team approached exterior shots similarly to interior ones, with the added challenge of shooting the duplicate moments at the same time each day to get consistent lighting. “Don Burgess, the DP, was amazing in that sense. He was able to create almost exactly the same lighting elements from day to day,” he notes.

Michael Tiddes

So, whenever there was a scene with multiple Wayans characters, it would be filmed on back-to-back days, one character per day. Tiddes usually started off with Alan, the straight man, to set the pace for the scene, using body doubles for the other characters. Next, the director would work out the shot with the motion control until the timing, composition and so forth were perfected. Then he would hit the Record button on the motion-control device, and the camera would repeat the same exact move over and over as many times as needed. The next day, the shot was replicated with another character: the camera would move automatically, and Wayans would have to hit the same marks at the same moments established on the first day.
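Conceptually, the record-and-replay step works like keyframe playback: the rig samples the operator’s move once, then interpolates those samples identically on every subsequent pass. The sketch below is purely illustrative; it is not a real motion-control API, and the axes and class names are invented for the example.

```python
# Illustrative sketch of record-once, replay-identically motion control.
# Not a real moco API; the axes and classes are hypothetical.
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class Sample:
    t: float      # seconds since the start of the move
    pan: float    # degrees
    tilt: float   # degrees
    boom: float   # metres

class RecordedMove:
    """Stores one hand-operated camera move and replays it exactly."""
    def __init__(self):
        self.samples = []

    def record(self, sample):
        self.samples.append(sample)

    def value_at(self, t):
        # Linearly interpolate between the two nearest recorded samples,
        # so every replay traces the same path at the same speed.
        times = [s.t for s in self.samples]
        i = bisect_left(times, t)
        if i <= 0:
            return self.samples[0]
        if i >= len(self.samples):
            return self.samples[-1]
        a, b = self.samples[i - 1], self.samples[i]
        u = (t - a.t) / (b.t - a.t)
        return Sample(t,
                      a.pan + u * (b.pan - a.pan),
                      a.tilt + u * (b.tilt - a.tilt),
                      a.boom + u * (b.boom - a.boom))
```

Because each day’s pass queries the same recorded curve, the plates line up frame for frame, which is what makes the later compositing possible.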

“Then we’d do it again on the third day with another character. It’s kind of like building layers in Photoshop, and in the end, we would composite all those layers on top of each other for the final version,” explains Tiddes.
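In compositing terms, that layering maps directly onto a stack of “over” merges, one per character pass. As a rough sketch of the idea in Nuke’s Python API (not the production script; the file names are hypothetical, and in practice each foreground pass would first be keyed or rotoscoped to isolate the character):

```python
# Minimal sketch: stack several motion-control passes with "over" merges.
# File names are hypothetical; runs inside Nuke's script editor.
import nuke

plates = [
    "alan_pass.%04d.exr",     # day 1: the straight man sets the pace
    "russell_pass.%04d.exr",  # day 2: same camera move, next character
    "ethan_pass.%04d.exr",    # day 3
]

stream = nuke.nodes.Read(file=plates[0])
for path in plates[1:]:
    fg = nuke.nodes.Read(file=path)
    merge = nuke.nodes.Merge2(operation="over")
    merge.setInput(0, stream)  # B input: the layers built so far
    merge.setInput(1, fg)      # A input: the next character pass
    stream = merge

write = nuke.nodes.Write(file="sextuplets_comp.%04d.exr")
write.setInput(0, stream)
```

Because the TechnoDolly repeats the move exactly, the passes register without retracking; the roto and keying described below handle the moments when one character crosses in front of another.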

When one character would pass in front of another, it became a roto’d shot. Oftentimes a small bluescreen was set up on stage to allow for easier rotoscoping.

Image Engine was the main visual effects vendor on the film, with Bryan Jones serving as visual effects supervisor. The rotoscoping was done using a mix of SilhouetteFX’s Silhouette and Foundry’s Nuke, while compositing was mainly done using Nuke and Autodesk’s Flame.

Make no mistake … using the motion-controlled camera was not without challenges. “When you attack a scene, traditionally you can come in and figure out the blocking on the day [of the shoot],” says Tiddes. “With this movie, I had to previsualize all the blocking because once I put the TechnoDolly in a spot on the set, it could not move for the duration of time we shot in that location. It’s a large 13-foot crane with pieces of track that are 10 feet long and 4 feet wide.”

In fact, one of the main reasons Tiddes wanted to do the film was because of the visual effects challenges it presented. In past films where an actor played multiple characters in a scene, usually one character is on one side of the screen and the other is on the other side, with a basic split-screen technique used to join them. “For me to do this film, I wanted to visually do it like no one else has ever done it, and that was accomplished by creating camera movement,” he explains. “I didn’t want to be constrained to only split-screen lock-off camera shots that would lack energy and movement. I wanted the freedom to block scenes organically, allowing the characters the flexibility to move through the room, with the opportunity to cross each other and interact together physically. By using motion control to re-create the same camera movement and then composite the characters into the scene, I was able to develop a different visual style than previous films, creating a heightened sense of interaction between two or more characters on screen while simultaneously giving the camera dynamic movement and injecting energy into the scene.”

At times, Gregg Wayans, Marlon’s nephew, served as his body double. He even appears in a very wide shot as one of the siblings, although that occurred only once. “At the end of the day, when the concept of the movie is about Marlon playing multiple characters, the perfectionist in me wanted Marlon to portray every single moment of these characters on screen, even when the character is in the background and out of focus,” says Tiddes. “Because there is only one Marlon Wayans, and no one can replicate what he does physically and comedically in the moment.”

Tiddes knew he would be challenged going into the project, but the process was definitely more complicated than he had initially expected — even with his VFX editorial background. “I had a really good starting point as far as conceptually knowing how to execute motion control. But it’s not until you get into the moment and start working with the actors that you really understand and digest exactly how to pull off the comedic timing needed for the jokes with the visual effects,” he says. “That is very difficult, and every situation is unique. There was a learning curve, but we picked it up quickly, and I had a great team.”

A system was established that worked for Tiddes and Burgess, as well as Wayans, who had to execute and hit certain marks and look at proper eyelines with precise timing. “He has an earwig, and I am talking to him, letting him know where to look, when to look,” says Tiddes. “At the same time, he’s also hearing dialogue that he’s done the day before in his ear, and he’s reacting to that dialogue while giving his current character’s lines in the moment. So, there’s quite a bit going on, and it all becomes more complex when you add the character and camera moving through the scene. After weeks of practice, in one of the final scenes with Jaspar, we were able to do 16 motion-controlled moments in that scene alone, which was a lot!”

At the very end of the film, the group tested its limits and had all six characters (mom and all the siblings, with the exception of Alan) gathered around a table. That scene was shot over a span of five days. “The camera booms down from a sign and pans across the party, landing on all six characters around a table. Getting that motion and allowing the camera to flow through the party onto all six of them seamlessly interacting around the table was a goal of mine throughout the project,” Tiddes says.

Other shots that proved especially difficult were those of Baby Pete in the hospital room, since the entire scene involved Wayans playing three additional characters who are also present: Alan, Russell and Dawn. And then they amped things up with the head replacement on Baby Pete. “I had to shoot the scene and then, on the same day, select the take I would use in the final cut of the movie, rather than select it in post, where traditionally I could pick another take if that one was not working,” Tiddes adds. “I had to set the pace on the first day and work things out with Marlon ahead of time and plan for the subsequent days — What’s Dawn going to say? How is Russell going to react to what Dawn says? You have to really visualize and previsualize all the ad-libbing that was going on and work it out right there in the moment and discuss it, to have kind of a loose plan, then move forward and be confident that you have enough time between lines to allow room for growth when a joke just comes out of nowhere. You don’t want to stifle that joke.”

While the majority of effects involved motion control, there is a scene that contains a good amount of traditional effects work. In it, Alan and Russell park their car in a field to rest for the night, only to awake the next morning to find they have inadvertently provoked a bull, which sees red, literally — both from Alan’s jacket and his shiny car. Artists built the bull in CG. (They used Maya and SideFX’s Houdini to build the 3D elements and rendered them in Autodesk’s Arnold.) Physical effects were then used to lift the actual car to simulate the digital bull slamming into the vehicle. In some shots of the bull crashing into the car doors, a 3D car was used to show the doors being damaged.

In another scene, Russell and Alan catch a serious amount of air when they crash through a barn, desperately trying to escape the bull. “I thought it would be hilarious if, in that moment, cereal exploded and individual pieces flew wildly through the car, while [the cereal-obsessed] Russell scooped up one of the cereal pieces mid-air with his tongue for a quick snack,” says Tiddes. To do this, “I wanted to create a zero-gravity slow-motion moment. We shot the scene using a [Vision Research] high-speed Phantom camera at 480fps. Then in post, we created the cereal as a CG element so I could control how every piece moved in the scene. It’s one of my favorite VFX/comedy moments in the movie.”
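For reference, footage captured at 480fps and played back at a standard 24fps runs 20 times slower than real time (480 / 24 = 20), which is what gives the CG cereal its floating, zero-gravity feel.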

As Tiddes points out, Sextuplets was the first project on which he used motion control, which let him create motion with the camera and still have the characters interact, giving the subconscious feeling they were actually in the room with one another. “That’s what made the comedy shine,” he says.


Karen Moltenbrey is a veteran writer/editor covering VFX and post production.

Andy Williams to head up new DnegTV in LA

Oscar-winning VFX house Double Negative (Dneg), which has its headquarters in London, is opening a studio in LA focusing on visual effects for television. Heading up DNegTV:LA is Andy Williams, former Stargate Studios head of production.

DNegTV, the television arm of Double Negative, was formed in 2013. It currently provides VFX for television shows including Emerald City (NBC), BrainDead (CBS), Fortitude (Sky) and The Young Pope (Sky/HBO).

“Since our inception, we’ve enjoyed excellent working relationships with many of the major US production companies, and to ensure we can continue to offer and expand our high levels of service, it’s become key for us to have an LA presence,” explains DNegTV executive producer/co-founder Louise Hussey. “The fact that we will now be able to provide local facilities, support and investment in the US is very important to us and to our future plans.”

Williams has over 20 years’ experience in television. Prior to his time at Stargate, he spent seven years as head of production and executive producer at DIVE (now Alkemy X) in New York. During that time, he also served as DI producer, VFX producer and VFX production supervisor on projects including The Leftovers, Silver Linings Playbook, The Visit, Power, The Road and How to Get Away With Murder.

“With the elevated ambitions of networks and streaming service content providers, the demands for quality are higher than ever before,” explains Williams. “DNegTV is suited to leverage the creativity and pipeline of an Oscar-winning facility, and then harness those resources in response to the budget and scheduling demands of TV clients. Opening in Los Angeles means we can make that more accessible to US-based productions and expand DNeg’s footprint in television. For me, it’s a chance to forge something new with the full support of one of the best brands in the business. It was just too attractive a collaboration to pass up.”

We asked Williams about the set-up in LA. He said this: “The physical facility presence of DNegTV:LA is still in its development phase. That said, not unlike Double Negative’s operations in Vancouver and Mumbai, any facility in Los Angeles will be modeled after, and tie into, London’s pipeline and toolset. The intent is to make sure that all studios operate with the same integration of tech, security and employee support that DNeg is known for. More to come as things develop on this front.”

‘Ex Machina’ VFX team gets Oscar

Artists from London’s Double Negative and Milk VFX took home the Oscar for Best Visual Effects for their work on Alex Garland’s Ex Machina. In winning, Milk’s co-founder, Sara Bennett, became only the second woman ever to win the Academy Award for VFX — the first was Suzanne Benson, for her work on Aliens, at the 59th Academy Awards.

Bennett got the gold along with Double Negative’s Andrew Whitehurst, Paul Norris and Mark Ardington. This is Dneg’s second win in as many years, taking home the statue for work on last year’s Interstellar.

“I am beyond excited!! We are thrilled and honored to be recognized by The Academy for our work on Ex Machina,” says Bennett. “It was a privilege to work with Alex Garland and to bring his incredible vision to life, alongside Andrew Whitehurst and the Dneg team. I would love to see more women in prominent creative roles in our industry — I was a little shocked to find out I was the third-ever female VFX Oscar nominee.”

“I’m still in shock, I think, but what an incredible experience and what an amazing group of people to represent,” says Double Negative’s Whitehurst.

Norris says, “It was an honor and privilege to represent our team at the 88th Academy Awards — it was an amazing experience that’s still sinking in!”

Double Negative

“The whole crew did an incredible job and should be rightfully proud,” says Ardington. “Our work stands on the shoulders of every other department from day one — from reading Alex Garland’s amazing script, through to the beautiful cinematography, striking production design, ingenious costume and make-up, awesome soundtrack and, of course, the wonderful performances from Alicia, Oscar, Domhnall and Sonoya. What a journey this has been.”

Double Negative delivered 303 shots for Ex Machina, but that number is slightly deceptive due to the length of the shots — the average shot ran eight seconds, and the longest ran 1,800 frames (75 seconds at 24fps).

According to Whitehurst, “The work on Ex Machina was focused around the creation of Ava, a robotic character, realized through the careful duplication of Alicia Vikander’s performance mapped onto a fully articulated CG robotic body.”

Milk VFX, which worked on about 100 shots on the film, designed and created Ava’s CG brain, which is seen during the conversation between Nathan and Caleb in the construction lab. For the design of Ava’s brain, Milk was briefed to use jellyfish references while incorporating a computerized “tech” feel.

Using SideFX’s Houdini, the team made the build fully procedural, with a strong emphasis on being able to quickly choreograph and combine major features to reduce the turnaround of versions during the look-development phase.

Milk VFX

Starting from a sculpted core mesh, a complex setup was built to create the main features of the brain, including frills, tentacles, pores, antennae, wireframe cages and air bubbles. The Milk team opted for noise-driven animation over simulations in order to avoid having to rig and animate each shot separately. Collisions were solved by post-deforming wires and meshes using volume-collision approaches where necessary. The resulting brain asset was then brought into Maya for shading and lighting using Arnold, and finally composited in Nuke.
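The appeal of noise-driven animation is that any frame can be evaluated on its own, with no rig or simulation history to manage per shot. The toy sketch below illustrates only that general idea; the noise function is a cheap stand-in for Houdini’s built-in noise, and the point data layout is invented for the example.

```python
# Toy sketch of noise-driven deformation (not Milk's setup): each point is
# pushed along its normal by a smooth, time-varying noise value, so any
# frame can be computed directly, with no simulation history.
import math

def smooth_noise(x, y, z, t):
    """Cheap, smooth pseudo-noise in [-1, 1]; a stand-in for real Perlin/curl noise."""
    return (math.sin(1.7 * x + 0.9 * t) *
            math.cos(2.3 * y - 1.1 * t) *
            math.sin(1.3 * z + 0.7 * t))

def displace(points, normals, t, amplitude=0.05, frequency=2.0):
    """Offset each point along its normal by a noise amount that varies over time."""
    out = []
    for (px, py, pz), (nx, ny, nz) in zip(points, normals):
        d = amplitude * smooth_noise(frequency * px, frequency * py, frequency * pz, t)
        out.append((px + d * nx, py + d * ny, pz + d * nz))
    return out

# Frame 120 can be evaluated directly, without stepping through frames 1-119:
frame_120 = displace([(0.0, 1.0, 0.0)], [(0.0, 1.0, 0.0)], t=120 / 24.0)
```

Collisions, as noted above, were then resolved afterward rather than inside the animation step itself.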

Milk was also tasked with devising a look and style for Ava’s visual point of view — seen at the start of the film when lead character Caleb wins the office lottery, and in the bathroom scene. A range of supporting 2D shots was also created, including environment fixes, compositing and monitor inserts with animated graphics.