

postPerspective Hires Post Industry Veteran Alyssa Heater

By Randi Altman

postPerspective is happy to welcome our newest team member, Alyssa Heater, who joins as sales and business development manager. Many of you may know Alyssa from her time at Technicolor and then Streamland, where she was marketing manager.

Alyssa and I have worked together for years, collaborating on user/artist stories while she was at Technicolor and then Streamland and its companies. She was always so professional, talented and such a pleasure to work with, so when she became available for writing last year, I jumped at the chance to work with her again. Then when a full-time spot opened within postPerspective for sales and business development, it was clear she was a perfect fit. She speaks our common language.

Alyssa has been working in the entertainment industry for over a decade. After graduating from UC Santa Cruz with a degree in film and digital media, she moved to Los Angeles where she landed at Laser Pacific and then moved on to producing, marketing and sales roles at Technicolor and Streamland Media and its subsequent brands.

Most recently, she led marketing strategy and branding for Picture Shop, Formosa Group, Ghost VFX and Picture Head under the Streamland Media umbrella. This included working with artists, partnerships, events and trade shows, PR, social media and beyond. Over the past year writing for us, she has interviewed many working in film, TV and spots.

As sales and business development manager at postPerspective, Alyssa will be responsible for maintaining and growing partnerships, building the postPerspective brand and expanding its reach.

I can’t wait for everyone to meet her!

 


Review: Five Cool Tools for the Summer

By Brady Betzel

With the slower pace of summer upon us, it’s time to take an inventory of what gear you can upgrade to take your setup to the next level. From cleaner audio separation to precise video signal measurements, I am always looking at the latest and greatest production and post gear. While new plugins and software updates are great, peripherals, like a great stand-up desk or a high-quality/low-cost microphone, can make your workday a bit better.

If you’re hoping to travel this summer while continuing to work, we have a few suggestions that will support your mobile workstation:

Dark Matter Sentry Streaming Microphone by Monoprice
Whether you are on Zoom meetings for five hours a day or just want a quick and easy way to record a voiceover directly inside of Adobe Premiere Pro without complicated software installations, a high-quality microphone is the easiest way to impress.

Over the last few years, like you, I’ve found myself on Zoom meetings more than I would like. One constant is poor video and audio quality. Occasionally, you get someone crazy who has a spare Red camera around and brings an incredible video presence to their Zoom, but more often than not, people use their laptop’s built-in hardware, which, to be fair, isn’t always terrible.

Apple includes very compelling cameras and microphones in their products, but for all the editors who don’t have a Yeti mic lying around, the Dark Matter Sentry Streaming Microphone by Monoprice rides the line between quality and cost. The Dark Matter Sentry retails for $99.99 but is currently selling for $74.98.

The Dark Matter Sentry is a hefty, well-designed, low-maintenance microphone that’s perfect for temp voiceover recordings or live Twitch streams. It offers four pickup/polar patterns: cardioid (directly in front of the mic), stereo (left/right side of the mic), bidirectional (front/back of the mic) and omnidirectional (360 recording). It even has a headphone jack below the mic gain and headphone volume knobs.

Installing the Dark Matter Sentry is as easy as plugging the USB-C to USB-A cable into your computer and choosing one of the five LED colors by pushing the button on the bottom of the mic. The spider-style mic stand included with the Dark Matter Sentry is surprisingly sturdy. You can also attach the mic to a mic boom via the ⅝-inch threaded mount point.

When comparing the Dark Matter Sentry against other popular streaming-style mics, the current $74.98 price tag is less than a third of the price of similar competing models, like the Shure MV7, which retails for $249.99. Check out the Dark Matter Sentry site because the price seems to change every day.

AJA Io X3
With live streaming and small, home-studio-based multicam workflows gaining popularity, having reliable I/O hardware is a must. AJA has been around and producing top-quality I/O gear for a long time. The AJA Io X3 is a Thunderbolt 3-based, multi-channel 2K/HD/SD input/output hardware solution. Whether you are looking to switch/record four HD streams at once in OBS or simply stream your timeline to clients viewing remotely using the AJA Helo Plus, the Io X3 is a solid workhorse retailing for $1,759.

Connections include four bidirectional 3G-SDI ports with 16-channel embedded audio and HDMI I/O with eight-channel embedded audio, including HDR transfer characteristic recognition. The device supports and automatically detects PQ, HLG, HDR10, HDR10+ and Dolby Vision.

The AJA Io X3 is a great solution for most editing or color-grading software, except Blackmagic’s DaVinci Resolve. For Resolve you will want to grab something from the UltraStudio hardware line. But for apps like Adobe Premiere or Avid Media Composer, AJA hardware works well, with machine control for tape-based laybacks, Apple M1 chip support and even the ability to power the unit via its 12V XLR connection.

The Io X3 is very flexible unless you use Resolve.

Nobe OmniScope
One of the most underused tools I see in the streamer and content-creator world is scopes. The ability to read and view luminance, saturation and color spaces in a technically accurate way is vital. Knowing how to read scopes is one of those skills that separates the hobbyists from the professionals.

Regardless of whether you work on streaming videos, wedding videos or the Super Bowl, you are a professional in my eyes. Nobe OmniScope has brought us professional-level scopes at consumer-level prices. The Video version of Nobe OmniScope retails for $235, while the Pro version retails for $399. Both include one year of updates, after which you can renew for $70 per year for the Video version or $99 per year for the Pro version.

The Pro version has features like multiple input sources, 4:4:4 RGB 12-bit support through DeckLink and UltraStudio, Syphon and Spout (direct GPU memory-sharing), 3D Color Cube, min-max readings, error logger, multiple quality control features, OpenColorIO 2, native Stream Deck support, NDI source/scope output, HDR support, ACES color science, PQ ST 2084/HLG scales and two simultaneous licenses for one price.

You can run Nobe OmniScope with whatever software you are using or on a separate system with signal inputs. In the past, I’ve always been a fan of separate systems for running apps like this. However, these days it is not necessary. Newer systems with high-end GPUs can run Nobe OmniScope and Resolve concurrently with few slowdowns. But if you do have a spare Mac Mini or older Windows-based PC lying around, you might want to think about using it just for input/output of the Nobe OmniScope.


There is a great series of six instructional videos by Kevin P. McAuliffe that covers most of the Nobe OmniScope features. My favorite is blanking detection in the quality control features, which are part of the Pro version of Nobe OmniScope. In online editing, blanking is one of the trickiest errors to find.

Even using a professional output monitor, some blanking will get missed. But with Nobe OmniScope Pro, when QC tools are enabled, Nobe OmniScope will highlight any specified blanking areas in bright red if it thinks it’s an error. In the future, I hope Nobe OmniScope will add automated QC tools that will essentially export a preliminary QC report, including basic errors like blanking, illegal color values, black frames, etc. It would be an amazing feature to add to this extensive toolset that every colorist and online editor should own. Find out more at the Time in Pixels website.

KRK’s GoAux 4 Portable Monitors
I’m a sucker for great speakers and headphones. As a teenager, I worked at Best Buy (pre-Geek Squad) as a computer repair technician and eventually a car stereo installer. That is when I realized I love great-quality speakers and components. Once I began working at a mix house as an online editor, it reaffirmed my love for ultrahigh-quality sound setups, even if that wasn’t my primary job responsibility.

So besides having amazing headphones like the Audeze MM-500, which I recently reviewed, how do you get studio-quality sound setups from portable speaker systems? KRK Systems has you covered with the GoAux 4 monitor kit, which offers some of the smallest, most portable, tunable, powered-nearfield monitors. The GoAux 4 monitors retail for $419 with free two-day shipping.


The GoAux 4 monitors ship in an awesome, protective nylon carrying bag that holds both monitors, the stands and the Auto ARC microphone, with room for cables. The carrying bag is one of my favorite parts — it’s compact and efficient.

The KRK Systems GoAux 4s are the upgrade from the GoAux 3s. The GoAux 4s deliver 100W of RMS power (total system power) through a 4-inch woofer and a 1-inch tweeter per speaker. Realistically, each woofer maxes out at 33W RMS and each tweeter at 17W RMS. SPL (sound pressure level, a measure of how loud the system can get) peaks at 102dB and can be sustained at 98.5dB. The frequency response runs from 55Hz to 22kHz. Compare that to the GoAux 3s, which have 60W of RMS power, a 3-inch woofer and a low end that stops at 60Hz. The larger the woofer, the lower the notes it can reproduce.

The speakers themselves only need to be connected to one power source because they share power. They feature built-in low and high frequency EQ adjustments, USB, ⅛-inch aux, RCA and ¼-inch TRS balanced stereo inputs and a Bluetooth connection. You can even connect headphones to the ⅛-inch stereo headphone output on the front of the GoAux 4s, which will automatically mute the monitors.

Physically, the monitors are small for the power they produce. They measure 8.07 inches by 5.35 inches by 5.51 inches and weigh just under 10lbs, including both speakers, stands, carrying bag and included accessories. But what really got my attention was the Auto ARC microphone that is included with the GoAux 4s. Auto ARC is an automatic room correction feature. Simply put, it allows you to move the GoAux 4s to different physical mixing environments while keeping similar audio qualities. Think of traveling between a studio and a hotel room to mix audio. Of course, the rooms will have much different acoustic setups. The GoAux 4s’ Auto ARC mic helps correct for these differences, leading to similar mixing environments. It won’t be perfect or a replacement for a true studio setup, but equalizing the environment is one step closer to being able to mix anywhere.

To run the Auto ARC, you attach the included Auto ARC mic to the front left speaker’s Auto ARC mic input, hold or place the mic at ear level where the user will be sitting, and press and hold the Auto ARC button on the rear of the left speaker. It will produce 25 tones and then repeat. This goes on for a couple of minutes, during which time the mic must remain still. Once it’s complete, a low-frequency tone sounds. I tested this in multiple locations, including a bedroom, a bathroom and a studio. While it isn’t perfect, the Auto ARC setup brought the many different sound environments closer together.

Grab a set for $419, including free two-day shipping.

FlexiSpot E5 Standing Desk
By Guest Reviewer Randi Altman, Editor-in-Chief postPerspective

Having worked at a computer my entire adult life, developing achy wrists, a stiff neck and a hunchback (not Notre Dame-level, but my posture is not great), I have always wanted to try a stand-up desk. And thanks to the FlexiSpot Dual Motor Standing Desk, I finally got my chance.

The desk arrived in different boxes over the course of a few days, so it was exciting to see what was coming next. Each box was clearly labeled, letting me know exactly what was in each before opening. I wonder if they break up the shipping on purpose so no one has all the parts of the desk at one time. (Yes, my New York Spidey senses are always on high alert.)

For the record, I don’t build stuff. I’ll spackle and paint whatever you need me to, but reading directions and putting stuff together is not my strength. Therefore, I recruited my husband, who started his furniture-building career on something called Skorük Mörk, our first bedroom set from IKEA. As I watched him from the couch while scrolling through my phone, he seemed to move along nicely… he describes it as “slow but steady,” and “easier than I thought it would be.” All told, his very casual build was probably just under an hour. Not too bad!

When complete, we were both sort of giddy — he with pride for a job well done, and me with the excitement of testing it out. I was immediately impressed with the quality hardware on the desk — from the moveable stand to the bamboo work surface that the company says can hold up to 287 pounds, the two-level workspace and the ability to raise and lower it with the push of a button, depending on whether I am feeling stand-y or sit-y.

The keypad panel has three height presets, and they are very easy to use. I stand at a towering 5 feet 3 inches tall, while my nephew who has been visiting is 6 feet 4 inches — genetics are weird. Both of us were able to find a comfortable height for working, whether sitting or standing. Oh, there is also a sit-stand reminder that allows you to set a timer reminding you to switch working postures from time to time.

Most of my workday is spent in Word and Photoshop with a little Resolve thrown in, and it has all been a breeze, standing or sitting. I also use an external keyboard and a pen/tablet, and there is plenty of room for all.

I have also come to really enjoy doing video calls while standing. I just feel healthier being able to move around a little bit from side to side, shifting my weight while chatting, especially after lunch. It’s my new normal.

The only negative I have relates to all the plugs/wires that hang under the desk while sitting – but I have to look deeper at this, because it could very well be user error.

For a more detailed review of the company’s similar but next-level E7 desk, give Cory Choy’s review a read.

Pricing can be found here.


Brady Betzel is an Emmy-nominated online editor at Margarita Mix in Hollywood, working on shows like Life Below Zero and Uninterrupted: The Shop. He is also a member of the Producers Guild of America. You can email Brady at bradybetzel@gmail.com. Follow him on Twitter @allbetzroff.

Virtual Production Roundtable

By Randi Altman

Virtual production (VP) is here to stay. And while tools and workflows keep developing, best practices are being put in place and misconceptions are being debunked. For instance, you don’t necessarily need an LED volume for virtual production, and you don’t have to work for a big company to take advantage of some of the tools available for VP.

What are the benefits? Well, there are many. The limitations of time and location are gone, as are concerns about magic hour or wind or rain. And creative decisions can be made right on the stage, putting even more control in the hands of the filmmaker.

But VP workflows are still developing, with many companies just creating what works for them. Those finding success realize you need to wear a variety of hats to make it work. And prepro, tracking and education about the process have never been more important.

To find out more about what’s happening in virtual production, we reached out not only to some of the companies that make VP tools but also to some of those who put those tools to use. They told us what’s working and what’s needed.

Magnopus’ Ben Grossmann

Magnopus is a content-focused technology company bridging the physical and digital worlds. The company has 175 engineers, designers and artists working in the US and UK and focuses on AR, VR, virtual production and metaverse-related technologies.


Ben Grossmann

Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?
Some parts are consistent, but a lot of people are trying different things to improve the workflows. They’re still pretty rough. Collaboration is still hard. A lot of different companies have their own recipe for success that probably suits their business strengths more than anyone else’s. Sometimes they’re confusing the market by inventing funky names for things that aren’t really unique at all or using jargon that people don’t understand consistently. In the end, this hurts everyone because if productions can’t understand virtual production, they aren’t comfortable. The more we make it complicated or unclear on purpose, the less people want to do it.

Can you talk about pain points related to VP workflows?
That question could turn this magazine into a book. The biggest challenge is the complexity of creating assets that are photoreal and performant. Unless you’re playing back footage that was shot, you’re probably having to create the environments from scratch, even if you captured them with photogrammetry. You can take some shortcuts with matte paintings, but creating content for virtual production can be a heavy lift, and producers aren’t used to planning for it. They’re often uncomfortable front-loading that much money in a budget.

An LED volume in the final stages of construction and calibration through a partnership between MBS, Fuse TG and Magnopus at Gold Coast Studios in New York for an upcoming production.

If you’re shooting in an LED stage, the budget for that remains a challenge.  The costs for all these things make sense when you investigate them closely (and they are generally reasonable), but productions haven’t gotten comfortable or confident with them yet. They’re not cheap and sometimes “what you need” and “what you’re paying for” can be unclear.

Aside from those two items, we could really use another year or two before the software gets more stable. It’s rarely “smooth sailing” right now (anyone who says it is probably spends most of their time saying, “You can’t do that”). But it absolutely works if you pay close attention to it and have a supportive director, cinematographer, and production designer.

What tools are paramount to virtual production workflows?
You need a game engine, and you need something to sync assets across all the people collaborating. Most commonly, that’s Unreal Engine and Perforce. You also need filmmaker buy-in and comfort with the process. Without those things, you’re going to have a bumpy ride.

What would surprise people about virtual production workflows? 
The need to make assets before you shoot them and the time it takes. That seems so silly, but people have gotten so used to “digital” happening months after we shoot that when you say you need to start building months in advance of shooting, they don’t believe it.

Director/EP Jonathan Nolan has been collaborating with Ben Grossmann and the team at Magnopus, in association with Kilter Films, on the Fallout TV series.

Physical set construction has had a hundred years to mature. Crews are really fast and have the experience to be pretty accurate about time and cost. Virtual art departments haven’t. And you don’t want to show up to shoot on an expensive LED stage and go, “Nope, this looks like a video game.”

How can those who don’t have access to big stages work in VP? 
You definitely don’t need an LED stage for everything. If you don’t need the reflections and lighting integration on the live-action plates, then you can use virtual production on greenscreen and still see the composite for editorial with a real-time keyer. So you still get benefits. If you don’t need a stage because the content is mostly CG and not live action, then you can work in virtual production and go for final shots right out of the game engine for a lot of things.

I’d also say that you could replace previsualization with virtual production. People will argue with me here, but most filmmakers understand previsualization to be, “I give a script and some storyboards to a team of people, who make shots and edits from them that we can use as reference. We give them feedback, and they revise them.”

EPs Jonathan Nolan, Lisa Joy, Athena Wickham and Margot Lulick have been collaborating with the teams on developing this tech and production for the past two years. The volume is 74 feet wide by 91 feet deep and 22 feet tall.

Whereas virtual production is: “We build out our sets and maybe some animations, and then the director and cinematographer go in and shoot the scenes like they would in a live-action set.” There’s stronger creative ownership in the latter than the former. It doesn’t always work like this, of course, but I’m summarizing general perceptions here.

What is the biggest value of VP?
Creatively, everyone sees what movie they’re making at the same time. If all we can see is a part of a shot when we’re looking through the camera, then everyone will imagine something different, and it becomes a lot harder to make the movie. The biggest value of virtual production is that it brings much of the creativity from post up into production.

What is the biggest misconception?
It’s cheaper! Sometimes, yes. Someday, definitely. Today? You’ll have to work hard and smartly to make that true.

SGO’s Adrian Gonzalez

SGO’s Mistika combines a highly efficient workflow with image quality that enables the complete creation of any immersive content, including virtual sets — from initial optical flow stitching, color grading and VFX all the way to automated final deliverables.


Adrian Gonzalez

Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?
Based on conversations with our clients who work in VP, there appear to be some ground rules that the vast majority is following at the moment. However, it seems that general industry standards are yet to be built. As a (smaller) post production software developer with extensive experience designing innovative post workflows, especially in the immersive area, we have the ability to adapt to market needs and quickly deliver specific features that facilitate emerging workflows, including virtual production.

Can you talk about pain points related to VP workflows?
For us, one of the biggest challenges related to VP now is the fact that there are no industry standards for display/projection formats and resolutions. Literally every virtual set out there has a custom configuration.

What tools are paramount to virtual production workflows?
From a post technology point of view, we believe that fast storage and GPU-optimized solutions that offer professional color management workflows and support several different industry-standard formats can truly be a lifesaver for content creators.

What would surprise people about virtual production workflows?
Perhaps the fact that at the moment every VP production is an R&D project in itself, as it almost always includes an innovative aspect and requires teams to adjust the (post)production workflow to the specific virtual set and not the other way around.

What about color accuracy in VP?
Color-aware workflows are a must in any professional production, including VP. When capturing in a virtual setup with LED screens, several different color spaces need to be handled. Also, the color accuracy and continuity can only be successfully achieved through industry-standard color pipelines, such as ACES.
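As a concrete illustration of what an ACES-style conversion involves — our own sketch, not Mistika’s internal color pipeline — here are a few lines of Python using the open-source colour-science library to take linear ACEScg values to display-referred Rec.709:

# A minimal sketch, not Mistika's color pipeline: convert linear ACEScg
# pixel values to display-referred Rec.709 using the colour-science library.
import numpy as np
import colour

acescg = colour.RGB_COLOURSPACES["ACEScg"]
rec709 = colour.RGB_COLOURSPACES["ITU-R BT.709"]

# A couple of example linear ACEScg pixel values (R, G, B).
pixels = np.array([[0.18, 0.18, 0.18],   # 18% gray
                   [0.50, 0.25, 0.10]])  # an arbitrary warm tone

# apply_cctf_encoding adds the Rec.709 transfer function on output.
converted = colour.RGB_to_RGB(pixels, acescg, rec709, apply_cctf_encoding=True)
print(converted)

In a real LED-wall pipeline, each display and camera gets its own transform within the same color-managed framework; the point is simply that every hop is an explicit, documented conversion rather than a guess.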

What is the biggest value of VP? 
Some of our clients who regularly work with VP would say that one of the most important aspects of producing content in a virtual set is that the crew and the actors get a real feeling of what will be seen later on screens. However, the other big value of VP is that nothing needs to be set in stone, as the technology provides the content creators with extreme levels of control, flexibility and creative freedom to change almost anything on the go.

What is the biggest misconception?
Perhaps one of the most common ones is seeing virtual production as a direct substitute for a more traditional way of producing media content. We think of it as a creative alternative. Another misconception is that VP is reserved for Hollywood-budget feature films only. But if planned wisely, it can even optimize shooting and post time and consequently reduce the overall budget.

Finally, how can those who don’t have access to these big stages still use your product or still get involved in VP?
Our products are available on the SGO website, so anyone can download them and try them out completely free of charge.

Hazimation’s Hasraf ‘HaZ’ Dulull

Dulull is a London-based VFX supervisor, technologist, producer and director, who runs Hazimation, which provides animated feature films, video games and content for the metaverse.

HaZ Dulull

Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?
If we are referring to VP LED volume shoots, then I think people are still trying to figure things out (which is great), but there is standardization happening with the setups. For example, you need a good LED wall, good tracking (like Ncam) and a system to get your Unreal Engine scenes managed onto the LED (like Disguise). There are also workshops taking place in big LED volume stages to train the next generation of VP supervisors, DPs and filmmakers.

Can you talk about pain points related to VP workflows?
I think we still have to combat things like moiré on the LED screen when shooting too close to it or focusing on it. Another one is that setup does take a while, and when you reset, it takes a while to recalibrate, so that can be a pain when you are doing aggressive shoots on a tight schedule.

What tools are paramount to virtual production workflows?
A good tracking solution to sync the physical camera with the virtual camera. Also, Unreal Engine scenes should always be optimized and tested constantly to ensure we hit the required frame rates.

What would surprise people about virtual production workflows?
That it takes time to set it up right; it’s not just putting up the LED wall, plugging in Unreal Engine and your camera and — boom — off you go! There is a lot of testing required as part of the workflow, and each VP project shoot is different in terms of scope and what is required both on the set and in the virtual world (the LED wall content).

What about color accuracy in VP? Difficult to achieve?
You know, when I was moderating a panel about VP recently, we had the awesome Paul Debevec on, and when I asked him that question, the first thing he did was whip out his light meter and measure the luminance and RGB values coming from the screen. So to answer your question, I think it’s about having the DP and the colorist work closely with the VP supervisor on the shoot to ensure the best color accuracy possible when shooting.

What is the biggest value of VP?
You can shoot contained and not worry about weather or about racing against sunlight to make your shoot day.

What is the biggest misconception?
I have seen some people shoot VP just by having an actor stand in front of the screen and that’s it… that’s the shot. It pains me to think they could have done so much more than that. They should be using the screen not just as a visual background but also as part of lighting the scene. And they should use as many real props as possible to help the integration and provide actual parallax.

The other misconception is that this is cheap. It’s not cheap at all, but if you are smart with how you use VP and spread it across your movie or TV show, then it can be efficient both production- and budget-wise. But for the love of god, please don’t just shoot VP for one shot that could have been achieved with rear projection or greenscreen.

Pixotope’s David Dowling

Pixotope is a virtual production solution for live media production. Offering both 3D real-time graphics and camera tracking, Pixotope helps content creators and broadcasters to produce XR, AR, virtual set and mixed reality content for television, online streaming and film.

David Dowling

Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?
Yes and no. Yes in the sense that the technology is mature, integration between components works, and there are people and teams out there that know what they’re doing. We spend a lot of time making sure our user experience is intuitive and stable. It’s not a science experiment.

No because users of VP are pushing boundaries with the technology to find new ways of storytelling and engaging audiences. In places, this breaks from the norms of production workflows and requires a new approach.

Can you talk about pain points related to VP workflows?
Listening to customers and the market in general, the top three pain points are almost always the same.

The lack of available talent is an industry-wide issue but is especially acute within virtual production, where a mixture of traditional production skills and a knowledge of 3D graphics workflows is required.

WePlay AniMajor

We’ve had success working with on-set media producers to help them learn and leverage virtual production tools and workflows as a companion to what they already do. For example, lighting designers mirroring their work in the physical world with virtual lights and reflections. In this way, virtual production simply becomes media production.

For the next generation, the Pixotope Education Program (PEP) is supporting universities and other educational establishments by giving them access to VP tools, experts and industry contacts.

When it comes to camera tracking, we’ve seen a lot of virtual productions struggle with getting the right data at the right time to make sure virtual and physical worlds align correctly. Here, the remedy is to make sure the tracking technology meets the requirements of the production and then ensure seamless operation with the graphics engine.

League of Legends

On the third point, a lot of complexity comes from trying to use tools in applications they were never really built for. We’ve seen how powerful Unreal Engine is, and as the underlying engine, it dominates the VP scene. However, on its own, it can lack the integrations and glue to make it reliable and easy to use in a studio and/or live environment. By building those integrations and glue around it and simplifying the UX, we can significantly reduce the complexity and increase the reliability of VP workflows, meaning less time and resources are needed to run productions.

What tools are paramount to virtual production workflows?
An implementation of Pixotope graphics and tracking.

What would surprise people about virtual production workflows?
We’ve often said that as adoption increases, virtual production will simply become media production. But as I’ve touched on, this is already the case with VP becoming just another element of production — whether you’re a set or lighting designer, a camera operator, a producer, etc.

Baltimore Ravens

For a recent production, we worked closely with a broadcaster to implement AR objects in a set. Once the various disciplines were familiar with the tools and workflows, it was very natural for them to be working with the virtual elements as they did the physical. Many on set were surprised how this became almost second nature, though none of them was more delighted than the health and safety advisor with the massive reduction in staff climbing ladders to adjust props!

What about color accuracy in VP?
Poor color accuracy, or perhaps poor color matching, can be a big “giveaway” in virtual production, especially in mixed reality productions that combine XR and AR set extensions or props. While the computer-generated AR elements can be created to precise color profiles, camera sensors can change the colors captured…and LED volumes displaying virtual backgrounds even more so.

Setting up a key

To overcome this, we developed an automated tool that enables users to compare and adjust the colors of the AR elements such that the end result matches perfectly. It’s these kinds of setup tools that can make the difference between a complex, time-consuming setup and one that simply “just works,” enabling media producers to focus on the creative.
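Pixotope’s tool is proprietary, but the underlying idea — sample the same chart patches as rendered and as captured by the camera, then fit a correction so the graphics land where the camera does — can be sketched generically. The following Python/NumPy example is purely illustrative and is not Pixotope’s implementation:

# Hypothetical sketch of chart-based color matching (not Pixotope's tool):
# fit a 3x3 matrix mapping rendered patch colors to what the camera captured,
# then apply that matrix to the AR element colors.
import numpy as np

# Linear RGB samples of the same chart patches, one row per patch.
rendered = np.array([[0.18, 0.18, 0.18],
                     [0.90, 0.10, 0.10],
                     [0.10, 0.80, 0.15],
                     [0.12, 0.15, 0.85]])
captured = np.array([[0.17, 0.19, 0.20],
                     [0.82, 0.12, 0.13],
                     [0.11, 0.74, 0.18],
                     [0.13, 0.16, 0.78]])

# Least-squares fit: captured ~= rendered @ M
M, *_ = np.linalg.lstsq(rendered, captured, rcond=None)

def match_colors(rgb):
    # Apply the fitted correction and clamp to the displayable range.
    return np.clip(rgb @ M, 0.0, 1.0)

print(match_colors(np.array([[0.5, 0.4, 0.3]])))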

What is the biggest value of VP? 
Unconstrained by the physical, the most powerful aspect of VP is that anything is possible — the limit is in the creativity. It could be argued that this is even limiting the adoption of VP; some potential users simply don’t know what to do with it.

VP can also significantly reduce costs. With faster turnaround times between sets and scenes, some users have been quoted saying it represents a 40-50% savings in time (and therefore money) compared to equivalent soundstage or backlot shoots. That’s exciting, but where VP is used to best effect is in driving audience engagement and enhancing storytelling.

What is the biggest misconception?
Probably that VP is hard and/or expensive. Sure, there will always be the big productions with huge, high-end LED volumes with headline-grabbing price tags. But getting into VP doesn’t necessarily require a massive budget. Subscription-based software running on commodity hardware can make VP affordable and provide an easy route into using virtual sets or adding photorealistic AR set extensions and props into existing productions.

The Family’s Steve Dabal

NYC’s The Family is a new virtual production film studio in Brooklyn with 3D animation, Unreal Engine, Disguise xR, Nvidia Omniverse and an LED wall.

Steve Dabal

Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?
Every week feels different. Workflow standards are falling into place, with institutions like SMPTE, VES and ASC leading the charge. Hollywood is making it happen. However, our NYC productions tend to be with independent filmmakers, so we deal with bespoke workflows.

For example, we’re doing a feature that requires the virtual art department to have photogrammetry of miniatures in Unreal Engine 5, which is a whole digital/physical workflow of its own. So depending on how the above-the-line team works, we build digital systems to support.

Can you talk about pain points related to VP workflows?
The most significant pain point for us is overhead. The processing power and hardware costs are not cheap. Don’t even get me started on what it was like getting Nvidia cards. Another pain point is that virtual production changes the cash flow to be front-loaded to preproduction, so clients have a hard time understanding that they need to release funds earlier. Fortunately, we already see hardware that could allow for lower-cost VP productions, so these problems might solve themselves.

What tools are paramount to virtual production workflows?
How nerdy should I get? On the surface level, we want production to feel as standard as possible when using these tools. Some individuals are comfortable with Google Docs, while others use Movie Magic. Figuring out the workflow in which a person or team is most comfortable is instrumental in curating virtual production tools.

Often, the backbone of a workflow is a pen and paper, so we are working with technology startups to scan documentation, digitize it and automatically get it in 3D space. Those are coming soon and will be exciting to share. We’ve found great success in making sliders and buttons on an iPad to control and manipulate Unreal Engine scenes. Same with DMX tools for lighting systems. I think our list of tools would be too long to fit on a web page.

What would surprise people about virtual production workflows?
My goal is to preserve the most natural way an artist works. So if a VP workflow is designed right, it feels more like being on-location than it does a VFX shoot. Digital tools should make the process more accessible and streamlined. I don’t want to be someone who glorifies this trend by saying, “Virtual production makes filming easier” because that’s not the case for any of our crew behind the scenes, but for the storytellers, it is pretty accurate.

What about color accuracy in VP?
This is a big one that I wish there were more standards and practices for. Coming from the VFX world, this is one of those questions that is very complex to answer. For the most part, we’ve been taking an approach of letting DPs and colorists make a show LUT that they apply to camera feeds and then a content LUT that they use for the LED content. Again, let them work how they naturally work. But the honest answer is testing — lots of lighting tests, color charts, camera tests and beyond.
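To make the show-LUT/content-LUT split concrete, here is a minimal Python sketch using the open-source colour-science library — our example with hypothetical file names, not The Family’s actual pipeline:

# Minimal sketch, not The Family's pipeline: one LUT baked into the content
# headed for the LED wall, a separate show LUT on the camera feed used for
# monitoring. File names are hypothetical.
import numpy as np
import colour

content_lut = colour.io.read_LUT("content_lut.cube")  # for LED wall content
show_lut = colour.io.read_LUT("show_lut.cube")        # for the camera feed

def prep_wall_frame(frame):
    # frame: float RGB array in the content working space.
    return content_lut.apply(frame)

def prep_monitor_frame(camera_frame):
    # camera_frame: float RGB array from the camera feed.
    return show_lut.apply(camera_frame)

# Example: push a single mid-gray pixel through each path.
pixel = np.array([[[0.18, 0.18, 0.18]]])
print(prep_wall_frame(pixel), prep_monitor_frame(pixel))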

What is the biggest value of VP?
Accessibility. We can now produce a rough cut of a movie in Unreal Engine before starting production. You can block out scenes with lighting in preproduction without needing to rent equipment. It doesn’t matter where in the world you are.

We just filmed some scenes outside a Malibu beach house, but we stayed in New York. Granted, filming in a studio isn’t a new concept, but the entire prepro process was remote leading up to one day. So many of these things were never accessible before.

What is the biggest misconception?
Giant companies are putting massive IPs on this technology, but it is still early. It’s so early. It is a feat to get all this technology to look right and design a workflow that can function for multiple varieties of projects. When we do commercial work, we’ll have clients who see the first iteration of a CG environment and say it looks “too much like a video game.” And it does, because it is made in a video game engine. It’s not until you add the cinematic skills from filmmakers that the scene will work. But they don’t even take the time to think about how absurd real-time animation is. They’re spoiled already! We’re not rendering a still frame here. Recently someone told me that yesterday’s miracle is tomorrow’s expectation.

Ncam’s Nic Hatch

Nic Hatch is co-founder of Ncam, which offers virtual production technology and solutions for film, TV and broadcast. The company’s solutions include Ncam Reality, which enables virtual production through real-time camera tracking.

Nic Hatch

 Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?
The techniques have been around for a long time, but not many productions were using them. Some of the big pieces existed, like LED walls, game engines and the type of in-camera VFX that Ncam Reality enables. But other pieces are either brand-new or nonexistent.

Virtual production is now toward the top of the early adoption phase and heading into early majority. It’s been used successfully on a number of productions but is still far from plug-and-play. The off-the-shelf tools aren’t all there, so studios are creating custom solutions to make it work. The early pandemic stage created a need for wider adoption of virtual production, but not every studio had the means, team or tools to put it into practice.

Now we’re seeing some big leaps in terms of ease of use, affordability and interoperability. Solutions are starting to appear from various hardware and software companies, and the space is rapidly evolving. It’s certainly an intriguing time.

Can you talk about pain points related to VP workflows?
Arguably the biggest struggle is the idea of changing your entire workflow. It used to be that filmmakers would come to set with just an idea. In virtual production, you have to come with most of your 3D content already finished. This means a lot of the creation process has to be done weeks or even months earlier than people are used to. It takes a lot of planning. But in the end, it gets everyone on the same page much faster.

Then there’s the shortage in skillsets and talent. How do we train and/or translate current skills? There have been some excellent initiatives from the likes of Epic with the Unreal Fellowship, and we opened two new training spaces in Europe and Latin America in 2021. However, the industry needs more opportunities, and this is a global issue.

How are your tools used in virtual production?
Real-time camera tracking is a key component to virtual production, whatever your definition of that term. For any in-camera visualization, whether replacing greenscreen with a previz model or finished ICVFX on an LED volume, accurate and robust real-time camera tracking and lens information — including distortion and focus, iris and zoom information — is a vital part of the workflow.
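For readers unfamiliar with what that per-frame data amounts to, here is a purely illustrative Python sketch of a tracking sample — the field names are hypothetical and do not represent Ncam’s actual data format or protocol:

# Purely illustrative per-frame tracking sample -- hypothetical fields,
# not Ncam's actual format.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackingSample:
    timecode: str                              # syncs the sample to a camera frame
    position_m: Tuple[float, float, float]     # camera position in meters (x, y, z)
    rotation_deg: Tuple[float, float, float]   # pan, tilt, roll in degrees
    focus_m: float                             # focus distance in meters
    iris_t_stop: float                         # iris as a T-stop
    focal_length_mm: float                     # zoom/focal length in millimeters
    distortion: List[float] = field(default_factory=list)  # lens distortion coefficients

sample = TrackingSample(
    timecode="01:02:03:12",
    position_m=(1.2, 1.5, 3.0),
    rotation_deg=(15.0, -2.5, 0.0),
    focus_m=2.4,
    iris_t_stop=2.8,
    focal_length_mm=35.0,
    distortion=[0.012, -0.003],
)
print(sample)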

What would surprise people about virtual production workflows?
There are so many VP workflows and different ways to visualize content depending on your budget. In terms of the technology, affordability might be a surprise. You don’t necessarily need an LED wall or the best camera and lens package to create in-camera visualization.

When Unreal Engine announced it would be free to use for linear (film and TV) content, we recognized this would be a catalyst for VP. We wanted the world’s most advanced and flexible camera tracking technology to be affordable for everyone, so we worked hard to lower the barrier to entry. We spent a lot of time ensuring that the hardware was completely decoupled from the software in order to offer mounting and budget flexibility. We also decided that software licensing should be project-driven in order to support the industry and how people are used to working. This was a lot of work, but ultimately it changes the game.

What about color accuracy in VP?
Color accuracy is increasingly important. As ease of use dictates how scalable this technology stack becomes, easy and accurate color and color matching are critical, especially when mixing the real and synthetic worlds; humans are very good at spotting what is CGI, and color is a real giveaway.

What is the biggest value of VP?
When you can see everything in-camera, you can make better decisions. From a filmmaker perspective, you’re aligning everything through the camera again, which restores some of the creative freedom they had before visual effects came along. And if we design the tools correctly, we won’t have to give up anything. You’ll have ultimate flexibility — so you can get the shots you want on-set but also be able to tweak them easily in post.

What is the biggest misconception about VP?
That it’s scary and difficult. That it takes too much time and costs too much money. That it’s only for the trailblazers. That it’s just a short-term fad.

VP is a massive step change, but there is an inevitability about it. As the tech stacks become more affordable, more usable and more integrated, it’s really a matter of skillsets and training. Part of me wonders why it’s taken so long to get this far, yet I also recognize that humans, as a whole, are resistant to change. But this isn’t a zero-to-finish in 2022. This will be an evolution now that the revolution has kicked in.

Finally, and you touched on this earlier a bit, but how can those who don’t have access to these big stages still use your product or still  get involved in VP?
Ncam’s core technology is not only for use on big stages. In fact, it was designed with complete flexibility in mind. We wanted to create a camera tracking system that would work anywhere on any camera. This means that ICVFX can be shot outdoors, without the need for a greenscreen or LED volume. We’ve also harnessed the power of Unreal Engine and include our lite version of the plugin on the Unreal Marketplace for free.

Additionally, our new pricing tiers allow anyone to enjoy real-time VFX on a project-driven basis, ensuring you only pay for what you use. This lowers the barrier to entry significantly and enables more access to the creative freedom of VP via Ncam.

Puget Systems’ Kelly Shipman

Puget Systems designs and builds high-performance, custom workstations with a specialization in multiple categories of content creation, including post production, virtual production, rendering and 3D design and animation.

Kelly Shipman

Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?
There isn’t one standard virtual production workflow. Instead, a few are based on asset creation or real-time playback, with slight variations for the specific hardware in use. For example, motion capture artists mostly have the same workflow, but with slight differences depending on which motion capture suit they are using and whether they want to edit the animations in a different animation package. Shooting on an LED volume is mostly standardized, with slight tweaks for the specific controller, camera and motion tracking system in use.

Can you talk about pain points related to VP workflows?
One of the biggest pain points is still the lack of standardized workflows with external hardware, as well as the lack of documentation on how to set everything up. Many people have put in lots of work figuring out how to get their own workflows executed, but there is not yet a plug-and-play-style product that someone new to the space can install and begin working with.

What tools are paramount to virtual production workflows?
The one tool that is fairly standardized is the use of Unreal Engine as the core of VP. There are a variety of other software or plugins that may be used, or there’s specialized hardware based on the specific task at hand. But Unreal Engine brings it all together.

What would surprise people about virtual production workflows?
Exactly how close the various teams will be working together. The old mantra of “fix it in post” truly doesn’t work with virtual production. When filming on an LED volume, there needs to be at least one Unreal Engine expert on-set to make any on-the-fly adjustments. If the director wants to change the position of the sun, for example, the lighting will need to be adjusted in the Unreal Engine scene, and they will need to work with the crew to make sure the stage lights match. When trying to blend the physical set with the digital set, both art departments will need to work hand in hand from the beginning, instead of one team doing its part and then handing it off to the next team to do theirs.
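As a rough sketch of what that on-the-fly adjustment can look like, here is an example using Unreal Engine’s editor Python scripting — it assumes the scene’s sun is a single DirectionalLight actor, and the physical stage lights would still be matched separately:

# Rough sketch using Unreal Engine's editor Python scripting. Assumes the
# level's sun is a single DirectionalLight actor; run inside the editor.
import unreal

def set_sun(pitch_deg, yaw_deg, intensity):
    # Re-aim the sun and adjust its intensity (in the light's current units).
    for actor in unreal.EditorLevelLibrary.get_all_level_actors():
        if isinstance(actor, unreal.DirectionalLight):
            actor.set_actor_rotation(
                unreal.Rotator(pitch=pitch_deg, yaw=yaw_deg, roll=0.0),
                teleport_physics=False)
            light = actor.get_component_by_class(unreal.DirectionalLightComponent)
            if light:
                light.set_intensity(intensity)

# Example: move to a lower, late-afternoon sun angle and dim it slightly.
set_sun(pitch_deg=-15.0, yaw_deg=220.0, intensity=8.0)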

What about color accuracy in VP? Difficult to achieve?
The color accuracy of LED walls has been improving rapidly, as has the peak brightness. There is still room for improvement, but it is important to find out what the panels are capable of and do some test shots with the chosen camera. Most walls allow for calibrations, and Unreal Engine offers considerable calibration options for its final output.

What is the biggest value of VP?
The biggest value of VP is the immediacy it provides along the entire pipeline. Once environments are created in Unreal Engine, there is no waiting for a render to finish. The director can move the virtual camera through the scene to set up the shot using final-quality graphics. If on the day of shooting, something needs to be changed, those changes can be made, and the result appears immediately on the LED walls ready for filming. This allows for greater flexibility and experimentation as well as a unified vision since everyone on-set is seeing the same thing.

What is the biggest misconception?
Probably the biggest misconception is that the real-time engine is capable of the exact same quality as a traditional offline renderer, just faster. The truth is that to achieve real-time speeds, many sacrifices have been made. The trick is to understand how to properly optimize a project in Unreal Engine to get a balance of graphical quality and the desired frame rate. Using the LOD (Level of Detail) system and mipmaps, reducing texture sizes for nonessential objects, optimizing lighting, etc. can go a long way to improving the performance of a project without sacrificing the final output, which in turn allows you to put more effects on the screen.
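As one small, concrete example of that kind of optimization pass — a simplified sketch, not a Puget Systems recommendation — Unreal’s editor Python API can batch-raise the LOD bias on textures in a folder of non-hero set dressing, dropping their top mip levels to save GPU memory (the content path below is hypothetical):

# Simplified sketch of a texture optimization pass with Unreal's editor
# Python API. The content path is hypothetical; raising lod_bias drops the
# top mip levels of each texture, trading detail for memory and frame rate.
import unreal

def downres_textures(folder="/Game/SetDressing/NonHero", lod_bias=2):
    for asset_path in unreal.EditorAssetLibrary.list_assets(folder, recursive=True):
        asset = unreal.EditorAssetLibrary.load_asset(asset_path)
        if isinstance(asset, unreal.Texture2D):
            asset.set_editor_property("lod_bias", lod_bias)
            unreal.EditorAssetLibrary.save_loaded_asset(asset)

downres_textures()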

Meptik’s Nick Rivero

Meptik is an Atlanta-based full-service virtual and extended reality production studio.

Nick Rivero

Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?
It’s still a bit of the Wild West overall. Workflows and systems are still being figured out, while the hardware underpinning it is evolving at light-speed. I can say that over the past two years in particular, there is more standardization coming to fruition, all the way down to naming methodologies.

However, the market as a whole still has a lot of disparity and people operating toward what works best for themselves and their specific on-set or studio needs. So overall, I believe it will take some time before we see any sort of rigid standards by which we all operate.

Can you talk about pain points related to VP workflows?
Complexity is still front and center. The technology behind VP is still complex and requires qualified technicians with deep experience in nuanced technologies. Within the Meptik staff, we have people that specialize in system design, system operation, camera tracking systems and more, just to name a few. We’ve worked with the Disguise platform for a while now because it simplifies a lot of the larger technical challenges for us, allowing us to get volumes up and running faster.

What tools are paramount to virtual production workflows?
Well-rounded knowledge of both video technology and 3D content creation workflows is paramount to understanding all aspects of this paradigm. Beyond that, it really depends on what you have an interest in. If on-set operation, programming and system technology are more interesting to you, then focusing on video engineering, LED technology, networking, camera tracking systems and similar items is the path to take. Otherwise, if you are interested in a more creative path — toward the virtual art department side — then study content creation, visual effects, 3D software such as Maya or Cinema 4D…and definitely understand Unreal Engine.

What would surprise people about virtual production workflows?
If you understand the basics of video technology, you can pick up the rest pretty quickly. While there are very deeply technical pieces, the high levels come together fairly quickly. Also, while VP is aimed at cinematography, it requires knowledge of more broadcast and live event-type workflows and technologies, such as LED, computers and GPUs, and video system signal flow that is typically found outside of the film ecosystem.

What about color accuracy in VP?
Color accuracy is one of the largest hurdles in the space. Whether you’re on a film set or shooting with a corporate client, and whether it’s a TV commercial or a music video, precise color representation is key. The industry is making strides in this direction, but right now different manufacturers and software platforms handle color differently, so there is a ways to go toward standardizing it across the industry.

What is the biggest value of VP? 
Virtual production allows for total control of the shooting environment. Time of day, weather, object positioning, locations — all can be adjusted as per the director’s requirements, mostly with the press of a button.

Instead of waiting to see what your shots look like after a lengthy post process, the process shifts to preproduction — meaning you can previsualize scenes and know exactly what to shoot before you step foot on-set. And on the set itself, you can see 95% of the final product. This results in not only enormous time savings and decreased travel costs but also more creative freedom — you are no longer bound by physical barriers. VP provides you the freedom to shoot anywhere at any given time. The only limitation is your imagination.

What is the biggest misconception?
Filmmakers think they need to really understand the technology behind VP to make use of it, but that’s not true. As a full-service virtual and extended reality production studio, our team takes care of everything beyond the idea. We have creative and technical teams that help with ideation, creation and execution of ideas.

Now that we are part of the Disguise family, we have even more access to a global team of immersive production experts and the latest in tech to deliver groundbreaking studios and virtual environments that transport audiences into new worlds.

How can those who don’t have access to these big stages still get involved in VP?
We divide our offerings into three main pillars: bespoke, facilities and installs.

If you have access to a virtual production studio with a crew, then we can provide content or our technical expertise.

If you don’t have access to a virtual production studio, we have a turnkey, production-ready facility with all the staff you need at Arc Studios in Nashville with our partners at Gear Seven.

And if you are looking for a permanent installation of a virtual or XR production stage at your own facility for large amounts of content production, we install XR stages with the Disguise workflow.

Vū Studio’s Daniel Mallek

With locations in Tampa Bay, Nashville and Las Vegas, Vū is a growing network of virtual production studios providing work for commercials, corporate live streams and events as well as long-format film and episodics.

Daniel Mallek

Do you feel VP workflows are fairly standardized these days, or is everyone trying to figure it out on their own?

There is a lot of work happening to standardize virtual production workflows. Since education resources are limited, many have had to figure things out on their own and piece together tools from different industries that weren’t created specifically for virtual production.

Organizations such as SMPTE and Realtime Society are actively bringing together innovators from across the industry. One of the biggest challenges to standardization is the quick pace of innovation and how rapidly the technology evolves. The more filmmakers use this tool, the more we learn about what works, what doesn’t and what standards we need to implement to make each shoot a success. There’s work to be done, but the future is very exciting.

Can you talk about pain points related to VP workflows?
The biggest pain point is a lack of education. Essentially, virtual production (ICVFX) combines several industries together (film and TV, gaming, live events, etc.). To this day, there is no clear path for someone who wants to enter the virtual production space. Stages are being built faster than we can find people to operate them, which is why at Vū, education is one of our primary objectives. Our goal is to lower the barrier of entry to this creative technology.

What tools are paramount to virtual production workflows?
Virtual production, specifically ICVFX, brings together tools from multiple industries. This includes real-time camera tracking, real-time rendering in a game engine, high-end computing, LED processing and physical LED walls. Within each of these tools, there are multiple additional technologies at play. What this means is that for a stage to operate smoothly, each of those items needs to be well-optimized and in good working order. If not, it can cause issues in the workflow that can be difficult to troubleshoot when something goes wrong. At Vū, creating and operating these systems on behalf of our clients is core to our business.

What would surprise people about virtual production workflows?
It’s complex to build out a system, but once a stage is optimized and online, operation is relatively straightforward and can be handled by one person, depending on the production’s needs. Our vision is that anyone should be able to easily operate a stage and manipulate an environment, including directors and DPs. We still have work to do to accomplish this vision but are getting closer every day.

What about color accuracy in VP? Difficult to achieve?
This is one of the primary concerns that DPs and artists have. In the early days of ICVFX, it was definitely a barrier to entry, but the technology has come very far since then. By combining high-end LEDs with premium image processing, it’s now the standard to offer highly accurate color that can be manipulated and finely tuned depending on a production’s needs.

What is the biggest value of VP?
The biggest benefit we hear about often is the level of creative control that the technology unlocks. The limitations of time and location are gone. In practice, this means a production can shoot in a remote location without having to organize cost-prohibitive logistics to bring a large crew there. It also means that long and expensive post pipelines are drastically reduced since most large-scale effects are captured in-camera. This all allows creatives to extend their budget in ways that aren’t possible with traditional tools.

What is the biggest misconception?
That you need to have “the right project” or lots of VP experience to bring a story into one of our stages. While there are certainly aspects that are different from shooting on location, the learning curve is much less than people realize.

For example, if you know how to light a scene on location, that knowledge will transfer to virtual production. It’s also great for all types of projects, from feature films to talking heads. It’s up to the filmmakers and creators to find the best way to use this tool to accomplish their vision.

Main Image: HaZ’s Rift


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 

Looks That Kill VFX

Shaina Holmes on Creating VFX for Indie Film Looks That Kill

By Randi Altman 

To say that Shaina Holmes is a busy lady would be an understatement. In addition to her job as an assistant professor of television, radio and film at Syracuse University (Newhouse), she is the owner of Flying Turtle Post, which provides visual effects and post for independent films.

Shaina Holmes

One of her recent projects was the film Looks That Kill from writer/director Kellen Moore. The story follows a boy named Max, who was born with the ability to kill people if he shows them his face, so he must spend his life with bandages wrapped around his head to protect those around him. It stars Brandon Flynn, Julia Goldani Telles and the late Peter Scolari.

Holmes’ role on Looks That Kill was three-fold: lead VFX artist, VFX producer and VFX supervisor. We reached out to Holmes to talk about her process on the film, which she worked on while simultaneously providing visual effects for three other American High films — Big Time Adolescence, Banana Split and Holly Slept Over. Flying Turtle Post worked on Looks That Kill for 10 months, from the initial VFX bid in September 2018 to final delivery in June 2019.

Let’s find out more …

How many shots did you provide?
The original bid consisted of 96 shots, which got narrowed down to 69 VFX shots. Of those, my team completed 42 of the 69. Most were very intricate VFX shots and transitions, many of them long in duration. The rest of the 96 shots were either omitted, done in the conform or handled by an outside artist who took on the less technical, more creative shots, like the eye-flash death shots.

What types of visual effects did you create?
The bulk of the complex VFX dealt with transitions with lengthy camera moves. One shot was over 7,000 frames, or 4 minutes long, and full of multiple greenscreen transitions, speed ramps, artifact cleanup, shot stitching, rig removals, adjusting the performance of props sliding down a wall during a time-of-day lighting change and more.

 

Looks That Kill VFX

There were also many hidden edits used throughout the film, such as when the camera was rotating around objects and people to transition to another location, or to link up two or three completely different camera moves and plates together to look seamless. The beauty of this smooth camera work really helped the audience engage with the inner thoughts of our main character as he deals with his medical condition.

We worked on a variety of shots, including a seamless greenscreen edit for a 4-minute stitched shot with multiple speed changes and cleanup; fluid morphs and artifact cleanup; wipe transitions through difficult camera moves and speed changes; the creation of distress-weathered signage for buildings; turning billboard light bulbs and creating flashing lights; adding a nosebleed; compositing multiple plates together (bus to street, fire to tree, cigarette falling through air); removing unwanted people, safety wires and a tattoo from a scene; cell phone screen replacements and graphics revisions; split screens for action/performance; TV monitor comps; lower-third graphics; dead pixel removal; stabilizations; beauty fixes; and the addition of anamorphic lens distortion to stock footage.

Looks That Kill VFX

This film is a dark comedy. Did your VFX help amp up the funny?
The editing style used fluid morphs and split screens to compile the best performances from each character at all times. While these are invisible effects that the audience shouldn’t be able to identify, our work on these shots really helped amp up the humor in each scene.

We also worked on a scene where a character aimlessly throws a cigarette behind her without looking, and of course it lands and starts a fire near a house. Another character is then seen trying to drag this burning shrubbery into the driveway. For these shots, we needed to composite the burning tree onto a non-burning tree prop the character was dragging, and we had to change the animation of the cigarette’s trajectory to hit the correct spot to ignite the fire.

Can you talk about your process? Any challenges?
With the American High projects prior to this one, the bulk of the VFX requests were expected — screen replacements and fluid morphs — but this project had a lot of different requests. This meant each shot or small sequence needed a new plan to achieve the goals, especially since we were working internally with a larger VFX team than previous projects due to the complexity of the shots.

My company, Flying Turtle Post, is based on mentorship, meaning we have many junior artists all being trained by me until they become mid-level artists, and then they help me train the next batch of junior artists. We are a very collaborative team of remote artists and coordinators, all of whom started off with me as their professor in college. I’m now their employer.

This project provided many challenges for us since we were dealing with longer file sequences than usual for VFX shots, meaning thousand-frame shots instead of hundred-frame shots. Additionally, many of my junior artists had never worked with anamorphic aspect ratios before, so we needed to cover squeezed and unsqueezed workflows in the training as we were getting up and running. We were a fully remote, work-from-home studio before the pandemic — before the new cloud-based options became commonplace for VFX pipelines. Some of these shots were 10GB for one render, which made it difficult to transfer easily from artist to artist. We quickly had to adapt our pipeline and reinvent how we normally would work on a show together.

What tools did you call on for your work?
Blackmagic’s Fusion Studio is my company’s compositing tool of choice for our artists. I teach Fusion to them in school due to its flexibility and affordability. I have used it for the past 20 years of my VFX artist career.

If we work on CG, we use Autodesk Maya and Adobe Substance Painter. Sometimes we also use Nuke for compositing, depending on the artist. We use Adobe After Effects for motion graphics and animation.

Separate from Looks That Kill, you seem to be attracted to horror films.
Over my career, I’ve had the honor of working in many different genres and on films many people call their favorites of all time. It’s always fun when I run across a horror fan, and they inevitably ask the question, “Have you worked on anything I would have heard of?” This is probably the genre where I can most easily tell what kind of horror fan they are, based on the spectrum of films I tell them I’ve worked on. I’ll start with the bigger ones, like The Purge: Election Year (2016), Halloween II (2009) and Halloween (2007). If they’re intrigued, then I’ll see if they go to horror festivals, and I’ll add cult favorite Starry Eyes (2014) to the conversation. That’s the real test. If they’ve seen that movie, then I know how deep their love for horror films goes.

Looks That Kill VFX

In fact, I had a conversation with one of my students a few years ago that went very similarly to that. When he professed his admiration for Starry Eyes, I introduced him to writer/director Kevin Kölsch, who, after working in the industry for years at a post house, decided to finally shoot a feature film and use his friends in post and VFX as resources to help finish the film. Starry Eyes went on to do well at festivals, and now he is attached to large-budget films and TV shows as a director. This story inspired my student Matt Sampere to follow a similar path, and now, two years out of school, he is shooting a Halloween-themed feature horror film. Naturally, I am helping on the project as VFX supervisor, and my former students are providing the cinematography, post workflow and VFX.

The film is shot on Blackmagic Pocket cameras, with editing in Resolve and VFX in Fusion. Principal photography has just been completed, and I expect to start post toward the end of the year.

A horror film that I’ve worked on recently is The Night House, out in theaters this past August, for which I was the on-set VFX supervisor and plate supervisor for Crafty Apes. I tend to gravitate toward horror films that rely on on-set special effects and makeup to shoot gore and stunts as practically as possible and employ VFX only as an enhancement or for wow-factor moments instead of it being CG-heavy throughout. This was certainly true for Starry Eyes, Creeping Death and The Night House.

 The Night House was particularly interesting to work on because of how the set became a living character. I don’t want to give anything away, but the visuals are really unique for the invisible character.

What else have you worked on recently?
Mayday, which premiered at Sundance 2021. I was the VFX producer for Mayday, and with a small team we used a mix of Fusion and Foundry Nuke to complete over 400 VFX shots for the film.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 25 years. 

Cobra Kai Music Mixer Phil McGowan Talks Workflow

By Randi Altman

The series Cobra Kai revisits the 1980s Karate Kid franchise to see what the two teenaged adversaries are up to as middle-aged men. Created by Josh Heald, Jon Hurwitz and Hayden Schlossberg, the show reintroduces viewers to underdog Daniel LaRusso (Ralph Macchio), his childhood nemesis Johnny Lawrence (William Zabka) and the ruthless martial arts instructor John Kreese (Martin Kove).

Music Mixer Phil McGowan

Phil McGowan

The show is now streaming its third season on Netflix, with Season 4 set for a December 2021 release. One of the team members who has been on the series since its inception is music mixer Phil McGowan. This audio post veteran’s credits include Ozark, Fear the Walking Dead and the film Promising Young Woman.

We reached out to McGowan to find out more about his work on Cobra Kai as well as on the HBO documentary film Tina, directed by Dan Lindsay and T. J. Martin.

What’s your typical mix process like?
For TV shows, I’m typically just doing one episode at a time, and they are spaced out a week or more between episodes. It’s nice to stay in the same mindset for a few days, as my weeks often see me bouncing around between multiple projects day to day unless I’m on a bigger film.

What was your workflow on Cobra Kai?
Composers Zach Robinson and Leo Birenberg start by sending me their materials in the week leading up to an episode mix. This includes everything from their sequencer sessions.

Phil McGowan’s studio

They then go and record drums and guitars with musicians here in LA before doing a late-night Source-Connect session with an orchestra in Eastern Europe. The LA musicians and the orchestra engineers then send me links to download all their recordings, and I begin to assemble all of the files by cue.

On any given Cobra Kai mix day, the first hour or two is just wrangling files and getting everything organized so I can dive in and start mixing. Once everything is assembled, I then proceed mixing cue by cue, always starting with the biggest, most complicated cue, as that will hold many of the sounds that the rest of the cues will use. That also helps me not stress about those big, intimidating cues all day.

Who else was part of that sound team on the show, and how did you work together?
Joe DeAngelis and Chris Carpenter are our excellent re-recording mixers on Cobra Kai, but I didn’t interface with them as much as I usually would. When we started Season 1 — when it was on YouTube — it was a bit of a whirlwind dub schedule, so I didn’t have my usual conversation with the dub team about stems, formats, etc.

 Music Mixer Phil McGowan

Cobra Kai

Our fantastic music editor is Andrés Locsey, and he is who I end up sending all of my approved mix stems to. Andrés makes sure everything is in the right place and handles any music conforms if picture changes after we score, which I don’t think happened very often. Everything was very smooth with this team, and I’ve always been happy with the final result when I watch each season as it comes out!

What were some of the more challenging parts to this season?
As with any Cobra Kai season, the most challenging part is often balancing the various genres that the score features. On any given episode, I could be mixing ‘80s hair metal, four-on-the-floor electronic music or straight-up orchestral score. Then on some of the bigger episodes, I’ll have a bunch of cues that pretty much combine all those elements, and those can take a bit to get into a good place with all the elements speaking appropriately. Even though it can be challenging, Cobra Kai is almost always very fun to mix.

Cobra Kai

Do you have any favorite scenes?
I can’t speak to specifics of Season 4 just yet, but I would say my favorite sequence from Season 3 is the “Duel of the Snakes” part of Episode 10. There are about six or more cues strung together into a 10-minute-plus sequence that jumps back and forth between multiple fights and flashbacks to Vietnam, so that runs the gamut of many of the styles featured in the Cobra Kai score. Zach and Leo wrote their asses off for that sequence, and we’re very proud of how it all turned out.

Let’s shift gears for a bit. Can you talk about your workflow on the documentary film Tina?
For the Tina score, I received Avid Pro Tools sessions from composers Danny Bensi and Saunder Jurriaans. I imported all their tracks into my mixing template. Danny and Saunder are just about my only clients that work in Pro Tools, so the process with them is unique as I can see all their plugins, routing, edits, etc. and can tweak any of that as necessary.

 Music Mixer Phil McGowan

Tina

Originally, I was only hired to mix the score, but partway through the mix, I was contacted by someone from the production team, who asked if I wanted to mix three live Tina Turner performances for the film. I immediately said yes! For those songs, I received tape transfers from Iron Mountain for the two concerts the songs were sourced from. The Rio show from 1988 was a transfer of two 24-track analog tape machines, while the Barcelona show from 1990 was a digital transfer from Sony DASH tape. It was fun to dig into those old recordings and bring them up to date in 5.1.

Any favorite scenes on that one?
For the score, the scene where Tina finally leaves Ike features the biggest score cue in the film, and the way it plays against the picture and Tina’s voiceover is just brilliant. As far as the songs go, my favorite is “The Best,” which plays over the end credits, mostly because that’s the only song that plays all the way through without any edits.

Can you talk about working these two very different projects?
Cobra Kai and Tina are quite different projects, though I suppose they do share some similarities in that I was busting out some emulations of classic gear for the live Tina Turner mixes as well as any Cobra Kai score cues that were more in the ‘80s vein. I’m always excited when I can justify using the 480L, RMX16, or Korg Digital Delay on a mix. Those emulations from UAD are fantastic.

 Music Mixer Phil McGowan

Cobra Kai

What are you working on now?
We just wrapped up Season 4 of Cobra Kai, and that features a whole new level of epic that we can’t wait for you all to hear.

Danny and Saunder have been keeping me very busy lately as well. We have four to five TV shows plus a couple films currently on our plate. In addition to all of that, I have the usual variety of other projects from other clients, so the end of this year is shaping up to be pretty jam-packed.

Any advice for those just starting out?
Fortunately, for anyone starting out now, there’s a massive amount of resources online to see how this kind of work is done. Mix With the Masters and pureMix are fantastic places to see directly how mixers and engineers do what they do. Before websites like those existed, the only way to learn from other engineers was to be fortunate enough to assist them on a session, so it’s quite incredible that those outlets exist. A good creative professional is always learning, so never lose that drive to always learn new things and grow.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Shooting Jungle Cruise Underwater

DP Chat: Shooting Underwater for Disney’s Jungle Cruise

By Randi Altman

Disney’s feature film Jungle Cruise, starring Dwayne Johnson and Emily Blunt and directed by Jaume Collet-Serra, is based on the theme park ride of the same name. The story follows the captain of a small riverboat who takes a scientist and her brother through a jungle in search of the Tree of Life. As you can imagine, some scenes involve water, so the production called on the aptly named DP Ian Seabrook to shoot underwater footage for the film.

 Shooting Jungle Cruise Underwater

Ian Seabrook

An experienced cinematographer, Seabrook has credits that include Batman v Superman: Dawn of Justice, Deadpool 2, It Chapter Two and the upcoming untitled Thai cave rescue documentary. We reached out to him to find out more about the shoot, the challenges and what inspires him in his work.

How early did you get involved on Jungle Cruise?
I was finishing up underwater work on It Chapter Two for DP Checco Varese when the line producer for Jungle Cruise contacted me to see if I would be available for a sequence for his film. I then met the director, Jaume, producer Doug Merrifield, DP Flavio Labiano, special effects coordinator J.D. Schwalm and first AD David Venghaus Jr. in Atlanta to discuss the underwater sequence. The production was around a third of the way into the schedule at that point.

What direction did Jaume Collet-Serra provide?
Jaume explained in prep what the sequence entailed and how he saw it playing out in rough form. On the shooting days, he was more concrete on how he wanted the cast to play within the frame. There were some storyboards as a reference, but much of the composition was left to me.

How did you work with the film’s DP, Flavio Martínez Labiano?
Flavio and I discussed the lighting for the sequences I was involved with, and I kept open communication with him during production regarding any changes I was making, visiting him in his DIT trailer on the main-unit set. I find that maintaining open communication yields the best results.

What sort of planning do you have to do for underwater sequences?
The first is figuring out what the sequence entails: How many cast members will be involved? What environment will they be in? What are the potentially dangerous elements? For Jungle Cruise it was the La Quila and puzzle sets with two actors — Dwayne Johnson and Emily Blunt — and their associated stunt personnel.

The next step is figuring out if the cast has prior underwater experience, which can make or break the success of filming the sequence. If someone is not comfortable being in or underwater, then the scene could be a challenge to get around. Dwayne had prior underwater experience on Baywatch, but to my knowledge, Emily’s underwater experience was less involved. That said, she did an absolutely amazing job in the water and was key to the sequence’s success.

In addition, it was necessary to have several meetings with the art and construction departments regarding the build of the puzzle set, as we had to go over what materials to use and not to use with regards to submerged set pieces and the associated hazards. Those hazards include the disintegration of paint and construction materials in the water; the primary concerns there are running afoul of water-clarity standards and the potential for ear or eye infections (which happened to both me and Amy Adams on Batman v Superman).

Where were the underwater sequences shot, and how long was the shoot?
The underwater sequences for Jungle Cruise were photographed in two tanks at Blackhall Studios in Atlanta. One tank was an exterior set, built in a parking lot on Blackhall’s second lot and used for shots involving La Quila and the cast transitioning into the water. The interior tank, which contained the puzzle set, was built inside one of the construction stages at Blackhall 2.

Shooting in a water tank

Can you talk about the puzzle sequence?
The sequence involved the cast swimming down from La Quila to the puzzle while holding their breath. In reality, it was not all that different; only the sets were separate, with the exterior tank used for the La Quila set. The interior tank, which housed the puzzle set, required working within tight confines and limited mobility.

To achieve the shots required, I used my customized underwater housing, which has a small footprint and enables me to fit within the set while leaving enough room for the cast and stunt personnel to perform. Emily’s character gets trapped inside the set, and Dwayne’s character tries to rescue her, but due to his sizable frame, he cannot fit, so instead he resorts to passing breaths to Emily mouth to mouth. We constructed the set piece outside of the tank, then lowered it in once all the materials had been dried and sealed. It then needed a few days for the water to settle, and I did daily checks with marine coordinator Neil Andrea.

What about other challenges?
The epilogue of the puzzle scene involved raising the set out of the water, so the discussion point became how to achieve this practically. As the shots required the camera and set to travel out of the water simultaneously via a construction crane (which was barely able to fit within the stage doors), the thought process was for the camera housing to be attached to the set via pipe rigging. This idea was short-lived because when I saw what the desired shots were and where the camera needed to be, I realized there would be no space or bracing point where I could attach any rigging. I suggested that I could hand-hold the housing for the shots, which was met with “Do you think you could do that?” It was a challenge to go from hand-holding an 80-pound camera housing in water, where it’s only slightly negatively buoyant, to bearing its full weight as the set was raised and the water drained away, but the test worked. Of course, after that, we did it eight more times!

Shooting Jungle Cruise Underwater

How do you go about choosing the right camera and lenses for projects like Jungle Cruise?
I make every attempt to use the same camera and lens package as the main unit uses on the production, which in the case of Jungle Cruise was the ARRI Alexa SXT Plus with Panavision anamorphic glass. The 30mm C series was our hero lens due to its smaller size and weight, but we used a few other focal lengths as well.

What about the underwater enclosure?
The underwater housing is my own custom housing, which gives me access to all the exterior buttons for the Alexa: ISO, white balance, shutter or camera speed, all of which can be changed underwater. The housing also contains a TV Logic on-board monitor for viewing. I have several housings for different cameras. It makes it easier to have a housing for the camera that is already in use on the show.

Any “happy accidents” along the way?
Though the lighting was designed to illuminate the inside of the set with subtlety, there were moments when Emily Blunt would swim inside the set and the backlight, and small kisses of refracted light would hit her perfectly. I saw these on the monitor as we were filming, and they made me smile.

Ian Seabrook

Any scenes that you are particularly proud of?
Both Dwayne and Emily were wonderful to work with in the water, which made the sequence a success. The shots of Emily figuring out how to manipulate the puzzle were structured around a sequence of manipulations of the set pieces. We discussed what action she would be doing, but on that day, I went with how I felt the scene should be photographed and followed her action, which was somewhat balletic. With both of us in sync, the sequence came together nicely.

Now more general questions …

How did you become interested in cinematography?
From a young age, I had a desire to figure out how things like radios and televisions worked. That interest in the practical morphed into cinema as I watched films like 2001: A Space Odyssey, Lawrence of Arabia and Giant and began to wonder how they were made. Around the same time, I was watching a lot of Disney and wildlife documentaries on television, in addition to James Bond films like Thunderball, which had fantastic underwater sequences. I became obsessed with being underwater and how cameramen were able to be in the water with marine life like whales and sharks. Many years later I found myself in the waters around Cocos Island, Costa Rica, surrounded by schools of sharks with a camera in my hand. My dreams became a reality.

Shooting Jungle Cruise Underwater

Ian Seabrook

What inspires you artistically?
Robby Müller, who shot films for Wim Wenders, Alex Cox and Jim Jarmusch, is my favorite cinematographer. His ability to use available light on the films he photographed was unprecedented and is still a major influence to this day. I take inspiration from many forms: cinema, natural history films, music, art and photography.

Lamar Boren, who was the underwater cinematographer on Thunderball, and David Doubilet, who worked on The Deep, Splash and The Cove, are top of the list for me.

What new technology has changed the way you work (looking back over the past few years)? 
Taking LED fixtures underwater has changed what used to be a constant for underwater illumination. Smaller, lighter and more compact fixtures have transformed the lighting market. Where the conversation used to involve lights with attached cables and the boats and generators required to power them, housed LED fixtures without tethers have reduced the time and power requirements for underwater illumination. When I need a lot of punch for composite screen work, the industry-standard underwater lights still very much work, but the smaller and lighter fixtures have become indispensable, especially for travel.

What are some of your best practices or rules you try to follow on each job?
Arrive early, pay attention and remember why you are there. I bring enthusiasm to each project. I always remember my beginnings and strive to exceed expectations on each assignment. I work in many locales worldwide and try to involve as many local crew as I can. And whenever possible, I train those who are interested on the proper use of the equipment. I do a lot of my own prep and research for the assignments I do, in addition to the standard production prep. I also have backup plans.

Ian Seabrook

Explain your ideal collaboration with the director or showrunner when starting a new project.
I work best when there is a relationship built on mutual respect. There is always a reason that you want to collaborate with someone, and they with you. While I have been on my share of large, multi-personnel crews with a slew of trucks and trailers, it is the more intimate jobs involving travel and a reduced crew that have been the most memorable. I am quite capable of being autonomous and capturing sequences on my own while adding the right people to that mix, and nothing beats that. The same applies for the land-based second unit cinematography I have done — good people usually yield good results.

What’s your go-to gear? Things you can’t live without?
Much of the work I do is with the ARRI Alexa, which I have several housings for. I own my own Mini LF, but I rarely use it because it is usually working elsewhere on other jobs.

I travel a lot and always take my Leica M10 Monochrom with me  — I have a housing for that too.

What is in my bag at all times? My iPad makes scheduling and workflow easier while on the go, and I have housings for all my light meters, which I still use to this day.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

 

Using Sound Effects to Score The Killing of Two Lovers

By Randi Altman

The Robert Machoian-directed indie film The Killing of Two Lovers, which premiered at Sundance earlier this year and is now streaming, tells the story of a married couple in the midst of a trial separation. During their time apart, they agree to date other people — something the wife does with success, while the husband desperately wants his wife and kids back under one roof. It’s a story of pain and obsession.

Peter Albrechtsen, a veteran sound designer, re-recording mixer and music supervisor, is a frequent collaborator of Machoian’s, having worked with him on The Minors and When She Runs. Albrechtsen has also worked on projects such as Dunkirk and the Oscar-nominated documentary The Cave. For The Killing of Two Lovers, Machoian decided not to have a musical score and instead wanted to use sound effects, which challenged Albrechtsen … in a good way.

Peter Albrechtsen

We reached out to Albrechtsen, who mixed the film along with David Barber, to find out more about creating the soundscape for The Killing of Two Lovers.

You’ve worked with director Robert Machoian before. Can you talk about that relationship? Is there an unspoken language at this point?
My collaboration with Robert is quite extraordinary. He has a lot of faith in me and was really open to any idea or input I could bring to the movie. I met Robert when I did the sound design for his previous feature film, When She Runs, which he directed together with longtime creative partner Rodrigo Ojeda-Beck. They came to Copenhagen, where I live, and we spent a week on the mix. The film had these long takes, which I thought were incredibly atmospheric, so I added a lot of different sounds to them, using background sounds almost as foreground sounds.

Robert and Rodrigo loved it, and they mentioned several times how they wished they had made their shots even longer. So when Robert wrote The Killing of Two Lovers immediately afterward, he wanted to explore this further. “I really believe that you need to work on this film for it to be what I am imagining,” he wrote me when he sent me the script. Quite a statement, but that says everything about how much Robert cares about sound.

He sent me some musique concrète works by Pierre Henry and Hildegard Westerkamp very early for inspiration, and when he started editing — he was the picture editor himself — we started sending sound sketches back and forth. When the proper sound editing had been going on for a few weeks, he came to Copenhagen for a couple of days. The final mix was done at Juniper Post in Los Angeles.

Robert is very good at talking about the film’s sound in an emotional way, and it’s quite rare that we discuss small sync sounds like a door or a car pass. He talks about the feelings of the character and the scenes, and that’s what inspires me to do my sound design. He’s interested in sound being the inner voice of the character. So that’s what we talk about. And sometimes we don’t talk at all, but we can just feel in the mix, being in a room together, what works. It’s a very inspiring collaboration.

When was it decided there would be no score?
From the very beginning, Robert did not want to work with a musical score, only with sound design. When he sent me the musique concrète works, the idea was to use sound in a very musical way. Musique concrète was invented in the 1940s and is a type of music that uses recorded sounds instead of traditional instruments, and that approach became the foundation of the sound of the film. One of the very first sound sketches I did was the sound collage in the opening of the film that is built from different car sounds and metallic screeches and noises. When Robert heard that first sketch, he decided to use it in several places in the film, and in many ways, that became the film’s soundtrack: We scored the film with sound effects.

How did not having a score affect how you did your job?
It was such a bold decision by Robert to do a relationship drama without any score. Normally in a movie like this, the music would be there to guide your feelings and really be the emotional voice of the film. The main character of the film, David, very rarely talks about his inner feelings, so the emotions of the story are told through the sound.

When there’s no music, you really listen to the sound, and for me, the whole film becomes very real and gritty because there is no music added as filter or guide track for the senses — just like it is in the real world. It meant that I could be incredibly dynamic and bold in the use of sound. Sometimes it’s a very quiet film with a lot of small, subtle, distant background sounds, and at other times, the sound is very intense, loud and visceral. The soundscape is a reflection of David’s inner life.

I do not feel there is that big of a difference between sound design and music, generally. I work a lot with rhythms and tonalities in my sound design. I often score a scene with sounds the same way you would write a pop song: First comes the intro, where you play ambiences loud and set the scene, then comes the vocal/dialogue, and you bring down the instruments/ambiences, then in every pause in the vocal, you add a little fill to underscore the feeling. This would be a drum fill or guitar note in a song, whereas I use, for instance, a bird call or a train pass, and then the chorus comes, the scene peaks, and you make sure that the sounds build toward that moment. Then you add an additional sound or perhaps take away some sounds to make the moment stronger. I think that the more musically you approach sound, the more impact it has on the audience. So even though The Killing of Two Lovers doesn’t have any film music, it hopefully feels very musical.

You were also re-recording mixer on this. Were you mixing the dialogue or the effects?
I mixed the film together with David Barber at Juniper Post in LA; he handled dialogue and Foley, while I mixed ambiences, sound effects and all the abstract sound design elements. David did such an amazing job on the dialogue. This was the first time we worked together, and at our very first meeting, I urged him to experiment. One of the first things he did was to play around with panning in one scene, where it really helped enhance the distance between the characters. We all loved it immediately, and that became the method for all the dialogue mixing in the film — pretty much all the dialogue pans with the person in the frame.

That may sound like a very technical endeavor, but it was an amazing way of underlining the physical and psychological gap between the people in the film. It took a lot of work, though, as several scenes were recorded with just one or two mics. David did magical things with iZotope RX to isolate dialogue elements. Take the scene where the father and the kids are in the field with the fireworks: The kids didn’t have a separate mic, so David took the father’s recording, removed the kids’ voices and panned the result to the left; then he removed the father’s voice and panned the kids’ voices to the right. It’s something you couldn’t technically do just 10 years ago. What David Barber did is pretty much sonic magic, and it really made the wide images come alive in a beautiful way.
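For readers curious about the mechanics, here is a minimal sketch of position-based dialogue panning: placing an isolated stem in the stereo field according to where a character sits in the frame. The constant-power pan law and the NumPy code are illustrative assumptions, not Barber’s actual setup, which was built in a DAW rather than in code.

```python
import numpy as np

def constant_power_pan(mono, position):
    """Place a mono dialogue stem in the stereo field.

    position runs from -1.0 (hard left) to +1.0 (hard right), e.g. a
    character's normalized horizontal position in the frame. A
    constant-power (sin/cos) pan law keeps perceived loudness roughly
    even as the voice moves across the screen.
    """
    angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] to [0, pi/2]
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.stack([left, right], axis=-1)  # (samples, 2) stereo array

# Hypothetical usage: the father's isolated stem leans left, the kids' stem
# leans right, and the two stereo results sum into one dialogue track.
sr = 48000
father = np.random.randn(sr)  # stand-ins for the RX-isolated stems
kids = np.random.randn(sr)
dialogue = constant_power_pan(father, -0.6) + constant_power_pan(kids, 0.6)
```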

What were some of the soundscapes you created for this one?
There are a lot of different layers of sound in this film. Some very subliminal, some very upfront. Some are very quiet, and some are very loud. I’ve already talked about the abstract sound collages, but there are also a lot of background sounds that play a big role in describing the unique location in Utah where the film was shot. Everything takes place in this tiny town, and we really used sound to enhance the environment.

Robert went out to the town where they shot to record some ambiences on just a small recorder, and I used those sounds as inspiration and foundation for my work with the layered ambiences. Robert’s recordings were filled with the sound of cows; I’ve never made a movie with this many cow moos! It helps to enhance the humor that’s also an important part of the film. There are generally many abstract sounds playing in the distance as well. I love it when you can’t really hear clearly, but those textured, tactile sounds make your ear curious. There are also lots of different elements added to the sound of David’s truck.

I used animal roars when he turns on the engine and a lot of weirdly processed rattles and whines pitched so that they fit together or create weird dissonance when he’s driving — really turning the car into a living, breathing predator. It’s a beast. The car is really an additional character in the film, and the sound gives it a distinct and unpredictable personality.

How did you use sound to build suspense?
The suspense in the film is sometimes created by the intense, loud soundscapes but also, quite often, by the absence of sound. There are also a lot of abstract, unnerving, subtle elements in the ambiences throughout the film. Clayne Crawford, who played the lead role of David, was really good at doing ADR for breathing and efforts. Those can sometimes be really tricky to do for actors, but Clayne nailed it. By using his breathing all the way through the film, it creates this feeling of being very close to the main character. This results in a lot of suspense because it feels like we’re really close to a character who, from the very beginning of the film, is doing highly unpredictable things.

What was the process like with Robert? How often was he checking in? Were you working under COVID protocols on this one?
The film was made before COVID, but most of it was still done long-distance. Robert lives in Utah and I live in Copenhagen, and during the sound editing process, we were sending video bounces back and forth, usually a couple times a week and sometimes more often. As I mentioned before, Robert came over during the sound edit for a couple of days so we could experiment together.

I do a lot of international work with directors and people remotely. You can make a lot of great things that way, but being in a room together is still incredibly important, especially when you’re doing something that’s experimental or abstract like we did on this film. Still, the whole setup was very international: Dialogue editor Ryan Cota was in Sacramento; my Foley artist, Heikki Kossi, works from his studio in Finland; and the final mix was done in Los Angeles. Even my Danish sound effects editor and recordist, Mikkel Nielsen, is located almost 50 miles from me in Denmark. I love how you can easily work with anyone you want nowadays, no matter where they’re located. The teamwork on this movie was incredible, even thousands of miles apart.

What’s an example of a note you got from Robert? How did you address it?
Every director has his or her own way of giving notes. Robert rarely gives super-specific notes, but something that he’s focused on is rhythm, and his notes are often about timing — when should an abstract sound element start or when should it end? The sound collages were very rhythmic, and sometimes he wanted specific parts of the collage to hit specific places in the scenes. But this is also because Robert doesn’t necessarily want the subjective sound to cut out when we leave a scene. Sometimes he wants it to linger into the next one or perhaps cut out way before we leave a scene. There are a lot of very dynamic shifts in sound inside the scenes in the film, and a lot of those come from Robert. Sometimes in movies, sound can really be a slave to the image — every little sound has to have a visual reference, and all sounds have to stop when you leave a scene visually. But Robert is often quite the opposite: He really wants to explore what sound can do beyond the image.

Does your process change while working on big-budget films like Dunkirk versus more indie films like this one?
I love doing a lot of different projects, different styles, different genres. That’s also why I do both fiction films and documentaries. I even do sound installations in museums sometimes. Variety is incredibly important to me.

For Dunkirk, all I did was record sound effects of a special boat, which didn’t exist in the US anymore. I’ve known Dunkirk sound designer Richard King for quite a few years now, and he reached out to me because he was looking for sounds of a boat that only existed in Scandinavia. So this was a small job for me, but, of course, I’m very proud to be a tiny part of that phenomenal soundtrack. Richard King’s work is a big inspiration for me. But what I do feel is that, generally, there’s not that big of a difference between doing a big-budget film and a small indie feature. Of course, you have more time and more money when you do a big-budget film, but basically, it’s about telling stories with sound.

I think it’s a very fundamental thing to be part of the projects early, no matter what film it is. I love being part of projects already from the script stage, like I did on The Killing of Two Lovers, and the same goes for documentaries. Early involvement means I have a lot of time to do research, record a lot of unique sound effects and start creative discussions very early with the director, the picture editor and the composer. Great sound design is not something you do at the end of the process. Great sound design is something that’s integrated in the storytelling from the beginning.

Peter Albrechtsen

For a recent Danish fiction film, The Good Traitor, we did some sound collages for the film before it was even shot, and these sounds were written into the script, so the images were made to fit with the sound instead of the other way around. It was a really inspiring process, not just for me and the director, but for everyone — the photographer, the picture editor, the composer and the actors, of course. I think filmmakers should consider much more how to organize the creative process on a film. Every movie is different, and to me it’s kind of weird that the processes are usually so similarly organized.

Can you talk about the tools you use?
I mentioned this product earlier, but we really couldn’t have made the dialogue mix of The Killing of Two Lovers without iZotope RX. It’s amazing what that software can do. For the effects side of things, I used the Cargo Cult delay plugin Slapper quite a lot. Before Slapper, I always found it hard to find the right delay for exterior sounds because those are usually very complicated, complex and unpredictable, but with Slapper there are so many possibilities for playing around with the settings. It totally changed the way I work with ambiences. This works for realistic sounds, but I often also add the weirdest sounds to Slapper, and because the delays work in such naturalistic ways, you can add any kind of crazy sound — an animal scream, a weird noise — to the plugin. I used that a lot for creating tension in subliminal ways in several scenes.
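For context, a multi-tap delay of the kind Albrechtsen describes is conceptually simple: the dry signal is summed with several discrete, decaying copies of itself. The sketch below is a bare-bones illustration of that idea only; it is not Slapper, which adds per-tap panning, filtering and modulation, and the tap times and gains here are invented.

```python
import numpy as np

def multi_tap_delay(signal, sr, taps):
    """Sum a dry mono signal with several discrete delayed copies.

    signal: 1D array, sr: sample rate, taps: list of (seconds, gain) pairs
    describing individual echoes. Real plugins add per-tap filtering,
    panning and modulation; this only sums delayed copies of the input.
    """
    max_delay = max(seconds for seconds, _ in taps)
    out = np.zeros(len(signal) + int(max_delay * sr))
    out[:len(signal)] += signal                        # dry signal
    for seconds, gain in taps:
        offset = int(seconds * sr)
        out[offset:offset + len(signal)] += gain * signal
    return out

# Usage: a few uneven, decaying slaps to rough in an exterior space.
sr = 48000
dry = np.random.randn(sr)  # stand-in for a recorded sound effect
wet = multi_tap_delay(dry, sr, [(0.12, 0.5), (0.29, 0.35), (0.47, 0.2)])
```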

During the sound collages, I often sent ambiences, Foley and dialogue through Slapper to create the dizzying feeling of being inside our main character’s head. I had a lot of these delays and echoes as separate audio files so I could do lots of different pannings for them. The film is in 5.1 and not Dolby Atmos, but we really used the full surround system for panning abstract effects and elements in the ambiences. I love using spatial dynamics in mixing. Often when we talk about dynamics in sound, we talk about quiet and loud sounds, but you can also use panning for dynamics, and there are several places in the film where the sound is in mono and then goes full 5.1 or the other way around. It provides really interesting dynamics. We had a lot of fun with all available tools on The Killing of Two Lovers. Working on that film was a truly extraordinary experience.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Scanline’s VFX Supervisor Talks Epic Godzilla vs. Kong Effects

By Randi Altman

Ever sit around as a kid with friends and argue about who would win a fight between Batman and Superman? Well, I’m pretty sure the creators of Godzilla vs. Kong did. This Adam Wingard-directed action film is the fourth movie in the MonsterVerse franchise, which includes Godzilla: King of the Monsters and Kong: Skull Island.

Bryan Hirota

Scanline VFX was the lead house on the film, with its LA, Vancouver and Montreal offices providing 390 shots over 17 sequences, including some epic battle scenes on the ocean and throughout Hong Kong. The studio was also responsible for the overall design and build of both Kong and Mechagodzilla.

Here, Scanline VFX supervisor Bryan Hirota answers questions about workflow, being on set in Hawaii and Australia, and collaborating with the director and the film’s VFX supervisor, John “DJ” Des Jardin.

How early were you brought onto the film? And what benefit did that have (assuming it was early)?
We were brought onto the project while it was still in preproduction. As Scanline hadn’t done a lot of big-creature work of this type previously, we did a few test shots from the ocean battle sequence as well as a shot of sad Kong in the rain to demonstrate our ability to deliver emotional character performances. Everyone was highly confident in our ability to handle the large-scale destruction and FX simulations required throughout the battle sequences, but how much character work we should take on was still an open question.

We took this opportunity to do a first pass on aging the Skull Island Kong asset we received from Legendary, and our team really put their heart into the tests. Adam Wingard, John “DJ” Des Jardin (the film’s overall VFX supervisor) and the folks at Legendary were so pleased with the results of our tests that not only did they award us a large amount of hero-battle work, but they also asked us to carry on the work we started with Kong and create the hero “old man Kong” asset for the film to share with the other vendors.

Can you share some of the things you learned or how your game plan developed from being brought on so early?
Being on the show early and doing the test shots allowed us to get a good understanding of the size and scope of what would be required in these large battle sequences. It also gave us the time to prepare our pipeline to accommodate the sheer amount of data we would be using and decide how we were going to efficiently approach the layered simulations required.

You were the lead vendor on this. Do you know how many other VFX studios were on the film?
Weta Digital, MPC and Luma Pictures were the other primary vendors on the show.

You supplied almost 400 shots? Can you break down what those sequences were?
Our 390 shots spanned across 17 sequences, with work including the design, build and creation of both Kong and Mechagodzilla; the build and destruction of Hong Kong city; and a huge amount of FX simulation work to pull off the numerous battle sequences between our hero characters.

Main sequences included the initial attack by Godzilla on Pensacola; Kong on the transport ship, including Jia’s visit at night; the fight between Kong and Godzilla, both on top of and underneath the water; Mecha’s training exercise against the Skullcrawler; the plethora of fight sequences in Hong Kong that take place between Kong, Godzilla and Mecha; and the post-fight sequence.

Can you talk about the design of Kong and Mechagodzilla? These are iconic characters. Did you feel any pressure while working on them? Did you get inspiration from past iterations?
Scanline VFX was entirely responsible for the design, build and creation of the hero Kong asset, which was then shared with Weta Digital and MPC. As Godzilla vs. Kong takes place 50 years after Skull Island, we explored aging Kong relative to the number of years that had elapsed. We received concept art from the client as a starting point and, along with real-world references, we developed a final “old man” look for Kong, who was more aged, muscular and bigger while still maintaining continuity from Skull Island in relation to any past wounds and battle scars.

Our creature-build developments included:

    • We developed our body muscle system in Maya to include a real-time, procedural fat-jiggle rig, which was used to increase efficiency on shots where characters weren’t as close to camera.
    • For shots that required fine muscle and tissue details, we used Ziva’s FEM solver. However, we could also mix the results from Ziva with our real-time Maya jiggle rig on a shot-by-shot basis (a simplified illustration of that kind of blend appears after this list).
    • We also developed an auto-simulation process for the muscles, jiggle and hair, which could be run over a series of shots to further increase our efficiency.
    • We rebuilt our eye model for Kong, introducing the conjunctiva on the eyeball within our rigging and lookdev workflow. This extra layer on the surface of the eye gave us proper coloration and more realism. We also accurately replicated the shape of the cornea and how it refracts light and interacts with the iris so the iris appeared correctly. We had full control of the meniscus, so we were able to control the mix of oil and water that sits on the surface of the eye.

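As a simplified illustration of what mixing those two solves might look like, the sketch below blends per-vertex positions from a fast jiggle cache and an FEM cache with a single per-shot weight. The linear per-vertex blend and the array layout are assumptions for illustration only; Scanline’s actual setup has not been published.

```python
import numpy as np

def blend_muscle_caches(jiggle_points, fem_points, weight):
    """Blend two per-vertex simulation results for the same frame.

    jiggle_points / fem_points: (N, 3) arrays of vertex positions from the
    real-time jiggle rig and the FEM solve. weight is chosen per shot:
    0.0 keeps the fast jiggle result, 1.0 keeps the full FEM result.
    """
    return (1.0 - weight) * jiggle_points + weight * fem_points

# Hypothetical usage: a mid-ground shot leans mostly on the cheaper jiggle sim.
n_vertices = 50000
jiggle = np.random.rand(n_vertices, 3)  # stand-ins for cached geometry
fem = np.random.rand(n_vertices, 3)
blended = blend_muscle_caches(jiggle, fem, weight=0.3)
```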
Kong goes through a wide range of emotional states throughout the film, from tender moments with Jia to epic moments of rage. We carried out facial and motion capture for Kong, particularly for those shots that had extreme closeups or an intense emotional state that needed to be portrayed.

We implemented a new facial motion-capture workflow using Faceware and harnessing machine learning, and we referenced FACS primate studies from the University of Portsmouth to achieve certain face shapes.

Kong has an array of different groom states throughout the film, such as dry, wet, oily and burnt, which we needed to track for continuity. Due to his scale, Kong’s individual fur also needed to be huge, so we referenced cornfields blowing in the wind to achieve the right look. Kong had 6,358,381 hairs that were simmed into every Kong shot.

We received an approved design for Mecha from the client; however, the challenge lay in taking this design and translating it into a functional 3D model that was agile. We started by developing the geometry and all the mechanisms for how his joints would function and how to avoid interpenetration. This involved creating gimbaling surfaces and sliding metal panels, as well as bespoke mechanisms.

We also had to research and design all the weaponry and defense systems that Mecha needed to use throughout the film and implement those devices to work within the established Mecha model. We used a system called Manifest that connected all the parts of Mecha together in one master asset, much like a puzzle.

What were some of the challenges of the battle sequences?
The ocean sequences were challenging for FX as they had to account for the reflection and refraction of the light on the ocean, along with the scale of the creatures in the water and all the fur and water interaction.

Due to the scale of the characters, the ships in the sequence were aircraft carriers and big transport ships, which resulted in challenges when simming the boat wakes, splashes and sprays as they moved through the water and became overturned. We made improvements to our height field tech, which we use to calculate rolling waves. This made it faster and more realistic to calculate the surfaces of curling waves.
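For readers unfamiliar with the technique, a height field stores the water surface as a single elevation value per grid cell, which keeps large-scale wave propagation cheap to compute. The sketch below is the textbook finite-difference version of that idea, included purely as an illustration; Scanline’s improvements for curling, overturning waves necessarily go beyond a single-valued height field like this one.

```python
import numpy as np

def step_height_field(h, v, c=1.0, dt=0.1, dx=1.0, damping=0.999):
    """Advance a simple height-field wave simulation by one time step.

    h: 2D array of surface heights, v: 2D array of vertical velocities.
    This is the basic finite-difference wave equation, not Scanline's
    proprietary solver.
    """
    # Discrete Laplacian: how far each cell sits from the average of its
    # four neighbors.
    lap = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
           np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h) / (dx * dx)
    v = (v + (c * c) * lap * dt) * damping  # neighbors accelerate the surface
    h = h + v * dt
    return h, v

# Usage: drop a "splash" into a calm 256 x 256 grid and run a few steps.
h = np.zeros((256, 256))
v = np.zeros_like(h)
h[128, 128] = 1.0
for _ in range(100):
    h, v = step_height_field(h, v)
```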

What about Hong Kong scenes?
The shots in the Hong Kong city sequences are full CG shots. We used helicopter/drone footage along with photography as a reference when building out the city. All buildings were built with full interiors, and building materials (glass, concrete, floors, walls) were tagged so destruction sims could be run accurately. Furniture was also added in for FX where necessary. Creatures were scaled relative to the buildings, as they had to be a lot bigger to clear the height of the skyscrapers. FX worked with our animation and layout teams to ensure the layout was protected when it came time for FX destructions to be run. That is, buildings that were set for destruction were tagged by FX in the layout so that animation could take this into account when driving the creature movements.
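To make the tagging idea concrete, here is a hypothetical sketch of per-piece material metadata of the kind that lets a destruction sim know how each element should break. The field names and values are invented for illustration and are not Scanline’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class BuildingPiece:
    """Hypothetical metadata tag attached to a piece of city geometry.

    Every wall, floor and pane carries a material tag so a destruction
    sim knows how it should break without the FX artist re-inspecting
    the model shot by shot.
    """
    name: str
    material: str        # e.g. "glass", "concrete", "steel"
    density_kg_m3: float
    breaks_into: str     # "shards", "chunks", "debris"
    destructible: bool = True

facade = [
    BuildingPiece("tower_07_curtain_wall", "glass", 2500.0, "shards"),
    BuildingPiece("tower_07_core", "concrete", 2400.0, "chunks"),
    BuildingPiece("tower_07_frame", "steel", 7850.0, "debris"),
]
# A destruction setup can then pull only the pieces it needs to fracture.
to_shatter = [p for p in facade if p.destructible and p.material == "glass"]
```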

What was the most challenging part of the effects for this film?
The most challenging part was the sheer size and scope of the creatures, the destruction they caused and creating believable simulations and secondary interactions relative to their physicality and the world around them.

Is there a scene or character that you are most proud of and why?
I’m quite proud of all the work the team did on the project. If I had to narrow it down to one specific moment, I think the way Kong finishes Mecha is a satisfying and fun way to end that character’s rampage.

How were you sharing scenes for approval?
We would submit shots for review and then typically would have cineSync sessions to discuss client feedback with DJ, which is fairly standard procedure these days.

Was any of this done remotely during COVID?
We started the show pre-COVID but had to switch to working from home. Thankfully, due to the hard work and foresight of the company, the switch from in-office to remote work went remarkably smoothly, given the challenge.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

DP Chat: Snowfall’s Tommy Maddox-Upshaw

By Randi Altman

Executive produced and co-created by Dave Andron (along with John Singleton and Eric Amadio), FX on Hulu’s Snowfall is now streaming its entire fourth season. This gritty series follows the rise of the crack epidemic in the mid-1980s and revolves around several characters living in South Central LA, including a young drug dealer named Franklin Saint (Damson Idris).

Tommy Maddox-Upshaw

Cinematographer Tommy Maddox-Upshaw, ASC, has worked on Seasons 3 and 4 — sharing some of the load with DP Eliot Rockett.

We reached out to Maddox-Upshaw, whose other TV credits include Empire, Huge in France and On My Block, to talk about his workflow and how the show’s look has evolved.

Can you talk about the look that was established in Season 1 and how that’s evolved over the seasons?
Seasons 1 and 2 had very linear story arcs, each defined by an almost monochromatic palette for the main characters. By the end of Season 3, those linear storylines and their three distinct colors had blended together. I followed the arc of the story and kept twisting and blending color along with the storyline.

How would you describe the show’s look?
The show’s look is very aggressive and complex, mirroring the storyline itself. As Franklin’s web is woven, the intricate nuances in the approach are almost like jazz, guided by what visually feels right for the black-and-white of the page. There are occasional aesthetic solos: When a moment is especially emotionally charged and shifting, I may try something a bit more extreme and visually fun.

How does showrunner Dave Andron explain the look he wants?
Dave does a great job explaining the story and look in the writers’ room at the beginning of the season and at the tone meetings.

Can you talk about the challenges of night shoots and lighting for the show?
I approach night work much like my daytime interior work and watch what’s best with the directors’ blocking.

What about the chase sequences?
I follow the directors’ leads and ask Dave who has the bigger moment as an anchor point.

How do you work with the colorist on the show?
Technicolor’s Pankaj Bajpai is amazing, and he helped set up the tone from the beginning. He knows the story, and we start there in collaboration.

How did you go about choosing the right camera and lenses for this project?
I chose the Sony Venice because I felt it’s a great tool and gives the best neutral starting point to manipulate the image. I chose the Zeiss Super Speeds with Eliot Rockett because they felt right.

Any scenes that you are particularly proud of or found most challenging?
In Episode 402, lighting the warehouse for the shootout scene, there were a lot of people to cover in a big space for the setup, and then executing the shootout itself. It was a great time lighting the warehouse at night with Black actors in black wardrobe.

Now more general questions….

How did you become interested in cinematography?
I was exposed to the business through my sister Kyla, and I already had an affinity for films because of people like Spike Lee, John Singleton and Steven Spielberg. My sister got me on the set of a music video in 1996 with legendary director Hype Williams, and I was hooked.

Where do you find inspiration?  
I find inspiration from many things, especially people like Barron Claiborne, Gregory Crewdson, Gordon Parks. And from Instagram, to be honest.

What new technology has changed the way you work, looking back over the past few years?
The Sony Venice has changed how I approach my work tremendously with the dual ISO and its dynamic range and color space.

What are some best practices that you follow on each job?
Asking, “What’s the story arc, or is there a story arc at all?”

Does your process change at all when working on a film versus an episodic or vice versa?
For me, episodics and film are truly the same approach now in this movement of television. Anyone who says otherwise is crazy. Snowfall is a big feature film each season.

Tommy Maddox-Upshaw

Explain your ideal collaboration with the director when setting the look of a project.
If a director and I can take a good amount of time, like months, and develop a language for a film together — referencing anything that can express different aspects of the script and overall feel — that’s when I’m truly happy going into a show or movie because we have an emotional plan that speaks to the script.

What’s your go-to gear (camera, lens, mount/accessories) – things you can’t live without?
Whatever gear is best for the script itself and what is going to reflect the most seamless route to getting the visual language agreed upon. I like to switch things up because all stories are not the same. If anything, with my Odyssey 7Q and the Sony Venice, I know I can take on the challenge at a good starting point.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Virtual Roundtable: Remote Workflows

By Randi Altman

While some in the industry were starting to dabble in aspects of remote workflows prior to March 2020, COVID made this way of working a necessity. As they say, adversity breeds innovation, and that has been true during the past year as technology companies and post houses stepped up in a big way, making sure the work continued uninterrupted.

While there have been many advances and reasons to celebrate — including saved commute time, more time with family and access to global talent — there are some pain points as well, with the majority of those we spoke to citing slow internet speeds, the need for face-to-face creative sessions and missing out on those “watercooler” meetups.

For this virtual roundtable, we reached out to a variety of users to talk about how they made the transition to remote. We also talked to manufacturers, which have either introduced or adjusted their technology to help post studios build robust production pipelines.

Goldcrest Post New York’s Domenic Rom

Have you been working remotely during the past year? 
Yes. When the pandemic hit, we moved quickly to establish remote options for all our services. We kept our people and clients safe.

How have your remote workflows evolved from March of last year until now?
When we implemented remote services last spring, it was a scramble. We patched systems and workflows together. As with everything in this business, we have continuously refined processes, systems, technologies and protocols since then. We worked out the kinks and improved efficiency, security and client experience.

What are some of the tools you’ve been using?
We use standard remote desktop options including Amulet Hotkey, Splashtop and AnyDesk to drive our in-house Blackmagic DaVinci Resolve, Autodesk Flame and Avid Media Composer systems. ClearView Flex, T-VIPS and other video streaming tools let us work in 4K and HDR and facilitate the secure streaming of content directly to mobile devices.

Have artists been dialing into your servers or working locally?
All source material is kept in-house. Our artists access it through a secure connection. Security is paramount to everything we do, so under no circumstances do we allow client media out of house. We keep it safe behind our firewall.

What are some pain points of working this way?  
Remote work is heavily dependent on internet connection speeds, both at the facility and at the access point. We immediately upgraded internet speed and bandwidth at our facility and for most employees. Having clients and artists working from separate locations inevitably adds time to the process and results in some loss of personal connection. There is lag time between thought and execution. Internet connections go down, dogs bark, cats walk in front of cameras, babies cry.

What about the positives?
The most positive thing is that we’ve learned we can do it well. We kept our business going and kept everyone safe and sound in their homes. One of our greatest fears was having COVID spread in the facility, but we refused to let that happen. Remote workflows are now in place, they work well and, I believe, they are here to stay.

What would help make remote workflows easier in the future? 
Remote work has come a long way over the past year. It had to because it was necessary. Even as people return to our facility, some work will remain remote because it is now practical, reliable, safe and convenient. It’s a viable option that many of our clients will choose.

Cinnafilm’s Ernie Sanchez

Which of your products are people using for remote workflows?
People are using our flagship product, PixelStrings. It’s a flexible and scalable SaaS platform that encompasses Cinnafilm’s and our partners’ video and audio processing solutions (including Tachyon and Dark Energy). PixelStrings provides enterprise-grade standards conversions within realtime transcode workflows; high-quality, motion-based frame-rate conversion; deinterlacing; denoising; and texture management.

With technology partners such as Skywalker Sound and Technicolor, PixelStrings also offers tools for audio/video retiming, channel routing, loudness control and fully automated SDR-to-HDR conversion.

Are they cloud-based? IP-based?
PixelStrings is a cloud-native platform that is also available for on-prem IP.

Were these tools available before the pandemic, or did you introduce them during the past year?
We introduced PixelStrings Cloud in 2017 due to the growing demand for cloud-based media conversion. This allowed us to be well-positioned for the shift that occurred due to the pandemic. We were prepared with vetted solutions that we continue to develop as our customers’ needs grow. PixelStrings On-Prem was released last year.

What trends are you seeing? Are remote workflows here to stay?
We’ve noticed there is a growing need for cloud-based transcoding and image processing. In the past, these processes were completed mostly on-premises. Many content owners are aggressively seeking the most efficient ways to get a lot of work done remotely, which is bringing a larger-than-expected variety of customers to our platform.

Our belief is that the hybrid remote/on-prem approach that has been adopted will continue to grow. PixelStrings provides the best of both worlds with an on-premises option that fully mirrors the functionality and interface of PixelStrings Cloud for private, on-premises deployments. Clients can select which option best suits their workflow and their budget.

What do you see as the best parts of remote?
On the high end, remote has enabled some of the best talent anywhere in the world to be available for hire. All that is needed is a solid internet connection to bring the best and brightest minds in software development, editing, visual effects, etc. into your project.

In general, it provides options that reduce employee stress and fatigue, and it returns days of their lives back to them by eliminating commuting. At Cinnafilm, our CTO calculated that he has gained over a year of his life back by working remotely and eliminating his commute.

What about the hardest parts?
In general, there could be inconveniences or concerns. An inconvenience could be upload and download of the source and rendered files. A concern could be the user’s comfort level with using secure, public-cloud infrastructure. But there is no such thing as a fully secured workflow system, whether it’s in the cloud or on-premises.

What would help make remote workflows easier in the future?
Bandwidth to the last mile makes everything easier. Some clients already have fiber-based connectivity, so this isn’t an issue with them.

Where the Buffalo Roam’s Taraneh Golozar

What services does your company provide?
Where the Buffalo Roam (WTBR) is a full-service creative production company with offices in Oakland and Los Angeles. With decades of experience in advertising and commercial production, we offer services from strategy development, business affairs and production to post production and finishing.

Have you been working remotely during the past year?
Our work-from-home mandate went into effect in March 2020. What started out as an option quickly became enforced within days, as all employees were stationed at home. We continue to follow city and state protocols and limit our in-office interactions. Currently, we are assessing how we can benefit from the advantages of remote workflow when things get back to normal while seeking a healthy balance between the two.

How have your remote workflows evolved from March of last year until now?
Technological advances have helped this process. This industry could not have survived if we were still in the world of dial-up internet or if the use of floppy disks was our only method of file exchange. Comparatively, today we can stream a live session from across the country/globe or transfer terabytes of data with just one click. The evolution of technology was heightened even more during the pandemic, and we expanded our communication horizon through platforms such as Google Meet, Zoom, BlueJeans, Slack, FaceTime and others.

Being Human

What are some of the tools you’ve been using?
Slack, Dropbox and Google Sheets were our bread and butter for the longest time until we were introduced to Rangeworks, a project resource and digital asset management platform. Rangeworks became our savior in the work-from-home era. It’s a flexible platform that delivers core functionality and then adds a customizable layer that can be configured to deliver exactly on the needs of all end users.

Have artists been dialing into your servers or working locally?
With full-time artists already working remotely before the pandemic and working seamlessly between the two offices, we were ready to go fully remote. But as most things go, it’s a bit different when you want to work remotely versus being forced to. That said, having the Dropbox server has been our lifeline in terms of the full office needing to work together without much interruption.

Gym Shark

What are some pain points of working this way?
In-person human interaction is essential for any individual, team or company’s health and growth. Working remotely removes the human experience and the sharing of creative ideas. Just being able to spend time together without the need for screens has been a huge challenge.

What about the positives?
One perk is that you get to sleep in longer, but the best thing that has come out of working remotely is that it has allowed us to hire and partner with more talent from around the world, which brings all sorts of fresh perspectives to the work. With the power of the internet, we are connected through space and time no matter the location or time zone.

What would help make remote workflows easier in the future? 
With on-site productions, we get to humanize the process and establish care, but that affinity tends to get removed from a remote workflow. Companionship is a profound connection that builds trust and improves productivity. Perhaps the combination of both remote and on-site productions can be the solution to making the remote workflow easier in the future.

Colorfront’s Aron Jaszberenyi

Which of your products are pros using for remote workflows?
We recently introduced Colorfront Streaming Server, which is a dedicated 1RU box with SDI input. It offers secure sub-second latency and reference-quality streaming from any UHD video source. But the same remote video-streaming technology is available in all Colorfront dailies and mastering systems on Windows, including On-Set Dailies, Express Dailies and Transkoder.

Are they cloud-based? IP-based?
Having access to Colorfront Transkoder on a GPU-enabled workstation in AWS and Microsoft Azure is great, but you also need high-quality, low-latency video monitoring, just like you have with SDI on a local system. Colorfront Streaming also works great when you need to access systems in your facility remotely.

Were these tools available before the pandemic, or did you introduce them during the past year?
Colorfront has been working on remote streaming for several years as part of our cloud initiative. We only released it to our customers about a year ago, coinciding with social distancing and the COVID lockdown. We have seen wide adoption by a number of major studios, OTT providers and post facilities, and it is actively being used in production on both sides of the Atlantic.

What are you seeing in regard to trends? Are remote workflows here to stay?
We are using mature, cutting-edge technologies with industry-wide adoption for remote streaming, such as SRT (Secure Reliable Transport) and NVENC (Nvidia GPU-accelerated HEVC encoding). These building blocks allow Colorfront’s remote streaming solution to offer secure, optimized video streaming performance across unpredictable networks. The demand for remote work will only increase, and after learning that they could work from anywhere, colorists, editors, VFX supervisors, QC operators, producers and clients will continue to want perfect reference-quality remote video at their workstations — from the facility and from the cloud.
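
To make those building blocks a bit more concrete, here is a minimal sketch of an SRT stream fed by an NVENC HEVC encode, driven through ffmpeg from Python. This is a generic illustration only, not Colorfront’s implementation; it assumes an ffmpeg build compiled with libsrt and NVENC support, and the source file, address, port and passphrase are placeholders.

# Generic illustration of the SRT + NVENC building blocks (not Colorfront's
# implementation). Assumes ffmpeg built with libsrt and NVENC support.
import subprocess

cmd = [
    "ffmpeg", "-re", "-i", "source.mov",   # read the placeholder source in real time
    "-c:v", "hevc_nvenc", "-b:v", "20M",   # GPU-accelerated HEVC encode (NVENC)
    "-c:a", "aac", "-b:a", "256k",
    "-f", "mpegts",
    # SRT listener with AES encryption; latency is specified in microseconds
    "srt://0.0.0.0:9000?mode=listener&latency=120000&passphrase=CHANGE_ME_PLEASE",
]
subprocess.run(cmd, check=True)

A remote viewer that knows the passphrase can then connect as an SRT caller and decode the stream.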

What do you see as the best parts of remote?
We see studios moving massive video files into the cloud: 4K lossless Dolby Vision IMFs, terabytes of OCN Raw files and OpenEXR files for visual effects. Well, once there, studios also want to put eyes on them for QC or for review and approval. How do you do that without moving the footage? Colorfront’s customers routinely spin up a Transkoder in AWS, hit play and stream reference-quality HDR video and audio to a dailies or QC operator or VFX supervisor sitting at the facility — or in their home.

What about the hardest parts?
One of the nontrivial aspects is reliability — that is, robust streaming even on nonproprietary networks rife with packet loss and bandwidth fluctuation — that ensures the best possible viewing experience on typical home broadband connectivity. Other aspects are achieving true reference quality, including support for 4K video in 4:4:4 SDR/HDR, Dolby Vision and multi-channel immersive audio, as well as security (AES 256-bit encryption, visible and forensic watermarking).

What would help make remote workflows easier in the future?
Moving everything into the cloud more quickly. Networking and cloud-based production technologies are maturing rapidly. Innovations like SetStream.io and Frame.io’s Camera to Cloud (C2C) will help to further enable production teams to collaborate on complex projects over long distances.

Hayden5’s Melissa Balan

What services does your company provide?
We are a post company working with brands, agencies and entertainment clients. One of our new proprietary offerings is Cloud Cuts, which is our vision of what the future of post can look like — with editors, colorists, VFX artists and sound designers working from anywhere in the world. High-definition, low-latency, browser-based client edits. Accelerated file transfers spanning continents. Secure lifetime backups of all client media. We’re embracing remote post not because we have to, but because it’s better.

Have you been working remotely during the past year?
We have. Our primary headquarters are in New York, and the East Coast team began remote work in March 2020. In early 2021, Hayden5 officially expanded to Los Angeles and now has a team of West Coast production and post staff. While the pandemic forced us to shift to remote workflows, we’re now fully embracing the flexibility that decentralized post provides.

How have your remote workflows evolved from March of last year until now?
When the pandemic hit, it was finally time to leave hard drives behind. Since then, we’ve completely restructured how media arrives in our ecosystem, how it’s shuttled between relevant parties and where it ultimately ends up, pivoting our entire workflow to digital.

What are some of the tools you’ve been using?
We’re now using purpose-built file acceleration tools to transfer media across the fast lane of the internet directly into our on-premises storage, where it is automatically indexed, proxied and backed up. Our new asset management system provides a web portal for us to browse media from anywhere. From there, point-to-point downloads are initiated to share media with editors working from all over. When projects wrap, we can archive to a secure cloud storage with the click of a button.

Toyota’s “Well Good”

Have artists been dialing into your servers or working locally?
We’re currently taking a hybrid approach. Our on-premises storage acts as a repository for all inbound media, which is then shared with our contractors via accelerated file transfer links. Once content is downloaded, contractors work with the media locally, then consolidate projects back to our on-premises storage upon wrap. We have plans to make this process even more seamless and decentralized.

What are some pain points of working this way?
In short, internet speeds. Some contractors have gigabit connections; others don’t. We’re actively solving this problem with new hardware and software solutions so that any vendor, with any connection, can work efficiently. This is the core value-add of our Cloud Cuts system.

What about the positives?
The positives greatly outweigh the pain points in our remote workflows. Not being limited to the physical world frees us up to work with a much larger network of talented professionals. We have freelancers in most major cities around the world. I’m based in LA, but my post producers are on both coasts, and our clients are all over the world.

The ability to transfer media digitally and smoothly from production to post and to conduct our offline and online workflows from anywhere in the world — even offering low-latency live-edit sessions in a virtual “room” — allows us to offer a lot of flexibility to our clients and hire the best people to do the best work. Plus, no time commuting between work and home means more time with family and friends and helps contribute to a cleaner environment by cutting down on emissions.

What would help make remote workflows easier in the future? 
Better, faster, more reliable home internet access across the US and the world; Mac hosting for PCoIP products; bare metal Macs in the cloud; simpler/more predictable pricing for cloud-based active-tier storages.

Dell Technologies’ Thomas Burns

Which of your products are people using for remote workflows?
Dell Technologies’ presence in remote creative workflows spans core data center infrastructure to technology at the edge and end-user devices.

On the data center side, our customers are using Dell EMC VxRail and VMware Horizon for VDI, Dell EMC PowerEdge servers for compute, Dell EMC rack workstations and Dell EMC PowerScale and ECS for storage. On the end-user side, our customers are using a wide range of Dell Precision workstations and displays.  Some of our customers are also using a range of Dell Wyse PCoIP-capable thin clients.

Are they cloud-based? IP-based?
While many of our products are deployed on-premises, we offer multi-cloud and cloud-native solutions.

Were these tools available before the pandemic, or did you introduce them during the past year?
While there were some new products announced during the past year, like Dell EMC PowerScale F200 and F900 nodes, the majority of the products being brought in to solve the challenges of remote workflows were already on the market prior to the pandemic and were already in use in workflows that span the globe.

In fact, we were able to provide guidance based on these experiences to help get remote workflows up and running quickly. One example of this is the work we’ve done with Like a Photon Creative, Australia’s only female-owned and operated animation studio. Like a Photon Creative was able to embrace remote workflows and realize a 120% productivity boost.

What are you seeing in regard to trends? Are remote workflows here to stay?
I believe that remote workflows are here to stay, at least in some capacity. But I think it will resemble a more hybrid model that offers flexibility to work from wherever while still providing the opportunity for face-to-face collaboration when necessary and possible.

Remote workflows open the possibility to find and retain talent that used to have geographic limitations, both on the local and global scale. For example, remote workflows make it possible to live in an area with a lower cost of living without the need to commute daily to a centralized office that might be in a higher-cost area. On the global scale, it allows for access to a wider pool of talent and 24/7 creative workflows.

What do you see as the best parts of remote?
Aside from what I’ve already covered, the best part is providing the flexibility for creatives to work however they find themselves being their most creative. Whether they are more creative at home or in an office, the result will ultimately be a higher-quality product.

What about the hardest parts?
Workflow complexity and ensuring that valuable IP remains secure. On top of the complexity of data moving between working groups using tools both on-prem and in the cloud, remote workflows introduce a whole new layer of workflow management complexity and a larger threat landscape that may span devices outside of traditional firewalls.

What would help make remote workflows easier in the future?
Working with a partner to identify the opportunities and challenges of remote workflows and developing a strategy that accounts for both data management and security challenges upfront.

From a data management perspective, it’s really about shifting from file management to asset management. This requires developing a deep understanding of data flow and dependencies between working groups and using tools that consolidate and automate this management — for example, the Dell EMC DataIQ plugin for the Autodesk Shotgun API.

While security might not be top of mind when developing a strategy, it’s vital to get ahead of threats with a comprehensive content security strategy. Failure to do so might lead to revenue or reputational loss due to leaks of sensitive data.

One way to get ahead of this is by employing good policy and governance along with using prevalidated solutions and architectures that meet industry standards, e.g. the Trusted Partner Network.

DigitalFilm Tree’s Nancy Jundi

What services does your company provide?
Dailies and GeoDailes: Drop off physical drives or go true camera-to-cloud from anywhere in the world.

Cinecode: Story visualization.

SafetyVis: On-set safety, living storyboards, lidar and import of your set.

Color, VFX and online: Available with remote review.

GeoPost data management: All cloud, all secure.

HDRexpress: Helping older show libraries step into the world of HDR streaming using a Dolby-certified workflow on SDR source material.

Archival: While we’ve been early adopters of camera-to-cloud, we’re also emboldened by the future of LTOs in tandem with QR code technology.

Have you been working remotely during the past year? 
DigitalFilm Tree has remained fully operational, 100% staffed and open to serve remote needs while still available to receive physical media. That said, 85% of our staff did immediately pivot to working from home because we have the network security and infrastructure in place to secure unaired media in the home/on consumer Wi-Fi.

How have your remote workflows evolved from March of last year until now?
We’ve deployed hundreds of network security routers — approaching well over a thousand — to protect remote editors, colorists, VFX artists and more. The threat to those working on unaired media in the home was and remains real. Working from home had always been a luxury. Overnight it became a necessity, and hackers were incredibly quick to capitalize. Our router inventory skyrocketed overnight, and our network security team was awake for a good, long stretch ramping up in those early weeks.

After security, it was really a matter of helping to get others back to work. We really just had to increase inventory to keep up with demand for remote review stations and on-set/camera-to-cloud GeoDailes pods, and then we hired more previz artists to help productions visualize cast and crew safety protocols.

What are some of the tools you’ve been using?
We’re pretty agnostic across the board, but for previz we use Unity and Unreal. Dailies work is mostly Blackmagic Resolve, but we do have a couple of FilmLight Baselight shows. Editorial is Resolve and Avid Media Composer. Color is Resolve. VFX is Foundry Nuke and Adobe After Effects. All departments have their bevy of organizational tools, like ftrack and Shotgun, Trello and Slack. Signiant and Aspera are used for file transfers. The list is endless, never mind what’s proprietary.

NCIS LA

Have artists been dialing into your servers or working locally?
Both. We’ve reached a point in our safety threshold where staff can choose whether to work from home or the office. As for clients, we’re still servicing all needs remotely. Only now have we begun discussing on-prem offerings, but only as a means to prepare. There has yet to be a demand from clients to get back in the bays, which might suggest we’ve set them up a little too well at home.

What are some pain points of working this way? 
Home internet. We’ve spent a large chunk of the last year in conversation with just about every internet provider you can imagine to either offer better solutions, beta their idea of a better solution, deploy/test/architect edge computing for last-mile internet options — you name it. We’ll keep trying because there’s no reason some portions of Los Angeles should still be seeing up/down speeds that are reminiscent of dial-up.

What about the positives?
Endless. Quality of life can skyrocket in these conditions. When it’s an option to work from home or the office, you can plan your life a bit more organically. Again, our building is a tool to serve a greater whole; it’s not an office in which we seat people for eight hours a day. We’ve got a mixed bag of folks — some prefer the office, some prefer the home and some prefer a hybrid. There’s also a lot to be said for productivity. Proximity doesn’t always equal fast answers. The dilly-dally here and there or the constant interruptions prevent focused work. The office days can eat up a lot of time for some folks. For others, the home is less productive, or they are protective of their time, so flexibility has really only increased output and efficiency and has clarified how we communicate with one another.

What would help make remote workflows easier in the future?
Better home internet.

Scale Logic’s Bob Herzan

How are folks using your products for remote workflows?
We see editors using VDI, proxy editing, sync and cloud technologies and even sneaker-netting media drives around. In most cases, our customers must adapt to several different technologies to adjust their workflow and create the level of efficiencies they need to get the job done.

Our Remote Access Portal (RAP) is storage- and MAM-agnostic, providing editors seamless access to content that exists in on-premises storage — and enabling them to access that content and sync to a local storage device. From there, they can work locally and then easily sync changes back to the project file.

Are they cloud-based? IP-based?
Scale Logic supports both cloud-based and IP-based technologies to satisfy various customer requirements. Proxy editing that takes advantage of current media asset management technologies can be an excellent choice for private and public cloud offerings.

However, if you do not have a MAM solution in place, our RAP offering will allow a push/pull workflow between your on-premises storage and your remote editors — virtually turning your current on-premises storage into your own private cloud.

In addition to this, adding a VDI configuration to support larger file-based workflows enables remote workers to access any office editing systems that are connected to the on-premises shared storage. This allows editors to work seamlessly on projects while still collaborating with other colleagues who are syncing their changes via RAP.
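
As a rough sketch of the push/pull pattern described above (our own generic illustration, not Scale Logic’s RAP), a remote editor’s sync cycle might look like the following; rsync over SSH, the hostname, paths and project-file extension are all placeholder assumptions.

# Generic pull/work/push sketch (not Scale Logic's RAP). Assumes rsync and SSH
# access to the facility storage; hostname, paths and extensions are placeholders.
import subprocess

FACILITY = "editor@facility.example.com:/projects/show_101/"
LOCAL = "/Volumes/LocalMedia/show_101/"

def pull():
    # Sync source media and project files down to the editor's local drive.
    subprocess.run(["rsync", "-az", "--partial", FACILITY, LOCAL], check=True)

def push():
    # Sync only changed project files back up; heavy source media stays put.
    subprocess.run(["rsync", "-az", "--include=*/", "--include=*.prproj",
                    "--exclude=*", LOCAL, FACILITY], check=True)

pull()
# ... edit locally against the synced media ...
push()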

Were they available before the pandemic, or did you introduce during the past year?
MAM-based proxy editing has been around for a while, with many updates taking place over the last year to improve on this level of workflow. Our RAP solution and VDI support was developed around a larger need to bring a remote workflow to those without a MAM in place.

What are you seeing trends-wise? Are remote workflows here to stay?
Without a doubt, remote workflows are here to stay. We have seen customers change their environments to a more hybrid model by downsizing their facilities or even getting rid of them altogether in order to go fully remote.

Our clients see great benefits in being able to work and collaborate with freelance editors of their choice. As our customers go from necessity to choice, they will learn better ways to support their remote workflows. Meanwhile, we as innovators will listen, learn and support their requirements through technology and managed services.

What do you see as the best parts of remote?
First, the flexibility that remote allows creative artists. Being able to sync to a local device, even without an internet connection, allows these professionals to work from literally anywhere. Second, compared to working in a cloud-based environment, there are no additional operating costs to work remotely. Rather, you’re using your own storage to create your own private cloud, so there are no fees to sync your data.

What about the hardest parts?
One big challenge is that remote workers often need to change the way they think and communicate. Moving to remote editing requires good communication and solid SOPs for a successful transition. This is especially true when working in a collaborative editing environment; you want to ensure that you’re not overriding or interrupting the work that someone else just did. It’s easy enough to check with a colleague when you’re working in the same physical office, but not so much when you’re working from separate locations. So it’s critical that your team understands and consistently adheres to a set of operating procedures.

Another challenge is the variance of internet providers and the level of performance an editor may have in his or her area. There is a time cost for dealing with large media files. If you have a slower internet connection at home, it could take days to properly sync data. So having a good internet connection is imperative.

Finally, security is another huge concern for those making the move to remote work; they want to ensure that their intellectual property is always protected. The good news is that, with RAP, you’re sending your media files securely over HTTPS from the on-premises storage to your personal drive.

What would help make remote workflows easier in the future?
One of the biggest challenges we hear regarding remote workflows is the inconsistency of internet speeds. Download and upload speeds will continue to play a huge role in remote workflows — specifically, how quickly editors are able to sync and access huge files so they can do their jobs effectively and efficiently. As ISPs roll out faster internet options, I believe we will begin seeing a major boon in the marketplace for remote workers.

Streambox’s Bob Hildeman

Which of your products are people using for remote workflows?
Chroma 4K and Chroma X with Cloud Sessions workflows, using iOS, OSX and Windows media players, and Chroma and Halo decoders.

Are they cloud-based? IP-based?
Yes, IP-based with Cloud Sessions workflows. IP-based workflows cover all product support and internet protocol transmission for video streaming. Cloud-based workflows use AWS, Azure and others for data processing, video/audio routing, archiving and management.

Were these tools available before the pandemic, or did you introduce during the past year?
Yes, we had the solutions I mentioned prior to the pandemic. However, we have updated our media player software and Spectra, our virtual encoder.

What are you seeing trends-wise? Are remote workflows here to stay?
We are continuing to see demand for remote workflows that allow our customers to make faster and better decisions for grading, editorial, VFX and sound production. Content creation is now global due to all the streaming companies.

What do you see as the best parts of remote?
At the end of the day, customers are saving time and money by getting faster feedback for changes, approvals and team collaboration on ideas across a global footprint.

What are the hardest parts?
We continue to work to make solutions easier to use at lower costs and to drive higher-quality new software like Spectra, our virtual encoder product that plugs into tools from Avid, Adobe, Resolve and other video editing applications. This is a new type of workflow for editing directly in the cloud or in on-prem edit bays.

What would help make remote workflows easier in the future?
Advancements in technology would help a great deal. This could be new iPad Pros with high-quality, color-accurate Retina displays; faster MacBook Pros with M1 chips (which produce higher-quality 10- and 12-bit videos); newer, lower-cost and color-accurate monitors; and LG OLED TVs that support Dolby Vision and Atmos. Our software takes advantage of all of these new technologies to deliver high-quality and color-accurate videos at lower costs.

Syn’s Nick Wood

What services does your company provide?
Syn is a music agency providing music and sound to the film, TV, gaming and advertising industries. We offer original music, sound design, sonic branding, catalogue music and more. We also offer ADR/VO recording from our studio in Tokyo, working with both local and international productions for translation, localization, production and post.

Have you been working remotely during the past year?
Music making can be — to some extent — a fully remote process. However, we’ve continued to do socially distanced sessions from our studio in Tokyo and tried to record live musicians and live artists wherever it was safe to do so. From recording vocalists such as Maxayn Lewis in Los Angeles, to working with voice artist Rudi Rok in Helsinki, all the way to recording 60-piece orchestras in Macedonia, we’ve certainly fully committed to working remotely. Thankfully, with modern technology, great work can also be made at home, and we’ve tried to encourage that whenever possible.

How have your remote workflows evolved from March of last year until now?
Given Syn’s global setup (with teams in Japan, China, Southeast Asia, the UK and the US), I think we were fortunate to be well-prepared to work remotely. As an example, we have a daily production meeting, which we started well before the pandemic, and this enables us to work closely as a team despite the sometimes large geographical distances. We’ve tried to keep everyone engaged and positive while we’re not in the same room — from cocktail-making evenings on Zoom to weekly show-and-tell workshops.

What are some of the tools you’ve been using?
From an audio point of view, we’ve been using Source-Connect to offer realtime, ISDN-quality connections for sessions we’re running from the studio in Tokyo, giving directors, producers and clients the opportunity to “join” the sessions remotely and give their feedback with ultra-low latency.

What are some pain points of working this way? 
There’s definitely something to be said for realtime, face-to-face contact. After all, that’s how humans have communicated for centuries. I think one of the difficulties of a mostly remote setup is keeping a team positively engaged and communicating in an effective way. While tools such as Teams, Basecamp, Slack, etc. are all good facilitators of communication, there’s nothing like a face-to-face conversation, with all the nuances and subtleties that involves.

What about the positives?
One of our differentiators is our diversity and international reach. I think transitioning relatively easily to a remote working format has only made our global, 24/7 setup smoother and more efficient. It’s reinforced the idea that great music can be made across borders, time zones and language barriers. I think that sometimes it takes a challenging situation — seemingly with lots of obstacles and barriers — for you to think outside the box and create solutions.

What would help make remote workflows easier in the future?
Increased integration between platforms, perhaps. It’s a great thing that there are now so many platforms and tools to support, automate and encourage remote working, but sometimes juggling so many different platforms can be confusing and complicated. So increased integration between tools — and perhaps increased automation (where appropriate) — would make remote work easier in the future.

Source Elements’ Rebekah Wilson

Which of your products are people using for remote workflows?
Our entire solution toolbox was built from day one in 2005 for remote workflows. Source-Connect, for example, is an industry standard for remote recording. Source-Nexus was born from the need to connect remote applications together “within the box,” removing the need for external hardware to make internal connections with our DAW.

Are your tools cloud-based? IP-based?
While we support certain cloud features, such as file transfer, we really focus on connecting remote people in real time to “be apart but create together.”

Your tools were developed pre-pandemic, but were some of them updated due to the need for remote tools?
Our tools have been developed and sharpened over the last 15 years. Since the pandemic we have, of course, invested a significant amount of effort in delivering and improving the new hybrid workflows that we all feel will be here to stay.

Our updated Source-Live Pro: Low Latency service has incredibly low latency (often eight frames or so from point to point) with HD. That’s 1920×1080 pictures at up to 60fps with in-sync, broadcast-quality audio, which allows for realtime remote collaboration with multiple remote parties. This is a specialized, purpose-built collaboration tool for AV professionals, not conferencing software.

What are you seeing trends-wise? Are remote workflows here to stay?
We know that remote workflows are here to stay, and we can say that through the experienced lens of over 15 years of providing remote workflow solutions. The paradigm has changed, and we have learned to work better and to work smarter. The world is now our global creative village.

For us, it’s not really new, but we have expanded further in new directions, reaching new possibilities and new potentials.

What do you see as the best parts of remote?
I love the fact that everyone I talk to is not only creative and openly collaborative, but comfortable and enjoying being nearer their families; we aren’t wasting time commuting or missing planes. We have so much more time to work together productively and meaningfully in a less stressful way. We are working smarter, not harder. That said, this can lead us to spend more time on our computers than we might otherwise, so we must all take appropriate breaks!

What about the hardest parts?
We all love those “water cooler moments” or going out for lunch during work. That’s something we can’t replace, but we know that by creating together as we work remotely, we strengthen our sense of connection and community. When we do eventually get to meet, and when we can travel again, we’ll immediately become firm friends, as we have already created a strong element of trust and collaboration.

What would help make remote workflows easier in the future?
There are a few things on our wish list:

First, it would be that the entire world has access to fast internet. There are vast parts of the world where this just isn’t possible. However, with 5G and projects such as Elon Musk’s Starlink on the horizon, we know this will change quickly. We are doing a series of winter concerts with a ski resort in New Zealand, far from any city, thanks to the Starlink service.

Secondly, the very act of working and collaborating remotely has now become mainstream and part of the new world culture. This makes our job a lot easier, as everyone now has the appetite to try working smarter. We look forward to making that a reality for everyone.

11 Dollar Bill’s Del Feltz

What services does your company provide?
11 Dollar Bill is a full-service post studio specializing in creative design, animation, editorial and finishing.

Have you been working remotely during the past year? 
Yes, and we were very fortunate to be able to adapt to the new normal rather quickly. With offices in LA, Chicago and Boulder, we were already set up for remote work and were already sharing resources across all offices. We needed to make some adjustments but feel we were one step ahead of the game going into it.

How have your remote workflows evolved from March of last year until now? 
I think our biggest evolution has been with our clients. We work in a very hands-on industry, and most clients prefer to be in the room with our team when working on projects. But they have seen that remote work sessions are possible and have been super-supportive along the way, trusting us to do more unsupervised sessions, which makes the live sessions more productive.

What are some of the tools you’ve been using?
Over the past year, we have introduced several options for working remotely. We have implemented technologies including Zoom, Slack and Streambox that facilitate long-distance collaboration. Our goal was to incorporate technologies that our clients were already using and were familiar with so that there wasn’t a big technology learning curve for them.

These solutions have worked exceedingly well and proven to be not simply stop-gap measures, but efficient ways to get things done. We will continue to offer remote options even in the post-pandemic world and are working to further extend and improve services to benefit clients who value that convenience.

Have artists been dialing into your servers or working locally?
It has been a combination of both.

What are some pain points of working this way? 
I think the biggest pain point was trying to figure out how to maintain our company’s culture while being so secluded and somewhat disconnected. We began by having status meetings twice per day over Slack, which allowed us to touch base on the work as well as on a personal level. We also had some fun virtual happy hours and officewide events that weren’t mandatory, but almost everyone attended in order to get and maintain that connection we were all missing.

What about the positives?
Although the pandemic brought unpredictable challenges, we have weathered the storm without serious setbacks. We remained productive and were able not only to avoid staff reductions but also to increase staff over the past year.

What would help make remote workflows easier in the future?
On the technical side, remote workflows are limited by the internet bandwidth of the client. However, when time is an issue or when our clients prefer it, we will continue to offer remote services. Nothing beats having our clients experience the food, culture and comfortable, collaborative environments that our River North and downtown Boulder offices offer.

AFX Creative’s Toby Gallo

What services does your company provide?
We are a design-driven, multidisciplinary content creation studio specializing in VFX, 3D animation, motion design and color grading. Our capabilities include live interactive remote color-grading sessions, remote Autodesk Flame sessions, a full-service CG department and more.

Have you been working remotely during the past year?
Yes, immediately after the COVID-19 restrictions began, we started to rely heavily on remote collaborations. Thankfully, we had the virtual resources ready for a seamless transition. Because remote operations have become such a mainstay, I don’t foresee this paradigm shifting anytime soon.

How have your remote workflows evolved from March of last year until now?
Although AFX Creative had been set up to handle remote workflows and collaboration prior to the pandemic, we made small but key changes to meet an increased demand for our talent and work. Additional provisions were also made to our connectivity and to the hardware required to support more endpoints.

What are some of the tools you’ve been using?
Streambox, UltraGrid, HP Remote Graphics Software, Frame.io, NIM, Wiredrive and Sohonet FileRunner.

Have artists been dialing into your servers or working locally?
To ensure security and minimize the potential for client data exposure, we mandated the use of zero and thin clients for all team members working on a creative level. This is carried out through a variety of security measures put in place.

For production management, we embraced secure L2TP VPN connections. This allows remote workers to work within the corporate network as if they’re physically at the workspace. Any access required to tools like NIM is achieved in a similar secure fashion.

What are some pain points of working this way?
Fortunately, because we had the existing infrastructure, it wasn’t as painful as it could have been. The most challenging part was acquiring the additional equipment needed to scale up and handle the new and increased volume of requests. Once the additional equipment was acquired, it took us less than a week to implement and deploy.

What about the positives?
Like other industries, we have found an increase in overall productivity. Our creative talent has enjoyed the freedom, flexibility and convenience of working from home.

What would help make remote workflows easier in the future?
Remote workflows are already working very well. Like anything, there’s room for improvement, such as the ways in which everyone collaborates. As the industry continues to evolve this remote model, virtual platforms and tools will become more powerful and seamless. The increase in broadband speeds and 5G cellular technology will make it easier for partnerships around the globe.

Take the creation and adoption of LED walls and in-camera VFX for example, which forever changed modern television and filmmaking. I believe these workflows will only further our effort and ability to bring to life whatever we can imagine.

Alkemy X’s Bilali Mack

What services does your company provide?
Alkemy X is an independent media company working in entertainment, technology and advertising. We specialize in VFX, design, animation, live-action production, original development and branded content.

Have you been working remotely during the past year?
Yes, the company has been working remotely since March 2020. Our technical team worked around the clock to adapt our robust workflow to be entirely virtual and seamless, which of course led to unique challenges when meeting deadlines with the delivery of cinema-quality files.

How have your remote workflows evolved from March of last year until now?
There is certainly better integration and development of workflow and communication. Now that we have overcome the initial technical barriers, we are able to focus our time and efforts on continuing to elevate the work rather than battling slow internet service or unexpected security permissions. We’ve also started to regain some of the teaching opportunities for our artists that were initially lost to the remote workflow.

What are some of the tools you’ve been using?
Currently we are using Shotgun, Slack, Zoom, Foundry Nuke, SideFX Houdini and Autodesk Flame.

Have artists been dialing into your servers or working locally?
Artists have been logging in remotely to our secure, on-site systems in NYC.

What are pain points?
Communication, culture and career development have probably been the most difficult challenges from the remote work standpoint. While I do make it a point to maintain a virtual “drop-in” policy with our team, it is difficult to replace the happenstance interactions and collaborations that occur over a shared lunch or encounter in the kitchen.

What about the positives?
With great challenges come great opportunities. In many ways we have become very efficient with time. Diversified talent has also been a great new addition to the way we work, given that we are no longer tied to geographical requirements when it comes to staffing non-tax-incentive-based projects.

What would help make remote workflows easier in the future?
If we could do partial remote and partial in-person, that would be great. While there are major benefits when it comes to working remotely, there is certainly a creative shorthand that is dissolved without the face-to-face interaction on a regular basis.

Zoic Studios’ Saker Klippsten

What services does your company provide? 
Zoic Studios specializes in visual effects for episodics, film and commercials. We also have two sister companies: Zoic Labs, which specializes in software development and advanced visualization, and Zoic Pictures, our original content development division.

Have you been working remotely during the past year?
Yes, Zoic Studios reacted quickly and transitioned all three of our offices to a fully remote workflow virtually overnight in March of 2020, and we have remained remote since. Our partially remote pipeline made this transition easier than most, but it was through the tireless efforts of our entire team that we were able to maintain operations seamlessly throughout.

How have your remote workflows evolved from March of last year until now?
At Zoic, we are using Teradici PC over IP; however, since going fully remote, our process has evolved to become more efficient in several ways. We have narrowed down meetings to more focused groups rather than larger full-team groups, which has enhanced overall productivity. We are also now able to hold larger meetings that bring together the entire company from all of our locations across North America. Employees also now have much more access to senior management in other offices and overall can be much more communicative with each other.

Stargirl

With the emphasis on Microsoft Teams within our remote workflow, we can be more unified as a company. Having the ability to be connected through this platform allows us to be less reliant on email, where the communication can sometimes break down and information can get lost. While we always remained connected with our offices, a fully remote workflow has allowed us to spread work more easily across different offices and artists.

What are some of the tools you’ve been using?
Microsoft Teams has been a game-changer for Zoic Studios. It lets us hold full-company gatherings that help us continue building our culture, such as lunch-and-learns or companywide town halls. The democratization of collaboration with the platform has brought the company together rather than siloing us into our separate offices. With MS Stream for desktop, employees can also record meetings that they are not able to attend and follow up with feedback later. Teams has also proved extremely helpful for training and deploying new workflows companywide.

Judas and the Black Messiah

Another key tool that we have adopted within the last year is Epic’s Unreal Engine for its realtime filmmaking capabilities. We have been using Unreal for visualization, virtual art, animation and virtual production. We have already leveraged it across a wide range of top series, including Superman & Lois and Stargirl for Warner Bros., Apple TV+’s See, Amazon’s The Boys and the upcoming Netflix series Sweet Tooth.

Have artists been dialing into your servers or working locally?
We use Teradici PC over IP to remote into our workstations located at our private data centers.

What are some pain points of working this way?
While overall we have found tremendous efficiencies with this remote workflow, we unfortunately don’t have full control end to end of the internet connection, which can be a major challenge when working with such large file sizes.

In the early days of the pandemic and our fully remote workflow, employees struggled with the capacity available from their local internet providers. This meant that in the early months, a great deal of time was spent upgrading internet capacity at home offices.

Maintaining social connectivity has also certainly been a challenge with remote workflow. Seeing people on a screen is nice, but it lacks the spontaneity of the happenstance meetings that used to occur in office kitchens and common areas.

What about the positives?
Zoic has made it a point to heavily encourage a healthy work-life balance. It can be easy to get consumed working long days when your workstation is at home, but that is detrimental for employees and can lead to burnout.

Also, with the travel and commuting time being removed, employees have enjoyed more personal time for family, health and hobbies. There are also, of course, environmental benefits of being at home, which is something that we’ve experienced collectively across the globe during the pandemic.

What would help make remote workflows easier in the future?
Solve the speed of light problem! In all seriousness, latency is the number one problem for artists working at home. It would also be a major game-changer to have less expensive, higher-quality software available to support realtime video playback, an improvement we’d love to see in Microsoft Teams.

MTI Film’s Larry Chernoff

What services does your company provide?
General post production services for episodic one-hour dramas, film restoration, remastering and Avid remote editing for television series.

If you’ve been working remotely during the past year, what are some of the tools you’ve been using?
We have been focused on several technologies for different purposes.

For Avid editorial, we have set up a data center that uses PCs for Avids and Parsec software for connectivity from the remote location to the data center.

For online editing, we use Parsec to connect our editors to our facility and a simple VLC player to connect clients to the streams we send them via Teradek “Cube” encoders.

For color correction, we use Teradek Prism to encode directly from the color correctors, and on the receiving side, clients view on their iPads using Teradek’s Core viewer. The Core viewer has ASC color correction that allows our colorists to profile and match the iPad to our Sony X300 monitors.
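
For context, the ASC color correction mentioned here follows the standard ASC CDL formula: a per-channel slope/offset/power adjustment followed by a saturation step. The sketch below is a general illustration of that math in Python, not Teradek’s or MTI’s code; the sample values are placeholders.

# General illustration of the ASC CDL math (slope/offset/power, then saturation).
# Not Teradek- or MTI-specific code; values are nominally in the 0..1 range.
def asc_cdl(rgb, slope, offset, power, saturation):
    """Apply an ASC CDL grade to a single RGB pixel."""
    def clamp(v):
        return min(max(v, 0.0), 1.0)
    # Per-channel slope, offset and power
    graded = [clamp(c * s + o) ** p for c, s, o, p in zip(rgb, slope, offset, power)]
    # Saturation around Rec.709 luma, per the ASC CDL specification
    luma = 0.2126 * graded[0] + 0.7152 * graded[1] + 0.0722 * graded[2]
    return [clamp(luma + saturation * (g - luma)) for g in graded]

# Example: a slight contrast lift with mild desaturation on 18% gray
print(asc_cdl([0.18, 0.18, 0.18], slope=[1.1] * 3, offset=[0.0] * 3,
              power=[0.95] * 3, saturation=0.9))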

For final QC, there are several methods including Cube, Prism, Clearview and T-VIPS.

Have artists been dialing into your servers or working locally? If so, what are some of the pain points of working this way?
Occasional loss of connectivity or playback quality due to poor bandwidth on the client side is the only pain point.

What about the positives?
During the pandemic, it has allowed work to continue with little interruption. It has proven invaluable to producers who have a lot to do and don’t want to spend their time traveling.

What would help make remote workflows easier in the future?
If everyone had at least 50 Mb/s of solid download connectivity to the net. The more the merrier.
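That 50 Mb/s figure is easy to put in context with a quick Python sketch. The transfer-time math is straightforward; the file sizes below are hypothetical examples, not figures from MTI Film.

```python
# Rough transfer-time estimates at a given download speed.
# The 50 Mb/s figure comes from the interview; the file sizes below
# are hypothetical examples chosen only to illustrate the math.

def transfer_hours(size_gb: float, speed_mbps: float) -> float:
    """Hours to move size_gb gigabytes at speed_mbps megabits per second."""
    size_megabits = size_gb * 8 * 1000        # 1 GB ~ 8,000 megabits (decimal units)
    return size_megabits / speed_mbps / 3600  # seconds -> hours

if __name__ == "__main__":
    speed = 50.0  # Mb/s, the minimum suggested in the interview
    for label, size_gb in [("ProRes proxy reel", 25),
                           ("Full-res episode master", 500),
                           ("Camera-original dailies day", 2000)]:
        print(f"{label:28s} {size_gb:6.0f} GB -> {transfer_hours(size_gb, speed):6.1f} h")
```

Even at a solid 50 Mb/s, camera-original material measured in terabytes takes days to move, which is why the workflows described above stream pixels to the client (Parsec, Teradek, VLC) rather than push the files themselves.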

Ataboy’s Vikkal Parikh

What services does your company provide?
Ataboy is a design-driven content creation studio that creates visual content for brands and agencies. Our team uses design, animation, live action and VR/AR to connect brands to their audiences.

Have you been working remotely during the past year?
Yes, aside from shoots, which have been a combination of remote and in-person.

How have your remote workflows evolved from March of last year until now?
They’ve evolved from a makeshift solution to a robust production pipeline. We’ve experimented with various technologies over the past year and created solutions that work well for our clients and our teams. From trying various cloud storage options to installing fully remote desktops, we’ve fine-tuned these services to fit our needs. Our Clean Classics collaboration with Adidas and agency Annex88 was created entirely in this remote setup.

What are some of the tools you’ve been using?
We use a combination of Dropbox’s enterprise cloud solution and on-site storage, namely Synology. For desktops, we’ve been relying on virtual workstations in Microsoft Azure, accessed through Teradici. We handle remote rendering with Fox Renderfarm. For communication, we’ve primarily been using Slack, plus a combination of Google Meet and Zoom for “face-to-face” meetings.

Have artists been dialing into your servers or working locally?
We’ve taken the hybrid local-and-cloud approach using Dropbox. All the artists have been working off the Dropbox folder that syncs to the cloud and the Synology at the studio.

What are some pain points?
The biggest challenge is missing the in-person conversations and collective brainstorming. We’ve approximated it using remote tools, but there will always be something missing: being able to just walk over to an artist and have a conversation versus the chat and video-call fatigue, which is real. We are not wired to be in a bubble.

What about the positives?
Some of the perks are saving on commute time and flexibility in work hours. Some creatives work better at night, and some like to knock out the most challenging things first thing in the morning. This offers the opportunity to work at your creative peak, with support from the infrastructure we’ve built.

What would help make remote workflows easier in the future?
I think having the flexibility to work remotely is a great option, and it’s something we should hang onto going forward. And, yes, it’s true that being able to collaborate in person makes things a lot easier, but it’s still nice to have the tools and the knowledge to do it remotely and do it well.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

 

HPA Tech Retreat 2021: At a Glance

By Randi Altman

The Hollywood Professional Association held its 2021 HPA Tech Retreat virtually over a two-week period last month. While those of us who usually attend missed being in beautiful Palm Springs and picking fresh grapefruits from the trees (Wait, what? Who would do that?), we found that the HPA’s virtual platform had a lot to offer. In addition to live and prerecorded panels, there were the show’s beloved Breakfast Roundtables, a show floor, private messaging that allowed attendees to have the kinds of conversations they might have in person, and much more.

While panel topics were diverse — there were between 130 and 150 of them — it was hard to walk away from the virtual event without thinking about the cloud … how it allowed workflows to continue during the pandemic and will continue to do so going forward.

The Tech Retreat closed with a two-day Supersession called The Found Lederhosen. The organizers asked a series of filmmakers across the globe to make a short film under COVID conditions and to document how they did it. According to the HPA, “This not only illustrates a whole set of different workflows for producing with COVID-safe guidelines but also points to how remote collaboration can leave a lasting creative impact.” More on this later from MovieLabs’ Rich Berger.

The Supersession was just one of many sessions held during the event. Another was the timely “Remote Post Production for Reality,” in which Key Code Media brought together reality television partners Avid, Adobe and StorageDNA to create its own remote reality post production workflow, using remote, automation and creative tools, on an actual scene from the upcoming season of Jersey Shore.

“In Reality TV, you face unique challenges with ingest, multicam, mixed file formats/codecs and a massive 700:1 shoot ratio. Now we’re all remote, and it’s not getting any easier. With remote reality TV post, the challenges become even more complex — cloud editing, cloud storage, egress charges, review/approval, and syncing on-prem and remote projects and media files,” explains Jeff Sengpiehl of Key Code Media.
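For a sense of scale, here is a quick worked example in Python of what a 700:1 shoot ratio implies. The ratio comes from the quote above; the episode runtime and proxy bitrate are hypothetical assumptions used only to illustrate the arithmetic.

```python
# What a 700:1 shoot ratio implies for one reality episode.
# The shoot ratio (700:1) is from the panel; the episode length and proxy
# bitrate are hypothetical assumptions used only for illustration.

SHOOT_RATIO = 700          # minutes shot per finished minute
EPISODE_MINUTES = 42       # assumed finished runtime of one episode
PROXY_MBPS = 10            # assumed proxy bitrate in megabits per second

raw_minutes = SHOOT_RATIO * EPISODE_MINUTES
raw_hours = raw_minutes / 60
proxy_tb = raw_minutes * 60 * PROXY_MBPS / 8 / 1000 / 1000  # megabits -> TB (decimal)

print(f"Raw footage per episode: {raw_hours:,.0f} hours")
print(f"Proxy media per episode: ~{proxy_tb:.1f} TB at {PROXY_MBPS} Mb/s")
```

Nearly 500 hours of source material per finished episode is what makes syncing on-prem and remote media such a central problem in these workflows.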

During the panel Adobe previewed unreleased beta features for promo departments, Avid showed off unreleased features in the newly released Avid Symphony, and StorageDNA demo’d a unique workflow wherein editors can edit locally but have all the media synced with all other collaborating editors and the facility in real time. “No VPN. Not cloud editing. We call this CloudHybrid,” adds Sengpiehl.

The Creative Intent panel. Nigel Edwards is top left; Poppy Crum is top right.

One of the event’s audio post-based panels was “Is Creative Intent Wrecking the Intent of the Creative?” Regarding the panel’s topic, Nigel Edwards from The Farm in the UK says, “The public is bemused at the sound and vision ‘quality’ of some shows, though it’s a creative choice rather than a technical fault.”

According to Edwards, one of the surprises that popped up during the chat was that panelist Poppy Crum, chief scientist at Dolby Laboratories, spoke about age-related hearing loss that starts to set in in the early teens. “The masking that happens means dialog intelligibility becomes harder. Therefore, the older the mixer, the clearer the dialog should be, as the mixer will have to work harder to compensate for their own natural hearing loss. This does not appear to happen in the real world.”

One of Edwards’ big takeaways was that technology companies have “a far greater understanding of how we actually see and hear and how the brain interprets that information. Creatives need to learn this rather than working on gut instinct,” he says, adding that if they had had more time, the panel would have covered how monitoring levels affect the final mix balance and the viewers’ enjoyment.

Joaquin Lippincott

Another series of HPA Tech Retreat panels was called “Understanding the Cloud Media Workload.” It was a four-part discussion that gave an overview of cloud media workloads and covered the media supply chain, content creation and content distribution.

According to panelist Philippe Brodeur from Overcast, this conversation was timely because in the media and entertainment industry in 2020, “there is one space for media workloads that has thrived — cloud. The reasons are simple: Cloud supports remote workflows, and whether a production is ultimately successful or not, costs should not be fixed.”

The panel’s moderator, Joaquin Lippincott, CEO/founder at Metal Toad, felt the topic was timely because of the cloud’s “significant impact on the infrastructure used in the media and entertainment industry.”

He says one of the takeaways from the panel was how private businesses are trying to figure out what the cloud means and determine how much they will leverage the public cloud versus how much they are going to do themselves. The risks are significant, he says. Another takeaway was that “business leaders need to understand this rapid transformation calls for agility, flexibility and experimentation to understand what the technology is capable of and to create profitable business solutions. The business model has to adapt, and it’s uncharted territory.”

If they’d had more time, Brodeur says the panel would have talked more about pricing and the composability of cloud. “Pricing is why the cloud is winning.”

If asked to do a similar panel next year on the same topic, he says he would focus on microservices and composability — “why the incumbent software and hardware providers are getting trounced by the new service integrators.”

The “High Resolution and Beyond” session. Dylan Mathis is top right.

On “High Resolution and Beyond,” a panel moderated by Sam Nicholson, ASC, the discussion revolved around different uses for 8K in Hollywood … and for NASA. Panelist Dylan Mathis from NASA came away from the panel thinking, “Just when you think there are enough pixels, there is a use case for more!”

What would he like to see next year on this panel’s topic? “It’s impressive to me that high-resolution LED panels are used as backgrounds for movies. More examples of that would be interesting, as would using actual NASA 8K footage in this way.”

MovieLabs was involved in a number of talks during the Retreat. MovieLabs’ CTO Jim Helman presented “Software Defined Workflows,” which continues to expand the conversation around the concepts and components that will enable a completely interoperable cloud-based future as outlined in MovieLabs’ recent whitepaper “The Evolution of Software Defined Workflows.”

Spencer Stephens, MovieLabs’ senior VP of production technology and security, presented “Why Do We Need a Common Security Architecture?” which explains the concepts and architecture of a completely new approach to securing cloud-based assets and workflows.

Finally, Chris Vienneau, MovieLabs’ production technology specialist, gave a sneak peek of the organization’s early thoughts on how to visually depict workflows in “A Visual Language Primer.”

MovieLabs’ Mark Turner during the Supersession, Day 2.

In addition to their presence in other parts of the Retreat, MovieLabs directly partnered with HPA for the Day 2 Supersession “Live from the Cloud – Without a Net.” According to Rich Berger, the Supersession was “an audaciously ambitious live demonstration of a remote, untethered production — shooting a pickup shot where proxies, sound and original high-resolution files were sent wirelessly to the cloud with the expectation that the shot would be cut into a film and a trailer, with new visual effects, new sound effects, conformed, color corrected, mixed and delivered all through multiple cloud tools and providers in less than three hours while the audience watched live.”

Berger says the takeaway from this Supersession was: “For some production tasks, working in the cloud is real and here today. There are many vendors, tools and infrastructure providers that enable meaningful cloud-based capabilities across the entire production lifecycle. It was great to see so much excitement about cloud-enabled workflows. But for us, another key takeaway is to continue focusing on the work that we still need to do to better leverage the full potential of the cloud for production and to help bring the industry together to implement a more interoperable cloud ecosystem as outlined in our 2030 Vision. To fulfill that vision, we must start now.”

If you registered for the event but missed some of the goodness, you can find panels and chats on the HPA site for the next month or two, including the one I moderated for Sony Pictures Studios about its Tiburon VFX pull process. It is important to note that content is available only to registered attendees. Those who haven’t registered can still do so here.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

 

 

 

Editor Geoffrey Richman Talks Workflow on Apple TV+ Film Palmer

By Randi Altman

Who we are in high school is rarely who we become, but some veer so far off their presumed path that it’s hard to recover. This brings us to the Apple TV+ film Palmer, which stars Justin Timberlake as a high school football star turned convict who returns to his Louisiana hometown after 12 years in prison.

Editor Geoffrey Richman

At one time, Eddie Palmer had a bright future ahead of him, but now he’s struggling to navigate life after incarceration. Upon his return home, Eddie moves in with the grandmother who raised him while trying to figure out his next move. It’s during this time that he develops a bond with a 7-year-old boy named Sam, whose mother is on a prolonged bender.

We reached out to editor Geoffrey Richman, ACE, to talk about working with director (and frequent collaborator) Fisher Stevens, his assistant editors and his process.

How early did you get involved on this film?
Fairly early. Palmer was originally scheduled to shoot about a year before it actually started. I was expecting a baby at the time, and the due date was scheduled for the first day of shooting. Fisher and I had a whole plan to make that work — me working part time and close to my apartment and with the possibility of having another editor help out until I was on full time.

For various reasons to do with financing and casting, the film didn’t start until a year later, which really worked out for the best, both for the film and for me, facing the reality of trying to start an edit with a newborn. Over the following year, I was able to read more drafts of the script and hear about casting choices as they happened, which made anticipation for the shoot all the more exciting.

Director Fisher Stevens (in hat, on set) often calls on Geoffrey Richman to cut his projects, giving them a shorthand.

What direction did Fisher Stevens give for the edit? How often was he looking at cuts?
Very early on, Fisher gave me a list of films to watch as inspiration and to get an overall feeling for what he was going for with style and sense of place. Then during shooting we would talk generally about the story and the scenes. We watched a few cut scenes together, but for the most part, I was on my own building the first assembly. Once we watched the full assembly, that’s when we both dove in. From then on Fisher was in the edit a lot, and we were looking at cuts together all the time.

Fisher and I worked on a few documentaries together before Palmer, so we already had a level of trust and a shorthand going into the edit. We could spend all day attacking one scene, or he could give much broader notes, and I could go off on my own to try different things.

We both like screening to audiences often, so we got the cut to a screenable point fairly quickly — a few weeks into the edit — and from then on screened regularly to different groups of friends. The feedback from those screenings really helped steer the cut for both of us and in many cases also helped solve problems along the way.

Stevens is also an actor. Did that play a role in how he directed on set or directed the edit?
On set, Fisher gave the actors a lot of freedom with their performances. Ryder Allen (Sam), in particular, has a lot of great lines in the film that he came up with on the spot, and a lot of that came from Fisher giving the actors room to improvise.

The same applied in the edit. Fisher was very particular and sensitive to the smallest details in the performances, making sure the truest moments were on screen. But at the same time, he was always open to taking apart scenes or structures and trying out new ideas. I’m sure that comes just as much from his background in documentaries, where the edit and story are constantly shifting.

Was there a particular scene or scenes that were most challenging?
Not so much a scene, but a section of the film. We spent a lot of time working on the first 20 or so minutes of the film. The energy between Palmer and Sam is amazing, and watching their relationship develop over time carries the film. But there’s a certain amount of groundwork that has to be laid before that part of the story can kick in.

In the early stages of the edit, we were just cutting down for time to get to the heart of the film sooner, pacing some things faster and cutting other things out entirely. But while technically that got to the Palmer/Sam storyline sooner, it was losing a lot in the process — like establishing a connection to Palmer, the early tension between Palmer and Sam, and the relationships with the grandmother. So there was a lot of trial and error to find a balance between keeping the story moving forward and finding the right moments to connect with and get invested in the characters. Sometimes it was as simple as repurposing a single shot from a deleted scene that would help recenter the surrounding scenes.

We also played a lot with structure, finding the right time and place to get into a backstory. For example, there’s a scene early on between Palmer and his parole officer that introduces the backstory of him being in prison. In the first cut, it felt like it was coming too late — the audience needed to be grounded in that information earlier. But when we moved it too early, it felt like an interruption, like we weren’t letting the audience settle in and have time to intuit things on their own before being told what’s going on.

Was the edit done during the pandemic? If so, how did that affect the workflow?
We were already a few months into editing when we shut down the office and brought everything home. We had just signed up with Evercast about a week before that, and it made the transition surprisingly smooth. Fisher and I worked remotely for the rest of the edit.

Of course, there were times it was hard to make precise edit choices when the internet was cutting out, the audio wasn’t in sync and kids were screaming in the background. There was a lot of, “I can’t believe we’re picture-locking a movie like this!” I did miss the experience of screening the film with an audience, and it definitely made for a different cutting room atmosphere. My 4-year-old would watch scenes a lot and give notes or make suggestions about where the story should go.

What system did you use to cut and why? Where did you edit before lockdown?
Avid Media Composer. Back in the good old FCP7 days, I was alternating between Avid and FCP regularly. Now it’s just Avid for me. We edited in a room at Article 19 Films in New York until the pandemic started.

Is there a tool within Media Composer that you use but others might not know about?
Remove Hidden Volume Automation. I don’t know if most people use this or not, but it’s a hidden gem to me, tucked away in the Automation Audio Mixer. When you have a bunch of audio keyframes on your clips and then cut the clips up, you can’t access the keyframes that are past the edit points. This feature lets you set in and out points around an edit and, well, remove the hidden volume automation, so you don’t have unwanted volume shifts leading into or out of a cut.

How did you manage your time?
I use an assortment of organizational apps to help with that — Trello, Evernote, Wunderlist (now Microsoft To Do). During production it’s all about keeping up with the dailies, so each day is its own deadline to cut a scene. And during post, the nice thing about having lots of audience screenings is that they create deadlines along the way. It keeps the momentum going in the edit and helps gauge how much needs to get done and when. Also, having lots of interim deadlines is great for creativity since a lot of unexpected ideas come out of being up against the clock and having to make decisions on edits quickly.

Did you have an assistant editor on this? Do you see the role of assistant editors as strictly technical or as collaborators?
I had two assistant editors on Palmer, Ian Holden and Keith Sauter. I always view assistants as collaborators, whether that’s getting feedback from them about cuts or asking them to cut their own versions of scenes. My assistant is often the first person to watch a scene before it goes to the director. It’s always nice to have someone to bounce an edit off of and to see how it feels with another person in the room.

What’s great with a trusted AE is that if they have notes on a cut, they know the footage so intimately that they can also help hunt down whatever’s needed to make it better.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years.

DP Chat: Oktoberfest: Beer & Blood’s Felix Cramer

By Randi Altman

When most people think of Oktoberfest, they picture Germany’s weeks-long celebration of beer — women dressed in dirndls and men dressed in lederhosen while drinking copious amounts of beer from large steins in giant tents, eating Bavarian pretzels and singing and dancing.

Well, Netflix’s Oktoberfest: Beer & Blood paints a very different picture of the yearly event. Taking place in Munich in the year 1900, this series, loosely based on real events, offers viewers a glimpse into the dark and bloody origins of what we all think of as a giant party.

Oktoberfest: Beer & Blood tells the story of brewer Curt Prank, who will do anything to build a beer hall that will dominate the city’s Oktoberfest. The series, co-written by Ronny Schalk and directed by Hannu Salonen, was shot by cinematographer Felix Cramer. We recently reached out to the Germany-born Cramer to find out about his process on the series, which he describes as “luscious, colorful and full of life, never fusty or drab.”

How early did you get involved with Oktoberfest?
Director Hannu Salonen approached me eight months before shooting. When I read the script for the first time, I knew this would be a series far different from most period pieces I have seen.

I am a big fan of late romantic music and felt that Oktoberfest had much more in common with one of Richard Wagner’s operas than with a typical period drama. The series is set in a time when huge changes were underway in society, the economy, art and science. The pivotal question for our characters is whether to keep old traditions or to modernize their lives and move into the next century. Our protagonist, Curt Prank, stands for the latter and has a huge impact on the transformation of Oktoberfest.

On the other side, the traditionalists are represented by the Hoflinger family, who own an old-fashioned brewery and want to hold on to the old ways. When these two families collide, great conflict is inevitable. After I finished reading the script, I knew the series would be an amazing opportunity for visual filmmaking.

When do you prefer getting on a project and why?
I like to get on a project as early as possible. As a DP, you have a great opportunity to create a visual concept when you are involved from the beginning. Hannu and I worked intensely on storyboarding many key scenes and created our visual language for Oktoberfest.

Many of our ideas were written into the script, and the strong collaboration with our head writer, Ronny Schalk, during the preproduction definitely had a huge impact on the making of Oktoberfest. Hannu always said, “The camera has to be its own character of our series.”

You were the DP on the pilot, so you worked with the director and showrunners to set the look of the show?
That is correct. For weeks we did camera, lens and lighting tests and modified our digital grading technology to set a unique look for Oktoberfest. Together with the production designer and the costume designer, we created images inspired by oil paintings of expressionist artists of that time. We used modern LED lighting technology that allowed us to specify the colors precisely. Even though the lighting looks naturalistic, it is completely stylized according to the context of each scene.

How would you describe the look of the show?
It is hard to say, since the world of Oktoberfest has never been created in this way before. I always wanted to be as modern as possible, avoiding the stereotypical kind of historic look you see in many shows. My visual approach draws more on German art and music than on existing period movies.

Our imagery is luscious, colorful and full of life, never fusty or drab. Oktoberfest contains different settings and characters and is a breathtaking trip through their souls. All of the characters are complex, not just good or bad, and they reflect different shades of human flaws. My aim was to show these facets and to strengthen the emotional impact by using expressive camera work, lighting and colors.

Late romantic paintings were an influence. Can you talk about that?
We have been inspired by European late romantic artists like Arnold Böcklin and Anselm Feuerbach and by expressionist artists from the Neue Künstlervereinigung München, who were known as the “Blue Riders.” We analyzed the colors of their work and adapted the color palette to our costumes and production design. Even the lighting colors were adjusted to perfect the look.

There seem to be a variety of looks and color palettes throughout, and some are modern-feeling, as you mentioned earlier. Can you discuss that?
Since Oktoberfest tells a story at the turn of the century, we wanted to characterize the two worlds that clashed at that time. The world of the late romantic period, the end of the 19th century, is represented by the Hoflingers, who run a very traditional brewery. The mother, Maria, in particular has no interest in opening her mind to the modern world of the 20th century. So the colors of these scenes are more desaturated, less colorful and bluish-grayish, sometimes greenish. On the other side, we see the modern world, the next century, with electric light — represented by Curt Prank. The colors of these scenes are more saturated, more reddish and expressive. All this reflects the change from late romantic art to the expressionists of that time.

What kind of research went into preparing for the show?
The story of Curt Prank as founder of the first big beer tents is based on Georg Lang, who built the first “Bierburg.” Nowadays, it’s common to sit in these beer cathedrals and drink lots of beer, but at the beginning of Oktoberfest, smaller beer booths were common. Our production designer, Benedikt Herforth, did tremendous research and collected hundreds of images and illustrations of Oktoberfest around 1900. We couldn’t bring every detail into our show but tried to be as historically accurate as possible, especially regarding costumes and production design.

DP Felix Cramer and director Hannu Salonen on set.

How did it feel recreating a piece of history that not many people know the origin of?
It is definitely a challenge to create a piece of history from scratch. There were no documentaries or movies we could use as a guide for Oktoberfest. When I immersed myself in this time, I realized how many differences there were from the present. Visitors wore Sunday coats, not the leather trousers or dirndl dresses that everybody associates with Oktoberfest nowadays. And there was a ring road, where the most important beer booths were placed. Munich had a major art community represented by famous figures like Wassily Kandinsky and Thomas Mann and was definitely one of the most liberal cities. We felt excited showing a Munich that few people know.

Where was it shot, and how long was the shoot?
We shot Oktoberfest in Prague (Czech Republic), Bavaria (Munich) and Cologne. The main location, the exterior of the Oktoberfest grounds, was created at an old goods station in the middle of Prague. We had a total of 66 shooting days, which was definitely tight considering the scenes with hundreds of extras and the big sets we had on many of those days.

How did you go about choosing the right camera and lenses for this project?
We conducted a very intense lens and camera test to get the best result for this project. We tried all kinds of old lenses as well as modern glass on the ARRI Alexa and the Sony Venice to see what gave the best look.

Felix Cramer behind the scenes

At the end of the test, I landed on the newly created Genesis G35 Vintage 66 lenses (by Gecko-Cam), which combine modern and vintage elements in the same lens. On one side, they have a pretty modern and warm look, with high resolution, less defocus at the edges and less chromatic aberration. On the other side, they use uncoated elements, which produce nice lens flares and reflections that wash over the image and give a more historic and organic feeling. I decided to combine these lenses with the ARRI Alexa Mini, a camera I have worked with many times and which has slightly better dynamic range than the Sony Venice.

Can you describe the lighting?
Love, hate, ambition, vengeance, arrogance, grudges and anger drive the characters of Oktoberfest. My aim was to show these facets and to heighten the emotional impact by using expressive lighting.

On the one side, we see the bright Oktoberfest with happiness and joy, but below the surface there is blood and darkness. We used colors and contrast that reflect the story rather than being purely naturalistic. You will find genre elements from Westerns or horror movies in the cinematography of Oktoberfest, and this is reflected in the lighting design as well. Instead of using old-fashioned film lamps, we decided to use the latest LED technology on our show and changed the colors for each scene, sometimes during a shot.

Any challenging scenes that you are particularly proud of or found most challenging?
The most challenging scenes were the long takes we did. One of them is the flight into the Kocherlball (Episode 1), where you see many young people dancing in a ruin at night. It took us several days to execute that shot: flying the drone, catching it and walking with it through the crowd. Our camera trainee wore safety gloves and glasses and learned a special walking technique to achieve Steadicam-like shots. The result was so convincing that we decided to use this technique for many other shots.

Now more general questions …

Felix Cramer

How did you become interested in cinematography?
When I was a kid, I was interested in music and wanted to become a clarinetist. My teacher played at the Stuttgart Opera House, and I was extremely interested in the visual side of operas. Years later, my father — who was a historian interested in traditional handcrafts — made a documentary about a charcoal burner. He rented a camera and never changed the camera position over seven days. I was 16 when I saw the footage and explained to my father that it wouldn’t be sufficient for a movie at all. At that moment I realized that I love to visualize movies in my head, and I started to make my first documentaries. I got more and more passionate about making movies and, years later, studied cinematography at the Film Academy Baden-Württemberg.

What inspires you artistically?
Besides all the movies I’ve watched over the years, I have always taken inspiration from listening to classical and modern music. For me, making a movie is like composing a symphony. You have to be aware of the different rhythms of each scene and their transitions, and I always have that in mind while shooting. I also get inspiration from paintings and photos, especially when shooting period pieces.

What new technology has changed the way you work (looking back over the past few years)?
There have been huge changes in the last 10 years, and they allow us to make movies in a way we never could have before. Start with the camera: Modern cameras are extremely sensitive, so you can work with less light and higher contrast. Drones and new cranes and gimbal heads allow camera movements that would have been difficult to achieve years ago. Last but not least, LED lighting technology is a big step; it allows us to change brightness and colors easily. I never want to go back.

Felix Cramer

What are some of your best practices or rules you try to follow on each job?
I always want to be as prepared as possible. We have no time to think about the scenes in detail during a shoot. The more preparation time you invest, the better the result. Another thing is checking what you did and learning from your mistakes. I always review each shot, even after long shooting days, and see what I can improve the next day. From my perspective, learning by doing is the best way to become a good cinematographer and the only way to develop your own signature.

Explain your ideal collaboration with the director or showrunner when starting a new project.
The ideal collaboration is having a very intense conversation about production design, costume design, shooting design, makeup … every aspect that influences your visual work is important and has an influence on the result. Aside from that, I also like to talk about the script and the storyline. Sometimes the scripts are still in development when I get attached, and many directors ask for my thoughts. I like to help improve a script as much as I can.

Any go-to gear – things you can’t live without?
On the camera side, I am pretty open, but one thing is really important to me: the color grade. Five years ago, I started grading my movies on my own, and since then, I have never wanted to give that up. I developed my own grading technique, which I adapt for each movie, and I would say it has become a signature you can see in each of my films.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Colorist Chat: Andrea Chlebak on An American Pickle, Remote Work, More

By Randi Altman

Industry veteran Andrea Chlebak has spent her career color grading film and television projects while at a variety of post houses. For the past two years, she was at LA’s Efilm, where she worked on projects such as An American Pickle, Bad Hair, Limetown and A Babysitter’s Guide to Monster Hunting. Prior to that, Chlebak spent 10 years in Vancouver at Digital Film Central and Umedia, grading Elysium, Chappie, Kahlil Gibran’s The Prophet and Prospect.

Bad Hair

And her journey continues… Chlebak recently joined LA’s Harbor Picture Company as senior colorist. Harbor, which has studios in New York, Los Angeles and London, offers live-action, dailies, creative and offline editorial, design, animation, visual effects, CG and sound and picture finishing.

We recently reached out to Chlebak to talk about her path, her workflow and her work on An American Pickle and other projects.

You have been busy with projects, even during the shutdown. Can you talk about working remotely? How did that affect your process?
Yes, it’s been pretty non-stop. I feel like I lucked out with the number of projects I was on that were in the final stages just before the pandemic and shutdown began.

After the initial pause — when absolute shutdown occurred — I think post facilities got very creative in finding ways of working that kept the process going, but also kept everyone safe. The solutions that I participated in ranged from some at-home grading setups to scheduling sessions to stream to the home of the director or DP, or to another building in order to maintain strict distance.

A Babysitter’s Guide to Monster Hunting

When in-person sessions were allowed, I was able to grade with one person in my suite and with social distancing measures — but it made a lot of sense to stream to a home or other facility. On A Babysitter’s Guide to Monster Hunting, I was slated to travel to Vancouver for the entirety of the grade — that obviously did not happen. For that project, we streamed from the facility in LA to a facility in Vancouver.

I think that all of the years we have spent improving the streaming grade session experience really helped keep everything rolling. A major reason I do what I do is because I love to collaborate with people. I really stand by the process of going into the grade open-minded and finding the look and solutions together. It is more difficult to do that smoothly when there is a delay in transmission, or if you can’t see the filmmaker react to the image while you are working. While, yes, you can hear reactions, it made me realize how much of my work is working off the vibe of the room and riffing off of the filmmakers to come to a final look and feel. So I suppose the distance just makes that a slightly longer process.

What are some projects you worked on since the pandemic began?
Feel the Beat (Netflix), Bad Hair (Hulu), An American Pickle (HBO Max) and A Babysitter’s Guide to Monster Hunting (Netflix) – along with a variety of music videos and short films. All of these productions were completed while I was at Efilm Hollywood.

Feel the Beat

On Feel the Beat, I was in Toronto working with director Elissa Down and DP Amir Mokri mere days before the shutdown occurred. The timing had me back in LA to complete the HDR version, so while that project had a few streamed reviews, it was mostly a usual DI. After that, I went into finishing Bad Hair, a film that had actually been graded late last year and premiered at Sundance in January. However, we had a recut to grade for the Hulu release, and I worked with director Justin Simien (Dear White People) to finalize the theatrical and HDR look while also revisiting the look as a whole, given the new order of some scenes.

At the time of the finishing, we were working with strict coronavirus shutdown regulations along with city curfews. To keep things totally safe, Justin and I worked in separate buildings and graded via Streambox sessions, but because we had a good foundation for the film already, things went pretty smoothly given the circumstances.

After that was An American Pickle, which was in the middle of post and editing when the pandemic took hold. Once we were scheduled to return to final color, some adjustments had been made in terms of safety protocols, and director Brandon Trost (The Disaster Artist, This Is the End) was able to supervise the grade. We were able to work in person with strict safety measures in place but also alternated with streaming sessions to review specific notes. Since the film shifted its primary release to streaming on HBO Max versus the originally planned theatrical release, we were able to reframe our process to focus more creatively on the HDR master of the film.

An American Pickle

After that completed, I moved onto A Babysitter’s Guide to Monster Hunting. I was really looking forward to my third collaboration with DP Gregory Middleton (Watchmen, Game of Thrones), and even though I was supposed to travel to grade that film in person, we managed to make it work streaming the grade from start to finish.

Do you expect that even after the pandemic, parts of your job might be done remotely or in a hybrid model?
I do think that this experience will have changed how some people see the process, yes. There will most likely be less travel for colorists and DPs. I think after the pandemic, colorists will probably work more independently, and more streaming sessions will happen. I hope that it opens up more opportunities to collaborate on look and allow the DP to check in more if they are not available to come to the full grade. That said, I also think once it is safe to do so, a lot of people will be keen to work and collaborate in person again.

Let’s talk about An American Pickle. Can you tell us what it’s about and describe the look and your process?
An American Pickle begins in 1919 with Herschel Greenbaum (Seth Rogen), a struggling but upbeat laborer who immigrates to America to build a brighter future for his growing family. One day, while working at his factory job, he falls into a vat of pickles…where he is perfectly preserved for 100 years. When he emerges in present-day Brooklyn, he connects with his only surviving relative, his great-grandson Ben (also Rogen), an easygoing app developer. The story is about the pair’s attempt to bridge their 100-year gap and reconsider the true meaning of family.

An American Pickle

For the grade, I was lucky enough to work with both DP John Guleserian (Like Crazy, About Time) and director Brandon Trost, who is also an extremely talented DP. Since the film takes place in both the past and the present day, we started by establishing those two major looks — one for the early 1900s and one for present day. The 1900s story was shot in a 4:3 frame with a vintage lens that gave it a really distinct look. When it came to color, they gave me a number of references from the early color film days as well as a hand-colored film look. We experimented with a few different palettes and eventually zeroed in on a softer filmic look we all liked.

We then spent a bit of time with the current-day story, which was much more subtle in terms of grade, but it also needed a style and palette that would connect back to the 1900s look in some way. Having Brandon and John both there was a treat, as it was like having two DPs at times. We tried a variety of looks and then narrowed down the modern-day direction. Once we had the major direction in place for both time periods, our DP left to shoot another film, and the director and I spent the remaining weeks refining the palette and style — we ended up with what we called a modern storybook look. They shot the film on an ARRI LF camera.

An American Pickle

Also, since American Pickle is like a “twin film,” with Seth Rogen playing two characters, there were a lot of visual effects involved. I spent a lot of time in later reviews as Brandon finalized the VFX shots, and we were able to integrate the reviews into the DI so we could cut down on some potential VFX notes that could be handled in color. The grade took place at Efilm Hollywood.

How do you prefer to work with the DP/director?
I really love when I get the chance to start working with both DP and/or director when they are heading into pre-production. Being involved from this point gives me a more creative role usually, and it also allows me to properly support the DP on all imaging aspects during production. I typically keep in touch and do some look development, and sometimes update or create scene-specific LUTs along the way as needed. Being involved early also allows me to start supporting VFX teams and collaborate with the post supervisor to refine the post workflow — my role in that is to ensure that the creative process gets the time at the right stages based on the project needs.

In terms of the final grade, I find it helpful to start with the director and DP together, to do a watch-through and have them align with me on look and direction. That is usually a great experience because the DP is often watching the close-to-final cut for the first time and has a genuine reaction. If I can get that time in at the beginning, I have found that working with the DP for a few days or weeks to work through the film, and then finishing up final touches with the director, can be a really great way to organize the process. On some films I have worked only with the director throughout the entire process.

A Babysitter’s Guide to Monster Hunting

How do you prefer the DP or director to describe the look they want? Physical examples? 
I think it’s different each time. Most often I ask for visual references because you can talk about looks or films and have different ideas in mind. I usually start with a thorough discussion where I ask as much as I can, then I often ask them to send me the moodboards they usually have already created.

If in pre-production, I’ll often create my own moodboard and share that so we can have different visual references when we are talking about certain scenes. Then, of course, the look session is the most telling because it’s when I get to create a variety of directions and gauge their reactions to each one. All of those methods really help me to narrow their style, aesthetic and sensitivities.

Any suggestions for getting the most out of a project from a color perspective?
To elaborate on the earlier question about the process and how I like to work with the DP and director,  I think you would get the most out of the color process if you bring your colorist on as early as you can.

This isn’t always feasible, but I often hear from DPs I work with that they were able to focus more on the cinematography without having to worry about technical aspects — they like knowing that I am looking at images coming off of set and there to provide a quick review or troubleshoot if something goes sideways in production. It just means you are going to have less confusion at the end.

Feel the Beat

I think it also makes for a smoother finish. The colorist is involved for a number of months, so by the time they get to the final stages, they are really locked into the look and direction, and it’s really just a matter of implementing. This also leaves a bit more time to get deeper into the color story and refine the look in a more meaningful way.

What’s your favorite part of color grading?
I love that initial phase of look development, when you are working with the DP and/or director and presenting ideas and seeing their reactions. I really enjoy that alignment phase, when as the colorist you are gathering information and finding interesting solutions and also learning what the filmmakers ultimately love and hate. I enjoy navigating the multiple roles as well – how a director sees the film and what they focus on versus a DP or the producers — I weirdly love reading the room and learning from the different perspectives.

Do you have a least favorite?
I really love my job, so that’s a bit of a hard one. I guess I could say that I don’t love when it’s over. I’d also say that my least favorite part is the inevitable color calibration differences that we experience from display to display or theater to theater – especially now with a pandemic and being in different cities or rooms, I have definitely developed more anxiety around display calibration and whether the filmmaker and I are looking at the same image.

Bad Hair

Why did you choose this profession?
The profession I initially chose was photographer, but halfway through my degree, I just fell in love with motion imagery and storytelling. I knew early on I would find some way of working in film. After graduating college, I began shooting stills on set for a number of months, and to be honest, I had a bit of difficulty with the gender bias in the camera union, which at a young age was enough to turn me in the opposite direction.

Instead of blazing the cinematography trail, I turned to post production. The roundabout way I got into color grading was through VFX compositing and lighting, and editing indie films on the side. Then I started to color grade the films I was editing. When a VFX producer reviewing my reel suggested I look into the colorist role, I didn’t even know it was a profession, but it definitely flipped a switch in my brain.

It was another two years before the role of junior colorist landed in front of me, and at that time I 100% knew it was for me. I luckily had my start when DI was just becoming more widespread, so I was learning how to scan and record film while at the same time learning these new digital grading platforms. I think I was about 26 when the first feature came my way and basically, I never turned back.

What is the project that you are most proud of?
Good question. I’ve been working on becoming more mindful of picking something out to remember or be proud of after completing each project. For example, for An American Pickle, I am really proud of the overall aesthetic and how we found something that blended the two time periods while keeping them distinct – and even though it is a comedy, I felt like we preserved or enhanced a lot of beauty in the look. For Elysium, I am really proud of having pulled off a very meticulous look on such a large-budget, VFX-heavy film with such a small DI team. For Mandy, I am really proud of having pushed myself to create a bold look and of having taken some big risks pushing an image far beyond anything I had done before. And for Watchmen, I‘m just really proud to have played a small role in developing looks that helped guide the overall aesthetic for the series.

A Babysitter’s Guide to Monster Hunting

What would surprise people the most about being a colorist?
It often surprises people when I talk about how color is used to draw the eye and tell a story. In the kind of work I do, where I spend a bit of time shaping and adjusting light in the grade, it can go beyond just making things look good or giving it a look. A solid understanding of color psychology is an asset, as often we can enhance emotion and intensify the narrative using palette.

Where do you find inspiration? Art? Photography? Instagram?
Everything and everyone. I love looking to multiple mediums for inspiration. I definitely keep a solid art book collection, as well as constantly keeping and saving images that I like. I build libraries of my own images, as I love to travel and study how light changes in different parts of the world. I definitely use Instagram and Pinterest to save and store imagery I like. I’m finding myself out and about in the earlier hours in Los Angeles and surrounding areas getting inspiration from the world around me.

Can you name some technology you can’t live without?
One I wish I could live without is my phone, but that is not possible. I also can’t live without my Circadian Optics light, which helps me rebalance after long days in a dark room, and my iPad Pro for all of the streaming collaboration I’ve been doing.

A Babysitter’s Guide to Monster Hunting

On the other end of the spectrum, I would say good communication and people skills are a major asset to the job. Whether you are working with many different roles — cinematography, VFX, directing, producing — in the room or collaborating with engineers and IT teams to build workflows, it’s a role that spans artistry, science and technology.

What system do you work on?
FilmLight Baselight is my platform of choice, but I have graded films in DaVinci Resolve. Recently, when I was training a new color assistant, she said, “Baselight works like how I think.” That sentiment really resonated with me in how I felt about using it early on. For me, Baselight is really intuitive in terms of the working surface (Blackboard) and also how the GUI is designed. It allows me to work very seamlessly when collaborating with many people in the room – I work directly on the film’s image and I almost never look at my interface.

I also find that the way you can control color spaces and effortlessly split or switch between displays has been a huge time-saver in recent years when creating SDR and HDR deliverables on tight timelines. That said, I think great results can be achieved with any system — it is truly the choice of the artist, just as a DP would choose a camera and lenses.
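As a rough illustration of why SDR and HDR deliverables are treated as separate passes rather than a mechanical conversion, here is a small Python sketch of the SMPTE ST 2084 (PQ) inverse EOTF commonly used for HDR masters. This is the generic published formula, not anything specific to Baselight or to Chlebak’s setup.

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance (nits) -> PQ signal [0, 1].
# Generic published constants; shown only to illustrate how differently an HDR
# (PQ) master encodes brightness compared with a ~100-nit SDR grade.

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Encode an absolute luminance value (cd/m^2) to a PQ code value in [0, 1]."""
    y = max(nits, 0.0) / 10000.0          # PQ is defined up to 10,000 nits
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2

for nits in (0.01, 1, 100, 1000, 4000, 10000):
    print(f"{nits:8.2f} nits -> PQ {pq_encode(nits):.3f}")
```

The takeaway is that 100 nits, roughly SDR reference white, lands at only about half of the PQ signal range, so an HDR pass involves creative decisions about how to use the extended range rather than a simple scaling of the SDR version.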

Finally, what do you do to de-stress from it all?
I have become a daily meditator, so that definitely helps me to stay calm and mindful of those around me, as well as focus on what I can do in the moment versus worrying about the future or past.

I’m also pretty devoted to any and all fitness routines that work with my schedule. Now it’s all at-home workouts, so I am fairly obsessed with HIIT and yoga, along with the odd ‘80s or ‘90s dance party with my six-year-old.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

 

With NAB 2020 canceled, what’s next?

By Randi Altman

After weeks of “will they or won’t they,” and with many companies announcing they wouldn’t be exhibiting, NAB announced Wednesday that it has canceled its Las Vegas show.

NAB president and CEO Gordon Smith released a statement yesterday that included this bit about what might be next: “We are still weighing the best potential path forward, and we ask you for your patience as we do so. We are committed to exploring all possible alternatives so that we can provide a productive setting where the industry can engage with the latest technology, hear from industry thought leaders and make the game-changing connections that drive our industry forward.”

Some think NAB will be rescheduled, but even the NAB isn’t sure what’s next. They sent this in response to my question about their plans: “We’re in the process of engaging with exhibitors and attendees to gauge their interest in what will be the best path forward for the show. Options under consideration include an event later this year or expanding NAB Show New York in the fall. All of this is, of course, premised on COVID-19 fears being alleviated. We will be in touch with the NAB Show community as decisions are made.”

What is certain is that product makers were prepared to introduce new tools at NAB 2020, and while some might choose to push back their announcements, others are scrambling to find different ways to get their message out. The easy solution is to take everything online — demos, live streaming, etc.

For our part, postPerspective will be covering news from NAB without actually being at NAB. Our NAB video interviews and special Video Newsletters will happen, but instead of being from the show floor, we will be conducting them online. And as news comes in, we’ll be reporting it. So check our site for the latest innovations from what we’re now calling “NAB season.” And we’re trying to think outside the box, so if there’s a way we can help you get your message out, just let us know.

I think everyone will admit that trade shows have been evolving, and the established shows have realized that as well. This year, even NAB was set to start on a Sunday for the very first time in an effort to expand access to the show floor.

I, for one, am excited to see what’s next. As Plato said, “Necessity is the mother of invention.” Sometimes something bad has to happen to get us to the next step… sooner than we had planned.

Visual Effects Roundtable

By Randi Altman

With Siggraph 2019 in our not-too-distant rearview mirror, we thought it was a good time to reach out to visual effects experts to talk about trends. Everyone has had a bit of time to digest what they saw. Users are thinking about which new tools and technologies might help their current and future workflows. Manufacturers are thinking about how their products will incorporate these new technologies.

We provided these experts with questions relating to realtime raytracing, the use of game engines in visual effects workflows, easier ways to share files and more.

Ben Looram, partner/owner, Chapeau Studios
Chapeau Studios provides production, VFX/animation, design and creative IP development (both for digital content and technology) for all screens.

What film inspired you to work in VFX?
It was Ray Harryhausen’s Jason and the Argonauts, which I watched on TV when I was seven. The skeleton-fighting scene has been visually burned into my memory ever since. Later in life, in 1997, I watched an artist compositing some tough bluescreen shots on a Quantel Henry, and I instantly knew that was going to be in my future.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
Double the content for half the cost seems to be the industry’s direction lately. This is coming from new in-house/client-direct agencies that sometimes don’t know what they don’t know … so we help guide/teach them where it’s OK to trim budgets or dedicate more funds for creative.

Are game engines affecting how you work, or how you will work in the future?
Yes, rendering on device and all the subtle shifts in video fidelity turned our attention toward game engine technology a couple of years ago. As soon as game engines start to look less canned and have accurate depth of field and parallax, we’ll start to integrate more of those tools into our workflow.

Right now we have a handful of projects in the forecast where we will be using realtime game engine outputs as backgrounds on set instead of shooting greenscreen.

What about realtime raytracing? How will that affect VFX and the way you work?
We just finished an R&D project with Intel’s new raytracing engine OSPRay for Siggraph. The ability to work on a massive scale with last-minute creative flexibility was my main takeaway. This will allow our team to support our clients’ swift changes in direction with ease on global launches. I see this ingredient as really exciting for our creative tech devs moving into 2020. Proof of concept iterations will become finaled faster, and we’ve seen efficiencies in lighting, render and compositing effort.

How have ML/AI affected your workflows, if at all?
None to date, but we’ve been making suggestions for new tools that will make our compositing and color correction process more efficient.

The Uncanny Valley. Where are we now?
Still uncanny. Even with well-done virtual avatar influencers on Instagram like Lil Miquela, we’re still caught with that eerie feeling of close-to-visually-correct with a “meh” filter.

Apple

Can you name some recent projects?
The Rookie’s Guide to the NFL. This was a fun hybrid project where we mixed CG character design with realtime rendering and voice activation. We created an avatar named Matthew for the NFL’s Amazon Alexa Skills store that answers your football questions in real time.

Microsoft AI: Carlsberg and Snow Leopard. We designed Microsoft’s visual language of AI on multiple campaigns.

Apple Trade In campaign: Our team concepted, shot and created an in-store video wall activation and on-all-device screen saver for Apple’s iPhone Trade In Program.

 

Mac Moore, CEO, Conductor
Conductor is a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud.

What are some of today’s VFX trends? Is cloud playing an even larger role?
Cloud is absolutely a growing trend. I think for many years the inherent complexity and perceived cost of cloud has limited adoption in VFX, but there’s been a marked acceleration in the past 12 months.

Two years ago at Siggraph, I was explaining the value of elastic compute and how it perfectly aligns with the elastic requirements that define our project-based industry; this year there was a much more pragmatic approach to cloud, and many of the people I spoke with are either using the cloud or planning to use it in the near future. Studios have seen referenceable success, both technically and financially, with cloud adoption and are now defining cloud’s role in their pipeline for fear of being left behind. Having a cloud-enabled pipeline is really a game changer; it is leveling the field and allowing artistic talent to be the differentiation, rather than the size of the studio’s wallet (and its ability to purchase a massive render farm).

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines for VFX have definitely attracted interest lately and show a lot of promise in certain verticals like virtual production. There’s more work to be done in terms of out-of-the-box usability, but great strides have been made in the past couple years. I also think various open source initiatives and the inherent collaboration those initiatives foster will help move VFX workflows forward.

Will realtime raytracing play a role in how your tool works?
There’s a need for managing the “last mile,” even in realtime raytracing, which is where Conductor would come in. We’ve been discussing realtime assist scenarios with a number of studios, such as pre-baking light maps and similar applications, where we’d perform some of the heavy lifting before assets are integrated in the realtime environment. There are certainly benefits on both sides, so we’ll likely land in some hybrid best practice using realtime and traditional rendering in the near future.

How do ML/AI and AR/VR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Machine learning and artificial intelligence are critical for our next evolutionary phase at Conductor. To date we’ve run over 250 million core-hours on the platform, and for each of those hours, we have a wealth of anonymous metadata about render behavior, such as the software run, duration, type of machine, etc.

Conductor

For our next phase, we’re focused on delivering intelligent rendering akin to ride-share app pricing; the goal is to provide producers with an upfront cost estimate before they submit the job, so they have a fixed price that they can leverage for their bids. There is also a rich set of analytics that we can mine, and those analytics are proving invaluable for studios in the planning phase of a project. We’re working with data science experts now to help us deliver this insight to our broader customer base.
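
To make the “upfront cost estimate” idea concrete, here is a minimal sketch of how historical job metadata could feed a simple prediction model. The field names, figures and the linear model are illustrative assumptions, not a description of Conductor’s actual system.

# Sketch: predict a render job's core-hours from historical metadata, then price it.
# All field names, numbers and rates below are hypothetical, for illustration only.
from sklearn.linear_model import LinearRegression
import numpy as np

# Historical jobs: [frame_count, avg_scene_complexity, samples_per_pixel]
X_hist = np.array([
    [240, 3.2, 64],
    [480, 5.1, 128],
    [120, 2.0, 32],
    [960, 6.4, 256],
])
core_hours_hist = np.array([180.0, 620.0, 55.0, 2100.0])  # observed core-hours per job

model = LinearRegression().fit(X_hist, core_hours_hist)

def estimate_job_cost(frames, complexity, spp, rate_per_core_hour=0.05):
    """Predict core-hours for a new job and convert that into an upfront price."""
    predicted_core_hours = float(model.predict([[frames, complexity, spp]])[0])
    return round(predicted_core_hours * rate_per_core_hour, 2)

print("Estimated cost ($):", estimate_job_cost(600, 4.5, 128))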

The AR/VR front presents a unique challenge for cloud due to the large size and variety of the datasets involved. Rendering these workloads is less about compute cycles and more about scene assembly, so we’re determining how we can deliver more of a whole product for this market in particular.

OpenXR and USD are certainly helping with industry best practices and compatibility, which create recipes for repeatable success, and Conductor is collaborating on those guidelines as they apply to cloud computing.

What is next on the horizon for VFX?
Cloud, open source and realtime technologies are all disrupting VFX norms and are converging in a way that’s driving an overall democratization of the industry. Gone are the days when you need a pile of cash and a big brick-and-mortar building to house all of your tech and talent.

Streaming services and new mediums, along with a sky-high quality bar, have increased the pool of available VFX work, which is attracting new talent. Many of these new entrants are bootstrapping their businesses with cloud, standards-based approaches and geographically dispersed artistic talent.

Conductor recently became a fully virtual company for this reason. I hire based on expertise, not location, and today’s technology allows us to collaborate as if we are in the same building.

 

Aruna Inversin, creative director/VFX supervisor, Digital Domain 
Digital Domain has provided visual effects and technology for hundreds of motion pictures, commercials, video games, music videos and virtual reality experiences. It also livestreams events in 360-degree virtual reality, creates “virtual humans” for use in films and live events, and develops interactive content, among other things.

What film inspired you to work in VFX?
RoboCop, back in 1987. The combination of practical effects, miniatures and visual effects inspired me to start learning about what some call “The Invisible Art.”

What trends have you been seeing? What do you feel is important?
There has been a large focus on realtime rendering and virtual production, and on using them to increase the throughput and workflow of visual effects. While realtime rendering does indeed increase throughput, there is now a greater onus on filmmakers to plan their creative ideas and assets before they can be rendered. It is no longer truly post production; we are back in the realm of preproduction, using post tools and realtime tools to help define how a story is created and eventually filmed.

USD and cloud rendering are also important components, which give many different VFX facilities the ability to manage their resources effectively. Another trend that has been building for a while and continues to gain traction is the availability of ACES and a more unified color space from the Academy, which allows quicker throughput between facilities.

Are game engines affecting how you work or how you will work in the future?
As my primary focus is in new media and experiential entertainment at Digital Domain, I already use game engines (cinematic engines, realtime engines) for the majority of my deliverables. I also use our traditional visual effects pipeline; we have created a pipeline that flows from our traditional cinematic workflow directly into our realtime workflow, speeding up the development process of asset creation and shot creation.

What about realtime raytracing? How will that affect VFX and the way you work?
The ability to use Nvidia’s RTX and raytracing increases the physicality and realistic approximations of virtual worlds, which is really exciting for the future of cinematic storytelling in realtime narratives. I think we are just seeing the beginnings of how RTX can help VFX.

How have AR/VR and AI/ML affected your workflows, if at all?
Augmented reality has occasionally been a client deliverable for us, but we are not using it heavily in our VFX pipeline. Machine learning, on the other hand, allows us to continually improve our digital humans projects, providing quicker turnaround with higher fidelity than competitors.

The Uncanny Valley. Where are we now?
There is no more uncanny valley. We have the ability to create a digital human with the nuance expected! The only limitations are time and resources.

Can you name some recent projects?
I am currently working on a Time project but I cannot speak too much about it just yet. I am also heavily involved in creating digital humans for realtime projects for a number of game companies that wish to push the boundaries of storytelling in realtime. All these projects have a release date of 2020 or 2021.

 

Matt Allard, strategic alliances lead, M&E, Dell Precision Workstations
Dell Precision workstations feature the latest processors and graphics technology and target those working in the editing studio or at a drafting table, at the office or on location.

What are some of today’s VFX trends?
We’re seeing a number of trends in VFX at the moment — from 4K mastering from even higher-resolution acquisition formats and an increase in HDR content to game engines taking a larger role on set in VFX-heavy productions. Of course, we are also seeing rising expectations for more visual sophistication, complexity and film-level VFX, even in TV post (for example, Game of Thrones).

Will realtime raytracing play a role in how your tools work?
We expect that Dell customers will embrace realtime and hardware-accelerated raytracing as creative, cost-saving and time-saving tools. With the availability of Nvidia Quadro RTX across the Dell Precision portfolio, including on our 7000 series mobile workstations, customers can realize these benefits now to deliver better content wherever a production takes them in the world.

Large-scale studio users will not only benefit from the freedom to create the highest-quality content faster, but they’ll likely see an overall impact on their energy consumption as they assess the move away from CPU rendering, which dominates studio data centers today. Moving toward GPU and hybrid CPU/GPU rendering approaches can offer equal or better rendering output with less energy consumption.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines have made their way into VFX-intensive productions to deliver in-context views of the VFX during the practical shoot. With increasing quality driven by realtime raytracing, game engines have the potential to drive a master-quality VFX shot on set, helping to minimize the need to “fix it in post.”

What is next on the horizon for VFX?
The industry is at the beginning of a new era as artificial intelligence and machine learning techniques are brought to bear on VFX workflows. Analytical and repetitive tasks are already being targeted by major software applications to accelerate or eliminate cumbersome elements in the workflow. And as with most new technologies, it can result in improved creative output and/or cost savings. It really is an exciting time for VFX workflows!

Ongoing performance improvements to the computing infrastructure will continue to accelerate and democratize the highest-resolution workflows. Now more than ever, small shops and independents can access the computing power, tools and techniques that were previously available only to top-end studios. Additionally, virtualization techniques will allow flexible means to maximize the utilization and proliferation of workstation technology.

 

Carl Flygare, manager, Quadro Marketing, PNY
PNY provides tools for realtime raytracing, augmented reality and virtual reality with the goal of advancing VFX workflow creativity and productivity. PNY is Nvidia’s Quadro channel partner throughout North America, Latin America, Europe and India.

How will realtime raytracing play a role in workflows?
Budgets are getting tighter, timelines are contracting, and audience expectations are increasing. This sounds like a perfect storm, in the bad sense of the term, but with the right tools, it is actually an opportunity.

Realtime raytracing, based on Nvidia’s RTX technology and support from leading ISVs, enables VFX shops to fit into these new realities while delivering brilliant work. Whiteboarding a VFX workflow is a complex task, so let’s break it down by categories. In preproduction, specifically previz, realtime raytracing will let VFX artists present far more realistic and compelling concepts much earlier in the creative process than ever before.

This extends to the next phase, asset creation and character animation, in which models can incorporate essentially lifelike nuance, including fur, cloth, hair or feathers – or something else altogether! Shot layout, blocking, animation, simulation, lighting and, of course, rendering all benefit from additional iterations, nuanced design and the creative possibilities that realtime raytracing can express and realize. Even finishing, particularly compositing, can benefit. Given the applicable scope of realtime raytracing, it will essentially remake VFX workflows and overall film pipelines, and Quadro RTX series products are the go-to tools enabling this revolution.

How are game engines changing how VFX is done? Is this for everyone or just a select few?
Variety had a great article on this last May. ILM substituted realtime rendering and five 4K laser projectors for a greenscreen shot during a sequence from Solo: A Star Wars Story. This allowed the actors to perform in context — in this case, a hyperspace jump — but it also allowed cinematographers to capture arresting reflections of the jump effect in the actors’ eyes. Think of it as “practical digital effects” created during shots, not added later in post. The benefits are significant enough that the entire VFX ecosystem, from high-end shops and major studios to independent producers, is using realtime production tools to rethink how movies and TV shows happen while extending their vision to realize previously unrealizable concepts or projects.

Project Sol

How do ML and AR play a role in your tool? And are you supporting OpenXR 1.0? What about Pixar’s USD?
Those are three separate but somewhat interrelated questions! ML (machine learning) and AI (artificial intelligence) can contribute by rapidly denoising raytraced images in far less time than would be required by letting a given raytracing algorithm run to conclusion. Nvidia enables AI denoising in OptiX 5.0 and is working with a broad array of leading ISVs to bring ML/AI-enhanced realtime raytracing techniques into the mainstream.

OpenXR 1.0 was released at Siggraph 2019, and Nvidia (among others) is supporting this open, royalty-free, cross-platform standard for VR/AR. With the launch of Quadro RTX, Nvidia is also providing VR-enhancing technologies such as variable rate shading, content adaptive shading and foveated rendering (among others). This provides access to the best of both worlds — open standards and the most advanced GPU platform on which to build actual implementations.

Pixar and Nvidia have collaborated to make Pixar’s USD (Universal Scene Description) and Nvidia’s complementary MDL (Materials Definition Language) software open source in an effort to catalyze the rapid development of cinematic quality realtime raytracing for M&E applications.

Project Sol

What is next on the horizon for VFX?
VFX professionals and audiences alike have an insatiable desire to explore edge-of-the-envelope VFX. To satisfy it, the industry will increasingly turn to realtime raytracing based on the actual behavior of light and real materials, to increasingly sophisticated shader technology and to new mediums like VR and AR, all in pursuit of new creative possibilities and entertainment experiences.

AI, specifically DNNs (deep neural networks) of various types, will automate many repetitive VFX workflow tasks, allowing creative visionaries and artists to focus on realizing formerly impossible digital storytelling techniques.

One obvious need is increasing the resolution at which VFX shots are rendered. We’re in a 4K world, but many films are still finished at 2K, primarily because of VFX. 8K is unleashing the abilities (and changing the economics) of cinematography, so expect increasingly powerful realtime rendering solutions, such as Quadro RTX (and successor products as they come to market), along with amazing advances in AI, to allow the VFX community to innovate in tandem.

 

Chris Healer, CEO/CTO/VFX supervisor, The Molecule 
Founded in 2005, The Molecule creates bespoke VFX imagery for clients worldwide. Over 80 artists, producers, technicians and administrative support staff collaborate at our New York City and Los Angeles studios.

What film or show inspired you to work in VFX?
I have to admit, The Matrix was a big one for me.

Are game engines affecting how you work or how you will work?
Game engines are coming, but the talent pool is a challenge and the bridge is hard to cross … a realtime artist doesn’t have the same mindset as a traditional VFX artist. And the last small percentage of completion on a shot can invalidate any value gained by working in a game engine.

What about realtime raytracing?
I am amazed at this technology, and as a result bought stock in Nvidia, but the software has to get there. It’s a long game, for sure!

How have AR/VR and ML/AI affected your workflows?
I think artists are thinking more about how images work and how to generate them. There is still value in a plain-old four-cornered 16:9 rectangle that you can make the most beautiful image inside of.

AR, VR, ML, etc., are not that, to be sure. I think VR got skipped over in all the hype. There’s way more to explore in VR, and that will inform AR tremendously. It is going to take a few more turns to find a real home for all of this.

What trends have you been seeing? Cloud workflows? What else?
Everyone is rendering in the cloud. The biggest problem I see now is the lack of a UBL (usage-based licensing) model that is global enough to democratize it. I would love to be able to render while paying by the second or minute, at large or small scales. I would love for Houdini or Arnold to be rentable at a Satoshi level … that would be awesome! Unfortunately, each software vendor needs to provide this, which is a lot to organize.
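
As a rough illustration of what per-second usage-based licensing could look like, here is a small sketch; the license rates and render times below are hypothetical, not any vendor’s actual pricing.

# Sketch of usage-based licensing (UBL) math: pay only for the seconds of render time used.
# Package names, hourly rates and render times are placeholder assumptions.
HOURLY_LICENSE_RATE = {"houdini_engine": 2.00, "arnold": 1.50}  # $/hour, assumed

def ubl_cost(package, render_seconds, instances):
    """Cost of renting `instances` licenses of `package` for `render_seconds` each."""
    per_second = HOURLY_LICENSE_RATE[package] / 3600.0
    return per_second * render_seconds * instances

# Example: 500 frames at 90 seconds/frame, spread across 50 cloud instances.
frames, sec_per_frame, nodes = 500, 90, 50
seconds_per_node = frames * sec_per_frame / nodes
print("Arnold UBL cost: $%.2f" % ubl_cost("arnold", seconds_per_node, nodes))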

The Uncanny Valley. Where are we now?
We saw in the recent Avengers film that Mark Ruffalo was in it. Or was he? I totally respect the Uncanny Valley, but within the complexity and context of VFX, this is not my battle. Others have to sort this one out, and I commend the artists who are working on it. Deepfake and Deeptake are amazing.

Can you name some recent projects?
We worked on Fosse/Verdon, but more recent stuff, I can’t … sorry. Let’s just say I have a lot of processors running right now.

 

Matt Bach and William George, lab technicians, Puget Systems 
Puget Systems specializes in high-performance custom-built computers — emphasizing each customer’s specific workflow.

Matt Bach

William George

What are some of today’s VFX trends?
Matt Bach: There are so many advances going on right now that it is really hard to identify specific trends. However, one of the most interesting to us is the back and forth between local and cloud rendering.

Cloud rendering has been progressing for quite a few years and is a great way to get a nice burst of rendering performance when you are in a crunch. However, there have been huge improvements in GPU-based rendering with technology like Nvidia OptiX. Because of these, you no longer have to spend a fortune to have a local render farm, and even a relatively small investment in hardware can often move the production bottleneck away from rendering to other parts of the workflow. Of course, this technology should make its way to the cloud at some point, but as long as these types of advances keep happening, the cloud is going to continue playing catch-up.
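
As a hedged back-of-the-envelope illustration of that local-versus-cloud trade-off, here is a tiny break-even sketch; every figure below is a placeholder assumption, not a real quote from any vendor.

# Rough break-even sketch: one local GPU render node vs. bursting to a cloud instance.
LOCAL_NODE_COST = 6000.0        # one-off hardware spend ($), assumed
LOCAL_POWER_PER_HOUR = 0.12     # electricity cost per render-hour ($), assumed
CLOUD_COST_PER_HOUR = 2.50      # comparable cloud GPU instance ($/hour), assumed

def local_cost(hours):
    return LOCAL_NODE_COST + LOCAL_POWER_PER_HOUR * hours

def cloud_cost(hours):
    return CLOUD_COST_PER_HOUR * hours

# Find roughly how many render-hours it takes for the local node to pay for itself.
hours = next(h for h in range(1, 100_000) if local_cost(h) <= cloud_cost(h))
print("Break-even at roughly", hours, "render-hours")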

A few other trends we are keeping our eyes on are the growing use of game engines, motion capture suits and realtime markerless facial tracking in VFX pipelines.

Realtime raytracing is becoming more prevalent in VFX. What impact does realtime raytracing have on system hardware, and what do VFX artists need to be thinking about when optimizing their systems?
William George: Most realtime raytracing requires specialized computer hardware, specifically video cards with dedicated raytracing functionality. Raytracing can be done on the CPU and/or normal video cards as well, which is what render engines have done for years, but not quickly enough for realtime applications. Nvidia is the only game in town at the moment for hardware raytracing on video cards with its RTX series.

Nvidia’s raytracing technology is available on its consumer (GeForce) and professional (Quadro) RTX lines, but which one to use depends on your specific needs. Quadro cards are specifically made for this kind of work, with higher reliability and more VRAM, which allows for the rendering of more complex scenes … but they also cost a lot more. GeForce, on the other hand, is more geared toward consumer markets, but the “bang for your buck” is incredibly high, allowing you to get several times the performance for the same cost.

In between these two is the Titan RTX, which offers very good performance and VRAM for its price, but due to its fan layout, it should only be used as a single card (or at most in pairs, if used in a computer chassis with lots of airflow).

Another thing to consider is that if you plan on using multiple GPUs (which is often the case for rendering), the size of the computer chassis itself has to be fairly large in order to fit all the cards, power supply, and additional cooling needed to keep everything going.

How are game engines changing or impacting VFX workflows?
Bach: Game engines have been used for previsualization for a while, but we are starting to see them being used further and further down the VFX pipeline. In fact, there are already several instances where renders directly captured from game engines, like Unity or Unreal, are being used in the final film or animation.

This is getting into speculation, but I believe that as the quality of what game engines can produce continues to improve, it is going to drastically shake up VFX workflows. The fact that you can make changes in real time, as well as use motion capture and facial tracking, is going to dramatically reduce the amount of time necessary to produce a highly polished final product. Game engines likely won’t completely replace more traditional rendering for quite a while (if ever), but it is going to be significant enough that I would encourage VFX artists to at least familiarize themselves with the popular engines like Unity or Unreal.

What impact do you see ML/AI and AR/VR playing for your customers?
We are seeing a lot of work being done for machine learning and AI, but a lot of it is still on the development side of things. We are starting to get a taste of what is possible with things like Deepfakes, but there is still so much that could be done. I think it is too early to really tell how this will affect VFX in the long term, but it is going to be exciting to see.

AR and VR are cool technologies, but it seems like they have yet to really take off, in part because designing for them takes a different way of thinking than traditional media, but also in part because there isn’t one major platform that’s an overwhelming standard. Hopefully, that is something that gets addressed over time, because once creative folks really get a handle on how to use the unique capabilities of AR/VR to their fullest, I think a lot of neat stories will be told.

What is next on the horizon for VFX?
Bach: The sky is really the limit due to how fast technology and techniques are changing, but I think there are two things in particular that are going to be very interesting to see how they play out.

First, we are hitting a point where ethics (“With great power comes great responsibility” and all that) is a serious concern. With how easy it is to create highly convincing Deepfakes of celebrities or other individuals, even for someone who has never used machine learning before, I believe there is the potential for backlash from the general public. So far, every use of this type of technology has been for entertainment or other legitimate purposes, but the potential to use it for harm is too significant to ignore.

Something else I believe we will start to see is “VFX for the masses,” similar to how video editing used to be a purely specialized skill, but now anyone with a camera can create and produce content on social platforms like YouTube. Advances in game engines, facial/body tracking for animated characters and other technologies that remove a number of skills and hardware barriers for relatively simple content are going to mean that more and more people with no formal training will take on simple VFX work. This isn’t going to impact the professional VFX industry by a significant degree, but I think it might spawn a number of interesting techniques or styles that might make their way up to the professional level.

 

Paul Ghezzo, creative director, Technicolor Visual Effects
Technicolor and its family of VFX brands provide visual effects services tailored to each project’s needs.

What film inspired you to work in VFX?
At a pretty young age, I fell in love with Star Wars: Episode IV – A New Hope and learned about the movie magic that was developed to make those incredible visuals come to life.

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
USD will help structure some of what we currently do, and cloud rendering is an incredible source to use when needed. I see both of them maturing and being around for years to come.

As for other trends, I see new methods of photogrammetry and HDRI photography/videography providing datasets for digital environment creation and capturing lighting content; performance capture (smart 2D tracking and manipulation or 3D volumetric capture) for ease of performance manipulation or layout; and even post camera work. New simulation engines are creating incredible and dynamic sims in a fraction of the time, and all of this coming together through video cards streamlining the creation of the end product. In many ways it might reinvent what can be done, but it might take a few cutting-edge shows to embrace and perfect the recipe and show its true value.

Production cameras tethered to digital environments for live set extensions are also coming of age, and with realtime rendering becoming a viable option, I can imagine it will only be a matter of time before LED walls become the new greenscreen. Can you imagine a live-action set extension that parallaxes, distorts and is exposed in the same way as its real-life foreground? How about adding explosions, bullet hits or even an armada of spaceships landing in the BG, all on cue? I imagine this will happen in short order. Exciting times.

Are game engines affecting how you work or how you will work in the future?
Game engines have affected how we work. The speed and quality they offer are undoubtedly game changers, but they don’t always create the desired elements and AOVs that are typically needed in TV/film production.

They are also creating a level of competition that is spurring other render engines to be competitive and provide a similar or better solution. I can imagine that our future will use Unreal/Unity engines for fast turnaround productions like previz and stylized content, as well as for visualizing virtual environments and digital sets as realtime set extensions and a lot more.

Snowfall

What about realtime raytracing? How will that affect VFX and the way you work?
GPU rendering has single-handedly changed how we render and what we render with. A handful of GPUs and a GPU-accelerated render engine can equal or surpass a CPU farm that’s several times larger and much more expensive. In VFX, iterations equal quality, and if multiple iterations can be completed in a fraction of the time — and with production time usually being finite — then GPU-accelerated rendering equates to higher quality in the time given.

There are a lot of hidden variables in that equation (changes of direction, level of talent, work ethic, hardware/software limitations, etc.), but simply put, if you can hit the notes as fast as they are given and don’t have to wait hours for a render farm to churn out a product, then the faster each iteration can be delivered, the more iterations can be produced, allowing for a higher-quality product in the time given.
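
The arithmetic behind that point is simple enough to sketch; the render times below are hypothetical, and only the ratio matters.

# Back-of-the-envelope: how many creative iterations fit in a fixed delivery window?
def iterations_in_window(window_hours, minutes_per_iteration):
    """Whole iterations that fit before the review deadline."""
    return int(window_hours * 60 // minutes_per_iteration)

window = 8  # hours left before the review, assumed
print("CPU farm, 120 min/iteration:", iterations_in_window(window, 120))  # -> 4
print("GPU farm,  20 min/iteration:", iterations_in_window(window, 20))   # -> 24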

How have AR or ML affected your workflows, if at all?
ML and AR haven’t significantly affected our current workflows yet … but I believe they will very soon.

One aspect of AR/VR/MR that we occasionally use in TV/film production is previz of environments, props and vehicles, which lets everyone in production and on set/location see what the greenscreen will be replaced with and allows for greater communication and understanding with the directors, DPs, gaffers, stunt teams, SFX and talent. I can imagine that AR/VR/MR will only become more popular as a preproduction tool, allowing productions to front-load and approve all aspects of production well before the camera is loaded and the clock is running on cast and crew.
Machine learning is on the cusp of general usage, but it currently seems to be used by productions with lengthy schedules that will benefit from development teams building those toolsets. There are tasks that ML will undoubtably revolutionize, but it hasn’t affected our workflows yet.

The Uncanny Valley. Where are we now?
Making the impossible possible … That *is* what we do in VFX. Looking at everything from Digital Emily in 2008 to Thanos and Hulk in Avengers: Endgame, we’ve seen what can be done. The Uncanny Valley will likely remain, but only on productions that can’t afford the time or cost of flawless execution.

Can you name some recent projects?
Big Little Lies, Dead to Me, NOS4A2, True Detective, Veep, This Is Us, Snowfall, The Loudest Voice, and Avengers: Endgame.

 

James Knight, virtual production director, AMD 
AMD is a semiconductor company that develops computer processors and related technologies for M&E as well as other markets. Its tools include Ryzen and Threadripper.

What are some of today’s VFX trends?
Well, certainly the exploration for “better, faster, cheaper” keeps going. Faster rendering, so our community can accomplish more iterations in a much shorter amount of time, seems to be something I’ve heard about the whole time I’ve been in the business.

I’d surely say the virtual production movement (or on-set visualization) is finally gaining steam. I work with almost all the major studios in my role, and all of them, at a minimum, have speeding up post and blending it with production on their radar; many have dedicated virtual production departments.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
I would say game engines are where most of the innovation comes from these days. Think about Unreal, for example. Epic pioneered Fortnite, the revenue from that must be astonishing, and they’re not going to sit on their hands. The feature film and TV post/VFX business benefits from gaming consumers’ demand for higher-resolution, more photorealistic images in real time. That gets passed on to our community by eliminating guesswork on set when framing partial or completely CG shots.

It should be for everyone or most, because the realtime and post production time savings are rather large. I think many still have a personal preference for what they’re used to. And that’s not wrong, if it works for them, obviously that’s fine. I just think that even in 2019, use of game engines is still new to some … which is why it’s not completely ubiquitous.

How do ML or AR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Well, it’s more the reverse. With our new Rome and Threadripper CPUs, we’re powering AR. Yes, we are supporting OpenXR 1.0.

What is next on the horizon for VFX?
Well, the demand for VFX is increasing, not the opposite, so the pursuit of faster photographic reality is perpetually in play. That’s good job security for me at a CPU/GPU company, as we have a way to go to properly bridge the Uncanny Valley completely, for example.

I’d love to say lower-cost CG is part of the future, but then look at the budgets of major features — they’re not exactly falling. The dance of Moore’s law will more than likely be in effect forever, with momentary huge leaps in compute power — like with Rome and Threadripper — drawing amazement for a period. Then, when someone sees the new, expanded size of their sandpit, they fill it and go, “I now know what I’d do if it was just a bit bigger.”

I am invested in and fascinated by the future of VFX, but I think it goes hand in hand with great storytelling. If we don’t have great stories, then directing and artistry innovations don’t properly get noticed. Look at the top 20 highest-grossing films in history … they’re all fantasy. We all want to be taken away from our daily lives and immersed in a beautiful, realistic, VFX-intense fictional world for 90 minutes, so we’ll be forever pushing the boundaries of rigging, texturing, shading, simulations, etc. As for putting my finger on exactly what’s next, I happen to know of a few amazing things that are coming, but sadly, I’m not at liberty to say right now.

 

Michel Suissa, managing director of pro solutions, The Studio-B&H 
The Studio-B&H provides hands-on experience to high-end professionals. Its Technology Center is a fully operational studio with an extensive display of high-end products and state-of-the-art workflows.

What are some of today’s VFX trends?
AI, ML, NN (GAN) and realtime environments

Will realtime raytracing play a role in how the tools you provide work?
It already does with most relevant applications in the market.

How are game engines changing how VFX are done? Is this for everyone or just a select few?
Realtime game engines are becoming more mainstream with every passing year and are now fairly accessible to a number of disciplines across different markets.

What is next on the horizon for VFX?
New pipeline architectures that will rely on different implementations (traditional and AI/ML/NN) and mixed infrastructures (local and cloud-based).

What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
AI, ML and realtime environments. New cloud toolsets. Prominence of neural networks and GANs. Proliferation of convincing “deepfakes” as a proof of concept for the use of generative networks as resources for VFX creation.

What about realtime raytracing? How will that affect VFX workflows?
RTX is changing how most people see their work being done. It is also changing expectations about what it takes to create and render CG images.



The Uncanny Valley. Where are we now?
AI and machine learning will help us get there. Perfection still remains too costly; the amount of time and resources required to create something convincing is prohibitive for the large majority of budgets.

 

Marc Côté, CEO, Real by Fake 
Real by Fake services include preproduction planning, visual effects, post production and tax-incentive financing.

What film or show inspired you to work in VFX?
George Lucas’ Star Wars and Indiana Jones (Raiders of the Lost Ark). For Star Wars, I was a kid and I saw this movie. It brought me to another universe. Star Wars was so inspiring even though I was too young to understand what the movie was about. The robots in the desert and the spaceships flying around. It looked real; it looked great. I was like, “Wow, this is amazing.”

Indiana Jones because it was a great adventure; we really visit the worlds. I was super-impressed by the action, by the way it was done. It was mostly practical effects, not really visual effects. Later on I realized that in Star Wars, they were using robots (motion control systems) to shoot the spaceships. And as a kid, I was very interested in robots. And I said, “Wow, this is great!” So I thought maybe I could use my skills and what I love and combine it with film. So that’s the way it started.

What trends have you been seeing? What do you feel is important?
The trend right now is using realtime rendering engines. It’s coming on pretty strong. The game companies who build engines like Unity or Unreal are offering a good product.

It’s a bit of a hack to use these tools for rendering or in production at this point. They’re great for previz, and they’re great for generating realtime environments and realtime playback. But having the capacity to change or modify imagery with the director during the finishing process is still not easy. Still, it’s a very promising trend.

Rendering in the cloud gives you a very rapid capacity, but I think it’s very expensive. You also have to download and upload 4K images, so you need a very big internet pipe. So I still believe in local rendering — either with CPUs or GPUs. But cloud rendering can be useful for very tight deadlines or for small companies that want to achieve something that’s impossible to do with the infrastructure they have.

My hope is that AI will minimize repetition in visual effects. For example, in keying. We key multiple sections of the body, but we get keying errors in plotting or transparency or in the edges, and they are all a bit different, so you have to use multiple keys. AI would be useful to define which key you need to use for every section and do it automatically and in parallel. AI could be an amazing tool to be able to make objects disappear by just selecting them.

Pixar’s USD is interesting. The question is: Will the industry take it as a standard? It’s like anything else. Kodak invented DPX, and it became the standard over time. Now we are using EXR. We have different software packages, and having an exchange format between them will be great. We’ll see. We already have FBX, which is a really good standard; it started as Filmbox, built by Kaydara, a Montreal company that was eventually acquired by Autodesk. So we’ll see. The demand and the companies who build the software — they will be the ones who take it up or not. A big company like Pixar has the advantage of other companies already using it.

The last trend is remote access. The internet is now allowing us to connect cross-country, like from LA to Montreal or Atlanta. We have a sophisticated remote infrastructure, and we do very high-quality remote sessions with artists who work from disparate locations. It’s very secure and very seamless.

What about realtime raytracing? How will that affect VFX and the way you work?
I think we have pretty good raytracing compared to what we had two years ago. Now it’s a question of performance, and of making it user-friendly in the application so it’s easy to light with natural lighting and you don’t have to fake the bounces to get two or three of them. I think it’s coming along very well and quickly.

Sharp Objects

So what about things like AI/ML or AR/VR? Have those things changed anything in the way movies and TV shows are being made?
My feeling right now is that we are getting into an era where I don’t think you’ll have enough visual effects companies to cover the demand.

Every show has visual effects. It can be a complete character, like a Transformer, or a movie from the Marvel Universe where the entire film is CG. Or it can be the huge number of invisible effects that are starting to appear in virtually every show. You need capacity to get all this done.

AI can help minimize repetition so artists can work more on the art and what is being created. This will accelerate things and give us the capacity to respond to what’s being demanded of us: a faster, cheaper product with quality as high as a movie’s.

The only scenario where we are looking at using AR is when we are filming. For example, you need a good camera track in real time, and then you want to be able to quickly add a CGI environment around the actors so the director can make the right decision about the background or the interactive characters in the scene. The actors will not see it unless they have a monitor or a pair of glasses or something that can show them the result.

So AR is a tool to be able to make faster decisions when you’re on set shooting. This is what we’ve been working on for a long time: bringing post production and preproduction together. To have an engineering department who designs and conceptualizes and creates everything that needs to be done before shooting.

The Uncanny Valley. Where are we now?
In terms of the environment, I think we’re pretty much there. We can create an environment that nobody will know is fake. Respectfully, I think our company Real by Fake is pretty good at doing it.

In terms of characters, I think we’re still not there. I think the game industry is helping a lot to push this. I think we’re on the verge of having characters look as close as possible to live actors, but if you’re in a closeup, it still feels fake. For mid-ground and long shots, it’s fine. You can make sure nobody will know. But I don’t think we’ve crossed the valley just yet.

Can you name some recent projects?
Big Little Lies and Sharp Objects for HBO, Black Summer for Netflix and Brian Banks, an indie feature.

 

Jeremy Smith, CTO, Jellyfish Pictures
Jellyfish Pictures provides a range of services including VFX for feature film, high-end TV and episodic animated kids’ TV series and visual development for projects spanning multiple genres.

What film or show inspired you to work in VFX?
Forrest Gump really opened my eyes to how VFX could support filmmaking. Seeing Tom Hanks interact with historic footage (e.g., John F. Kennedy) was something that really grabbed my attention, and I remember thinking, “Wow … that is really cool.”

What trends have you been seeing? What do you feel is important?
The use of cloud technology is really empowering “digital transformation” within the animation and VFX industry. The result of this is that there are new opportunities that simply wouldn’t have been possible otherwise.

Jellyfish Pictures uses burst rendering into the cloud, extending our capacity and enabling us to take on more work. In addition to cloud rendering, Jellyfish Pictures were early adopters of virtual workstations, and, especially after Siggraph this year, it is apparent that this is the future for VFX and animation.

Virtual workstations promote a flexible and scalable way of working, with global reach for talent. This is incredibly important for studios to remain competitive in today’s market. As well as the cloud, formats such as USD are making it easier to exchange data with others, which allows us to work in a more collaborative environment.

It’s important for the industry to pay attention to these, and similar, trends, as they will have a massive impact on how productions are carried out going forward.

Are game engines affecting how you work, or how you will work in the future?
Game engines are offering ways to enhance certain parts of the workflow. We see a lot of value in the previz stage of the production. This allows artists to iterate very quickly and helps move shots onto the next stage of production.

What about realtime raytracing? How will that affect VFX and the way you work?
The realtime raytracing from Nvidia (as well as GPU compute in general) offers artists a new way to iterate and help create content. However, with recent advancements in CPU compute, we can see that “traditional” workloads aren’t going to be displaced. The RTX solution is another tool that can be used to assist in the creation of content.

How have AR/VR and ML/AI affected your workflows, if at all?
Machine learning has the power to really assist certain workloads. For example, it’s possible to use machine learning to assist a video editor by cataloging speech in a certain clip. When a director says, “find the spot where the actor says ‘X,’” we can go directly to that point in time on the timeline.

 In addition, ML can be used to mine existing file servers that contain vast amounts of unstructured data. When mining this “dark data,” an organization may find a lot of great additional value in the existing content, which machine learning can uncover.
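
As one hedged example of how the “find the spot where the actor says X” idea could be wired up, here is a small sketch using the open-source Whisper speech-to-text model; it is a generic illustration, not a description of Jellyfish Pictures’ actual tooling.

# Sketch: index a clip's dialogue so an editor can jump to where an actor says a line.
# Assumes the openai-whisper package is installed and "clip.wav" exists on disk.
import whisper

def build_dialogue_index(audio_path):
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    # Each segment carries start/end times (in seconds) plus the recognized text.
    return [(seg["start"], seg["end"], seg["text"]) for seg in result["segments"]]

def find_line(index, phrase):
    """Return the timecodes of every segment containing the phrase."""
    phrase = phrase.lower()
    return [(start, end, text) for start, end, text in index if phrase in text.lower()]

index = build_dialogue_index("clip.wav")
for start, end, text in find_line(index, "I never said that"):
    print("%8.2fs - %8.2fs  %s" % (start, end, text.strip()))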

The Uncanny Valley. Where are we now?
With recent advancements in technology, the Uncanny Valley is closing; however, it is still there. We see more digital humans in cinema than ever before (Peter Cushing was a main character in Rogue One: A Star Wars Story), and I fully expect to see more advances as time goes on.

Can you name some recent projects?
Our latest credits include Solo: A Star Wars Story, Captive State, The Innocents, Black Mirror, Dennis & Gnasher: Unleashed! and Floogals Seasons 1 through 3.

 

Andy Brown, creative director, Jogger 
Jogger Studios is a boutique visual effects studio with offices in London, New York and LA. With capabilities in color grading, compositing and animation, Jogger works on a variety of projects, from TV commercials and music videos to projections for live concerts.

What inspired you to work in VFX?
First of all, my sixth form English project was writing treatments for music videos to songs that I really liked. You could do anything you wanted to for this project, and I wanted to create pictures using words. I never actually made any of them, but it planted the seed of working with visual images. Soon after that I went to university in Birmingham in the UK. I studied communications and cultural studies there, and as part of the course, we visited the BBC Studios at Pebble Mill. We visited one of the new edit suites, where they were putting together a story on the inquiry into the Handsworth riots in Birmingham. It struck me how these two people, the journalist and the editor, could shape the story and tell it however they saw fit. That’s what got me interested on a critical level in the editorial process. The practical interest in putting pictures together developed from that experience and all the opportunities that opened up when I started work at MPC after leaving university.

What trends have you been seeing? What do you feel is important?
Remote workstations and cloud rendering are all really interesting. It’s giving us more opportunities to work with clients across the world using our resources in LA, SF, Austin, NYC and London. I love the concept of a centralized remote machine room that runs all of your software for all of your offices and allows you scaled rendering in an efficient and seamless manner. The key part of that sentence is seamless. We’re doing remote grading and editing across our offices so we can share resources and personnel, giving the clients the best experience that we can without the carbon footprint.

Are game engines affecting how you work or how you will work in the future?
Game engines are having a tremendous effect on the entire media and entertainment industry, from conception to delivery. Walking around Siggraph last month, seeing what was not only possible but practical and available today using gaming engines, was fascinating. It’s hard to predict industry trends, but the technology felt like it will change everything. The possibilities on set look great, too, so I’m sure it will mean a merging of production and post production in many instances.

What about realtime raytracing? How will that affect VFX and the way you work?
Faster workflows and less time waiting for something to render have got to be good news. It gives you more time to experiment and refine things.

Chico for Wendy’s

How have AR/VR or ML/AI affected your workflows, if at all?
Machine learning is making its way into new software releases, and the tools are useful. Anything that makes it easier to get where you need to go on a shot is welcome. AR, not so much. I viewed the new Mac Pro sitting on my kitchen work surface through my phone the other day, but it didn’t make me want to buy it any more or less. It feels more like something that we can take technology from rather than something that I want to see in my work.

I’d like 3D camera tracking and facial tracking to be realtime on my box, for example. That would be a huge time-saver for set extensions and beauty work. Anything that makes getting a perfect key easier would also be great.

The Uncanny Valley. Where are we now?
It always used to be “Don’t believe anything you read.” Now it’s, “Don’t believe anything you see.” I used to struggle to see the point of an artificial human, except for resurrecting dead actors, but now I realize the ultimate aim is suppression of the human race and the destruction of democracy by multimillionaire despots and their robot underlings.

Can you name some recent projects?
I’ve started prepping for the apocalypse, so it’s hard to remember individual jobs, but there’s been the usual kind of stuff — beauty, set extensions, fast food, Muppets, greenscreen, squirrels, adding logos, removing logos, titles, grading, finishing, versioning, removing rigs, Frankensteining, animating, removing weeds, cleaning runways, making tenders into wings, split screens, roto, grading, polishing cars, removing camera reflections, stabilizing, tracking, adding seatbelts, moving seatbelts, adding photos, removing pictures and building petrol stations. You know, the usual.

 

James David Hattin, founder/creative director, VFX Legion 
Based in Burbank and British Columbia, VFX Legion specializes in providing episodic shows and feature films with an efficient approach to creating high-quality visual effects.

What film or show inspired you to work in VFX?
Star Wars was my ultimate source of inspiration for doing visual effects. Many of the effects in the movies didn’t make sense to me as a six-year-old, but I knew that this was the next best thing to magic. Visual effects create a wondrous world where everyday people can become superheroes, leaders of a resistance or rulers of a 5th-century dynasty. Watching X-wings flying over the surface of a space station the size of a small moon was exquisite. I also learned, much later on, that the visual effects we couldn’t see were as important as the ones we could.

I had already been steeped in visual effects with Star Trek — phasers, spaceships and futuristic transporters. Models hung from wires on a moon base convinced me that we could survive on the moon as it broke free from orbit. All of this fueled my budding imagination. Exploring computer technology and creating alternate realities, CGI and digitally enhanced solutions have been my passion for over a quarter of a century.

What trends have you been seeing? What do you feel is important?
More and more of the work is going to happen inside a cloud structure. That is definitely something being pushed very heavily by the tech giants, like Google and Amazon, that rule our world. There is no Moore’s law for computers anymore; the price and power we see out of computers are almost plateauing. The technology story is now about optimizing algorithms or rendering with video cards. It’s about getting bigger, better effects out more efficiently. Some companies are opting to run their entire operations in the cloud or in co-located server locations. This can theoretically free up workers to be in different locations around the world, provided they have solid, low-latency, high-speed internet.

When Legion was founded in 2013, the best way around cloud costs was to have on-premises servers and workstations that supported global connectivity. It was a cost control issue that has benefitted the company to this day, enabling us to bring a global collective of artists and clients into our fold in a controlled and secure way. Legion works in what we consider a “private cloud,” eschewing the costs of egress from large providers and working directly with on-premises solutions.

Are game engines affecting how you work or how you will work in the future?
Game engines are perfect for previsualization in large, involved scenes. We create a lot of environments and invisible effects. For the larger bluescreen shoots, we can build out our sets in Unreal Engine, previsualizing how the scene will play for the director or DP. This helps get everyone on the same page when it comes to how a particular sequence is going to be filmed. It’s a technique that also helps the CG team focus on adding details to the areas of a set that we know will be seen. When the schedule is tight, the assets are camera-ready by the time the cut comes to us.

What about realtime raytracing via Nvidia’s RTX? How will that affect VFX and the way you work?
The type of visual effects we create for feature films and television shows involves a lot of layers and requires technology that provides efficient, comprehensive compositing solutions. Many video card rendering engines like OctaneRender, Redshift and V-Ray RT are limited when it comes to what they can create with layers. They often have issues with getting what is called “back to beauty,” in which the sum of the render passes equals the final render. However, the workarounds we’ve developed enable us to achieve the quality we need. Realtime raytracing introduces a fantastic technology that will someday be an ideal fit for our needs. We’re keeping an eye out for it as it evolves and becomes more robust.
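
For readers unfamiliar with the term, “back to beauty” can be sketched in a few lines: additively recombine the lighting AOVs and check that they reproduce the beauty render. The pass names and pixel values below are generic placeholders, not any particular renderer’s output.

# Sketch of a "back to beauty" check: the sum of the light AOVs should reproduce
# the beauty render, so the passes can be graded independently and recombined in comp.
import numpy as np

def rebuild_beauty(aovs):
    """Additively recombine lighting AOVs into a beauty image."""
    return sum(aovs.values())

# Toy 2x2 "renders" standing in for full-resolution EXR passes.
aovs = {
    "diffuse":    np.array([[0.30, 0.25], [0.20, 0.10]]),
    "specular":   np.array([[0.05, 0.10], [0.02, 0.01]]),
    "reflection": np.array([[0.02, 0.03], [0.01, 0.00]]),
    "emission":   np.array([[0.00, 0.00], [0.00, 0.40]]),
}
beauty = np.array([[0.37, 0.38], [0.23, 0.51]])  # the renderer's own beauty output

# If this prints False, the AOVs don't sum back to the beauty within tolerance.
print("back to beauty:", np.allclose(rebuild_beauty(aovs), beauty, atol=1e-4))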

How have AR/VR or ML/AI affected your workflows, if at all?
AR has been in the wings of the industry for a while, but there’s nothing specific we would take advantage of. Machine learning has been introduced a number of times to solve various problems, and it’s a pretty exciting time for these things. One of our partner contacts, who left to join Facebook, was keen to try a number of machine learning tricks on a couple of projects that might have come through, but we didn’t get to put them to the test. There’s an enormous amount of power to be had in machine learning, and I think we are going to see big changes in that field over the next five years and in how it affects all of post production.

The Uncanny Valley. Where are we now?
Climbing up the other side, not quite at the summit for daily use. As long as the character isn’t a full normal human, it’s almost indistinguishable from reality.

Can you name some recent projects?
We create visual effects on an ongoing basis for a variety of television shows that include How to Get Away with Murder, DC’s Legends of Tomorrow, Madam Secretary and The Food That Built America. Our team is also called upon to craft VFX for a mix of movies, from the groundbreaking feature film Hardcore Henry to recently released films such as Ma, SuperFly and After.

MAIN IMAGE: Good Morning Football via Chapeau Studios.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years.