Tag Archives: greenscreen

VFX Supervisor Sam O’Hare on Craig Gillespie’s Dumb Money

By Randi Altman

Remember when GameStop, the aging brick-and-mortar video game retailer, caused a stir on Wall Street thanks to a stock price run-up that essentially resulted from a pump-and-dump scheme?

Director Craig Gillespie took on this crazy-but-true story in Dumb Money, which follows Keith Gill (Paul Dano), a regular guy with a wife and baby who starts it all by sinking his life savings into GameStop stock. His social media posts start blowing up, and he makes millions, angering the tried-and-true Wall Street money guys, who begin to fight back. Needless to say, things get ugly for both sides.

Sam O’Hare

While this type of film, which has an all-star cast, doesn’t scream visual effects movie, there were 500 shots, many of which involved putting content on computer and phone screens or changing the seasons. To manage this effort, Gillespie and team called on New York City-based visual effects supervisor Sam O’Hare.

We reached out to O’Hare to talk about his process on the film.

When did you first get involved on Dumb Money?
I had just finished a meeting at the Paramount lot in LA and was sitting on the Forrest Gump bench waiting for an Uber when I got a call about the project. I came back to New York and joined the crew when they started tech scouting.

So, early on in the project?
It wasn’t too early, but it was early enough that I could get a grip on what we’d need to achieve for the film, VFX-wise. I had to get up to speed with everything before the shoot started.

Talk about your role as VFX supervisor on the film. What were you asked to do?
The production folks understood that there was enough VFX on the film that it needed a dedicated supervisor. I was on-set for the majority of the movie, advising and gathering data, and then, after the edit came together, I continued through post. Being on-set means you can communicate with all the other departments to devise the best shoot strategy. It also means you can ensure that the footage you are getting will work as well as possible in post and keep post costs down.

I also acted as VFX producer for the show, so I got the bids from vendors and worked out the budgets with director Craig Gillespie and producer Aaron Ryder. I then distributed and oversaw the shots, aided by my coordinator, Sara Rosenthal. I selected and booked the vendors.

Who were they, and what did they each supply?
Chicken Bone tackled the majority of the bluescreen work, along with some screens and other sequences. Powerhouse covered a lot of the screens, Pete Davidson’s car sequence, the pool in Florida and other elements. Basilic Fly handled the split screens and the majority of the paint and cleanup. HiFi 3D took on the sequences with the trees outside Keith Gill’s house.

I also worked closely with the graphics vendors since much of their work had to be run through a screen look that I designed. Since the budget was tight, I ended up executing around 100 shots myself, mostly the screen looks on the graphics.

There were 500 VFX shots? What was the variety of the VFX work?
The editor, Kirk Baxter, is amazing at timing out scenes to get the most impact from them. To that end we had a lot of split screens to adjust timing on the performances. We shot primarily in New Jersey, with a short stint in LA, but the film was set in Massachusetts and Miami, so there was also a fair amount of paint and environmental work to make that happen. In particular, there was a pool scene that needed some extensive work to make it feel like Florida.

The film took place mostly over the winter, but we shot in the fall, so we had a couple of scenes where we had to replace all of the leafy trees with bare ones. HiFi handled these, placing CG trees using photogrammetry I shot on-set to guide the layout.

There was a fair amount of bluescreen, both in car and plane sequences and to work around actors’ schedules when we couldn’t get them in the right locations at the right times. We shot background plates and then captured the actors later with matched lighting to be assembled afterward.

Screens were a big part of the job. Can you walk us through dealing with those?
We had a variety of approaches to the screens, depending on what we needed to do. The Robinhood app features heavily in the film, and we had to ensure that the actors’ interaction with it was accurate. To that end, I built green layouts with buttons and tap/swipe sequences for them to follow, which mimicked the app accurately at the time.

For the texting sequence, we set up users on the phones, let the actors text one another and used as much of it as possible. Their natural movements and responses to texts were great. All we did was replace the bubbles at the top of the screen to make the text consistent.

For Roaring Kitty, art department graphics artists built his portfolio and the various website layouts, which were on the screens during the shoot. We used these when we could and replaced some for continuity. We also inserted footage that was shot with a GoPro on-set. This footage was then treated with a rough depth matte built in Resolve to give it a lo-fi cut-out feel and then laid over the top of the graphics for the YouTube section.

The screen look for the close-ups was built using close-up imagery of LED screens, with different amounts of down-rez and re-up-rez to get the right amount of grid look for different screens and levels of zoom. Artists also added aberration, focus falloff, etc.
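To make that more concrete, here is a rough sketch of the down-rez/re-up-rez idea in Python with OpenCV. It is not the Fusion or Nuke setup used on the film; the scale factor and grid overlay are chosen purely for illustration, and aberration and focus falloff would be layered on top.

```python
# Rough sketch of the down-rez/re-up-rez trick described above (illustrative only).
import cv2
import numpy as np

def screen_grid_look(img, scale=6, grid_strength=0.25):
    """Down-rez then re-up-rez with nearest-neighbour filtering so each source
    pixel becomes a visible block, then darken block borders to suggest the
    LED/LCD grid seen in macro photos of real screens."""
    h, w = img.shape[:2]
    small = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
    blocky = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

    # Darken every Nth row/column to fake the gaps between emitters.
    grid = np.ones((h, w, 1), dtype=np.float32)
    grid[::scale, :, :] *= 1.0 - grid_strength
    grid[:, ::scale, :] *= 1.0 - grid_strength
    return (blocky.astype(np.float32) * grid).astype(img.dtype)

frame = cv2.imread("graphics_plate.png")  # hypothetical input plate
cv2.imwrite("graphics_with_screen_look.png", screen_grid_look(frame))
```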

Any other challenging sequences?
We had very limited background plates for the car sequences that were shot. Many had sun when we needed overcast light, so getting those to feel consistent and without repeating took a fair bit of editing and juggling. Seamlessly merging the leafless CG trees into the real ones for the scene outside Keith Gill’s house was probably the most time-consuming section, but it came out looking great.

What tools did you use, and how did they help?
On-set, I rely on my Nikon D750 and Z6 for reference, HDRI and photogrammetry work.

I used Blackmagic Resolve for all my reviews. I wrote some Python pipeline scripts to automatically populate the timeline with trimmed plates, renders and references all in the correct color spaces from ShotGrid playlists. This sped up the review process a great deal and left me time enough to wrangle the shots I needed to work on.
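For readers curious what that kind of glue code looks like, here is a minimal sketch that pulls a ShotGrid playlist and builds a Resolve review timeline from it. It uses ShotGrid’s Python API and Resolve’s scripting API, but the playlist name, field names and credentials are invented for illustration; this is not O’Hare’s actual pipeline, which also handled trimming, color spaces and references.

```python
# Minimal sketch of a ShotGrid-playlist-to-Resolve review timeline.
# Credentials, playlist name and field names below are assumptions.
import shotgun_api3                      # pip install shotgun-api3
import DaVinciResolveScript as dvr       # ships with DaVinci Resolve's scripting env

SG_URL, SCRIPT, KEY = "https://studio.shotgrid.autodesk.com", "review_bot", "xxxx"
PLAYLIST_NAME = "review_2023_03_01"      # hypothetical playlist

sg = shotgun_api3.Shotgun(SG_URL, script_name=SCRIPT, api_key=KEY)

# Find the playlist, then the Versions attached to it.
playlist = sg.find_one("Playlist", [["code", "is", PLAYLIST_NAME]])
versions = sg.find(
    "Version",
    [["playlists", "is", playlist]],
    ["code", "sg_path_to_movie"],        # assumes renders are published as movies
)                                        # (playlist sort order omitted for brevity)

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

# Import each version's media and append it to a fresh review timeline.
paths = [v["sg_path_to_movie"] for v in versions if v["sg_path_to_movie"]]
clips = media_pool.ImportMedia(paths)
media_pool.CreateEmptyTimeline(PLAYLIST_NAME)
media_pool.AppendToTimeline(clips)
```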

I did all my compositing in Blackmagic Fusion Studio, but I believe all the vendors worked in Foundry Nuke.

Butcher Bird

Greenscreen Versus LED Walls: What’s Right for You?

By Steven Calcote

If you have the time, budget and access to spin up a production on an LED volume, then go for it! It’s amazing technology, and there’s a reason shows like The Mandalorian take our breath away. But if you’re like 99% of the production world, greenscreen is probably still the right answer for virtual production (VP). And it’s worth mentioning that VP may not always be the best solution. Sometimes a location shoot is still the best way to tell your story.

Butcher Bird

Steven Calcote

To be clear, when we say, “virtual production,” we mean real-time integration of live-tracked CG environments with a platform like Unreal Engine. As with LED-driven VP, we too aim to end our shoot day with the final product — often referred to as final pixels — in the can and ready for editorial. But for Butcher Bird shoots, we’re focused on real-time compositing with a greenscreen rather than a moving frustum on an LED wall.

Greenscreen
Over the past three years, across a wide range of shows — from Netflix’s weeklong live series Geeked Week to narrative and commercial projects — we as a creative content company have chosen greenscreen virtual production because of considerations that include infrastructure, flexibility, time, budget and multi-cam production.

Our stage was constructed seven years ago with a three-wall cyc, so we already have a great setup for a greenscreen volume. But even if we had started with video walls in mind, outfitting an equivalent amount of imaging space using LED tech would have required a massive infrastructure upgrade to accommodate power draw, truss support and maintenance. This doesn’t even take into account the LED processors and multiple panel replacements and upgrades, given that display technology operates on the same Moore’s Law schedule that drives computing.

Also, with how often Butcher Bird needs to turn over our stage from virtual production shows to full-set builds and back again, permanent LED walls simply wouldn’t give us the flexibility we need. They would, at the very least, significantly encroach on our shooting space for non-VP shows and make it impossible to pull around the sliding black drapes that turn our stage into a black box when needed. Finally, we find that it’s faster to jump into a show on greenscreen because we can skip past the color calibration, sync, moiré and other troubleshooting associated with LED walls.

But perhaps most important for us, we need to be able to shoot with multiple cameras — as many as six or more for many of our live shows — which greenscreen can handle as long as we dedicate a game engine and tracking setup for each camera. At this point, most LED walls max out at two cameras with one frustum for each.

Butcher Bird

Before

LED
There are core advantages to embracing an LED workflow. To name just a few: You won’t have to worry about correcting for green spill; avoiding shiny furniture, costumes and props; eradicating green from your real-world color palette; or maintaining a minimum lighting level to generate good keys.

With an LED volume, you can add atmospherics like real-world haze, rain or dust as well as colorful interactive lighting that would ruin a greenscreen composite. For example, if you’re shooting large reflective objects like cars, then plan to use LED walls from the start. And the big, beautiful virtual oceans of Netflix’s 1899 and HBO’s Our Flag Means Death wouldn’t have been feasible economically or aesthetically on greenscreen.

But whichever capture volume setup you choose, you’ll still need to overcome virtual production’s key challenges: (A) making sure scenes are performing at a high enough frame rate to make real-time capture possible, (B) preparing both real and virtual environments well enough in advance to make sure they’re ready to shoot, (C) supporting a brand-new discipline of highly skilled technicians and artists, and (D) making sure that your VP choices don’t wag the dog when it comes to the story you want to tell.

After

Lines Blurring?
I’m happy to share a little production secret with you: We can still accomplish a number of perceived LED wall advantages on greenscreen by applying some additional hardware and software solutions. With the latest series of high-end Nvidia graphics cards and Unreal Engine 5 advances, we’ve started to add real-time atmospheric effects digitally that previously could have only been added practically.

Image-based LED lighting solutions from companies like Astera, Aputure and Quasar Science — using the same Unreal Engine files as the main show — enable us to add edge and foreground lighting interactions that further enhance the realism of our composites. And for situations with tricky reflective surfaces, we can deploy resources like an 82-inch consumer 4K LED screen to capture detailed close-up image interaction, a technique we recently used on a sci-fi short featuring an astronaut’s helmet visor reflecting an alien landscape.

What’s Ahead
Looking to the future, we couldn’t be more excited by the incredible pace of new storytelling technologies that will make virtual production even easier. Count on AI add-on apps to appear at every link in the chain, from Unreal Engine to automated image correction, compositing cleanup, real-time motion-capture smoothing and streaming software, making virtual production faster, cheaper and more beautiful than ever before.


Steven Calcote is a partner and director at Butcher Bird Studios in Los Angeles.

 

GhostFrame

GhostFrame: Hidden Chromakey, Hidden Tracking, Multiple Sources

At NAB, AGS announced that the new 4x UHD capability of the Sony HDC-5500 is fully certified for use with GhostFrame, a virtual production toolkit that combines hidden chromakey, hidden tracking and multiple sources.

With GhostFrame, users of Sony cameras will be able to simultaneously capture four independent 4K UHD images on-camera while the human eye sees only one.

Until now, the Sony HDC-5500 and HDC-F5500 cameras have only been able to view two phases of UHD output simultaneously. Sony’s upgraded HDC-5500 can now view four video channels live from the camera in native UHD quality directly via 12G-SDI when connected to GhostFrame.

Combining hidden chromakey compositing, hidden tracking and multiple source video feeds into a single production frame, AGS’ patented technology enables GhostFrame to deliver a simplified, efficient and faster workflow for virtual productions and XR studios.

“We have been working closely with Sony for many years to ensure that its sensor technology is fully compatible with our patented processes at GhostFrame, and this latest pioneering development is an extension of that collaboration,” says Peter Angell from GhostFrame. “Now, film, TV and live event producers with virtual production and XR projects can view up to four realities of GhostFrame in pristine UHD quality in combination with Sony’s live production camera.”

Live FX Open Beta

Assimilate’s Live FX Open Beta: Live Compositing of Virtual Productions

Assimilate has announced an open beta for its new Live FX software, which enables real-time, live compositing for greenscreen and LED-wall-based virtual productions on-set, as well as quick creation of comps while scouting locations.

Live FX features the latest technology in keying, camera tracking, DMX light control and Notch Block integration, and a live link to Unreal Engine that simplifies on-set virtual production for not only previsualization but also final pixel for in-camera VFX. According to Assimilate, by recording any incoming dynamic metadata along with the imagery, Live FX can automatically prep all content for VFX/post, saving time and money. Filmmakers and artists — from freelancers to big studios — can now work in a familiar film/video workflow rather than in a programming-style environment.

Live FX Open Beta

Camera Tracking: Live FX supports a broad range of tracking systems for a variety of on-set situations, such as indoor, outdoor, big stage or small set, and tethered or untethered. To track camera movement, Live FX allows use of Intel RealSense tracking cameras, Mo-Sys StarTracker, HTC Vive trackers and even gyro and ARKit technology inside smartphones mounted to the cinema camera. Even without a dedicated tracking device, Live FX can create camera tracks using its own live tracking algorithm. All tracking data can be refined and synced to the incoming live camera signal, so all data stays in sync throughout the entire pipeline. Through its built-in virtual camera calibration, Live FX can calculate lens distortion and create final composites live on-set.

Greenscreen Replacement: The greenscreen can be easily replaced on set with a virtual background of choice, whether it’s a simple 2D texture, 360 equirectangular footage, a 3D Notch Block or a live feed from another camera or Unreal Engine. This is made possible by a variety of keying algorithms within Live FX. Users can create and combine multiple keyers using the layer stack and add multiple garbage masks to their composite. RGB and Alpha channels can be output separately via SDI or NDI to be captured by other tools on set, such as Unreal Engine.

LED Rear Projection Creates Photorealistic Environment: Live FX supports color-managed playback of any footage, at any resolution, and at any frame rate. It allows projection of conventional 2D files, 360 equirectangular footage, Notch Blocks, or live feeds, such as 3D environments directly from Unreal Engine, onto LED walls. Once the content is loaded, it can be color-graded, framed, and tweaked further. 

DMX Lighting Control: For a realistic and natural look, accurate lighting is a key aspect of virtual productions. The DMX lighting control engine in Live FX allows users to mark a region of interest on the footage that is being projected onto the LED wall. Live FX analyzes and averages the color inside the marked region for every frame and sends that color down to the stage lighting, making it easy to adjust the lighting of any scene. Not only a bright blue sky, but also dynamic scenes like a house on fire with flickering ARRI SkyPanels or a car chase through tunnels are now easy to create and set up.
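The color-averaging idea itself is simple. Here is an illustrative sketch in Python with NumPy, where send_dmx() is a stand-in for whatever DMX or Art-Net output you use; the channel mapping is an assumption, not Live FX’s implementation.

```python
# Illustrative sketch of region-average-to-DMX lighting; not Live FX code.
import numpy as np

def region_average_rgb(frame, x0, y0, x1, y1):
    """Average the pixels inside the marked region of interest (8-bit RGB frame)."""
    roi = frame[y0:y1, x0:x1].reshape(-1, 3).astype(np.float64)
    return roi.mean(axis=0)  # (R, G, B) floats in 0-255

def send_dmx(universe, channels):
    """Placeholder: push a list of 0-255 channel values to a DMX universe."""
    pass

def update_fixture_from_frame(frame, roi, universe=1):
    r, g, b = region_average_rgb(frame, *roi)
    # Simple RGB fixture mapping: dimmer at full, then R, G, B channels.
    send_dmx(universe, [255, int(r), int(g), int(b)])

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for a video frame
update_fixture_from_frame(frame, roi=(800, 100, 1200, 400))
```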

Live Links: Live FX features Live Links, which allows any piece of dynamic metadata entering the system to be used inside a live composition. The metadata can come from a dedicated tracker system, via the SDI signal of the camera or through the Open Sound Control (OSC) protocol from another system. Examples include using the dynamic lens metadata that comes in from the camera through SDI to directly animate the blur of a texture, or linking the camera’s own gyro metadata to the virtual camera inside Live FX to drive camera movement without any additional tracking devices.

Timecode Sync Signal Routing: One of the most important aspects in live compositing is managing latency. Live FX allows for compositing multiple streams together: Live streams, dynamic renders like those from Unreal Engine, and any prerecorded footage. If an input is timecode-managed, Live FX can auto-sync multiple feeds by delaying the fastest feeds to ensure 100 percent frame sync in the composite. The camera tracking can be manually delayed to more precisely align the tracking data with the video feed. The additional latency from SDI output is kept to a minimum in Live FX, and the output can be genlocked. Users can also exchange image data using NDI, or even direct GPU texture sharing with zero latency.
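As a toy example of the underlying idea, a few lines of Python can compute per-feed delay offsets from each feed’s current timecode so that everything is held back to match the slowest source (frame rate and non-drop timecode are assumed for simplicity).

```python
# Toy example of timecode-based feed alignment; assumes 25fps, non-drop timecode.
FPS = 25

def tc_to_frames(tc, fps=FPS):
    """'HH:MM:SS:FF' -> absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def delay_offsets(current_timecodes):
    """Frames to delay each feed so all feeds line up with the slowest one."""
    frames = {name: tc_to_frames(tc) for name, tc in current_timecodes.items()}
    slowest = min(frames.values())
    return {name: f - slowest for name, f in frames.items()}

# Example: the camera feed arrives two frames ahead of the Unreal render,
# so two frames of camera video are held in a FIFO buffer before compositing.
print(delay_offsets({"camera": "01:00:10:07", "unreal": "01:00:10:05"}))
# {'camera': 2, 'unreal': 0}
```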

Recording, Metadata and Post Prep: Anything that goes into Live FX can be recorded and instantly played back for review. The media can be recorded in H.264, Avid DNx or Apple ProRes, including the alpha channel. In the case of multiple live feeds, users can decide to record the final baked result or the individual raw feeds separately. All metadata that is captured and used in the live composite, plus camera tracking data, is saved to file along with the media.

Users start with the live composite and record the raw feeds and metadata separately. As soon as the recording stops, Live FX automatically creates an offline composite that is ready for instant playback and review. The offline composite is a duplicate of the live composite, using the just recorded files instead of the live signals, and includes all other elements and animation channels for further manipulation.

Once the high-resolution media is offloaded from the camera, Live FX automatically assembles the online composite and replaces the on-set recorded clips with the high-quality camera raw media. These online composites can be loaded into Assimilate’s Scratch to create metadata-rich dailies or VFX plates in the OpenEXR format, including all the recorded frame-based metadata, which saves time in postproduction.

 

Ingenuity Studios Provides 65 VFX Shots for Spotify Campaign

LA-based VFX house Ingenuity Studios contributed 65 VFX shots in five unique CG environments for Spotify’s star-studded, technically complex “Today’s Top Hits” campaign, which highlights the streaming service’s largest playlist of the same name. The campaign features Dua Lipa, Bad Bunny, Blackpink, Travis Scott and Billie Eilish.

In this new campaign, viewers fly through fantastical worlds, where their favorite artists find inspiration to create music. Just after viewers are seated in an antigravity chamber, they experience a wild transition. The chair flies out of the chamber and across the moon before zooming in on the final environment, which features a magical tree perched atop the moon. One environment is situated within another environment. The studio delivered 90-second, 60-second and 15-second edits for each artist.

Ingenuity Studios built all of these environments in CG, with live-action, on-set footage shot completely on greenscreen or bluescreen using an ARRI Alexa LF camera. The studio helped to provide a sense of framing for the on-set virtual environments during previz to prepare for effects work.

“Ingenuity came on board early in the project’s development, providing concept art and previz to help realize key set pieces and refine the edit,” says Ingenuity Studios owner and VFX supervisor Grant Miller. “We worked closely with director Warren Fu, whose background in visual development yielded great feedback and really helped elevate the work.”

On set, Ingenuity Studios deployed a virtual production workflow it had already used on a variety of projects, allowing the previz environments to be viewed live in the monitors. Not only did the process help line up shots and lay out camera moves, but it kept the whole team aligned on what they’d be seeing in the background once the production went to post.

Transitioning from environment to environment was a technical challenge, and placing one environment inside the other added a whole other level of complexity.

“The team wanted to drive home that all of these environments were inside the same ship, so it was a challenge to ensure that transitions feel seamless. These are really big worlds,” Miller says. “For example, the largest scene that Ingenuity handled was made possible by Solaris inside Houdini, a new USD system, and by rendering with RenderMan. In the past, this scene would have been split up into different parts and reassembled, but working with USD, we’re able to hold the entire scene in one file, rendering everything together for better lighting and interaction.”

Besides SideFX Houdini and Pixar RenderMan, Ingenuity Studios also called upon Foundry Nuke, Autodesk Maya, Adobe’s After Effects and Premiere, and 3DEqualizer.

“Having done a number of videos and other projects with both Billie and Travis in the past, this commercial felt like a natural progression of those relationships,” adds Miller. “All of the artists involved gave great feedback and were keen to help craft environments that were representative of their varied identities.”

 

Assimilate intros live grading, video monitoring and dailies tools

Assimilate has launched Live Looks and Live Assist, production tools that give pros speed and specialized features for on-set live grading, look creation, advanced video monitoring and recording.

Live Looks provides an easy-to-set-up environment for video monitoring and live grading that supports any resolution, from standard HD up to 8K workflows. Featuring professional grading and FX/greenscreen tools, it is straightforward to operate and offers a seamless connection into dailies and post workflows. With Live Looks available on both macOS and Windows, users are, for the first time, free to use the platform and hardware of their choice.

“I interact regularly with DITs to get their direct input about tools that will help them be more efficient and productive on set, and Live Looks and Live Assist are a result of that,” says Mazze Aderhold, product marketing manager at Assimilate. “We’ve bundled unique and essential features with the needed speed to boost their capabilities, enabling them to contribute to time savings and lower costs in the filmmaking workflow.”

Users can run this on a variety of setups — from a laptop to a full-blown on-set DIT rig. Live Looks provides LUT-box control over Flanders, Teradek and TVLogic devices. It also supports video I/O from AJA, Bluefish444 and Blackmagic for image and full-camera metadata capture. There is also direct reference recording to Apple ProRes on macOS and Windows.

Live Looks goes beyond LUT-box control. Users can process the live camera feed via video I/O, making it possible to do advanced grading, compare looks, manage all metadata, annotate camera input and generate production reports. Its fully color-managed environment ensures the created looks will come out the same in dailies and post. Live Looks provides a seamless path into dailies and post with look-matching in Scratch and CDL-EDL transfer to DaVinci Resolve.

With Live Looks, Assimilate takes its high-end grading tool set beyond Lift, Gamma, Gain and CDL by adding Powerful Curves and an easy-to-use Color Remapper. On-set previews can encompass not just color but realtime texture effects, like Grain, Highlight Glow, Diffusion and Vignette — all GPU-accelerated.

Advanced chroma keying lets users replace greenscreen backgrounds with two clicks, making it possible to verify camera angles, greenscreen tracking/anchor-point locations and lighting while still on set. As with all Assimilate software, users can load and play back any camera format, including raw formats such as RED raw and Apple ProRes RAW.

Live Assist has all of the features of Live Looks but also handles basic video-assist tasks, and like Live Looks, it is available on both macOS and Windows. It provides multicam recording and instant playback of all recorded channels and seamlessly combines live grading with video-assist tasks in an easy-to-use UI. Live Assist automatically records camera inputs to file based on the Rec flag inside the SDI signal, including all live camera metadata. It also extends the range of supported “edit-ready” capture formats: Apple ProRes (MOV), H.264 (MP4) and Avid DNxHD/HR (MXF). Operators can then choose whether they want to record the clean signal or record with the grade baked in.

Both Live Looks and Live Assist are available now. Live Looks starts at $89 per month, and Live Assist starts at $325 per month. Both products and free trials are available on the Assimilate site.

Quick Chat: Lord Danger takes on VFX-heavy Devil May Cry 5 spot

By Randi Altman

Visual effects for spots have become more and more sophisticated, and the recent Capcom trailer promoting the availability of its game Devil May Cry 5 is a perfect example.

 The Mike Diva-directed Something Greater starts off like it might be a commercial for an anti-depressant with images of a woman cooking dinner for some guests, people working at a construction site, a bored guy trimming hedges… but suddenly each of our “Everyday Joes” turns into a warrior fighting baddies in a video game.

Josh Shadid

The hedge trimmer’s right arm turns into a futuristic weapon, the construction worker evokes a panther to fight a monster, and the lady cooking is seen with guns a blazin’ in both hands. When she runs out of ammo, and to the dismay of her dinner guests, her arms turn into giant saws. 

Lord Danger’s team worked closely with Capcom USA to create this over-the-top experience, and they provided everything from production to VFX to post, including sound and music.

We reached out to Lord Danger founder/EP Josh Shadid to learn more about their collaboration with Capcom, as well as their workflow.

How much direction did you get from Capcom? What was their brief to you?
Capcom’s fight-games director of brand marketing, Charlene Ingram, came to us with a simple request — make a memorable TV commercial that did not use gameplay footage but still illustrated the intensity and epic-ness of the DMC series.

What was it shot on and why?
We shot on both the Arri Alexa Mini and the Phantom Flex4K using Zeiss Super Speed MkII prime lenses, thanks to our friends at Antagonist Camera, and a Technodolly motion control crane arm. We used the Phantom on the Technodolly to capture the high-speed shots. We used that setup to speed-ramp through character actions while maintaining 4K resolution for post in both the garden and kitchen transformations.

We used the Alexa Mini on the rest of the spot. It’s our preferred camera for most of our shoots because we love the combination of its size and image quality. The Technodolly allowed us to create frame-accurate, repeatable camera movements around the characters so we could seamlessly stitch together multiple shots as one. We also needed to cue the fight choreography to sync up with our camera positions.

You had a VFX supervisor on set. Can you give an example of how that was beneficial?
We did have a VFX supervisor on site for this production. Our usual VFX supervisor is one of our lead animators — having him on site to work with means we’re often starting elements in our post production workflow while we’re still shooting.

Assuming some of it was greenscreen?
We shot elements of the construction site and gardening scene on greenscreen. We used pop-ups to film these elements on set so we could mimic camera moves and lighting perfectly. We also took photogrammetry scans of our characters to help rebuild parts of their bodies during transition moments, and to emulate flying without requiring wire work — which would have been difficult to control outside during windy and rainy weather.

Can you talk about some of the more challenging VFX?
The shot of the gardener jumping into the air while the camera spins around him twice was particularly difficult. The camera starts on a 45-degree frontal, swings behind him and then returns to a 45-degree frontal once he’s in the air.

We had to digitally recreate the entire street, so we used the technocrane at the highest position possible to capture data from a slow pan across the neighborhood in order to rebuild the world. We also had to shoot this scene in several pieces and stitch it together. Since we didn’t use wire work to suspend the character, we also had to recreate the lower half of his body in 3D to achieve a natural-looking jump position. That, combined with the CG weapon elements, made for a challenging composite — but in the end, it turned out really dramatic (and pretty cool).

Were any of the assets provided by Capcom? All created from scratch?
We were provided with the character and weapons models from Capcom — but these were in-game assets, and if you’ve played the game you’ll see that the environments are often dark and moody, so the textures and shaders really didn’t apply to a real-world scenario.

Our character modeling team had to recreate and re-interpret what these characters and weapons would look like in the real world — and they had to nail it — because game culture wouldn’t forgive a poor interpretation of these iconic elements. So far the feedback has been pretty darn good.

In what ways did being the production company and the VFX house on the project help?
The separation of creative from production and post production is an outdated model. The time it takes to bring each team up to speed, to manage the communication of ideas between creatives and to ensure there is a cohesive vision from start to finish, increases both the costs and the time it takes to deliver a final project.

We shot and delivered all of Devil May Cry’s Something Greater in four weeks total, all in-house. We find that working as the production company and VFX house reduces the ratio of managers per creative significantly, putting more of the money into the final product.


Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years. 

Nutmeg and Nickelodeon team up to remix classic SpongeBob songs

New York creative studio Nutmeg Creative was called on by Nickelodeon to create trippy music-video-style remixes of some classic SpongeBob SquarePants songs for the kids network’s YouTube channel. Catchy, sing-along kids’ songs have been an integral part of SpongeBob since its debut in 1999.

Though there are dozens of unofficial fan remixes on YouTube, Nickelodeon frequently turns to Nutmeg for official remixes: vastly reimagined versions accompanied by trippy, trance-inducing visuals that inevitably go viral. It all starts with the music, and the music is inspired by the show.

Infused with the manic energy of classic Warner Bros. Looney Tunes, SpongeBob is simultaneously slapstick and surreal, with an upbeat vibe that has attracted a cult-like following from the get-go. Now in its 10th season, SpongeBob attracts fans that span two generations: kids who grew up watching SpongeBob now have kids of their own.

The show’s sensibility and multi-generational audience inform the approach of Nutmeg sound designer, mixer and composer JD McMillin, whose remixes of three popular vintage SpongeBob songs have become viral hits: Krusty Krab Pizza and Ripped My Pants from 1999, and The Campfire Song Song (yes, that’s correct) from 2004. With musical styles ranging from reggae, hip-hop and trap/EDM to stadium rock, drum and bass and even Brazilian dance, McMillin’s remixes expand the appeal of the originals with ear candy for whole new audiences. That’s why, when Nickelodeon provides a song to Nutmeg, McMillin is given free rein to remix it.

“No one from Nick is sitting in my studio babysitting,” he says. “They could, but they don’t. They know that if they let me do my thing they will get something great.”

“Nickelodeon gives us a lot of creative freedom,” says executive producer Mike Greaney. “The creative briefs are, in a word, brief. There are some parameters, of course, but, ultimately, they give us a track and ask us to make something new and cool out of it.”

All three remixes have collectively racked up hundreds of thousands of views on YouTube, with The Campfire Song Song remix generating 655K views in less than 24 hours on the SpongeBob Facebook page.

McMillin credits the success to the fact that Nutmeg serves as a creative collaborative force: what he delivers is more reinvention than remix.

“We’re not just mixing stuff,” he says. “We’re making stuff.”

Once Nick signs off on the audio, that approach continues with the editorial. Editors Liz Burton, Brian Donnelly and Drew Hankins each bring their own unique style and sensibility, with graphic effects designer Stephen C. Walsh adding the finishing touches.

But Greaney isn’t always content with cut, shaken and stirred clips from the show, going the extra mile to deliver something unexpected. Case in point: he recently donned a pair of red track pants and high-kicked in front of a greenscreen to add a suitably outrageous element to the Ripped My Pants remix.

In terms of tools used for audio work, Nutmeg used Ableton Live, Native Instruments Maschine and Avid Pro Tools. For editorial, they called on Avid Media Composer, Sapphire and Boris FX. Graphics were created in Adobe After Effects and Mocha Pro.

Lost in Time game show embraces ‘Interactive Mixed Reality’

By Daniel Restuccio

The Future Group — which has partnered with Fremantle Media, Ross Video and Epic Games — has created a new super-agile entertainment platform that blends linear television and game technology into a hybrid format called “Interactive Mixed Reality.”

The brainchild of Bård Anders Kasin, this innovative content deployment medium generated a storm of industry buzz at NAB 2016, and their first production Lost in Time — a weekly primetime game show — is scheduled to air this month on Norwegian television.

The Idea
The idea originated more than 13 years ago in Los Angeles. In 2003, at age 22, Kasin, a self-taught multimedia artist from Notodden, Norway, sent his CV and a bunch of media projects to Warner Bros. in Burbank, California, in hopes of working on The Matrix. They liked it. His interview was on a Wednesday and by Friday he had a job as a technical director.

Kasin immersed himself in the cutting-edge movie revolution that was The Matrix franchise. The Wachowskis’ visionary production was a masterful inspiration and featured a compelling sci-fi action story, Oscar-winning editing, breakthrough visual effects (“bullet time”) and an expanded media universe that included video games and the anime-style anthology The Animatrix. The Matrix Reloaded and The Matrix Revolutions were shot at the same time, as well as more than an hour of footage specifically designed for the video game. The Matrix Online, an Internet gaming platform, was a direct sequel to The Matrix Revolutions.

L-R: Bård Anders Kasin and Jens Petter Høili.

Fast forward to 2013, and Kasin has connected with software engineer and serial entrepreneur Jens Petter Høili, founder of EasyPark and Fairchance. “There was this producer I knew in Norway,” explains Kasin, “who runs this thing called the Artists’ Gala charity. He called and said, ‘There’s this guy you should meet. I think you’ll really hit it off.’” Kasin met Høili, had lunch and discussed the projects they each were working on. “We both immediately felt there was a connection,” recalls Kasin. No persuading was necessary. “We thought that if we combined forces we were going to get something that’s truly amazing.”

That meeting of the minds led to the merging of their companies and the formation of The Future Group. The mandate of Oslo-based The Future Group is to revolutionize the television medium by combining linear TV production with cutting-edge visual effects, interactive gameplay, home viewer participation and e-commerce. Their IMR concept ditches the limiting individual virtual reality (VR) headset but conceptually keeps the idea of creating content that is a multi-level, intricate and immersive experience.

Lost in Time
Fast forward again, this time to 2014. Through another mutual friend, The Future Group formed an alliance with Fremantle Media. Fremantle, a global media company, has produced some of the highest-rated and longest-running shows in the world, and is responsible for top international entertainment brands such as Got Talent, Idol and The X Factor.

Kasin started developing the first IMR prototype. At this point, the Lost in Time production had expanded to include Ross Video and Epic Games. Ross Video is a broadcast technology innovator and Epic Games is a video game producer and the inventor of the Unreal game engine. The Future Group, in collaboration with Ross Video, engineered the production technology and developed a broadcast-compatible version of the Unreal game engine called Frontier, shown at NAB 2016, to generate high-resolution, realtime graphics used in the production.

On January 15, 2015, the first prototype was shown. When Fremantle saw the prototype, they were amazed and went directly to stage two, moving to the larger stages at Dagslys Studios. “Lost in Time has been the driver for the technology,” explains Kasin. “We’re a very content-driven company. We’ve used that content to drive the development of the platform and the technology, because there’s nothing better than having actual content to set the requirements for the technology rather than building technology for general purposes.”

In Lost in Time, three studio contestants are set loose on a greenscreen stage and perform timed, physical game challenges. The audience, which could be watching at home or on a mobile device, sees the contestant seamlessly blended into a virtual environment built out of realtime computer graphics. The environments are themed as western, ice age, medieval times and Jurassic period sets (among others) with interactive real props.

The audience can watch the contestants play the game or participate in the contest as players on their mobile device at home, riding the train or literally anywhere. They can play along or against contestants, performing customized versions of the scripted challenges in the TV show. The mobile content uses graphics generated from the same Unreal engine that created the television version.

“It’s a platform,” reports partner Høili, referring to the technology behind Lost in Time. A business model is a way you make money, notes tech blogger Jonathan Clarks, and a platform is something that generates business models. So while Lost in Time is a specific game show with specific rules, built on television technology, it’s really a business technology framework where multiple kinds of interactive content could be generated. Lost in Time is like the Unreal engine itself, software that can be used to create games, VR experiences and more, limited only by the imagination of the content creator. What The Future Group has done is create a high-tech kitchen from which any kind of cuisine can be cooked up.

Soundstages and Gear
Lost in Time is produced on two greenscreen soundstages at Dagslys Studios in Oslo. The main “gameplay set” takes up all of Studio 1 (5,393 square feet) and the “base station set” is on Studio 3 (1,345 square feet). Over 150 liters (40 gallons) of ProCyc greenscreen paint was used to cover both studios.

Ross Video, in collaboration with The Future Group, devised an integrated technology of hardware and software that supports the Lost in Time production platform. This platform consists of custom cameras, lenses, tracking, control, delay, chroma key, rendering, greenscreen, lighting and switcher technology. This system includes the new Frontier hardware, introduced at NAB 2016, which runs the Unreal game engine 3D graphics software.

Eight Sony HDC-2500 cameras running HZC-UG444 software are used for the production. Five are deployed on the “gameplay set.” One camera rides on a technocrane, two are on manual pedestal dollies and one is on Steadicam. For fast-action tracking shots, another camera sits on the Furio RC dolly that rides on a straight track that runs the 90-foot length of the studio. The Furio RC pedestal, controlled by SmartShell, guarantees smooth movement in virtual environments and uses absolute encoders on all axes to send complete 3D tracking data into the Unreal engine.

There is also one Sony HDC-P1 camera that is used as a static, center stage, ceiling cam flying 30 feet above the gameplay set. There are three cameras in the home base set, two on Furio Robo dollies and one on a technocrane. In the gameplay set, all cameras (except the ceiling cam) are tracked with the SolidTrack IR markerless tracking system.

All filming is done at 1080p25 and output RGB 444 via SDI. They use a custom LUT on the cameras to avoid clipping and an expanded dynamic range for post work. All nine camera ISOs, separate camera “clean feeds,” are recorded with a “flat” LUT in RGB 444. For all other video streams, including keying and compositing, they use LUT boxes to invert the signal back to Rec 709.

Barnfind provided the fiber optic network infrastructure that links all the systems. Ross Video Dashboard controls the BarnOne frames as well as the router, Carbonite switchers, Frontier graphics system and robotic cameras.

A genlock signal distributed via OpenGear syncs all the gear to a master clock. The Future Group added proprietary code to Unreal so the render engine can genlock, receive and record linear timecode (LTC) and output video via SDI in all industry standard formats. They also added additional functionality to the Unreal engine to control lights via DMX, send and receive GPI signals, communicate with custom sensors, buttons, switches and wheels used for interaction with the games and controlling motion simulation equipment.

In order for the “virtual cameras” in the graphics systems and the real cameras viewing the real elements to have the exact same perspectives, an “encoded” camera lens is required that provides the lens focal length (zoom) and focus data. In addition the virtual lens field of view (FOV) must be properly calibrated to match the FOV of the real lens. Full servo digital lenses with 16-bit encoders are needed for virtual productions. Lost in Time uses three Canon lenses with these specifications: Canon Hj14ex4.3B-IASE, Canon Hj22ex7.6B-IASE-A and Canon Kj17ex7.7B-IASE-A.
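The first-order math behind matching the virtual camera’s field of view to the real lens is straightforward; real calibration also has to account for lens distortion and breathing, but the basic relationship is this:

```python
# Horizontal FOV from focal length and sensor width (first-order only;
# real calibration also handles distortion and lens breathing).
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Example: a roughly 9.6mm-wide 2/3-inch broadcast sensor at the 7.7mm wide end.
print(round(horizontal_fov_deg(7.7, 9.59), 1))   # ~63.8 degrees
```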

The Lost in Time camera feeds are routed to the Carbonite family hardware: Ultrachrome HR, Carbonite production frame and Carbonite production switcher. Carbonite Ultrachrome HR is a stand-alone multichannel chroma key processor based on the Carbonite Black processing engine. On Lost in Time, the Ultrachrome switcher accepts the Sony camera RGB 444 signal and uses high-resolution chroma keyers, each with full control of delay management, fill color temperature for scene matching, foreground key and fill, and internal storage for animated graphics.

Isolated feeds of all nine cameras are recorded, plus two quad-splits with the composited material and the program feed. Metus Ingest, a proprietary hardware solution from The Future Group, was used for all video recording. Metus Ingest can simultaneously capture and record up to six HD channels of video and audio from multiple devices on a single platform.

Post Production
While the system is capable of being broadcast live, they decided not to go live for the debut. Instead they are only doing a modest amount of post to retain the live feel. That said, the potential of the post workflow on Lost in Time arguably sets a whole new post paradigm. “Post allows us to continue to develop the virtual worlds for a longer amount of time,” says Kasin. “This gives us more flexibility in terms of storytelling. We’re always trying to push the boundaries with the creative content. How we tell the story of the different challenges.”

All camera metadata, including position, rotation, lens data, etc., and all game interaction, were recorded in the Unreal engine with a proprietary system. This allowed graphics playback as a recorded session later. This also let the editors change any part of the graphics non-destructively. They could choose to replace 3D models or textures or in post change the tracking or point-of-view of any of the virtual cameras as well as add cameras for more virtual “coverage.”
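The recording system itself is proprietary, but the concept is easy to picture: log a complete record per frame and you can replay or re-render the session later with different cameras or assets. A minimal, hypothetical sketch of such a per-frame record:

```python
# Hypothetical per-frame metadata record; not The Future Group's actual format.
import json
from dataclasses import dataclass, asdict

@dataclass
class FrameRecord:
    timecode: str           # LTC received for this frame
    camera: str             # which tracked camera
    position: tuple         # world-space translation (x, y, z)
    rotation: tuple         # pan, tilt, roll
    focal_length_mm: float
    focus_distance_m: float
    game_events: list       # button presses, prop triggers, etc.

def append_record(path, record):
    """Append one frame's metadata as a JSON line; a replay tool can later feed
    these back to the engine to re-render with different cameras or assets."""
    with open(path, "a") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

append_record("ep01_take03.jsonl", FrameRecord(
    "10:00:01:13", "furio_rc", (120.0, 45.5, 180.2), (0.0, -3.5, 0.0),
    14.0, 3.2, ["button_red_pressed"]))
```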

Lost in Time episodes were edited as a multicam project, based on the program feed, in Adobe Premiere CC. They have a multi-terabyte storage solution from Pixit Media running Tiger Technology’s workflow manager. “The EDL from the final edit is fed through a custom system, which then builds a timeline in Unreal to output EXR sequences for a final composite.”
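The custom EDL-to-Unreal system is proprietary, but its first step, reading the editor’s EDL into events that can be mapped back onto the recorded Unreal sessions, can be sketched like this (the regex and sample event assume a CMX3600-style EDL and are purely illustrative):

```python
# Illustrative CMX3600-style EDL event parser; not the production conform tool.
import re

EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+V\s+C\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

def parse_edl(text):
    events = []
    for line in text.splitlines():
        m = EVENT.match(line.strip())
        if m:
            num, reel, src_in, src_out, rec_in, rec_out = m.groups()
            events.append({"event": int(num), "reel": reel,
                           "src_in": src_in, "src_out": src_out,
                           "rec_in": rec_in, "rec_out": rec_out})
    return events

sample = "001  CAM_A    V     C        01:02:03:00 01:02:07:12 10:00:00:00 10:00:04:12"
print(parse_edl(sample))
```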

That’s it for now, but be sure to visit this space again to see part two of our coverage on The Future Group’s Lost in Time. Our next story will include the real and virtual lighting systems, the SolidTrack IR tracking system, the backend component, and interviews with Epic Games’ Kim Libreri about Unreal engine development/integration and with a Lost in Time episode editor.


Daniel Restuccio, who traveled to Oslo for this piece, is a writer, producer and teacher. He is currently multimedia department chairperson at California Lutheran University in Thousand Oaks.

ILM’s Richard Bluff talks VFX for Marvel’s Doctor Strange

By Daniel Restuccio

Comic book fans have been waiting for over 30 years for Marvel’s Doctor Strange to come to the big screen, and dare I say it was worth the wait. This is in large part because of the technology now available to create the film’s stunning visual effects.

Fans have the option to see the film in traditional 2D, Dolby Cinema (worthy of an interstate or plane fare pilgrimage, in my opinion) and IMAX 3D. Doctor Strange, Marvel Studios’ 15th film offering, is also receiving good critical reviews and VFX Oscar buzz — it’s currently on the list of 20 films still in the running in the Visual Effects category for the 89th Academy Awards.

The unapologetically dazzling VFX shots, in many cases directly inspired by Steve Ditko’s original comic visuals, were created by multiple visual effects houses, including Industrial Light & Magic, Luma Pictures, Lola VFX, Method Studios, Rise FX, Crafty Apes, Framestore, Perception and previs house The Third Floor. Check out our interview with the film’s VFX supervisor Stephane Ceretti.

Director Scott Derrickson said in a recent Reddit chat that Doctor Strange is “a fantastical superhero movie.”

“Watching the final cut of the film was deeply satisfying,” commented Derrickson. “A filmmaker cannot depend upon critical reviews or box office for satisfaction — even if they are good. The only true reward for any artist is to pick a worthy target and hit it. When you know you’ve hit your target that is everything. On this one, I hit my target.”

Since we got an overview of how the visual effects workflow went from Ceretti, we decided to talk to one of the studios that provided VFX for the film, specifically ILM and their VFX supervisor Richard Bluff.

Richard Bluff

According to Bluff, early in pre-production Marvel presented concept art, reference images and previsualization on “what were the boundaries of what the visuals could be.” After that, he says, they had the freedom to search within those bounds.

During VFX presentations with Marvel, they frequently showed three versions of the work. “They went with the craziest version to the point where the next time we would show three more versions and we continued to up the ante on the crazy,” recalls Bluff.

As master coordinator of this effort for ILM, Bluff encouraged his artists, “to own the visuals and try to work out how the company could raise the quality of the work or the designs on the show to another level. How could we introduce something new that remains within the fabric of the movie?”

As a result, says Bluff, they had some amazing ideas flow from individuals on the film. Jason Parks came up with the idea of traveling through the center of a subway train as it fractured. Matt Cowey invented the notion of continually rotating the camera to heighten the sense of vertigo. Andrew Graham designed the kaleidoscope-fighting arena “largely because his personal hobby is building and designing real kaleidoscopes.”

Unique to Doctor Strange is that the big VFX sequences are all very “self-contained.” For example, ILM did the New York and Hong Kong sequences, Luma did the Dark Dimension and Method did the multiverse. ILM also designed and developed the original concept for the Eldritch Magic and provided all the shared “digital doubles” — CGI-rigged, animatable versions of the actors — that tied sequences together. The digital doubles were customized to the needs of each VFX house.

Previs
In some movies previs material is generated and thrown away. Not so with Doctor Strange. What ILM did this time was develop a previs workflow where they could actually hang assets and continue to develop, so it became part of the shot from the earliest iteration.

There was extensive previs done for Marvel by The Third Floor as a creative and technical guide across the movie, and further iterations internal to ILM done by ILM’s lead visualization artist, Landis Fields.

Warning! Spoiler! Once Doctor Strange moves the New York fight scene into the mirror universe, the city starts coming apart in an M.C. Escher-meets-Chris Nolan-Inception kind of way. To make that sequence, ILM created a massive tool kit of New York set pieces and geometry, including subway cars, buildings, vehicles and fire escapes.

In the previs, Fields started breaking apart, duplicating and animating those objects, like the fire escapes, to tell the story of what a kaleidoscoping city would look like. The artists then fleshed out a sequence of shots, a.k.a. “mini beats.” They absorbed the previs into the pipeline by later switching out the gross geometry elements in Fields’ previs with the actual New York hero assets.

Strange Cam
Fields and the ILM team also designed and built what ILM dubbed the “strange cam,” a custom 3D-printed 360° GoPro rig that had to withstand the rigors of being slung off the edge of skyscrapers. ILM wanted to capture 360 degrees of rolling footage from that vantage point to be used as moving background “plates” that could be reflected in New York City’s glass buildings.

VFX, Sound Design and the Hong Kong Sequence
One of the big challenges with the Hong Kong sequence was that time was reversing and moving forward at the same time. “What we had to do was ensure the viewer understands that time is reversing throughout that entire sequence.” During the tight hand-to-hand action moments that are moving forward in time, there’s not really much screen space to show you time reversing in the background. So they designed the reversing destruction sequence to work in concert with the sound design. “We realized we had to move away from a continuous shower of debris toward rhythmic beats of debris being sucked out of frame.”


Bluff says the VFX shot count on the film — 1,450 shots — was actually a lot lower than on Captain America: Civil War. From a VFX point of view, the Avengers movies lean on the assets generated for Iron Man and Captain America, and the Thor movies help provide the context for what an Avengers movie would look and feel like. In Doctor Strange, “almost everything in the movie had to be designed (from scratch) because they haven’t already existed in a previous Marvel film. It’s a brand-new character to the Marvel world.”

Bluff started development on the movie in October 2014 and really started doing hands-on work in February 2016, frequently traveling between Vancouver, San Francisco and London. A typical day, working out of the ILM London office, would see him get in early and immediately deal with review requests from San Francisco. Then he would jump into “dailies” in London and work with that team until the afternoon. After “nightlies” with London, there was a “dailies” session with San Francisco and Vancouver; he would work with them until evening, hit the hotel, grab some dinner, come back around 11:30pm or midnight and do nightlies with San Francisco. “It just kept the team together, and we never missed a beat.”

2D vs. IMAX 3D vs. Dolby Cinema
Bluff saw the entire movie for the first time in IMAX 3D, and is looking forward to seeing it in 2D. Considering sequences in the movie are surreal in nature and Escher-like, there’s an argument that suggests that IMAX 3D is a better way to see it because it enhances the already bizarre version of that world. However, he believes the 2D and 3D versions are really “two different experiences.”

Dolby Cinema is the merging of Dolby Atmos — 128-channel surround sound — with the high dynamic range of Dolby Vision, plus really comfortable seats. It is, arguably, the best way to see a movie. Bluff says as far as VFX goes, high dynamic range information has been there for years. “I’m just thankful that exhibition technology is finally catching up with what’s always been there for us on the visual effects side.”

During that Reddit interview, Derrickson commented, “The EDR (Extended Dynamic Range) print is unbelievable — if you’re lucky enough to live where an EDR print is playing. As for 3D and/or IMAX, see it that way if you like that format. If you don’t, see it 2D.”

Doctor Strange is probably currently playing in a theater near you, but go see it in Dolby Cinema if you can.


In addition to being a West Coast correspondent for postPerspective, Daniel Restuccio is the multimedia department chair at California Lutheran University and former Walt Disney Imagineer.

Making ‘Being Evel’: James Durée walks us through post

Compositing played a huge role in this documentary film.

By Randi Altman

Those of us of a certain age will likely remember being glued to the TV as a child watching Evel Knievel jump his motorcycle over cars and canyons. It felt like the world held its collective breath, hoping that something horrible didn’t happen… or maybe wondering what it would be like if something did.

Well, Johnny Knoxville, of Jackass and Bad Grandpa fame, was one of those kids, as witnessed by, well, his career. Knoxville and Oscar-winning filmmaker Daniel Junge (Saving Face) combined to make Being Evel, a documentary on the daredevil’s life and career. Produced by Knoxville’s Dickhouse Productions (yup, that’s right) and HeLo, it premiered at Sundance this year.


Quick Chat: Randall Dark partners with Bulltiger for new media productions

By Randi Altman

Randall Dark has always been on my radar, almost since I started in this business all those years ago. To me he was the guy who was working in HD long before it turned into the high def we see on our televisions today. He truly was a pioneer of the format, but he’s also more than that. He’s a producer, a director, a cameraman and a company owner.

Recently, Randall Dark partnered with Bulltiger Productions’ CEO and founder, Stephen Brent, on a new film studio in North Austin.

Housing a 10,000-square-foot soundstage (also available for rent) with a 2,500-square-foot greenscreen, Bulltiger’s goal is creating compelling stories for all types of screens.

DuArt adds 21 edit bays, two insert stages

 

NEW YORK — DuArt, a New York-based audio, video and digital media post studio, has added 21 additional edit bays, along with two new insert stages. The new edit bays, featuring a combination of Avid Media Composers and Apple Final Cut, were constructed on DuArt’s 7th floor, a fully renovated space that features seven-foot windows on all four exposures, 11-foot ceilings, exposed brick and appealing Midtown views. Each edit bay, which can also be quickly converted into production office space, provides fiber and Ethernet connectivity, as well as wireless and hard-wired Internet.

The new insert studios are ideal for greenscreen shoots or interviews, with hair/makeup and wardrobe rooms next door. A freight elevator and loading dock are available for easy load-in and load-out.

DuArt’s newest facilities have been used by clients including MTV2, Park Slope Productions, The Documentary Group and other short-term and long-term tenants.

This latest expansion complements the 54 existing DuArt production suites and edit bays. In May, the facility added three new audio production rooms for a total of seven – all of those suites are suitable for a wide range of VO uses, with an emphasis on audio books and television narration.

In addition to its complete list of post services, DuArt provides short, medium and long-term space and four-wall support to content production companies.