By Randi Altman
With Siggraph 2019 in our not-too-distant rearview mirror, we thought it was a good time to reach out to visual effects experts to talk about trends. Everyone has had a bit of time to digest what they saw. Users are thinking about what new tools and technologies might help their current and future workflows, and manufacturers are thinking about how their products will incorporate them.
We provided these experts with questions relating to realtime raytracing, the use of game engines in visual effects workflows, easier ways to share files and more.
Ben Looram, partner/owner, Chapeau Studios
Chapeau Studios provides production, VFX/animation, design and creative IP development (both for digital content and technology) for all screens.
What film inspired you to work in VFX?
Ray Harryhausen’s Jason and the Argonauts, which I watched on TV when I was seven. The skeleton-fighting scene has been visually burned into my memory ever since. Later in life, in 1997, I watched an artist compositing some tough bluescreen shots on a Quantel Henry, and I instantly knew that was going to be in my future.
What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
Double the content for half the cost seems to be the industry’s direction lately. This is coming from new in-house/client-direct agencies that sometimes don’t know what they don’t know … so we help guide/teach them where it’s OK to trim budgets or dedicate more funds for creative.
Are game engines affecting how you work, or how you will work in the future?
Yes, on-device rendering and all the subtle shifts in video fidelity shifted our attention toward game engine technology a couple of years ago. As soon as game engines start to look less canned and have accurate depth of field and parallax, we’ll start to integrate more of those tools into our workflow.
Right now we have a handful of projects in the forecast where we will be using realtime game engine outputs as backgrounds on set instead of shooting greenscreen.
What about realtime raytracing? How will that affect VFX and the way you work?
We just finished an R&D project with Intel’s new raytracing engine OSPRay for Siggraph. The ability to work on a massive scale with last-minute creative flexibility was my main takeaway. This will allow our team to support our clients’ swift changes in direction with ease on global launches. I see this ingredient as really exciting for our creative tech devs moving into 2020. Proof-of-concept iterations will be finaled faster, and we’ve seen efficiencies in lighting, rendering and compositing effort.
How have ML/AI affected your workflows, if at all?
None to date, but we’ve been making suggestions for new tools that will make our compositing and color correction process more efficient.
The Uncanny Valley. Where are we now?
Still uncanny. Even with well-done virtual avatar influencers on Instagram like Lil Miquela, we’re still caught with that eerie feeling of close-to-visually-correct with a “meh” filter.
Can you name some recent projects?
The Rookie’s Guide to the NFL. This was a fun hybrid project where we mixed CG character design with realtime rendering and voice activation. We created an avatar named Matthew for the NFL’s Amazon Alexa Skills store that answers your football questions in real time.
Microsoft AI: Carlsberg and Snow Leopard. We designed Microsoft’s visual language of AI on multiple campaigns.
Apple Trade In campaign: Our team concepted, shot and created an in-store video wall activation and on-all-device screen saver for Apple’s iPhone Trade In Program.
Mac Moore, CEO, Conductor
Conductor is a secure cloud-based platform that enables VFX, VR/AR and animation studios to seamlessly offload rendering and simulation workloads to the public cloud.
What are some of today’s VFX trends? Is cloud playing an even larger role?
Cloud is absolutely a growing trend. I think for many years the inherent complexity and perceived cost of cloud has limited adoption in VFX, but there’s been a marked acceleration in the past 12 months.
Two years ago at Siggraph, I was explaining the value of elastic compute and how it perfectly aligns with the elastic requirements that define our project-based industry; this year there was a much more pragmatic approach to cloud, and many of the people I spoke with are either using the cloud or planning to use it in the near future. Studios have seen referenceable success, both technically and financially, with cloud adoption and are now defining cloud’s role in their pipeline for fear of being left behind. Having a cloud-enabled pipeline is really a game changer; it is leveling the field and allowing artistic talent to be the differentiation, rather than the size of the studio’s wallet (and its ability to purchase a massive render farm).
How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines for VFX have definitely attracted interest lately and show a lot of promise in certain verticals like virtual production. There’s more work to be done in terms of out-of-the-box usability, but great strides have been made in the past couple years. I also think various open source initiatives and the inherent collaboration those initiatives foster will help move VFX workflows forward.
Will realtime raytracing play a role in how your tool works?
There’s a need for managing the “last mile,” even in realtime raytracing, which is where Conductor would come in. We’ve been discussing realtime assist scenarios with a number of studios, such as pre-baking light maps and similar applications, where we’d perform some of the heavy lifting before assets are integrated in the realtime environment. There are certainly benefits on both sides, so we’ll likely land in some hybrid best practice using realtime and traditional rendering in the near future.
How do ML/AI and AR/VR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Machine learning and artificial intelligence are critical for our next evolutionary phase at Conductor. To date we’ve run over 250 million core-hours on the platform, and for each of those hours, we have a wealth of anonymous metadata about render behavior, such as the software run, duration, type of machine, etc.
For our next phase, we’re focused on delivering intelligent rendering akin to ride-share app pricing; the goal is to provide producers with an upfront cost estimate before they submit the job, so they have a fixed price that they can leverage for their bids. There is also a rich set of analytics that we can mine, and those analytics are proving invaluable for studios in the planning phase of a project. We’re working with data science experts now to help us deliver this insight to our broader customer base.
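The math behind such an estimate reduces to historical core-hours per frame, multiplied by a job’s frame count and a rate card. Here is a minimal Python sketch of the idea; the metadata, rates and function names are hypothetical illustrations, not Conductor’s actual API or pricing.

# Minimal sketch of an upfront render-cost estimate, assuming hypothetical
# historical metadata (average core-hours per frame by renderer/machine type)
# and a flat, assumed rate per core-hour.

HOURLY_RATE_PER_CORE = 0.05  # assumed USD per core-hour, illustrative only

# Hypothetical metadata mined from past jobs.
historical_core_hours = {
    ("arnold", "64-core"): 1.8,
    ("vray", "32-core"): 2.4,
}

def estimate_job_cost(renderer: str, machine: str, frame_count: int) -> float:
    """Return an upfront cost estimate for a render job before submission."""
    per_frame = historical_core_hours[(renderer, machine)]
    total_core_hours = per_frame * frame_count
    return total_core_hours * HOURLY_RATE_PER_CORE

# e.g., a 240-frame Arnold shot on 64-core machines
print(f"Estimated cost: ${estimate_job_cost('arnold', '64-core', 240):,.2f}")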
The AR/VR front presents a unique challenge for cloud, due to the large size and variety of the datasets involved. The rendering of these workloads is less about compute cycles and more about scene assembly, so we’re determining how we can deliver more of a whole product for this market in particular.
OpenXR and USD are certainly helping with industry best practices and compatibility, which build recipes for repeatable success, and Conductor is collaborating on guidelines for cloud computing with those standards.
What is next on the horizon for VFX?
Cloud, open source and realtime technologies are all disrupting VFX norms and are converging in a way that’s driving an overall democratization of the industry. Gone are the days when you need a pile of cash and a big brick-and-mortar building to house all of your tech and talent.
Streaming services and new mediums, along with a sky-high quality bar, have increased the pool of available VFX work, which is attracting new talent. Many of these new entrants are bootstrapping their businesses with cloud, standards-based approaches and geographically dispersed artistic talent.
Conductor recently became a fully virtual company for this reason. I hire based on expertise, not location, and today’s technology allows us to collaborate as if we are in the same building.
Aruna Inversin, creative director/VFX supervisor, Digital Domain
Digital Domain has provided visual effects and technology for hundreds of motion pictures, commercials, video games, music videos and virtual reality experiences. It also livestreams events in 360-degree virtual reality, creates “virtual humans” for use in films and live events, and develops interactive content, among other things.
What film inspired you to work in VFX?
RoboCop, in 1987. The combination of practical effects, miniatures and visual effects inspired me to start learning about what some call “The Invisible Art.”
What trends have you been seeing? What do you feel is important?
There has been a large focus on realtime rendering and virtual production, and on using them to increase the throughput of visual effects workflows. While realtime rendering does increase throughput, there is now a greater onus on filmmakers to plan their creative ideas and assets before they can be rendered. It is no longer truly post production; we are back in the realm of preproduction, using post tools and realtime tools to help define how a story is created and eventually filmed.
USD and cloud rendering are also important components, which give many different VFX facilities the ability to manage their resources effectively. Another trend that has been around for a while and has gained more traction is the availability of ACES and a more unified color space from the Academy. This allows quicker throughput between facilities.
Are game engines affecting how you work or how you will work in the future?
As my primary focus is in new media and experiential entertainment at Digital Domain, I already use game engines (cinematic engines, realtime engines) for the majority of my deliverables. I also use our traditional visual effects pipeline; we have created a pipeline that flows from our traditional cinematic workflow directly into our realtime workflow, speeding up the development process of asset creation and shot creation.
What about realtime raytracing? How will that affect VFX and the way you work?
The ability to use Nvidia’s RTX and raytracing increases the physicality and realistic approximations of virtual worlds, which is really exciting for the future of cinematic storytelling in realtime narratives. I think we are just seeing the beginnings of how RTX can help VFX.
How have AR/VR and AI/ML affected your workflows, if at all?
Augmented reality has occasionally been a client deliverable for us, but we are not using it heavily in our VFX pipeline. Machine learning, on the other hand, allows us to continually improve our digital humans projects, providing quicker turnaround with higher fidelity than competitors.
The Uncanny Valley. Where are we now?
There is no more uncanny valley. We have the ability to create a digital human with the nuance expected! The only limitations are time and resources.
Can you name some recent projects?
I am currently working on a Time project but I cannot speak too much about it just yet. I am also heavily involved in creating digital humans for realtime projects for a number of game companies that wish to push the boundaries of storytelling in realtime. All these projects have a release date of 2020 or 2021.
Matt Allard, strategic alliances lead, M&E, Dell Precision Workstations
Dell Precision workstations feature the latest processors and graphics technology and target those working in the editing studio or at a drafting table, at the office or on location.
What are some of today’s VFX trends?
We’re seeing a number of trends in VFX at the moment — from 4K mastering from even higher-resolution acquisition formats and an increase in HDR content to game engines taking a larger role on set in VFX-heavy productions. Of course, we are also seeing rising expectations for more visual sophistication, complexity and film-level VFX, even in TV post (for example, Game of Thrones).
Will realtime raytracing play a role in how your tools work?
We expect that Dell customers will embrace realtime and hardware-accelerated raytracing as creative, cost-saving and time-saving tools. With the availability of Nvidia Quadro RTX across the Dell Precision portfolio, including on our 7000 series mobile workstations, customers can realize these benefits now to deliver better content wherever a production takes them in the world.
Large-scale studio users will not only benefit from the freedom to create the highest-quality content faster, but they’ll likely see an overall impact on their energy consumption as they assess the move from CPU rendering, which dominates studio data centers today. Moving toward GPU and hybrid CPU/GPU rendering approaches can offer equal or better rendering output with less energy consumption.
How are game engines changing how VFX are done? Is this for everyone or just a select few?
Game engines have made their way into VFX-intensive productions to deliver in-context views of the VFX during the practical shoot. With increasing quality driven by realtime raytracing, game engines have the potential to drive a master-quality VFX shot on set, helping to minimize the need to “fix it in post.”
What is next on the horizon for VFX?
The industry is at the beginning of a new era as artificial intelligence and machine learning techniques are brought to bear on VFX workflows. Analytical and repetitive tasks are already being targeted by major software applications to accelerate or eliminate cumbersome elements in the workflow. And as with most new technologies, it can result in improved creative output and/or cost savings. It really is an exciting time for VFX workflows!
Ongoing performance improvements to the computing infrastructure will continue to accelerate and democratize the highest-resolution workflows. Now more than ever, small shops and independents can access the computing power, tools and techniques that were previously available only to top-end studios. Additionally, virtualization techniques will allow flexible means to maximize the utilization and proliferation of workstation technology.
Carl Flygare, manager, Quadro Marketing, PNY
PNY provides tools for realtime raytracing, augmented reality and virtual reality with the goal of advancing VFX workflow creativity and productivity. PNY is Nvidia’s Quadro channel partner throughout North America, Latin America, Europe and India.
How will realtime raytracing play a role in workflows?
Budgets are getting tighter, timelines are contracting, and audience expectations are increasing. This sounds like a perfect storm, in the bad sense of the term, but with the right tools, it is actually an opportunity.
Realtime raytracing, based on Nvidia’s RTX technology and support from leading ISVs, enables VFX shops to fit into these new realities while delivering brilliant work. Whiteboarding a VFX workflow is a complex task, so let’s break it down by categories. In preproduction, specifically previz, realtime raytracing will let VFX artists present far more realistic and compelling concepts much earlier in the creative process than ever before.
This extends to the next phase, asset creation and character animation, in which models can incorporate essentially lifelike nuance, including fur, cloth, hair or feathers – or something else altogether! Shot layout, blocking, animation, simulation, lighting and, of course, rendering all benefit from additional iterations, nuanced design and the creative possibilities that realtime raytracing can express and realize. Even finishing, particularly compositing, can benefit. Given the applicable scope of realtime raytracing, it will essentially remake VFX workflows and overall film pipelines, and Quadro RTX series products are the go-to tools enabling this revolution.
How are game engines changing how VFX is done? Is this for everyone or just a select few?
Variety had a great article on this last May. ILM substituted realtime rendering and five 4K laser projectors for a greenscreen shot during a sequence from Solo: A Star Wars Story. This allowed the actors to perform in context — in this case, a hyperspace jump — but also allowed cinematographers to capture arresting reflections of the jump effect in the actors’ eyes. Think of it as “practical digital effects” created during shots, not added later in post. The benefits are significant enough that the entire VFX ecosystem, from high-end shops and major studios to independent producers, is using realtime production tools to rethink how movies and TV shows happen while extending their vision to realize previously unrealizable concepts or projects.
How do ML and AR play a role in your tool? And are you supporting OpenXR 1.0? What about Pixar’s USD?
Those are three separate but somewhat interrelated questions! ML (machine learning) and AI (artificial intelligence) can contribute by rapidly denoising raytraced images in far less time than would be required by letting a given raytracing algorithm run to completion. Nvidia enables AI denoising in OptiX 5.0 and is working with a broad array of leading ISVs to bring ML/AI-enhanced realtime raytracing techniques into the mainstream.
OpenXR 1.0 was released at Siggraph 2019. Nvidia (among others) is supporting this open, royalty-free and cross-platform standard for VR/AR. Nvidia is now providing VR-enhancing technologies, such as variable rate shading, content adaptive shading and foveated rendering (among others), with the launch of Quadro RTX. This provides access to the best of both worlds — open standards and the most advanced GPU platform on which to build actual implementations.
Pixar and Nvidia have collaborated to make Pixar’s USD (Universal Scene Description) and Nvidia’s complementary MDL (Materials Definition Language) software open source in an effort to catalyze the rapid development of cinematic quality realtime raytracing for M&E applications.
What is next on the horizon for VFX?
VFX professionals and audiences have an insatiable desire to explore edge-of-the-envelope work, and the industry will increasingly turn to realtime raytracing (based on the actual behavior of light and real materials), increasingly sophisticated shader technology and new mediums like VR and AR to explore new creative possibilities and entertainment experiences.
AI, specifically DNNs (deep neural networks) of various types, will automate many repetitive VFX workflow tasks, allowing creative visionaries and artists to focus on realizing formerly impossible digital storytelling techniques.
One obvious need is increasing the resolution at which VFX shots are rendered. We’re in a 4K world, but many films are finished at 2K, primarily because of VFX. 8K is unleashing the abilities (and changing the economics) of cinematography, so expect increasingly powerful realtime rendering solutions, such as Quadro RTX (and successor products when they come to market), along with amazing advances in AI, to allow the VFX community to innovate in tandem.
Chris Healer, CEO/CTO/VFX supervisor, The Molecule
Founded in 2005, The Molecule creates bespoke VFX imagery for clients worldwide. Over 80 artists, producers, technicians and administrative support staff collaborate at its New York City and Los Angeles studios.
What film or show inspired you to work in VFX?
I have to admit, The Matrix was a big one for me.
Are game engines affecting how you work or how you will work?
Game engines are coming, but the talent pool is a challenge and the bridge is hard to cross … a realtime artist doesn’t have the same mindset as a traditional VFX artist. The last small percentage of completion on a shot can invalidate any gains made by working in a game engine.
What about realtime raytracing?
I am amazed at this technology, and as a result bought stock in Nvidia, but the software has to get there. It’s a long game, for sure!
How have AR/VR and ML/AI affected your workflows?
I think artists are thinking more about how images work and how to generate them. There is still value in a plain-old four-cornered 16:9 rectangle that you can make the most beautiful image inside of.
AR, VR, ML, etc., are not that, to be sure. I think VR got skipped over in all the hype. There’s way more to explore in VR, and that will inform AR tremendously. It is going to take a few more turns to find a real home for all of this.
What trends have you been seeing? Cloud workflows? What else?
Everyone is rendering in the cloud. The biggest problem I see now is the lack of a usage-based licensing (UBL) model that is global enough to democratize it. I would love to be able to render while paying by the second or minute at large or small scales. I would love for Houdini or Arnold to be rentable on a Satoshi level … that would be awesome! Unfortunately, each software vendor needs to provide this, which is a lot to organize.
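As a rough illustration of what usage-based licensing could mean in practice, here is a minimal Python sketch with purely assumed rates and node counts; no vendor offers this exact pricing, as Healer notes.

# Minimal sketch of the usage-based licensing (UBL) idea: pay per minute of
# actual render time instead of per seat. All numbers are illustrative assumptions.

def ubl_cost(nodes: int, minutes_per_node: float, rate_per_minute: float) -> float:
    """Total license cost for a burst render under per-minute billing."""
    return nodes * minutes_per_node * rate_per_minute

# e.g., a one-night burst: 500 cloud nodes, 90 render-minutes each,
# at an assumed $0.02 per license-minute
print(f"${ubl_cost(500, 90, 0.02):,.2f}")  # -> $900.00 for the burst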
The Uncanny Valley. Where are we now?
We saw in the recent Avengers film that Mark Ruffalo was in it. Or was he? I totally respect the Uncanny Valley, but within the complexity and context of VFX, this is not my battle. Others have to sort this one out, and I commend the artists who are working on it. Deepfake and Deeptake are amazing.
Can you name some recent projects?
We worked on Fosse/Verdon, but more recent stuff, I can’t … sorry. Let’s just say I have a lot of processors running right now.
Matt Bach and William George, lab technicians, Puget Systems
Puget Systems specializes in high-performance custom-built computers — emphasizing each customer’s specific workflow.
What are some of today’s VFX trends?
Matt Bach: There are so many advances going on right now that it is really hard to identify specific trends. However, one of the most interesting to us is the back and forth between local and cloud rendering.
Cloud rendering has been progressing for quite a few years and is a great way to get a nice burst in rendering performance when you are in a crunch. However, there have been big improvements in GPU-based rendering with technology like Nvidia OptiX. Because of these, you no longer have to spend a fortune to have a local render farm, and even a relatively small investment in hardware can often move the production bottleneck away from rendering to other parts of the workflow. Of course, this technology should make its way to the cloud at some point, but as long as these types of advances keep happening, the cloud is going to continue playing catch-up.
A few other trends we are keeping our eyes on are the growing use of game engines, motion capture suits and realtime markerless facial tracking in VFX pipelines.
Realtime raytracing is becoming more prevalent in VFX. What impact does realtime raytracing have on system hardware, and what do VFX artists need to be thinking about when optimizing their systems?
William George: Most realtime raytracing requires specialized computer hardware, specifically video cards with dedicated raytracing functionality. Raytracing can be done on the CPU and/or normal video cards as well, which is what render engines have done for years, but not quickly enough for realtime applications. Nvidia is the only game in town at the moment for hardware raytracing on video cards with its RTX series.
Nvidia’s raytracing technology is available on its consumer (GeForce) and professional (Quadro) RTX lines, but which one to use depends on your specific needs. Quadro cards are specifically made for this kind of work, with higher reliability and more VRAM, which allows for the rendering of more complex scenes … but they also cost a lot more. GeForce, on the other hand, is more geared toward consumer markets, but the “bang for your buck” is incredibly high, allowing you to get several times the performance for the same cost.
In between these two is the Titan RTX, which offers very good performance and VRAM for its price, but due to its fan layout, it should only be used as a single card (or at most in pairs, if used in a computer chassis with lots of airflow).
Another thing to consider is that if you plan on using multiple GPUs (which is often the case for rendering), the size of the computer chassis itself has to be fairly large in order to fit all the cards, power supply, and additional cooling needed to keep everything going.
How are game engines changing or impacting VFX workflows?
Bach: Game engines have been used for previsualization for a while, but we are starting to see them being used further and further down the VFX pipeline. In fact, there are already several instances where renders directly captured from game engines, like Unity or Unreal, are being used in the final film or animation.
This is getting into speculation, but I believe that as the quality of what game engines can produce continues to improve, it is going to drastically shake up VFX workflows. The fact that you can make changes in real time, as well as use motion capture and facial tracking, is going to dramatically reduce the amount of time necessary to produce a highly polished final product. Game engines likely won’t completely replace more traditional rendering for quite a while (if ever), but it is going to be significant enough that I would encourage VFX artists to at least familiarize themselves with the popular engines like Unity or Unreal.
What impact do you see ML/AI and AR/VR playing for your customers?
We are seeing a lot of work being done in machine learning and AI, but a lot of it is still on the development side of things. We are starting to get a taste of what is possible with things like Deepfakes, but there is still so much that could be done. I think it is too early to really tell how this will affect VFX in the long term, but it is going to be exciting to see.
AR and VR are cool technologies, but it seems like they have yet to really take off, in part because designing for them takes a different way of thinking than traditional media, but also in part because there isn’t one major platform that’s an overwhelming standard. Hopefully, that is something that gets addressed over time, because once creative folks really get a handle on how to use the unique capabilities of AR/VR to their fullest, I think a lot of neat stories will be told.
What is next on the horizon for VFX?
Bach: The sky is really the limit due to how fast technology and techniques are changing, but I think there are two things in particular that are going to be very interesting to see how they play out.
First, we are hitting a point where ethics (“With great power comes great responsibility” and all that) is a serious concern. With how easy it is to create highly convincing Deepfakes of celebrities or other individuals, even for someone who has never used machine learning before, I believe there is the potential for backlash from the general public. At the moment, every use of this type of technology has been for entertainment or other legitimate purposes, but the potential to use it for harm is too significant to ignore.
Something else I believe we will start to see is “VFX for the masses,” similar to how video editing used to be a purely specialized skill, but now anyone with a camera can create and produce content on social platforms like YouTube. Advances in game engines, facial/body tracking for animated characters and other technologies that remove a number of skills and hardware barriers for relatively simple content are going to mean that more and more people with no formal training will take on simple VFX work. This isn’t going to impact the professional VFX industry by a significant degree, but I think it might spawn a number of interesting techniques or styles that might make their way up to the professional level.
Paul Ghezzo, creative director, Technicolor Visual Effects
Technicolor and its family of VFX brands provide visual effects services tailored to each project’s needs.
What film inspired you to work in VFX?
At a pretty young age, I fell in love with Star Wars: Episode IV – A New Hope and learned about the movie magic that was developed to make those incredible visuals come to life.
What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
USD will help structure some of what we currently do, and cloud rendering is an incredible source to use when needed. I see both of them maturing and being around for years to come.
As for other trends, I see new methods of photogrammetry and HDRI photography/videography providing datasets for digital environment creation and for capturing lighting; performance capture (smart 2D tracking and manipulation, or 3D volumetric capture) for easier performance manipulation and layout; and even post camera work. New simulation engines are creating incredible, dynamic sims in a fraction of the time, and all of this is coming together through video cards streamlining the creation of the end product. In many ways it might reinvent what can be done, but it might take a few cutting-edge shows to embrace and perfect the recipe and show its true value.
Production cameras tethered to digital environments for live set extensions are also coming of age, and with realtime rendering becoming a viable option, I can imagine that it will only be a matter of time before LED walls become the new greenscreen. Can you imagine a live-action set extension that parallaxes, distorts and is exposed in the same way as its real-life foreground? How about adding explosions, bullet hits or even an armada of spaceships landing in the BG, all on cue? I imagine this will happen in short order. Exciting times.
Are game engines affecting how you work or how you will work in the future?
Game engines have affected how we work. The speed and quality they offer are undoubtedly a game changer, but they don’t always create the desired elements and AOVs that are typically needed in TV/film production.
They are also creating a level of competition that is spurring other render engines to be competitive and provide a similar or better solution. I can imagine that our future will use Unreal/Unity engines for fast turnaround productions like previz and stylized content, as well as for visualizing virtual environments and digital sets as realtime set extensions and a lot more.
What about realtime raytracing? How will that affect VFX and the way you work?
GPU rendering has single-handedly changed how we render and what we render with. A handful of GPUs and a GPU-accelerated render engine can equal or surpass a CPU farm that’s several times larger and much more expensive. In VFX, iterations equal quality, and if multiple iterations can be completed in a fraction of the time — and with production time usually being finite — then GPU-accelerated rendering equates to higher quality in the time given.
There are a lot of hidden variables in that equation (changes of direction, level of talent, work ethic, hardware/software limitations, etc.), but simply said, if you can hit the notes as fast as they are given and not have to wait hours for a render farm to churn out a product, then clearly the faster an iteration can be provided, the more iterations can be produced, allowing for a higher-quality product in the time given.
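Ghezzo’s point is, at bottom, simple arithmetic: for a fixed review window, iteration count scales inversely with render time. A tiny Python sketch with assumed numbers:

# How many review iterations fit in a fixed production window.
# The 12-hour window and per-iteration render times are assumptions for illustration.

def iterations_in_window(window_hours: float, hours_per_iteration: float) -> int:
    return int(window_hours // hours_per_iteration)

overnight = 12.0  # hours available before the next review
print(iterations_in_window(overnight, 3.0))   # CPU farm at 3 h/iteration -> 4 iterations
print(iterations_in_window(overnight, 0.5))   # GPU render at 30 min/iteration -> 24 iterations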
How have AR or ML affected your workflows, if at all?
ML and AR haven’t significantly affected our current workflows yet … but I believe they will very soon.
One aspect of AR/VR/MR that we occasionally use in TV/film production is previzing environments, props and vehicles, which lets everyone in production and on set/location see what the greenscreen will be replaced with, allowing for greater communication and understanding among the directors, DPs, gaffers, stunt teams, SFX and talent. I can imagine that AR/VR/MR will only become more popular as a preproduction tool, allowing productions to front-load and approve all aspects of production well before the camera is loaded and the clock is running on cast and crew.
Machine learning is on the cusp of general usage, but it currently seems to be used by productions with lengthy schedules that can benefit from development teams building those toolsets. There are tasks that ML will undoubtedly revolutionize, but it hasn’t affected our workflows yet.
The Uncanny Valley. Where are we now?
Making the impossible possible … That *is* what we do in VFX. Looking at everything from Digital Emily in 2008 to Thanos and Hulk in Avengers: Endgame, we’ve seen what can be done. The Uncanny Valley will likely remain, but only on productions that can’t afford the time or cost of flawless execution.
Can you name some recent projects?
Big Little Lies, Dead to Me, NOS4A2, True Detective, Veep, This Is Us, Snowfall, The Loudest Voice, and Avengers: Endgame.
James Knight, virtual production director, AMD
AMD is a semiconductor company that develops computer processors and related technologies for M&E as well as other markets. Its tools include Ryzen and Threadripper.
What are some of today’s VFX trends?
Well, certainly the exploration for “better, faster, cheaper” keeps going. Faster rendering, so our community can accomplish more iterations in a much shorter amount of time, seems to be something I’ve heard the whole time I’ve been in the business.
I’d surely say the virtual production movement (or on-set visualization) is finally gaining steam. I work with almost all the major studios in my role, and all of them, at a minimum, have speeding up post and blending it with production on their radar; many have virtual production departments.
How are game engines changing how VFX are done? Is this for everyone or just a select few?
I would say game engines are where most of the innovation comes from these days. Think about Unreal, for example. Epic pioneered Fortnite, and the revenue from that must be astonishing, and they’re not going to sit on their hands. The feature film and TV post/VFX business benefits from gaming consumers’ demand for higher-resolution, more photorealistic images in real time. That gets passed on to our community by eliminating guesswork on set when framing partially or completely CG shots.
It should be for everyone or most, because the realtime and post production time savings are rather large. I think many still have a personal preference for what they’re used to. And that’s not wrong, if it works for them, obviously that’s fine. I just think that even in 2019, use of game engines is still new to some … which is why it’s not completely ubiquitous.
How do ML or AR play a role in your tool? Are you supporting OpenXR 1.0? What about Pixar’s USD?
Well, it’s more the reverse. With our new Rome and Threadripper CPUs, we’re powering AR. Yes, we are supporting OpenXR 1.0.
What is next on the horizon for VFX?
Well, the demand for VFX is increasing, not the opposite, so the pursuit of faster photographic reality is perpetually in play. That’s good job security for me at a CPU/GPU company, as we still have a way to go to properly bridge the Uncanny Valley, for example.
I’d love to say lower-cost CG is part of the future, but then look at the budgets of major features — they’re not exactly falling. The dance of Moore’s law will more than likely be in effect forever, with momentary huge leaps in compute power — like with Rome and Threadripper — drawing amazement for a period. Then, when someone sees the new, expanded size of their sandpit, they fill it and go, “I now know what I’d do if it was just a bit bigger.”
I am invested in and fascinated by the future of VFX, but I think it goes hand in hand with great storytelling. If we don’t have great stories, then directing and artistry innovations don’t properly get noticed. Look at the top 20 highest-grossing films in history … they’re all fantasy. We all want to be taken away from our daily lives and immersed in a beautiful, realistic, VFX-intense fictional world for 90 minutes, so we’ll be forever pushing the boundaries of rigging, texturing, shading, simulations, etc. To put my finger on exactly what’s next: I happen to know of a few amazing things that are coming, but sadly, I’m not at liberty to say right now.
Michel Suissa, managing director of pro solutions, The Studio-B&H
The Studio-B&H provides hands-on experience to high-end professionals. Its Technology Center is a fully operational studio with an extensive display of high-end products and state-of-the-art workflows.
What are some of today’s VFX trends?
AI, ML, neural networks (GANs) and realtime environments.
Will realtime raytracing play a role in how the tools you provide work?
It already does with most relevant applications in the market.
How are game engines changing how VFX are done? Is this for everyone or just a select few?
Realtime game engines are becoming more mainstream with every passing year, and they are becoming fairly accessible to a number of disciplines across different target markets.
What is next on the horizon for VFX?
New pipeline architectures that will rely on different implementations (traditional and AI/ML/NN) and mixed infrastructures (local and cloud-based).
What trends have you been seeing? USD? Rendering in the cloud? What do you feel is important?
AI, ML and realtime environments. New cloud toolsets. Prominence of neural networks and GANs. Proliferation of convincing “deepfakes” as a proof of concept for the use of generative networks as resources for VFX creation.
What about realtime raytracing? How will that affect VFX workflows?
RTX is changing how most people see their work being done. It is also changing expectations about what it takes to create and render CG images.
The Uncanny Valley. Where are we now?
AI and machine learning will help us get there. Perfection still remains too costly. The amount of time and resources required to create something convincing is prohibitive for the large majority of budgets.
Marc Côté, CEO, Real by Fake
Real by Fake services include preproduction planning, visual effects, post production and tax-incentive financing.
What film or show inspired you to work in VFX?
George Lucas’ Star Wars and Indiana Jones (Raiders of the Lost Ark). I was a kid when I saw Star Wars, and it brought me to another universe. It was so inspiring even though I was too young to understand what the movie was about. The robots in the desert and the spaceships flying around. It looked real; it looked great. I was like, “Wow, this is amazing.”
Indiana Jones because it was a great adventure; it really took us into those worlds. I was super-impressed by the action and by the way it was done. It was mostly practical effects, not really visual effects. Later on I realized that in Star Wars, they were using robots (motion control systems) to shoot the spaceships. And as a kid, I was very interested in robots. I said, “Wow, this is great!” So I thought maybe I could use my skills and what I love and combine them with film. That’s the way it started.
What trends have you been seeing? What do you feel is important?
The trend right now is using realtime rendering engines. It’s coming on pretty strong. The game companies who build engines like Unity or Unreal are offering a good product.
It’s a bit of a hack to use these tools for rendering or in production at this point. They’re great for previz, and they’re great for generating realtime environments and realtime playback. But having the capacity to change or modify imagery with the director during the finishing process is still not easy. Still, it’s a very promising trend.
Rendering in the cloud gives you a very rapid capacity, but I think it’s very expensive. You also have to download and upload 4K images, so you need a very big internet pipe. So I still believe in local rendering — either with CPUs or GPUs. But cloud rendering can be useful for very tight deadlines or for small companies that want to achieve something that’s impossible to do with the infrastructure they have.
My hope is that AI will minimize repetition in visual effects. For example, in keying. We key multiple sections of the body, but we get keying errors in plotting or transparency or in the edges, and they are all a bit different, so you have to use multiple keys. AI would be useful to define which key you need to use for every section and do it automatically and in parallel. AI could be an amazing tool to be able to make objects disappear by just selecting them.
Pixar’s USD is interesting. The question is: Will the industry take it as a standard? It’s like anything else. Kodak invented DPX, and it became the standard over time. Now we are using EXR. We have different software packages, and having an exchange format between them will be great. We’ll see. We have FBX, which is a really good standard right now; it came out of Filmbox, software from Kaydara, a Montreal company later acquired by Autodesk. So we’ll see. The demand and the companies who build the software — they will be the ones who take it up or not. A big company like Pixar has the advantage of other companies already using it.
The last trend is remote access. The internet is now allowing us to connect cross-country, like from LA to Montreal or Atlanta. We have a sophisticated remote infrastructure, and we do very high-quality remote sessions with artists who work from disparate locations. It’s very secure and very seamless.
What about realtime raytracing? How will that affect VFX and the way you work?
I think we have pretty good raytracing compared to what we had two years ago. I think it’s a question of performance, and of making it user-friendly in the application so it’s easy to light with natural lighting, without having to fake the bounces, so you can get two or three real bounces. I think it’s coming along very well and quickly.
So what about things like AI/ML or AR/VR? Have those things changed anything in the way movies and TV shows are being made?
My feeling right now is that we are getting into an era where I don’t think you’ll have enough visual effects companies to cover the demand.
Every show has visual effects. It can be a complete character, like a Transformer, or a movie from the Marvel Universe where the entire film is CG. Or it can be the huge number of invisible effects that are starting to appear in virtually every show. You need capacity to get all this done.
AI can help minimize repetition so artists can work more on the art and what is being created. This will accelerate things and give us the capacity to respond to what’s being demanded of us. They want a faster, cheaper product, and they want the quality to be as high as a movie’s.
The only scenario where we are looking at using AR is when we are filming. For example, you need to have a good camera track in real time, and then you want to be able to quickly add a CGI environment around the actors so the director can make the right decision in terms of the background or interactive characters who are in the scene. The actors will not see it unless they have a monitor or a pair of glasses or something that can show them the result.
So AR is a tool to be able to make faster decisions when you’re on set shooting. This is what we’ve been working on for a long time: bringing post production and preproduction together. To have an engineering department who designs and conceptualizes and creates everything that needs to be done before shooting.
The Uncanny Valley. Where are we now?
In terms of the environment, I think we’re pretty much there. We can create an environment that nobody will know is fake. Respectfully, I think our company Real by Fake is pretty good at doing it.
In terms of characters, I think we’re still not there. I think the game industry is helping a lot to push this. I think we’re on the verge of having characters look as close as possible to live actors, but if you’re in a closeup, it still feels fake. For mid-ground and long shots, it’s fine. You can make sure nobody will know. But I don’t think we’ve crossed the valley just yet.
Can you name some recent projects?
Big Little Lies and Sharp Objects for HBO, Black Summer for Netflix
and Brian Banks, an indie feature.
Jeremy Smith, CTO, Jellyfish Pictures
Jellyfish Pictures provides a range of services including VFX for feature film, high-end TV and episodic animated kids’ TV series and visual development for projects spanning multiple genres.
What film or show inspired you to work in VFX?
Forrest Gump really opened my eyes to how VFX could support filmmaking. Seeing Tom Hanks interact with historic footage (e.g., John F. Kennedy) was something that really grabbed my attention, and I remember thinking, “Wow … that is really cool.”
What trends have you been seeing? What do you feel is important?
The use of cloud technology is really empowering “digital transformation” within the animation and VFX industry. The result of this is that there are new opportunities that simply wouldn’t have been possible otherwise.
Jellyfish Pictures uses burst rendering into the cloud, extending our capacity and enabling us to take on more work. In addition to cloud rendering, Jellyfish Pictures were early adopters of virtual workstations, and, especially after Siggraph this year, it is apparent that this is the future for VFX and animation.
Virtual workstations promote a flexible and scalable way of working, with global reach for talent. This is incredibly important for studios to remain competitive in today’s market. As well as the cloud, formats such as USD are making it easier to exchange data with others, which allows us to work in a more collaborative environment.
It’s important for the industry to pay attention to these, and similar, trends, as they will have a massive impact on how productions are carried out going forward.
Are game engines affecting how you work, or how you will work in the future?
Game engines are offering ways to enhance certain parts of the workflow. We see a lot of value in the previz stage of the production. This allows artists to iterate very quickly and helps move shots onto the next stage of production.
What about realtime raytracing? How will that affect VFX and the way you work?
The realtime raytracing from Nvidia (as well as GPU compute in general) offers artists a new way to iterate and help create content. However, with recent advancements in CPU compute, we can see that “traditional” workloads aren’t going to be displaced. The RTX solution is another tool that can be used to assist in the creation of content.
How have AR/VR and ML/AI affected your workflows, if at all?
Machine learning has the power to really assist certain workloads. For example, it’s possible to use machine learning to assist a video editor by cataloging speech in a certain clip. When a director says, “find the spot where the actor says ‘X,’” we can go directly to that point in time on the timeline.
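As an illustration of that speech-cataloging idea, here is a minimal Python sketch that transcribes a clip with timestamps and searches the segments for a phrase. It assumes an open-source speech-to-text model (OpenAI’s Whisper) purely as an example; Smith does not name a specific tool.

# Minimal sketch: transcribe a clip with timestamps, then jump to every
# segment where the actor says a given phrase. Whisper is an assumed example model.
import whisper

def find_line(audio_path: str, phrase: str):
    """Return (start, end, text) for transcript segments containing the phrase."""
    model = whisper.load_model("base")
    result = model.transcribe(audio_path)
    hits = []
    for seg in result["segments"]:
        if phrase.lower() in seg["text"].lower():
            hits.append((seg["start"], seg["end"], seg["text"].strip()))
    return hits

# e.g., find every point in a dailies clip where the actor says "let's go"
for start, end, text in find_line("dailies_clip.wav", "let's go"):
    print(f"{start:7.2f}s - {end:7.2f}s  {text}")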
In addition, ML can be used to mine existing file servers that contain vast amounts of unstructured data. When mining this “dark data,” an organization may find a lot of great additional value in the existing content, which machine learning can uncover.
The Uncanny Valley. Where are we now?
With recent advancements in technology, the Uncanny Valley is closing; however, it is still there. We see more digital humans in cinema than ever before (Peter Cushing in Rogue One: A Star Wars Story was a main character), and I fully expect to see more advances as time goes on.
Can you name some recent projects?
Our latest credits include Solo: A Star Wars Story, Captive State, The Innocents, Black Mirror, Dennis & Gnasher: Unleashed! and Floogals Seasons 1 through 3.
Andy Brown, creative director, Jogger
Jogger Studios is a boutique visual effects studio with offices in London, New York and LA. With capabilities in color grading, compositing and animation, Jogger works on a variety of projects, from TV commercials and music videos to projections for live concerts.
What inspired you to work in VFX?
First of all, my sixth form English project was writing treatments for music videos to songs that I really liked. You could do anything you wanted to for this project, and I wanted to create pictures using words. I never actually made any of them, but it planted the seed of working with visual images. Soon after that I went to university in Birmingham in the UK. I studied communications and cultural studies there, and as part of the course, we visited the BBC Studios at Pebble Mill. We visited one of the new edit suites, where they were putting together a story on the inquiry into the Handsworth riots in Birmingham. It struck me how these two people, the journalist and the editor, could shape the story and tell it however they saw fit. That’s what got me interested on a critical level in the editorial process. The practical interest in putting pictures together developed from that experience and all the opportunities that opened up when I started work at MPC after leaving university.
What trends have you been seeing? What do you feel is important?
Remote workstations and cloud rendering are all really interesting. It’s giving us more opportunities to work with clients across the world using our resources in LA, SF, Austin, NYC and London. I love the concept of a centralized remote machine room that runs all of your software for all of your offices and allows you scaled rendering in an efficient and seamless manner. The key part of that sentence is seamless. We’re doing remote grading and editing across our offices so we can share resources and personnel, giving the clients the best experience that we can without the carbon footprint.
Are game engines affecting how you work or how you will work in the future?
Game engines are having a tremendous effect on the entire media and entertainment industry, from conception to delivery. Walking around Siggraph last month, seeing what was not only possible but practical and available today using gaming engines, was fascinating. It’s hard to predict industry trends, but the technology felt like it will change everything. The possibilities on set look great, too, so I’m sure it will mean a merging of production and post production in many instances.
What about realtime raytracing? How will that affect VFX and the way you work?
Faster workflows and less time waiting for something to render have got to be good news. It gives you more time to experiment and refine things.
How have AR/VR or ML/AI affected your workflows, if at all?
Machine learning is making its way into new software releases, and the tools are useful. Anything that makes it easier to get where you need to go on a shot is welcome. AR, not so much. I viewed the new Mac Pro sitting on my kitchen work surface through my phone the other day, but it didn’t make me want to buy it any more or less. It feels more like something that we can take technology from rather than something that I want to see in my work.
I’d like 3D camera tracking and facial tracking to be realtime on my box, for example. That would be a huge time-saver in set extensions and beauty work. Anything that makes getting a perfect key easier would also be great.
The Uncanny Valley. Where are we now?
It always used to be “Don’t believe anything you read.” Now it’s, “Don’t believe anything you see.” I used to struggle to see the point of an artificial human, except for resurrecting dead actors, but now I realize the ultimate aim is suppression of the human race and the destruction of democracy by multimillionaire despots and their robot underlings.
Can you name some recent projects?
I’ve started prepping for the apocalypse, so it’s hard to remember individual jobs, but there’s been the usual kind of stuff — beauty, set extensions, fast food, Muppets, greenscreen, squirrels, adding logos, removing logos, titles, grading, finishing, versioning, removing rigs, Frankensteining, animating, removing weeds, cleaning runways, making tenders into wings, split screens, roto, grading, polishing cars, removing camera reflections, stabilizing, tracking, adding seatbelts, moving seatbelts, adding photos, removing pictures and building petrol stations. You know, the usual.
James David Hattin, founder/creative director, VFX Legion
Based in Burbank and British Columbia, VFX Legion specializes in providing episodic shows and feature films with an efficient approach to creating high-quality visual effects.
What film or show inspired you to work in VFX?
Star Wars was my ultimate source of inspiration for doing visual effects. Many of the effects in the movies didn’t make sense to me as a six-year-old, but I knew that this was the next best thing to magic. Visual effects create a wondrous world where everyday people can become superheroes, leaders of a resistance or rulers of a 5th-century dynasty. Watching X-wings fly over the surface of a space station the size of a small moon was exquisite. I also learned, much later on, that the visual effects we couldn’t see were as important as the ones we could.
I had already been steeped in visual effects with Star Trek — phasers, spaceships and futuristic transporters. Models held up by wires on a moon base convinced me that we could survive on the moon as it broke free from orbit. All of this fueled my budding imagination. Exploring computer technology and creating alternate realities with CGI and digitally enhanced solutions has been my passion for over a quarter of a century.
What trends have you been seeing? What do you feel is important?
More and more of the work is going to happen inside a cloud structure. That is definitely something being pushed very heavily by the tech giants like Google and Amazon that rule our world. There is no Moore’s law for computers anymore; the price and power we see out of computers are almost plateauing. The technology is now in the world of optimizing algorithms or rendering with video cards. It’s about getting bigger, better effects out more efficiently. Some companies are opting to run their entire operations in the cloud or in co-located server facilities. This can theoretically free up workers to be in different locations around the world, provided they have solid, low-latency, high-speed internet.
When Legion was founded in 2013, the best way around cloud costs was to have on-premises servers and workstations that supported global connectivity. It was a cost control issue that has benefitted the company to this day, enabling us to bring a global collective of artists and clients into our fold in a controlled and secure way. Legion works in what we consider a “private cloud,” eschewing the costs of egress from large providers and working directly with on-premises solutions.
Are game engines affecting how you work or how you will work in the future?
Game engines are perfect for previsualization in large, involved scenes. We create a lot of environments and invisible effects. For the larger bluescreen shoots, we can build out our sets in Unreal Engine, previsualizing how the scene will play for the director or DP. This helps get everyone on the same page when it comes to how a particular sequence is going to be filmed. It’s a technique that also helps the CG team focus on adding details to the areas of a set that we know will be seen. When the schedule is tight, the assets are camera-ready by the time the cut comes to us.
What about realtime raytracing via Nvidia’s RTX? How will that affect VFX and the way you work?
The type of visual effects that we create for feature films and television shows involves a lot of layers and technology that provides efficient, comprehensive compositing solutions. Many of the video card rendering engines like Octanerender, Redshift and V-Ray RT are limited when it comes to what they can create with layers. They often have issues with getting what is called a “back to beauty,” in which the sum of the render passes equals the final render. However, the workarounds we’ve developed enable us to achieve the quality we need. Realtime raytracing introduces a fantastic technology that will someday make it an ideal fit for our needs. We’re keeping an eye out for it as it evolves and becomes more robust.
How have AR/VR or ML/AI affected your workflows, if at all?
AR has been in the wings of the industry for a while. There’s nothing specific that we would take advantage of. Machine learning has been introduced a number of times to solve various problems. It’s a pretty exciting time for these things. One of our partner contacts, who left to join Facebook, was keen to try a number of machine learning tricks for a couple of projects that might have come through, but we didn’t get to put them to the test. There’s an enormous amount of power to be had in machine learning, and I think we are going to see big changes over the next five years in that field and how it affects all of post production.
The Uncanny Valley. Where are we now?
Climbing up the other side, not quite at the summit for daily use. As long as the character isn’t a full normal human, it’s almost indistinguishable from reality.
Can you name some recent projects?
We create visual effects on an ongoing basis for a variety of television shows that include How to Get Away with Murder, DC’s Legends of Tomorrow, Madam Secretary and The Food That Built America. Our team is also called upon to craft VFX for a mix of movies, from the groundbreaking feature film Hardcore Henry to recently released films such as Ma, SuperFly and After.
MAIN IMAGE: Good Morning Football via Chapeau Studios.
Randi Altman is the founder and editor-in-chief of postPerspective. She has been covering production and post production for more than 20 years.