By Alyssa Heater
A virtual art department, aka VAD, comprises a team of artists and engineers who work their magic (and use game engine technology) to create environments, characters, props and beyond for a virtual production.
First, we spoke with Halon Entertainment virtual art department supervisor Jess Marley and project manager Meredith Brace Sloss to learn about their VAD, the technology they use to power their asset creation and the opportunities for cost savings. The team recently contributed to John Wick: Chapter 4 and Robert Zemeckis’ Pinocchio and are working toward using virtual production as an application for education.
We then sat down with Meptik director of creative operations Joshua Eason, business development manager Andrew Amato and marketing manager Katie Pietzka to learn about the various roles within their VAD and the asset catalogue they are developing to help smaller-budget projects use this technology. Meptik excels in live and experiential events powered by virtual production in addition to film and television.
Halon Entertainment
Can you tell us a bit about the VAD’s role in the workflow process? How early in the preproduction process does the VAD usually get involved?
Jess Marley: The VAD’s role in the process is whatever production needs it to be, from design to planning/staging, implementation and effects. It can be almost anything. As early as we can is optimal, but we’re flexible and have been brought on during almost every stage of production: in preproduction to help visualize the story, in the middle of production to work out problems and staging, or later on to fix shots or aid in reshoots. Anytime is the answer; it depends on the project.
In the process itself, there are a variety of different areas and fields where we help. One is scouting and visualizing the space. A client could have SketchUp models or something from a construction or art department; other times it’s scribbles or napkin sketches. Regardless, we need to see what it looks like, convert it, get it into the engine by any means necessary, and then let the decision-makers see and feel the space.
Other times, we’re brought in once storyboards are done. If that’s the case, we come in and bid out what the project is going to entail and what it will cost us to build environments, vehicles, characters and props.
Once involved in the building of the content, the VAD works closely with production to understand the look and the design of the intended content. We create virtual environments, lighting and mood, and work with animation to develop setups that help all the creatives involved better visualize the story. We very much like to work with production as an extension of the creative team, helping drive the creative process and choices when virtual production is involved. So in short, early is best.
Who is on the team? Tell us a little bit about the various roles within the VAD.
Marley: There is a supervisor who oversees everything and makes sure that everybody is on-task; an environment artist, who is in charge of building the main set areas that you’re going to be shooting on; and a lighter who comes in and captures the different times of the day or bakes down lighting and assets. There will also be asset artists and potentially a set dresser to help refine areas of interest or places a production may shoot.
Once we get into the stage aspect of the work and are ensuring the wall is optimized, we bring in a tech artist who is extremely engine-savvy. Their job is to understand where we can cut down on the resources the engine is using and speed up our frame rate so our handoff to the stage is simple. Once the content is delivered to stage, they take it through the rest of the support process. You may also require a manager and an effects artist, but that’s the general group a VAD is composed of.
Meredith Brace Sloss: The way that our team is built, we have the production manager who oversees everything that is going on. Depending on the size and scope of the project, we’ll also have a project manager and/or a coordinator.
You mentioned that Halon Entertainment uses Unreal. Please elaborate on that, as well as any other technology you use.
Marley: We were early adopters of Unreal Engine. Getting integrated with it early on allowed us to be very fluent when it became a bigger, more popular piece of software. We’ve been using it for everything from previz to finals (Fortnite) – we have a team that pumps out the Fortnite trailers that come out every two to three months. This puts us on the cutting edge of working with Epic Games (makers of Unreal) and understanding what is coming down the pipe. It sets us up to do game cinematics as well as better-looking previz into finals for commercials and TV, because the gap between those two is slowly closing and they increasingly work hand in hand.
Meredith, you collaborate closely with the photogrammetry team. Can you expand on the process of photogrammetry? Has this enabled you to travel to any cool destinations?
Brace Sloss: Yes. That’s the department where we’ve looked to use our skills outside of film and television, especially in the education and museum space.
We’ve started considering what we can do for museums. They often have collections that are sitting in warehouses or items that are being rightfully repatriated back to the countries they belong in. There are traveling or temporary exhibits that go up and come down. So is there a way that we can capture those exhibits for the future? Is there a way that we can take photographs of past exhibits and breathe new life into them and experience them in a virtual way? We’ve been doing a lot of talking.
Back in 2018, some photogrammetry team members went to Monument Valley in Utah. They worked with elders on sacred Navajo land to capture areas that were not open to the public. Anytime we have an opportunity to go into these places, we need to leave them better than we found them. They captured a couple of different buttes in Monument Valley, and we’ve been able to put them up on the virtual production stage. We’ve been able to use this material for many different tests and demos, including for the Sphere in Las Vegas.
We’re never advocating for film or television to not film in a location. We’ve been speaking to film commissions and have told them that we’re not looking to replace local film crews with virtual production; we’re looking to add to them. Certain parts of Monument Valley are sacred and delicate, so we don’t want to go in there with a 100-person crew, craft services, catering and the whole kit and caboodle. If there is a choreographed fight that can’t be done in a delicate place, we can do that on a virtual production stage.
We can also modularize pieces. Our team went behind our office in Santa Monica and captured the buildings, then reassembled them to look like a dollhouse or a Lego kit and created a fictionalized street based on these assets that they scanned. The choices are endless.
From the VAD’s perspective, do you believe that virtual production can help with cost savings? If so, how is money saved on this side of things?
Marley: Yes, it can help save money for production, but it does need to be the right type of production. One of the reasons we use The Mandalorian as a good example is that there are a lot of shiny assets and parts, like his helmet, which is great for reflections and reduces the need for traditional keying. If you do the planning up front, you reduce the amount of work that has to be fixed in post. This ultimately cuts your costs and saves you money in general, but it means you need to make decisions early on.
Also, as Meredith was saying about an entire crew going out to a remote destination… this saves a lot of money in travel costs. It is not to say that it will always save production money. Sometimes people are trying to do very complicated things, so it may not be the right fit. But yes, if done properly, it can be a huge cost savings.
Brace Sloss: Additionally, cost savings come in when planning is done properly at the beginning. If we’re able to get in there and understand what the project is and what the necessities are, we will consult as we go along. If it comes to us at the end and we have to figure out how to put it together, it becomes more of a challenge.
Marley: And finally, when it comes to taking shots that are traditionally done in post, we make the hard shots medium, the medium shots easy, and easy shots almost completely final on the volume. What I mean by that is, if you are tracking cameras on-set, you also have an entire sequence that you can hand off to a post house. You also have all the camera data that is tracked, so you no longer need to track the shots and do extra work. You pretty much get all that information to finish the shot. So I think there are many different versions of cost-cutting.
You’ve worked on some huge recent theatrical releases, like Mission: Impossible – Dead Reckoning and John Wick: Chapter 4. I’d love to dive into a recent project and learn about your approach.
Marley: John Wick 4 was a fun one, and we did some interesting stuff. Our previz department planned the car crash sequence to take place at the Arc de Triomphe. Once it was approved, they transferred the animation to a hydraulic rig to mimic exactly what was going to happen with the car in the shot, getting all the reflections, movement and lighting needed out of the animation to help speed up the post process.
One of the cool things about virtual production is being able to take the previz and apply it to something shot later down the road. That was just one of the things that we did on the post side of things. They knew they wanted to do the shot, they didn’t have it fully planned out, but we were able to help them build the actual setting, light it properly, get the car that they wanted to use, and then help hand it off as they went to stage for shooting it on the wall.
We also worked on Robert Zemeckis’ Pinocchio for Disney. We were on that job from the get-go until it finished in post. That was probably a 2.5-year project as a team. We were using actual models from an art department and oversaw all of the assets, including props and anything that moved. Eventually, we were pulled in to do a couple of environments. All of it was shot in VCam, which meant that they brought a mocap stage up to Carpinteria.
This was during the height of the pandemic. We built a motion capture stage, sensors and a remote-controlled iPad/camera setup, which allowed Zemeckis to be the camera. The zoom functions enabled him to change his scale to be Jiminy Cricket size, Pinocchio size or Geppetto size, whatever the shot needed. He could shoot a variety of different types of shots all in virtual environments: everything from the interior of Monstro’s mouth to Geppetto’s workshop interior, the town square or the very busy Pleasure Island.
One of the largest benefits of using virtual production for Pinocchio was the blocking and the planning of space. Cricket to boy, boy to man, man to whale. The scale changes were intense. And even though our departments create stunning environments, we need to consider efficiency first and foremost. Optimization, getting environments ready for a camera and making sure they work properly are our top priorities. Pinocchio also started at the very beginning of the work-from-home order, the beginning of the pandemic, which was an interesting experience. We were able to have remote meetings and sessions with everybody, and it was the beginning of what now feels like the “regular” or “current” state of things.
Meptik
Tell us about the VAD’s role in the workflow process. How early in the preproduction process does the VAD typically get involved?
Joshua Eason: As early as possible. It definitely changes from project to project, but the most successful projects I’ve seen are ones where the art department and virtual art department are collaborating from the very beginning. The most ideal time to engage a VAD is shortly after the production designer and/or VFX supervisor begin and at the same time or before the director of photography. If that’s not possible, then as early as possible. Often, the virtual art department is taking direction from the traditional art department to make sure that what they are creating in the background matches the overall vision.
Who is typically part of the VAD? Tell us about those various roles.
Andrew Amato: It depends on the project and scale. I have budgeted for AAA projects where we’ve had teams of around 20 people. Those would include the leadership roles: a VAD art director, VAD supervisor, VAD producer, VAD manager and VAD coordinator, as well as a VAD lead. In the case of AAA, depending on how many environments you’re constructing, you’ll have anywhere from three to five senior and six non-senior environment artists. If you are incorporating props or certain locations where you’re doing 3D or lidar scanning, you’ll need to do cleanup, so you’ll have generalists and photogrammetry cleanup artists. You should also always have tech artists and pipeline engineers.
One of the bridges you need to cross when scaling up to bigger projects is having a pipeline that’s able to deliver finals/ICVFX. You can have a room full of amazing Unreal artists, but without a pipeline, engineers and tools that facilitate real-time workflows, you’ll likely run into problems. If you have creatives in multiple locations, you need someone who understands Perforce, cloud computing and data storage. This needs to be assembled on a project-to-project basis, so you’ll definitely want a whole tech team. That’s one of the strengths of our teams at Meptik: we have both virtual artists and engineers working together from the beginning. If you’re doing virtual location scouting, you’ll want a senior artist who is really good at communicating with clients and filmmakers. Aside from that, there are ad hoc roles, and you’ll need a really amazing lighting tech.
Eason: On smaller productions, we’ve done projects with as few as three designers and a virtual production supervisor who doubles as that pipeline engineer. The full spectrum depends on whether the project is a film shooting for weeks or a small commercial that’s shooting in a single day. There is a lot of flexibility and scalability in virtual production, which makes it useful for many different types of projects and budgets.
From the VAD’s perspective, how do you believe virtual production can help with cost savings?
Eason: It really depends. We’ve worked with companies who are being really smart about it. For example, if we needed to be in three different locations on the same day, that might not be practical to film on location. There are many ways to save, but that takes a greater understanding and a level of education for companies that maybe aren’t coming from a background of using virtual production. It’s about education and having the humility to ask questions, but then also considering, “Is virtual production the right tool for my project?” Sometimes the answer is yes, and sometimes the answer is no.
Amato: Any studio involved in virtual production has a responsibility to determine which environments, which worlds, and which shots and sequences are appropriate for virtual production. You could be wasting your money if you are looking at shooting virtual production for scenes that should be part of a traditional VFX workflow.
Hybrid workflows are strong in many cases as far as cost savings. Producers have to factor in that this needs to be both a good creative decision and one that makes financial sense. There really is no one-to-one savings where $1 going into the VAD means $1 can be taken away from the art department or something of that equivalent.
The best way to really see the savings is if you look at the VFX movies that have gone over budget. In many cases, their post budgets will vary anywhere from 10 to 100 percent over budget, depending on how big the studio is and how deep their pockets are. They’ll continue to fix it in post for a year longer and have 30% more in VFX overages.
VAD projects tend not to have these overages in post, because shots have to be better planned during preproduction. So the best way to think about it comparatively is: historically, what are the overages in traditional VFX workflows, and what are the overages with a VAD during the first 15 weeks of preparation?
How can virtual production technology be used for smaller-budget or indie productions?
Eason: Something that we’ve been doing at Meptik is creating a VAD catalog. It comprises the more generic virtual scenes that we get asked about a lot, for instance a Northwestern forest, a tropical beach or a European valley: basic geographical locations that we can use as a starting place to pull from or build upon. We maintain this library so that when smaller projects come our way, we can pull from that catalog and make modifications. Generally, we’re shaving off about 30% of the time it would normally take to get a project to the finish line.
How do you see virtual production being used outside of entertainment, like education or different adjacent markets?
Katie Pietzka: We have installed the virtual production workflows for two stages at the Savannah College of Art and Design (SCAD). They now offer a range of specialized courses to provide students with practical insights into virtual production technology, teaching the next generation of filmmakers, animators and game designers how to use this technology. It’s been very exciting to see them evolve their curricular program, and we see more and more universities implementing virtual production workflows into their curriculum.
Another great use case is corporate applications. Boston-based software company PTC has an xR stage in their office with a bi-directional robotic pedestal that allows them to communicate with clients from the xR stage in real-time. We’ve created virtual environments for them that replicate their customers’ environments, providing PTC with the opportunity to showcase their software applications in their clients’ own environments. While they’re talking to customers, they can get feedback, and change those environments on the fly based on what they are discussing.
What technology and tools does Meptik use for creating content?
Eason: Starting with the camera, we always need a camera tracking solution. There are many options available, such as Stype, OptiTrack and Mo-Sys. Meptik is pretty agnostic, and we use all of them depending on the needs of the project. From there, there is the LED wall itself. There are some requirements as to what kind of LED can be used. For virtual production specifically, you want LED with a wider color gamut so it can reproduce colors more accurately to what the physical world produces.
From there, we work in game engine software. We’re, again, pretty agnostic when it comes to software. We work in Unreal and we love working in a program called Notch, which is used more for experiential design. As it is more of a motion graphics tool and interactive VFX tool, it could be used for more stylized and music video-type work. We also work with Unity, Twinmotion, and, most recently, Snap AR Lens Studio. But Unreal is still mostly used when it comes to virtual production.
For media servers, we preferably use Disguise, not only because we are part of the company, but because Disguise has designed a workflow for automatically calibrating camera lenses and cameras into Unreal instead of having to do it manually. Even before we became part of the Disguise ecosystem, we used their workflow, because it was the most straightforward to work with.
Is there anything else you’d like to touch on or anything you’re excited about?
Pietzka: I look forward to seeing virtual production take over the film and TV industry, but also other industries, such as the corporate space, education and even live events. I’m also excited to see what happens beyond productions, utilizing other virtual assets such as augmented and virtual reality elements to take viewer and fan experiences beyond the screen. This technology has so many untapped opportunities, and we are just at the tip of the iceberg.
Eason: Virtual production has driven an increase in quality towards photorealism for real-time rendering. I’m excited to see these advances in rendering be used more within the experiential and live events industries. Currently, we are working on projects that are set up just like a virtual production LED volume, but have a live audience instead of tracked cameras. We are building giant walls where people can walk up to the wall and interact with virtual environments in a museum-like experience. It’s really cool how this technology can be used in so many different verticals, and we’re excited to see how the industry continues to grow and how technology and art evolve.
Amato: I’m excited to see what happens with AI. I have my reservations about it, but when it comes to virtual production and AI, there are some amazing applications when it comes to mocap and generative design. It’s a very powerful tool that needs a lot of exploration to ensure that we’re going about it responsibly.
Main Image: Halon previs for Pinocchio
Alyssa Heater is a writer and marketer in the entertainment industry. When not writing, you can find her front row at heavy metal shows or remodeling her cabin in the San Gabriel Mountains.