By Michael Burns
After last year’s post-Covid restart, the annual European M&E trade show was busier, bigger and a bit more diverse than in recent years. The IBC show and conference in Amsterdam always seems to have an underlying theme, but this year there was more than one: Virtual production and the quest for efficiency were big drivers, while the implementation of AI and machine learning was everywhere.
Adobe got its AI and efficiency fix in early, revealing its AI-driven Enhance Speech and filler word detection for Premiere Pro, AI rotoscoping for After Effects (as well as a true 3D environment for the motion graphics workhorse) and enhancements to Frame.io.
At the Adobe stand on IBC’s final day, Meagan Keane, the director of product marketing for pro video strategy, said, “We have been focusing over the last year on community engagement and interaction. In the upcoming release of Premiere Pro, we have over 20 new smaller features that have come directly from community feedback. Our timeline performance is five times faster, while project templates are very straightforward and build efficiencies into how people work. It’s the small things, for editors in particular, that make a huge difference.”
Keane said AI was the biggest topic at IBC. “We hosted a really exciting panel at the Innovation Stage talking about how the future of generative AI is going to change the entertainment industry. It was standing room only. Our Firefly demo pod has been very popular. We announced that Firefly generative fill has launched in Photoshop, and already video users are figuring out ways to bring it into their video workflow, which is awesome.”
AJA was at the show with its Kona X, a 12G-SDI, four-lane PCIe 3.0 video I/O card with dual HDMI 2.0 I/O that lets users achieve subframe latencies with streaming DMA (direct memory access). “We were really excited to be able to release Kona X at IBC because this is the type of show where PCIe capture cards with multifaceted abilities are popular,” said Abe Abt, AJA senior product consultant. “This board can be used in post, in production and in live events, but we’ve also had esports, medical imaging, and the virtual set and motion capture people here. This board has under a frame of latency, so that’s what the virtual guys really care about: Everything that’s being keyed into backgrounds must be synchronized with what’s coming out of the speaker’s mouth.”
Also new was the LucidLink panel for Premiere Pro. “This is our first integration with any front-end creative tool,” said LucidLink’s Matthew Schneider, director of product management, M&E. “The panel brings all the smarts of the LucidLink caching technology front and center directly into the application. The editor can just choose to pin the sequence and not the terabytes of data that may be sitting on their LucidLink file space. If you have problematic bandwidth but need to play heavy media files, pinning in advance will cache the entire file without duplicating the data, without creating a security concern and without breaking the collaborative workflow.
“We expect to ship this in the mid-October time frame,” added Schneider. “In the next version, we expect to have the ability to pin just the ranges of the clips that are used in the editing timeline, which is even more surgical and precise. End users and other technology companies alike have been super excited to see that.”
Tiny Acorns
Often, some of the most intriguing innovations at IBC emerge from startups. One such is Quine, a Norwegian company with a very arresting stand and an even more interesting product.
Launched at the show, Quine CopySafe is a free software utility that offers simple automated secure copying, project structuring, mirroring of data and transcoding of proxies. It detects faults in data transfers and reports if your copy job has an issue. There’s also a paid version.
“People ask why we’re doing it for free, at least mostly,” said Benjamin Kippersund, who handles end-user support and onboarding for Quine. “We’re a production asset management company. We’re dealing with workflows that bring metadata into the editing software and trying to make all those productions go seamlessly through one platform. We integrate with Premiere Pro, Resolve and Avid Media Composer, sending the metadata to the right fields without needing all the manual labor to just fill stuff in. We also have a Dailies Preview Browser, so you can go to our website and sort and edit the metadata in the cloud before you take it into the editing software.”
Kippersund added, “This is all for getting attention and for making us more well-known, so we can step out from Norway.”
Another startup at IBC was Twelve Labs, showing a demo of “multimodal, contextual understanding for video” — aka the much catchier “ChatGPT for video” — an AI API that can recognize what’s going on in a clip.
Anthony Guiliani, head of operations, said it was the result of a customized AI model. “It ‘understands’ what’s happening in the world. It will allow media asset management companies, whose clients have large video archives, to help those clients more intelligently understand everything that’s happening in their videos. Anybody, without even talking to us, can integrate our API into their product.”
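For a sense of what that kind of integration might look like, here is a minimal Python sketch in which a media asset manager registers a clip and then queries it in plain language. The host, endpoints, response fields and API key below are hypothetical placeholders for illustration, not Twelve Labs’ published API.

```python
import requests

API = "https://api.example.com/v1"       # hypothetical host, not a real endpoint
HEADERS = {"x-api-key": "YOUR_API_KEY"}  # placeholder credential

# 1. Register a clip from the archive so the model can "watch" and index it.
resp = requests.post(
    f"{API}/videos",
    headers=HEADERS,
    json={"url": "https://cdn.example.com/archive/clip_0042.mp4"},
)
video_id = resp.json()["video_id"]

# 2. Ask, in plain language, what is happening in the footage.
resp = requests.post(
    f"{API}/search",
    headers=HEADERS,
    json={"video_id": video_id,
          "query": "presenter walks on stage holding a camera"},
)

# 3. Each hit carries a confidence score and a time range, which is what
#    a media asset manager would surface to its users.
for hit in resp.json()["results"]:
    print(f'{hit["start"]:7.1f}s to {hit["end"]:7.1f}s  score {hit["score"]:.2f}')
```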
Virtual Production
Zero Density took prime position in Hall 7 at the RAI, greeting visitors with new branding, a preview of Reality 5 XR/AR/virtual studio software and its new Traxis camera-tracking system.
CMO Ralf van Vegten claims Traxis “creates the most accurate tracking that you can get in a studio environment. It’s an optical tracking system where you have an array of small cameras in the trusses, and then you have a unit on the camera that sends out infrared signals. All the cabling is managed through the camera, so there is no additional cabling running through your studios. It’s easy to set up, and once it’s installed, it recalibrates itself. The accuracy is 0.02 millimeters, which is phenomenal. I’m from a production background, so I know it can be a nightmare to constantly recalibrate everything.
“For the first time, people can see the integration of our software with our hardware, with our control layer on top,” continued van Vegten. “Reality 5, which we’ll launch in Q4, has been completely redeveloped, so it can be an ecosystem for virtual production. It’s not only for greenscreen or XR, but we also will open it up for others to develop tooling on top of that.”
Pixotope was showing the new Live Controller functionality for all of its real-time broadcast graphics products. Underpinned by Unreal Engine, Live Controller introduces reusable no-code templates and rundown-based virtual production workflows to all broadcast control rooms in a single-user software package. “It’s primarily focused on the production graphics market but also very useful for all of virtual production,” said VP of global marketing Ben Davenport. “It allows you to create and deploy templates for production graphics but also all other elements of virtual production. You can very quickly change between scenes or graphics elements.”
Samsung is also getting into the VP world with the launch of The Wall for Virtual Production. With a 12,288Hz refresh rate, Black Seal Technology+ for pure black levels and 20-bit processing for color mapping, The Wall has a curvature range that reaches up to 5,800R, creating a more realistic field of view.
“We can load lookup tables or LUTs for various cameras within our Virtual Production Manager software, which allows you to take control of the color – we can cover 98% of DCI-P3, so our color rendition is phenomenal,” said Hugh Bourne, technical solutions specialist, LFD, for Samsung. “Eighty cabinets make up The Wall, but the Black Seal Technology and microLED technology mean it’s almost seamless. There’s also very little heat output.”
The display also packs an updated genlock feature that keeps The Wall locked to the system’s sync signals, so there aren’t any dropped or doubled frames.
Still in the virtual realm, Stefan Söllner, head of technology, mixed reality production solutions for ARRI, was showing several prototype VP technologies. For example, a demo showed a controller that uses Unreal Engine and DMX to place and control ARRI products in the virtual environment. “When it comes to virtual production, there is a detachment between the Unreal operator and the gaffer,” said Söllner. “We are bringing the control of the lighting back to the gaffer, who is suddenly able to control the real world and the virtual world in the camera frustum at the same time.”
More developed, and due early next year, is the Color Management Pipeline. “It’s for virtual production or LED production environments,” said Söllner. “We can give the Unreal artist or playout artist, whoever is doing the prep, a preview in the calibrated virtual scene that’s very similar to the scene that will then be filmed in front of the LED. So you don’t need to be in the stage to do proper color management. You can do it upfront and then go on the stage. You are basically saving one complete day on the stage with this.”
Camera Cues
Canon had rethought its stand for this year, with the centerpiece being a 360-degree live multi-camera broadcast workflow powered by its IP-based remote control XC Protocol, as well as stations showing how home studios and other solutions could be configured using Canon kit.
“This is to show the Canon imaging ecosystem in one huge set-up,” said Jack Adair, product marketing specialist for Canon EMEA. “We have a range of Cinema EOS camcorders, PTZ cameras and Pro Video camcorders, all communicating via our own XC Protocol, which goes into the new RC-IP1000 controller. It’s a broadcast-standard PTZ controller, which can monitor up to 200 cameras but can be operated by one person.
“As well as two new EOS R entry-level cameras, we also have our first RF mount Cine Prime lenses, so you get 12-pin data connection as well as distortion correction included,” continued Adair. “People at the show have been happy to see we are now starting to bring dedicated video lenses out into the RF range.”
On the stand shared with its Nikon parent, MRMC was using its robotics expertise to “teleport” live talent, shot on different cameras in different locations anywhere in the world, into a live virtual set that makes them look as if they are side by side.
“Effectively, it’s a robotics sync mode; the camera heads are in sync or locked together to perform the same action anywhere in the world,” said Sacha Kunze, broadcast business development manager for MRMC. “You can bring multiple studios together into the same space. Because of the synchronization of the cameras, you can just key the videos together and seamlessly move without cutting from one studio to the other. If we do have a network delay between the two studios, we also compensate for that in the software.”
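As a simplified illustration of that last point: a common way to hide variable network latency is for the leader head to timestamp every move and for the follower to replay those moves behind a fixed delay that exceeds the worst-case latency, so both heads trace the same path in step. The short Python sketch below shows the general idea only; it is not MRMC’s actual software, and the delay value is an assumption.

```python
import time
from collections import deque

PLAYOUT_DELAY = 0.200  # seconds; assumed to exceed the worst-case network latency


class FollowerHead:
    """Replays the leader head's pan/tilt moves a fixed delay later, so
    variable network latency never distorts the motion itself."""

    def __init__(self):
        self.queue = deque()  # (timestamp_at_leader, pan, tilt)

    def on_packet(self, sent_at, pan, tilt):
        # Packets may arrive late or bunched together; just buffer them.
        self.queue.append((sent_at, pan, tilt))

    def tick(self, now):
        # Execute every move whose leader timestamp is at least PLAYOUT_DELAY
        # old, so both heads follow the same trajectory, offset by a constant
        # that the keyer downstream can line up with.
        while self.queue and now - self.queue[0][0] >= PLAYOUT_DELAY:
            _, pan, tilt = self.queue.popleft()
            self.move_to(pan, tilt)

    def move_to(self, pan, tilt):
        print(f"follower -> pan {pan:.2f} deg, tilt {tilt:.2f} deg")


# Example: the leader sends a slow pan; the follower plays it 200 ms later.
follower = FollowerHead()
for i in range(5):
    follower.on_packet(time.time(), pan=10.0 + i, tilt=2.0)
    follower.tick(time.time())
    time.sleep(0.05)
time.sleep(PLAYOUT_DELAY)
follower.tick(time.time())
```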
On the massive Sony stand, filmmaker and Sony ambassador Alister Chapman was fielding technical enquiries about the just-launched 8.6K Burano camera — picture a more compact, lighter version of the Venice 2, “re-imagined” for solo and small-team productions, and even drone shoots.
“Burano shares a huge amount of the Venice DNA with a lot of the FX6 and FX9 image processing,” said Chapman. “It has in-body image stabilization, so as the camera moves, the sensor is moved in response, and that can stabilize any lens you put on the camera.”
And if you thought we had finished with AI, not so fast: Burano sports an AI processor that recognizes not only faces but the profile of a human being. “The focus will follow that person, even when they’re not facing the Burano,” said Chapman. “It’s a very feature-packed camera.”
Michael Burns, who is based in Scotland, covers production and post production for a variety of international publications.