By Randi Altman
The world of storage is ever changing and complicated. There are many flavors meant to match up to specific workflow needs. What matters most to users? Beyond easily installed, easy-to-use systems that let them focus on the creative and not the tech: scalability, speed, data protection, the cloud and the need to handle higher and higher frame rates at higher resolutions — meaning larger and larger files. The good news is that the tools are growing to meet these needs. New technologies and software enhancements around NVMe are providing extremely low-latency connectivity that supports higher-performance workflows. Time will tell how that plays a part in day-to-day workflows.
For this virtual roundtable, we reached out to makers of storage and users of storage. Their questions differ a bit, but their answers often overlap. Enjoy.
Western Digital Global Director M&E Strategy & Market Development Erik Weaver
What is the biggest trend you’ve seen in the past year in terms of storage?
There are a couple that immediately come to mind. Both have to do with the massive amounts of data generated by the media and entertainment industry.
The first is the need to manage this data to understand what you have, where it resides and where it’s going. With multiple storage architectures in play — cloud, hybrid, legacy, remote, etc. — some may be out of your purview, making data management challenging. The key is abstraction: creating a unique identifier for every file everywhere so assets can be identified regardless of file name or location.
Some companies are already making progress using the C4 framework and the C4 ID system. With abstraction, you can apply rules so you always know where assets are located within these environments. It allows you to see all your assets and easily move them between storage tiers, if needed. Better data management will also help with analytics and AI/ML.
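The content-addressed abstraction described above can be sketched simply: derive the identifier from the file’s bytes, not its name or path, so the same asset gets the same ID on any tier. The real C4 ID standard uses a SHA-512 digest, base-58 encoded with a “c4” prefix and padded to a fixed length; the Python sketch below follows that idea in simplified form and is not any vendor’s implementation.

```python
import hashlib

# Base-58 alphabet (no 0, O, I, l) as used by C4-style identifiers.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def content_id(data: bytes) -> str:
    """Derive a location-independent ID from file contents.

    Simplified stand-in for a C4-style ID: SHA-512 digest,
    base-58 encoded, with a 'c4' prefix. (The actual C4 spec
    also pads the result to a fixed 90-character length.)
    """
    n = int.from_bytes(hashlib.sha512(data).digest(), "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = BASE58[r] + out
    return "c4" + out

# The same bytes yield the same ID regardless of file name or location,
# so an asset can be tracked across cloud, hybrid and legacy tiers.
a = content_id(b"frame_0001.dpx contents")
b = content_id(b"frame_0001.dpx contents")
assert a == b and a.startswith("c4")
```

Because the ID is a pure function of the content, two copies on different storage tiers are recognizably the same asset even after a rename, which is what makes rule-based placement and deduplication possible.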
The second big trend, which we’ll talk about some more, is NVMe (and NVMe-over-Fabric) and the incredible speed and flexibility it provides. It has the ability to radically change the workflow for M&E to genuinely handle multiple 4K, 6K and 8K feeds and manage massive volumes of data. NVMe all-Flash arrays such as our IntelliFlash N-Series product line, as opposed to traditional NAS, bring transfer rates to a whole new level. Using the NVMe protocol can deliver three to five times faster performance than traditional flash technology and 20 times faster than traditional NAS.
With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
For AI, VR and machine learning, there’s a general trend toward using Flash on the front end and object storage on the back end. Our customers use ActiveScale object storage to scale up and out and store the primary dataset, then use an NVMe tier to process that data. You need a storage architecture large enough to capture all those datasets, then analyze them. This is driving an extreme amount of data.
Take, for example, VR. The move from simple 360 video into volumetric capture is analogous to what film used to be: it’s expensive. With film, you only have a limited number of takes and only so much storage, but with digital you capture everything, then fix it in post. The expansion in storage needs is outrageous, and you need cost-effective storage that can scale.
As far as AI and ML, think about a popular Internet entertainment or streaming service. They’re running analytics looking at patterns of what customers are watching. They’re constantly growing and adapting in order to provide recommendations, 24×7. It would be tedious and downright unfeasible for humans to track this.
All of this requires compute power and storage. And having the right balance of performance, storage economics and low TCO is critical. We’re helping many companies define that strategy today leveraging our family of IntelliFlash, ActiveScale, Ultrastar and G-Technology branded products.
Can you talk about NVMe?
NVMe is a game changer. NVMe, with extreme performance, low latencies and incredible throughput, is opening up new possibilities for the media workflow. NVMe can offer 5x the performance of traditional Flash at comparable prices and will be the foundation for next-generation workflows for production, gaming and VFX. It’s a radical change to traditional workflows today.
NVMe also lays the foundation for NVMe over fabric (NVMf). With that, it’s important to mention the difference between NVMe and NVMf.
Unlike SAS and SATA protocols that were designed for disk drives, NVMe was designed from the ground up for persistent Flash memory technologies and the massively parallel transfer capabilities of SSDs. As such, it delivers significant advantages including extreme performance, improved queuing, low-latency and the reduction of I/O stack overheads.
NVMf is a networked storage protocol that allows NVMe Flash storage to be disaggregated from the server and made widely available to concurrent applications and multiple compute resources. There is no limit to the number of servers or NVMf storage devices that can be shared. It promises to deliver the lowest end-to-end latency from application to storage while delivering agility and flexibility by sharing resources throughout the enterprise.
The bottom line is NVMe and NVMf are enablers for next-generation workflows that can give you a competitive edge in terms of efficiency, productivity and extracting the most value from your data.
What do you do in your products to help safeguard your users’ data?
As one of the largest storage companies in the world, we understand the value of data. Our goal is to deliver the highest quality storage solutions that deliver consistent performance, high-capacity and value to our customers. We design and manufacture storage solutions from silicon to systems. This vertical innovation gives us a unique advantage to fine-tune and optimize virtually any layer within the stack, including firmware, software, processing, interconnect, storage, mechanical and even manufacturing disciplines. This approach helps us deliver purpose-built products across all of our brands that provide the performance, reliability, total cost of ownership and sustainability demanded by our customers.
Users want more flexible workflows — storage in the cloud, on premise, etc. Are your offerings reflective of that?
We believe hybrid workflows are critical in today’s environment. M&E companies are increasingly leveraging a hybrid of on-premises and multi-cloud architectures. Core intellectual property (in the form of digital assets) is stored in private, secure storage, while they access multi-cloud vendors to render, run post workflows or take advantage of various tools and services such as AI.
Object storage in a private cloud configuration is enabling new capabilities by providing “warm” online access to petabyte-scale repositories that were previously stored on tape or other “cold” storage archives. Suddenly, with this hybrid approach, companies can access and retain all their assets, create new content services, monetize opportunities or run analytics across a much larger dataset. Combined with the ability to use AI on audience viewing, demographic and geographic data, this allows companies to deliver high-value, tailored content and services on a global scale.
Final Thoughts?
We’re seeing a third dimension to the “digital dilemma.” The digital dilemma is not new and has been talked about before. The first dilemma is the physical device itself. No physical device lasts forever. Tape and media degradation happen over extended periods of time. You also need to think about the limitations of the device itself and whether it will become obsolete. The second is the age of the media format and compatibility with modern operating systems, leaving data possibly unreadable. But the third thing that’s happening, and it’s quite serious, is that the experts who manage the libraries are “aging out” and nearing retirement. They’ve owned or worked on these infrastructures for generations and have tribal knowledge of what assets they have and where they’re stored, as well as the fickle nature of the underlying hardware. Because of these factors, we strongly encourage companies to evaluate their archive strategy, or potentially risk losing enormous amounts of data.
Company 3 NY and Deluxe NY Data/IO Supervisor Hollie Grant
Company 3 specializes in DI, finishing and color correction, and Deluxe is an end-to-end post house working on projects from dailies through finishing.
How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Over the past year, as a rough estimate, my team dealt with around 1.5 petabytes of data. The latter half of this year really ramped up storage-wise. We were cruising along with a normal increase in data per show until the last few months where we had an influx of UHD, 4K and even 6K jobs, which take up to quadruple the space of a “normal” HD or 2K project.
I don’t think we’ll see a decrease in this trend with the takeoff of 4K televisions as the baseline for consumers and with streaming becoming more popular than ever. OTT films and television have raised the bar for post production, expecting 4K source and native deliveries. Even smaller indie films that we would normally not think twice about space-wise are shooting and finishing in 4K in the hopes that Netflix or Amazon will buy their film. This means that even projects that once were not a burden on our storage will have to be factored in differently going forward.
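The “up to quadruple the space” figure follows directly from pixel counts: UHD (3840×2160) has exactly four times the pixels of HD (1920×1080), so uncompressed frame sizes scale by the same factor. A rough back-of-the-envelope sketch (10-bit RGB, uncompressed, 24fps assumed; real codecs change the absolute numbers but not the ratio):

```python
def uncompressed_gb_per_hour(width, height, bits_per_pixel=30, fps=24):
    """Rough uncompressed storage rate: pixels x bit depth x frame rate."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps * 3600 / 1e9  # GB per hour

hd = uncompressed_gb_per_hour(1920, 1080)
uhd = uncompressed_gb_per_hour(3840, 2160)
print(f"HD:  {hd:,.0f} GB/hour")   # roughly 672 GB/hour
print(f"UHD: {uhd:,.0f} GB/hour")  # roughly 2,687 GB/hour, 4x HD
```

The same arithmetic explains why a 6K job lands at roughly nine times the footprint of a 2K one: storage need grows with the square of the linear resolution.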
Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Triple knock on wood! In my time here we have not lost any data due to an operator error. We follow strict procedures and create redundancy in our data, so if there is a hardware failure we don’t lose anything permanently. We have received hard drives or tapes that failed, but this far along in the digital age most people have more than one copy of their work, and if they don’t, a backup is the first thing I recommend.
Do you find access speed to be a limiting factor with your current storage solution?
We can reach read and write speeds of 1GB/s on our SAN. We have a pretty fast configuration of disks. Of course, the more sessions you have trying to read or write on a volume, the harder it can be to get playback. That’s why we have around 2.5PB of storage across many volumes, so I can organize projects based on the bandwidth they will need and their schedules, and we don’t have trouble with speed. This is one of the more challenging aspects of my day-to-day as the size of projects and their demand for larger-frame playback increase.
What percentage of your data’s value do you budget toward storage and data security?
I can’t speak to exact percentages, but storage upgrades are a large part of our yearly budget. There is always an ask for new disks in the funding for the year because every year we’re growing along with the size of the data for productions. Our production network infrastructure is designed around security regulations set forth by many studios and the MPAA. A lot of work goes into maintaining that and one of the most important things to us is keeping our clients’ data safe behind multiple “locks and keys.”
What trends do you see in storage?
I see the obvious trends in physical storage size decreasing while bandwidth and data size increases. Along those lines I’m sure we’ll see more movies being post produced with everything needed in “the cloud.” The frontrunners of cloud storage have larger, more secure and redundant forms of storing data, so I think it’s inevitable that we’ll move in that direction. It will also make collaboration much easier. You could have all camera-original material stored there, as well as any transcoded files that editorial and VFX will be working with. Using the cloud as a sort of near-line storage would free up the disks in post facilities to focus on only having online what the artists need while still being able to quickly access anything else. Some companies are already working in a manner similar to this, but I think it will start to be a more common solution moving forward.
creative.space‘s Nick Anderson
What is the biggest trend you’ve seen in the past year in terms of storage?
The biggest trend is NVMe storage. SSDs are finally hitting price points that are forcing storage vendors to re-evaluate their architectures to take advantage of NVMe’s performance benefits.
Can you talk more about NVMe?
When it comes to NVMe, speed, price and form factor are three key things users need to understand. When it comes to speed, it blasts past the limitations of hard drive speeds to deliver 3GB/s per drive, which requires a faster connector (PCIe) to take advantage of. With parallel access and higher IOPS (input/output operations per second), NVMe drives can handle operations that would bring an HDD to its knees. When it comes to price, it is cheaper per GB than past iterations of SSD, making it a feasible alternative for tier-one storage in many workflows. Finally, when it comes to form factor, it is smaller and requires less hardware bulk in a purpose-built system, so you can get more drives in a smaller amount of space at a lower cost. People I talk to are surprised to hear that they have been paying a premium to put fast SSDs into HDD form factors that choke their performance.
Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
This is something we have been thinking a lot about and we have some exciting stuff in the works that addresses this need that I can’t go into at this time. For now, we are working with our early adopters to solve these needs in ways that are practical to them, integrating custom software as needed. Moving forward we hope to bring an intuitive and seamless storage experience to the larger industry.
With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
This gets down to a shift in what kind of data is being processed and how it can be accessed. When it comes to video, big media files and image sequences have driven the push for better performance. 360° video pushes storage performance further, past 4K into 8K, 12K, 16K and beyond. On the other hand, as CGI continues to become more photorealistic and we emerge from the “uncanny valley,” the performance need shifts from big data to small data in many cases, as render engines are used instead of video or image files. Moving lots of small data is what these systems were originally designed for, so it will be a welcome shift for users.
When it comes to AI, our file system architectures and NVMe technology are making data easily accessible with less impact on performance. Apart from performance, we monitor thousands of metrics on the system that can be easily connected to your machine learning system of choice. We are still in the early days of this technology and its application to media production, so we are excited to see how customers take advantage of it.
What do you do in your products to help safeguard your users’ data?
From a data integrity perspective, every bit of data gets checksummed on copy, verified against that checksum and repaired if it gets corrupted. This means the storage is self-healing, with 100% data integrity once data is written to disk.
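The checksum-on-write pattern described above (familiar from file systems like ZFS) works like this: store a digest alongside each block when it is written, verify the digest on every read, and repair from a redundant copy when they disagree. A toy Python illustration of the mechanism, not creative.space’s actual implementation:

```python
import hashlib

class SelfHealingStore:
    """Toy checksum-on-write store: each block keeps a digest,
    reads verify it, and corruption is healed from a mirror copy."""

    def __init__(self):
        self.blocks = {}  # name -> bytes (primary copy)
        self.mirror = {}  # name -> bytes (redundant copy)
        self.sums = {}    # name -> sha256 hex digest

    def write(self, name, data: bytes):
        self.blocks[name] = data
        self.mirror[name] = data
        self.sums[name] = hashlib.sha256(data).hexdigest()

    def read(self, name) -> bytes:
        data = self.blocks[name]
        if hashlib.sha256(data).hexdigest() != self.sums[name]:
            # Checksum mismatch: heal the primary from the mirror, then serve.
            data = self.mirror[name]
            self.blocks[name] = data
        return data

store = SelfHealingStore()
store.write("clip.mov", b"pristine frames")
store.blocks["clip.mov"] = b"bit rot!"               # simulate silent corruption
assert store.read("clip.mov") == b"pristine frames"  # healed on read
```

The key point is that the checksum detects silent corruption that the drive itself never reports; the redundant copy is what makes the repair possible.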
As far as safeguarding data from external threats, this is a complicated issue. There are many methods of securing a system, but for post production, performance can’t be compromised. For companies following MPAA recommendations, putting the storage behind physical security is often considered enough. Unfortunately, for many companies without an IT staff, this is where the security stops, and the system is left open once you get access to the network. To solve this problem, we developed an LDAP user management system that is built into our units and provides that extra layer of software security at no additional charge. Storage access becomes user-based, so system activity can be monitored. On the administration side, we designed an API gatekeeper to manage data to and from the database that is auditable and secure.
AlphaDogs‘ Terence Curren
Alpha Dogs is a full-service post house in Burbank, California. They provide color correction, graphic design, VFX, sound design and audio mixing.
How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
We are primarily a finishing house, so we use hundreds of TBs per year on our SAN. We work at higher resolutions, which means larger file sizes. When we have finished a job and delivered the master files, we archive to LTO and clear the project off the SAN. When we handle the offline on a project, obviously our storage needs rise exponentially. We do foresee those requirements rising substantially this year.
Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
We’ve been lucky in that area (knocking on wood) as our SANs are RAID-protected and we maintain a degree of redundancy. We have had clients’ transfer drives fail. We always recommend they deliver a copy of their media. In the early days of our SAN, which is the Facilis TerraBlock, one of our editors accidentally deleted a volume containing an ongoing project. Fortunately, Facilis engineers were able to recover the lost partition as it hadn’t been overwritten yet. That’s one of the things I really have appreciated about working with Facilis over the years — they have great technical support which is essential in our industry.
Do you find access speed to be a limiting factor with your current storage solution?
Not yet. As we get forced into heavily marketed but unnecessary formats like the coming 8K, we will have to scale to handle the bandwidth overload. I am sure the storage companies are all very excited about that prospect.
What percentage of your data’s value do you budget toward storage and data security?
Again, we don’t maintain long-term storage on projects so it’s not a large consideration in budgeting. Security is very important and one of the reasons our SANs are isolated from the outside world. Hopefully, this is an area in which easily accessible tools for network security become commoditized. Much like deadbolts and burglar alarms in housing, it is now a necessary evil.
What trends do you see in storage?
More storage and higher bandwidths, some of which is being aided by solid state storage, which is very expensive on our level of usage. The prices keep coming down on storage, yet it seems that the increased demand has caused our spending to remain fairly constant over the years.
Cinesite London‘s Chris Perschky
Perschky ensures that Cinesite’s constantly evolving infrastructure provides the technical backbone required for a visual effects facility. His team plans, installs and implements all manner of technology, in addition to providing technical support to the entire company.
How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Depending on the demands of the project that we are working on we can generate terabytes of data every single day. We have become increasingly adept at separating out data we need to keep long-term from what we only require for a limited time, and our cleanup tends to be aggressive. This allows us to run pretty lean data sets when necessary.
I expect more 4K work to creep in next year and, as such, expect storage demands to increase accordingly.
Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Our thorough backup procedures mean that we have an offsite copy of all production data within a couple of hours of it being written. As such, when an artist has accidentally overwritten a file we are able to retrieve it from backup swiftly.
Do you find access speed to be a limiting factor with your current storage solution?
Only remotely, thereby requiring a caching solution.
What percentage of your data’s value do you budget toward storage and data security?
Due to the requirements of our clients, we do whatever is necessary to ensure the security of their IP and our work.
What trends do you see in storage?
The trendy answer is to move all storage to the cloud, but it is just too expensive. That said, the benefits of cloud storage are well documented, so we need some way of leveraging it. I see more hybrid on-prem and cloud solutions providing the best of both worlds as demand requires. Full SSD solutions are still way too expensive for most of us, but multi-tier storage solutions will have a larger SSD cache tier as prices drop.
Panasas‘ RW Hawkins
What is the biggest trend you’ve seen in the past year in terms of storage?
The demand for more capacity certainly isn’t slowing down! New formats like ProRes RAW, HDR and stereoscopic images required for VR continue to push the need to scale storage capacity and performance. New Flash technologies address the speed, but not the capacity. As post production houses scale, they see that complexity increases dramatically. Trying to scale to petabytes with individual and limited file servers is a big part of the problem. Parallel file systems are playing a more important role, even in medium-sized shops.
With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
VR (and, more generally, interactive content creation) is particularly interesting as it takes many of the aspects of VFX and interactive gaming and combines them with post. The VFX industry, for many years, has built batch-oriented pipelines running on multiple Linux boxes to solve many of their production problems. This same approach works well for interactive content production where the footage often needs to be pre-processed (stitched, warped, etc.) before editing. High speed, parallel filesystems are particularly well suited for this type of batch-based work.
The AI/ML space is red hot, and the applications seem boundless. Right now, much of the work is being done at a small scale where direct-attach, all-Flash storage boxes serve the need. As this technology is used on a larger scale, it will put demands on storage that can’t be met by direct-attached storage, so meeting those high IOP needs at scale is certainly something Panasas is looking at.
Can you talk about NVMe?
NVMe is an exciting technology, but not a panacea for all storage problems. While being very fast, and excellent at small operations, it is still very expensive, has small capacity and is difficult to scale to petabyte sizes. The next-generation Panasas ActiveStor Ultra platform uses NVMe for metadata while still leveraging spinning disk and SATA SSD. This hybrid approach, using each storage medium for what it does best, is something we have been doing for more than 10 years.
What do you do in your products to help safeguard your users’ data?
Panasas uses object-based data protection with RAID-6+. This software-based erasure code protection, at the file level, provides the best scalable data protection. Only files affected by a particular hardware failure need to be rebuilt, and increasing the number of drives doesn’t increase the likelihood of losing data. In a sense, every file is individually protected. On the hardware side, all Panasas hardware provides non-volatile components, including cutting-edge NVDIMM technology to protect our customers’ data. The file system has been proven in the field. We wouldn’t have the high-profile customers we do if we didn’t provide superior performance as well as superior data protection.
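Erasure coding at the file level can be illustrated with its simplest case, single-parity XOR (RAID-5-style): a file is split into data chunks plus one parity chunk, and any single lost chunk is rebuilt by XOR-ing the survivors. Panasas’ actual codes tolerate multiple simultaneous failures (RAID-6+), but the recovery principle is the same; the sketch below is a generic illustration, not their implementation.

```python
def xor_bytes(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def encode(data: bytes, n_data: int):
    """Split a file into n_data chunks plus one XOR parity chunk."""
    size = -(-len(data) // n_data)           # ceiling division
    data = data.ljust(size * n_data, b"\0")  # pad to equal-size chunks
    chunks = [data[i * size:(i + 1) * size] for i in range(n_data)]
    return chunks + [xor_bytes(chunks)]      # last element is parity

def recover(chunks, lost: int):
    """Rebuild one missing chunk by XOR-ing all surviving chunks."""
    survivors = [c for i, c in enumerate(chunks) if i != lost]
    return xor_bytes(survivors)

chunks = encode(b"per-file erasure coding demo", n_data=4)
rebuilt = recover(chunks, lost=2)
assert rebuilt == chunks[2]  # the lost data chunk is reconstructed
```

Because protection is applied per file rather than per drive, a failure only triggers rebuilds of the files whose chunks lived on the failed hardware, which is why rebuild times don’t balloon as the system scales.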
Users want more flexible workflows — storage in the cloud, on-premises, etc. How are your offerings reflective of that?
While Panasas leverages an object storage backend, we provide our POSIX-compliant file system client called DirectFlow to allow standard file access to the namespace. Files and directories are the “lingua franca” of the storage world, allowing ultimate compatibility. It is very easy to interface between on-premises storage, remote DR storage and public cloud/REST storage using DirectFlow. Data flows freely and at high speed using standard tools, which makes the Panasas system an ideal scalable repository for data that will be used in a variety of pipelines.
Alkemy X‘s Dave Zeevalk
With studios in Philly, NYC, LA and Amsterdam, Alkemy X provides live-action, design, post, VFX and original content for spots, branded content and more.
How much data did you use/backup this year? How much more was that than the previous year? How much more data do you expect to use next year?
Each year, our VFX department generates nearly a petabyte of data, from simulation caches to rendered frames. This year, we have seen a significant increase in data usage as client expectations continue to grow and 4K resolution becomes more prominent in episodic television and feature film projects.
In order to use our 200TB server responsibly, we have created a solid system for preserving necessary data and clearing unnecessary files on a regular basis. Additionally, we are diligent in archiving final projects to our LTO tape systems and removing them from our production server.
Have you ever lost important data due to a hardware failure? Have you ever lost data due to an operator error? (Accidental overwrite, etc.)
Because of our data redundancy, through hourly snapshots and daily backups, we have avoided any data loss even with hardware failure. Hardware does fail, but with these snapshots and backups on a secondary server, we are able to bring data back online extremely quickly if our production server goes down. Years ago, a software issue during a Linux migration completely wiped out our production server. Within two hours, we were able to migrate all data back from our snapshots and backups with no data loss.
Do you find access speed to be a limiting factor with your current storage solution?
There are a few scenarios where we do experience some issues with access speed to the production server. We do a good amount of heavy simulation work, at times writing dozens of terabytes per hour. While at our peak, we have experienced some throttled speeds due to the amount of data being written to the server. Our VFX team also has a checkpoint system for simulation where raw data is saved to the server in parallel to the simulation cache. This allows us to restart a simulation mid-way through the process if a render node drops or fails the job. This raw data is extremely heavy, so while using checkpoints on heavy simulations, we also experience some slower than normal speeds.
What percentage of your data’s value do you budget toward storage and data security?
Our active production server houses 200TB of storage space. We have a secondary backup server with equivalent storage space, to which we store hourly snapshots and daily backups.
What trends do you see in storage?
With client expectations continuing to rise, and 4K (and higher at times) becoming more and more regular on jobs, the need for more storage space is ever increasing.
Quantum‘s Jamie Lerner
What is the biggest trend you’ve seen in the past year in terms of storage?
Although the digital transformation to higher resolution content in M&E has been taking place over the past several years, the interesting aspect is that the pace of change over the past 12 months is accelerating. Driving this trend is the mainstream adoption of 4K and high dynamic range (HDR) video, and the strong uptick in applications requiring 8K formats.
Virtual reality and augmented reality applications are booming across the media and entertainment landscape; everywhere from broadcast news and gaming to episodic television. These high-resolution formats add data to streams that must be ingested at a much higher rate, consume more capacity once stored and require significantly more bandwidth when doing realtime editing. All of this translates into a significantly more demanding environment, which must be supported by the storage solution.
With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
New technologies for producing stunning visual content are opening tremendous opportunities for studios, post houses, distributors, and other media organizations. Sophisticated next-generation cameras and multi-camera arrays enable organizations to capture more visual information, in greater detail than ever before. At the same time, innovative technologies for consuming media are enabling people to view and interact with visual content in a variety of new ways.
To capitalize on new opportunities and meet consumer expectations, many media organizations will need to bolster their storage infrastructure. They need storage solutions that offer scalable capacity to support new ingest sources that capture huge amounts of data, with the performance to edit and add value to this rich media.
Can you talk about NVMe?
The main benefit of NVMe storage is that it provides extremely low latency — therefore allowing users to seek content at very high speed — which is ideal for high stream counts and compressed 4K content workflows.
However, NVMe resources are expensive. Quantum addresses this issue head-on by leveraging NVMe over fabrics (NVMeoF) technology. With NVMeoF, multiple clients can use pooled NVMe storage devices across a network at local speeds and latencies. And when combined with our StorNext, all data is accessible by multiple clients in a global namespace, making this high-performance tier of storage much more cost-effective. Finally, Quantum is in early field trials of a new advancement that will allow customers to benefit even more from NVMe-enabled storage.
What do you do in your products to help safeguard your users’ data?
A storage system must be able to accommodate policies ranging from “throw it out when the job is done” to “keep it forever” and everything in between. The cost of storage demands control over where data lives and when, how many copies of the data exist and where those copies reside over time.
Xcellis scale-out storage powered by StorNext incorporates a broad range of features for data protection. This includes integrated features such as RAID, automated copying, versioning and data replication functionality, all included within our latest release of StorNext.
Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Given the differences in size and scope of organizations across the media industry, production workflows are incredibly varied and often geographically dispersed. Within this context, flexibility becomes a paramount feature of any modern storage architecture.
We provide flexibility in a number of important ways for our customers. From the perspective of system architecture, and recognizing there is no one-size-fits-all solution, StorNext allows customers to configure storage with multiple media types that balance performance and capacity requirements across an entire end-to-end workflow. Second, and equally important for those companies that have a global workforce, our data replication software FlexSync allows content to be rapidly distributed to production staff around the globe. And no matter what tier of storage the data resides on, FlexTier provides coordinated and unified access to the content within a single global namespace.
EditShare‘s Bill Thompson
What is the biggest trend you’ve seen in the past year in terms of storage?
In no particular order, the biggest trends for storage in the media and entertainment space are:
1. The need to handle higher and higher data rates associated with higher-resolution and higher-frame-rate content. Across the industry, this is being addressed with Flash-based storage and the use of emerging technologies like NVMe over “X” and 25/50/100G networking.
2. The ever-increasing concern about content security and content protection, backup and restoration solutions.
3. The request for more powerful analytics solutions to better manage storage resources.
4. The movement away from proprietary hardware/software storage solutions toward ones that are compatible with commodity hardware and/or virtual environments.
Can you talk about NVMe?
NVMe technology is very interesting and will clearly change the M&E landscape going forward. One of the challenges is that we are in the midst of changing standards, and we expect current PCIe add-in-card NVMe components to be replaced by U.2/M.2 implementations. This migration will require important changes to storage platforms.
In the meantime, we offer non-NVMe Flash-based storage solutions whose performance and price points are equivalent to those claimed by early NVMe implementations.
What do you do in your products to help safeguard your users’ data?
EditShare has been at the forefront of user data protection for many years, beginning with our introduction of disk-based and tape-based automated backup and restoration solutions.
We expanded the types of data protection schemes and provided easy-to-use management tools that allow users to tailor the type of redundant protection applied to directories and files. Similarly, we now provide ACL Media Spaces, which allow user privileges to be precisely tailored to the tasks at hand, providing only the rights needed to accomplish them: nothing more, nothing less.
Most recently, we introduced EFS File Auditing, a content security solution that enables system administrators to understand “who did what to my content” and “when and how they did it.”
Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
The EditShare file system is now available in variants that support EditShare hardware-based solutions and hybrid on-premises/cloud solutions. Our Flow automation platform enables users to migrate from on-premises high-speed EFS solutions to cloud-based solutions, such as Amazon S3 and Microsoft Azure, offering the best of both worlds.
Rohde & Schwarz‘s Dirk Thometzek
What is the biggest trend you’ve seen in the past year in terms of storage?
Consumer behavior is the most substantial change that the broadcast and media industry has experienced over the past years. Content is consumed on-demand. In order to stay competitive, content providers need to produce more content. Furthermore, to make the content more desirable, technologies such as UHD and HDR need to be adopted. This obviously has an impact on the amount of data being produced and stored.
With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
In media and entertainment there has always been remarkable growth in data over time, from the very first simple SCSI hard drives to huge network environments. Nowadays, however, that growth is nearly exponential. Considering that all media will be preserved for a very long time, the M&E storage market segment will keep on growing and innovating.
Looking at the amount of footage being produced, a big challenge is finding the appropriate data. Taking it a step further, there might be content that a producer wouldn’t even think of looking for, but that is relevant to the original metadata query. That is where machine learning and AI come into play. We are looking into automated content indexing with a minimum of human interaction, where the artificial intelligence learns autonomously and shares information with other databases. The real challenge here is to protect these intelligences from being compromised by unintentional access to the information.
What do you do to help safeguard your users’ data?
In collaboration with our Rohde & Schwarz Cybersecurity division, we are offering complete and protected packages to our customers. These range from access restrictions for server rooms to encrypted data transfers. Cyber attacks are complex and opaque, but the security layer must be transparent and usable. In media, though, latency is just as critical, and some latency is usually introduced with every security layer.
Can you talk about NVMe?
In order to bring the best value to the customer, we are constantly looking for improvements. The direct PCI communication of NVMe certainly brings a huge improvement in terms of latency since it completely eliminates the SCSI communication layer, so no protocol translation is necessary anymore. This results in much higher bandwidth and more IOPS.
For internal data processing and databases, R&S SpycerNode uses NVMe, which really boosts its performance. Unfortunately, using this technology for bulk media data storage is currently not considered economically efficient. We are dedicated to getting the best performance-to-cost ratio for the market, and since we have been developing video workstations and servers alongside storage for decades now, we know how to get the best performance out of a drive — spinning or solid state.
Economically, it doesn’t seem acceptable to build a system with the latest and greatest technology for a workflow when standards will do, just because it is possible. The real art of storage technology lies in a highly customized configuration according to the technical requirements of an application or workflow. R&S SpycerNode will evolve over time and technologies will be added to the family.
Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Although hybrid workflows are highly desirable, it is quite important to understand the advantages and limits of this technology. High-bandwidth, low-latency wide-area network connections come at a considerable cost. Without a suitable connection, an uncompressed 4K production does not seem feasible from a remote location — uploading several terabytes to a co-location can take hours or even days, even if protocol acceleration is used. However, there are workflows, such as supplemental rendering or proxy editing, that do make sense to offload to a datacenter. R&S SpycerNode is ready to be an integral part of geographically scattered networks, and the Spycer Storage family will grow.
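As a back-of-envelope illustration of the wide-area bottleneck described here, transfer time is simply payload size divided by link speed. A minimal sketch, assuming a dedicated link and ignoring protocol overhead, congestion and retries (so real transfers will be slower):

```python
def upload_hours(payload_tb: float, link_gbps: float) -> float:
    """Best-case upload time in hours for a payload over a dedicated link.

    Ignores protocol overhead, congestion and retries, so real-world
    transfers will take longer.
    """
    payload_bits = payload_tb * 1e12 * 8        # terabytes -> bits
    seconds = payload_bits / (link_gbps * 1e9)  # bits / (bits per second)
    return seconds / 3600

# 10 TB of uncompressed 4K footage over a 1Gb/s line: about 22 hours,
# before any retransmissions or shared-link contention.
print(f"{upload_hours(10, 1):.1f} hours")
```

Even at 10Gb/s the same payload needs more than two hours, which is why offloading only rendering or proxy work is often the practical compromise.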
Dell EMC‘s Tom Burns
What is the biggest trend you’ve seen in the past year in terms of storage?
The most important storage trend we’ve seen is an increasing need for access to shared content libraries accommodating global production teams. This is becoming an essential part of the production chain for feature films, episodic television, sports broadcasting and now e-sports. For example, teams in the UK and in California can share asset libraries for their file-based workflow via a common object store, whether on-prem or hybrid cloud. This means they don’t have to synchronize workflows using point-to-point transmissions from California to the UK, which can get expensive.
Achieving this requires seamless integration of on-premises file storage for the high-throughput, low-latency workloads with object storage. The object storage can be in the public cloud, or you can have a hybrid private cloud for your media assets. A private or hybrid cloud allows production teams to distribute assets more efficiently and saves money versus using the public cloud for sharing content. If the production needs content to be somewhere right now, teams can still fire up Aspera, Signiant, File Catalyst or other point-to-point solutions and have prioritized content immediately available, while the on-premises cloud takes care of the shared content libraries.
Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
Dell Technologies offers end-to-end storage solutions where customers can position the needle anywhere they want. Are you working purely in the cloud? Are you working purely on-prem? Or, like most people, are you working somewhere in the middle? We have a continuous spectrum of storage between high-throughput low-latency workloads and cloud-based object storage, plus distributed services to support the mix that meets your needs.
The most important thing that we’ve learned is that data is expensive to store, granted, but it’s even more expensive to move. Storing your assets in one place and having that path name never change, that’s been a hallmark of Isilon for 15 years. Now we’re extending that seamless file-to-object spectrum to a global scale, deploying Isilon in the cloud in addition to our ECS object store on premises.
With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
AR, VR, AI and other emerging technologies offer new opportunities for media companies to change the way they tell and monetize their stories. However, due to the large amounts of data involved, many media organizations are challenged when they rely on storage systems that lack either scalability or performance to meet the needs of these new workflows.
Dell EMC’s file and object storage solutions help media companies cost effectively tier their content based upon access. This allows media organizations to use emerging technologies to improve how stories are told and monetize their content with the assistance of AI-generated metadata, without the challenges inherent in many traditional storage systems.
With artificial intelligence, for example, where it was once the job of interns to categorize content in projects that could span years, AI gives media companies the ability to analyze content in near-realtime and create large, easily searchable content libraries as the content is being migrated from existing tape libraries to object-based storage, or ingested for current projects. The metadata involved in this process includes brand recognition and player/actor identification, as well as speech-to-text, making it easy to determine logo placement for advertising analytics and to find footage for use in future movies or advertisements.
With Dell EMC storage, AI technologies can be brought to the data, removing the need to migrate or replicate data to direct-attach storage for analysis. Our solutions also offer the scalability to store the content for years using affordable archive nodes in Isilon or ECS object storage.
In terms of AR and VR, we are seeing video game companies using this technology to change the way players interact with their environments. Not only have they created a completely new genre with games such as Pokemon Go, they have figured out that audiences want nonlinear narratives told through realtime storytelling. Although AR and VR adoption has been slower for movies and TV compared to the video game industry, we can learn a lot from the successes of video game production and apply similar methodologies to movie and episodic productions in the future.
Can you talk about NVMe?
NVMe solutions are a small but exciting part of a much larger trend: workflows that fully exploit the levels of parallelism possible in modern converged architectures. As we look forward to 8K, 60fps and realtime production, the usage of PCIe bus bandwidth by compute, networking and storage resources will need to be much more balanced than it is today.
When we get into realtime productions, these “next-generation” architectures will involve new production methodologies such as realtime animation using game engines rather than camera-based acquisition of physically staged images. These realtime processes will take a lot of cooperation between hardware, software and networks to fully leverage the highly parallel, low-latency nature of converged infrastructure.
Dell Technologies is heavily invested in next-generation technologies that include NVMe cache drives, software-defined networking, virtualization and containerization that will allow our customers to continuously innovate together with the media industry’s leading ISVs.
What do you do in your products to help safeguard your users’ data?
Your content is your most precious capital asset and should be protected and maintained. If you invest in archiving and backing up your content with enterprise-quality tools, then your assets will continue to be available to generate revenue for you. However, archive and backup are just two pieces of data security that media organizations need to consider. They must also take active measures to deter data breaches and unauthorized access to data.
Protecting data at the edge, especially at the scale required for global collaboration, can be challenging. We simplify this process through services such as SecureWorks, which includes offerings like security management and orchestration, vulnerability management, security monitoring, advanced threat services and threat intelligence services.
Our storage products are packed with technologies to keep data safe from unexpected outages and unauthorized access, and to meet industry standards such as alignment to MPAA and TPN best practices for content security. For example, Isilon’s OneFS operating system includes SyncIQ snapshots, providing point-in-time backup that updates automatically and generates a list of restore points.
Isilon also supports role-based access control and integration with Active Directory, MIT Kerberos and LDAP, making it easy to manage account access. For production houses working on multiple customer projects, our storage also supports multi-tenancy and access zones, which means that clients requiring quarantined storage don’t have to share storage space with potential competitors.
Our on-prem object store, ECS, provides long-term, cost-effective object storage with support for globally distributed active archives. This helps our customers with global collaboration, but also provides inherent redundancy. The multi-site redundancy creates an excellent backup mechanism as the system will maintain consistency across all sites, plus automatic failure detection and self-recovery options built into the platform.
Scale Logic‘s Bob Herzan
What is the biggest trend you’ve seen in the past year in terms of storage?
There is and has been considerable buzz around cloud storage, object storage, AI and NVMe. Scale Logic recently conducted a private survey of its customer base to help answer this question. What we found is that none of those buzzwords can be considered a trend. We also found that our customers were migrating away from SAN and focusing on building infrastructure around high-performance, scalable NAS.
They felt on-premises LTO was still the most viable option for archiving, and finding a more efficient and cost-effective way to manage their data was their highest priority for the next couple of years. There are plenty of early adopters testing out the buzzwords in the industry, but the trend — in my opinion — is to maximize a stable platform with the best overall return on the investment.
End users are not focused so much on storage, but on how a company like ours can help them solve problems within their workflows where storage is an important component.
Can you talk more about NVMe?
NVMe provides an any-K solution with superior low-latency metadata performance, and it works with our scale-out file system. All of our products have had 100GbE drivers for almost two years, enabling mesh technologies with NVMe for networks as well. As costs come down, NVMe should start to become more mainstream this year — our team is well versed in supporting NVMe and ready to help facilities research its price-to-performance to see if it makes sense for their Genesis and HyperFS Scale Out systems.
With AI, VR and machine learning, our industry is even more dependent on storage. How are you addressing this?
We are continually refining and testing our best practices. Our focus on broadcast automation workflows over the years has already enabled our products for AI and machine learning. We are keeping up with the latest technologies, constantly testing in our lab with the latest in software and workflow tools and bringing in other hardware to work within the Genesis Platform.
What do you do in your products to help safeguard your users’ data?
This is a broad question that has different answers depending on which aspect of the Genesis Platform you may be talking about. Simply speaking, we can craft any number of data safeguard strategies and practices based on our customers’ needs, the technology they currently use and, most importantly, where they see their capacity and data protection needs growing. Our safeguards range from enterprise-quality components, mirrored sets, RAID-6, RAID-7.3 and RAID N+M, and asynchronous data sync to a second instance, up to full HA with synchronous data sync to a second instance, virtual IP failover between multiple sites, and multi-tier DR and business continuity solutions.
In addition, the Genesis Platform’s 24×7 health monitoring service (HMS) communicates directly with installed products at customer sites, using the equipment serial number to track service outages, system temperature, power supply failure, data storage drive failure and dozens of other mission-critical status updates. This service is available to Scale Logic end users in all regions of the world and complies with enterprise-level security protocols by relying only on outgoing communication via a single port.
Users want more flexible workflows — storage in the cloud, on-premises. Are your offerings reflective of that?
Absolutely. This question defines our go-to-market strategy — it’s in our name and part of our day-to-day culture. Scale Logic takes a consultative role with its clients. We take our 30-plus years of experience and ask many questions. Based on the answers, we can give the customer several options. First off, many customers feel pressured to refresh their storage infrastructure before they’re ready. Scale Logic offers customized extended warranty coverage that takes the pressure off the client and allows them to review their options and then slowly implement the migration and process of taking new technology into production.
Also, our Genesis Platform has been designed to scale, meaning clients can start small and grow as their facility grows. We are not trying to force a single solution to our customers. We educate them on the various options to solve their workflow needs and allow them the luxury of choosing the solution that best meets both their short-term and long-term needs as well as their budget.
Facilis‘ Jim McKenna
What is the biggest trend you’ve seen in the past year in terms of storage?
Recently, I’ve found that conversations around storage inevitably end up highlighting some non-storage aspects of the product. Sort of the “storage and…” discussion where the technology behind the storage is secondary to targeted add-on functionality. Encoding, asset management and ingest are some of the ways that storage manufacturers are offering value-add to their customers.
It’s great that customers can now expect more from a shared storage product, but as infrastructure providers we should be most concerned with advancing the technology of the storage system. I’m all for added value — we offer tools ourselves that assist our customers in managing their workflow — but that can’t be the primary differentiator. A premium shared storage system will provide years of service through the deployment of many supporting products from various manufacturers, so I advise people to avoid getting caught up in the value-add marketing from a storage vendor.
With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
Our industry has always been dependent upon storage in the workflow, but now facilities need to manage large quantities of data efficiently, so it’s becoming more about scaled networks. In the traditional SAN environment, hard-wired Fibre Channel clients are the exclusive members of the production workgroup.
With scalable shared-storage through multiple connection options, everyone in the facility can be included in the collaboration on a project. This includes offload machines for encoding and rendering large HDR and VR content, and MAM systems with localized and cloud analysis of data. User accounts commonly grow into the triple digits when producers, schedulers and assistants all require secure access to the storage network.
Can you talk about NVMe?
Like any new technology, the outlook for NVMe is promising. Solid state architecture solves a lot of problems inherent in HDD-based systems — seek times, read speeds, noise and cooling, form factor, etc. If I had to guess a couple years ago, I would have thought that SATA SSDs would be included in the majority of systems sold by now; instead they’ve barely made a dent in the HDD-based unit sales in this market. Our customers are aware of new technology, but they also prioritize tried-and-true, field-tested product designs and value high capacity at a lower cost per GB.
Spinning HDD will still be the primary storage method in this market for years to come, although solid state has advantages as a helper technology for caching and direct access for high-bandwidth requirements.
What do you do in your products to help safeguard your users’ data?
Integrity and security are priority features in a shared storage system. We go about security differently than most, and because of this our customers have more confidence in their solution. By using a system of permissions that emanates from the volume level, and is insulated from the complexities of network ownership attributes, network security training is not required. Because it is so simple to restrict data to only the necessary people, data integrity and privacy are increased.
In the case of data integrity during hardware failure, our software-defined data protection has been guarding our customers’ assets for over 13 years and is continually improved. With increasing drive sizes, time to complete a drive recovery is an important factor, as is system usability during the process.
Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
When data lifecycle is a concern of our customers, we consult on methods of building a storage hierarchy. There is no one-size-fits-all approach here, as every workflow, facility and engineering scope is different.
Tier 1 storage is our core product line, but we also have solutions for nearline (tier 2) and archive (tier 3). When the discussion turns to the cloud as a replacement for some of the traditional on-premises storage offerings, the complexity of the pricing structure, access model and interface becomes a gating factor. There are a lot of ways to effectively use the cloud, such as compute (AI, encoding, etc.), business continuity, workflow (WAN collaboration) or simple cold storage. These tools, when combined with a strong on-premises storage network, will enhance productivity and ensure on-time delivery of product.
mLogic’s co-founder/CEO Roger Mabon
What is the biggest trend you’ve seen in the past year in terms of storage?
In the M&E industry, high-resolution 4K/8K multi-camera shoots, stereoscopic VR and HDR video are commonplace and are contributing to the unprecedented amounts of data being generated in today’s media productions. This trend will continue as frame rates and resolutions increase and video professionals move to shoot in these new formats to future-proof their content.
With AI, VR and machine learning, etc., our industry is even more dependent on storage. Can you talk about that?
Absolutely. In this environment, content creators must deploy storage solutions that are high-capacity, high-performance and fault-tolerant. Furthermore, all of this content must be properly archived so it can be accessed well into the future. mLogic’s mission is to provide affordable RAID and LTO tape storage solutions that fit this critical need.
How are you addressing this?
The tsunami of data being produced in today’s shoots must be properly managed. First and foremost is the need to protect the original camera files (OCF). Our high-performance mSpeed Thunderbolt 3 RAID solutions are being deployed on-set to protect these OCF. mSpeed is a desktop RAID that features plug-and-play Thunderbolt connectivity, capacities up to 168TB and RAID-6 data protection. Once the OCF are transferred to mSpeed, camera cards can be wiped and put back into production.
The next step involves moving the OCF from the on-set RAID to LTO tape. Our portable mTape Thunderbolt 3 LTO solutions are used extensively by media pros to transfer OCF to LTO tape. LTO tape cartridges are shelf-stable for 30+ years and cost around $10 per TB. That said, I find that many productions skip the LTO transfer and rely solely on single hard drives to store the OCF. This is a recipe for disaster, as hard drives sitting on a shelf have a lifespan of only three to five years. Companies working with the likes of Netflix are required to use LTO for this very reason. Completed projects should also be offloaded from hard drives and RAIDs to LTO tape. These hard drive systems can then be put back into action for the tasks they are designed for: editing, color correction, VFX, etc.
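The cost argument above can be sanity-checked with simple arithmetic. The $10-per-TB LTO figure and 30-year shelf life come from the answer above; the hard-drive price and five-year replacement cycle below are illustrative assumptions, not quoted figures:

```python
import math

def archive_media_cost_per_tb(cost_per_tb: float, media_life_years: float,
                              horizon_years: float = 30) -> float:
    """Media cost per TB over an archival horizon, assuming the media is
    repurchased (and the data migrated) each time its lifespan runs out."""
    purchases = math.ceil(horizon_years / media_life_years)
    return cost_per_tb * purchases

# LTO: ~$10/TB, shelf-stable for 30+ years -> one purchase over 30 years.
lto = archive_media_cost_per_tb(10, 30)   # $10 per TB
# Bare HDD: assumed ~$25/TB, replaced every 5 years (3-5 year shelf life).
hdd = archive_media_cost_per_tb(25, 5)    # $150 per TB, plus migration labor
```

Even before counting the risk of silent drive failure, tape comes out an order of magnitude cheaper over an archival horizon.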
Can you talk about NVMe?
mLogic does not currently offer storage solutions that incorporate NVMe technology, but we do recognize numerous use cases for content creation applications. Intel is currently shipping an 8TB SSD with a PCIe 3.1 x4 NVMe interface that can read/write data at 3000+ MB/second! Imagine a crazy-fast and ruggedized NVMe shuttle drive for on-set dailies…
What do you do in your products to help safeguard your users’ data?
Our 8- and 12-drive mSpeed solutions feature hardware RAID data protection. mSpeed can be configured in multiple RAID levels, including RAID-6, which will protect the content stored on the unit even if two drives fail. Our mTape solutions are specifically designed to make it easy to offload media from spinning drives and archive the content to LTO tape for long-term data preservation.
Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
We recommend that you make two LTO archives of your content that are geographically separated in secure locations such as the post facility and the production facility. Our mTape Thunderbolt solutions accomplish this task.
In regards to the cloud, transferring terabytes upon terabytes of data takes an enormous amount of time and can be prohibitively expensive, especially when you need to retrieve the content. For now, cloud storage is reserved for productions with big pipes and big budgets.
OWC president Jennifer Soulé
With AI, VR and machine learning, etc., our industry is even more dependent on storage. How are you addressing this?
We’re constantly working to provide more capacity and faster performance. For spinning disk solutions, we’re making sure that we’re offering the latest sizes in ever-increasing bays. Our ThunderBay line started as a four-bay, went to a six-bay and will grow to eight-bay in 2019. With 12TB drives, that’s 96TB in a pretty workable form factor. Of course, you also need performance, and that is where our SSD solutions come in as well as integrating the latest interfaces like Thunderbolt 3. For those with greater graphics needs, we also have our Helios FX external GPU box.
Can you talk about NVME?
With our Aura Pro X, Envoy Pro EX, Express 4M2 and ThunderBlade, we’re already into NVMe and don’t see that stopping. By the end of 2019 we expect virtually all of our external Flash-based solutions to be NVMe-based rather than SATA. As the cost of Flash goes down and performance and capacity go up, we expect broader adoption both as primary storage and in secondary cache setups. The 2TB drive supply will stabilize, we should see 4TB drives, and PCIe Gen 4 will double bandwidth. Bigger, faster and cheaper is a pretty awesome combination.
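The PCIe Gen 4 point is easy to quantify: per-lane throughput roughly doubles each generation, at about 0.985 GB/s usable per Gen 3 lane after 128b/130b encoding and about 1.969 GB/s per Gen 4 lane. A quick sketch using those figures:

```python
# Approximate usable throughput per PCIe lane in GB/s, after 128b/130b
# encoding overhead (standard figures for PCIe 3.x and 4.0).
PER_LANE_GBPS = {3: 0.985, 4: 1.969}

def link_throughput(gen: int, lanes: int) -> float:
    """Approximate one-direction throughput of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

gen3_x4 = link_throughput(3, 4)  # ~3.9 GB/s ceiling, which is why today's
                                 # fast NVMe drives top out near 3000+ MB/s
gen4_x4 = link_throughput(4, 4)  # ~7.9 GB/s: double the headroom per drive
```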
What do you do in your products to help safeguard your users’ data?
We focus more on providing products that are compatible with different encryption schemas than on building something in. As far as overall data protection, we’re always focused on providing the most reliable storage we can. We make sure our power supplies are rated above what is required, so that insufficient power is never a factor. We test a multitude of drives in our enclosures to ensure we’re providing the best-performing drives.
For our RAID solutions, we do burn-in testing to make sure all the drives are solid. Our SoftRAID technology also provides in-depth drive health monitoring, so you know well in advance if a drive is failing. This is critical because many other SMART-based systems fail to detect bad drives, leading to subpar system performance and corrupted data. Of course, all the hardware and software technology we put into our drives doesn’t do much if people don’t back up their data — so we also work with our customers to find the right solution for their use case or workflow.
Users want more flexible workflows — storage in the cloud, on-premises, etc. Are your offerings reflective of that?
I definitely think we hit on flexibility within the on-prem space by offering a full range of single- and multi-drive solutions, spinning disk and SSD options, and portable to rackmounted form factors that can be fully set-up solutions or DIY, where you can use drives you might already have. You’ll have to stay tuned on the cloud part, but we do have plans to use the cloud to expand on the data protection our drives already offer.