Alex Grossman
Vice President, Media and Entertainment
Quantum
San Jose, CA
www.quantum.com
It seems that just a few years ago, building a collaborative, high-definition video production workgroup required serious consideration in every area of the infrastructure, from storage and the storage network to the facility’s connectivity, both internal and external. Networking became a challenge because IP-based content acquisition and delivery was still a new idea. While the move to IP felt achievable, few facilities initially had much success implementing it. Because it was a hot subject, however, the industry concentrated on solving the surrounding issues, and IP-based workflows have now become almost commonplace.
Facilities required remote access to work-in-process (WIP) assets and content, and many also needed connectivity for non-realtime production functions such as production approval, viewing and transcoding, or server-based delivery. Because these secondary functions were most often IP-connected, IT staff became proficient at Ethernet networking.
In the same way, post facilities struggled with changing storage needs. Most focused on the immediate need for increased capacity, which was easily met by adding disk to online or primary storage. But that wasn’t enough. Facilities often put off addressing the real issues: by simply adding more disk to online storage to mask bandwidth problems, or by increasing WIP volume size to manage delivery needs, they created increasingly complex storage configurations. In most cases, these configurations not only reduced the efficiency of the production workflow but also severely impaired users’ ability to troubleshoot effectively and resolve issues quickly.
Both the inefficiency and the added complexity are greatly amplified as more transcoding and rendering processes are added to the workflow and as production deadlines grow tighter. Many companies now have a mix of storage systems in their workflow: general-purpose disk storage, both Fibre Channel-connected and IP-connected, serving a mix of Fibre Channel and IP clients on different systems, often addressed through unorthodox, less-than-best-practice configurations. In such scenarios, content and assets are not as well protected from loss or damage as they should be, and manual movement and manipulation of content carries serious risk, both in lost time and in possible loss of content.
Rapid growth in content production and changes in workflow have also left many facilities struggling to keep up. This is particularly true when the existing infrastructure is either highly inefficient or prone to failure as more complex, higher-resolution workflows, such as 4K and beyond, become common.
For these reasons, it is worth revisiting specialized storage infrastructure, or at least a modern form of media production storage that is highly optimized for the workflow. This is not the closed architecture of the past, but rather a modern, open architecture based on standards, engineered to accommodate current workflow models while anticipating future workflow needs, and designed to meet the special requirements of content creators and content owners.
Mastering media workflow is no small task. It is nearly impossible to eliminate the efficiency breakdowns and slowdowns that result when general-purpose, IOPS- and database-optimized storage is fitted into a media workflow, and living with a less efficient production environment is a compromise that will only get worse as resolutions rise and content demand grows. Content production, with acquisition from many sources and delivery to many more, is not getting easier; it is getting harder. Quality control is imperative at every stage of the workflow, and any manual storage movement or translation process within a stage makes quality much harder to maintain.

It is therefore necessary to look at each storage element in the workflow as a stage. These stages (ingest, WIP, delivery, and archive) must be examined not as a linear, left-to-right process but as an interactive one, with content moving between the stages in round-trip fashion. Driven by metadata and the requirements of users’ application tools, this model parks content essence where it is needed, when it is needed, while always maintaining an archive of the original or a secondary copy. It is also important to recognize that while actions such as editing and color correction occur in realtime, actions such as rendering and transcoding are non-realtime, and that ingest and delivery are a mix of both.
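To make the round-trip model concrete, here is a minimal sketch in Python. The four stage names come from this article, but the Asset record, its fields, and the move() helper are hypothetical illustrations, not any vendor’s API.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    INGEST = "ingest"
    WIP = "wip"
    DELIVERY = "delivery"
    ARCHIVE = "archive"

@dataclass
class Asset:
    name: str
    metadata: dict          # codec, resolution, rights, etc. drive placement
    stage: Stage = Stage.INGEST
    archived: bool = False  # an original or secondary copy is always kept

def move(asset: Asset, target: Stage) -> None:
    """Park the content essence where it is needed, when it is needed."""
    if not asset.archived:
        asset.archived = True  # archive the original before the essence travels
    asset.stage = target

# Round trip: ingest -> WIP -> delivery, then back to WIP for a re-edit.
clip = Asset("promo.mov", {"codec": "ProRes", "resolution": "4K"})
for step in (Stage.WIP, Stage.DELIVERY, Stage.WIP):
    move(clip, step)
```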
With this model, a post facility can better address not just the changing needs of its clients and its own internal workflow challenges, but also the on-demand re-monetization opportunities for existing content. Facilities can accomplish all of these objectives using an integrated solution approach that effectively targets the challenges of performance, scalability, capacity, and flexibility in the storage workflow.
The modern approach to workflow storage requires that the storage be application-aware and that it integrate tightly with workflow management and media asset management (MAM) applications. With such a solution in place, the facility’s workflow becomes a system driven by policies and metadata, aware of the location of all media as well as the performance and scalability requirements of each process in each stage. Each stage of the workflow can be highly optimized and managed in an automated fashion, with full monitoring of every function and the related hardware and software. Removing tedious, error-prone manual processes ensures predictability and increases workflow efficiency.
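As one illustration of policy- and metadata-driven automation, the sketch below maps workflow events to the follow-on actions a policy engine might fire. The event and action names are invented for the example; in practice these rules would live inside the MAM.

```python
# Hypothetical event-to-action rules: when a stage completes, the policy
# engine fires the follow-on work that would otherwise be a manual step.
RULES = {
    "ingest_complete":   ["archive_original", "generate_proxy"],
    "edit_approved":     ["transcode_deliverables"],
    "delivery_complete": ["archive_masters", "release_online_space"],
}

def on_event(event: str) -> list[str]:
    """Look up the automated actions a policy engine would trigger."""
    return RULES.get(event, [])

print(on_event("ingest_complete"))  # ['archive_original', 'generate_proxy']
```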
Online storage can also be kept small and manageable, reserved for demanding realtime operations. This keeps performance highly predictable, reduces overall storage system cost, and improves manageability. Non-realtime operations can be moved to extended online storage, where content protection is highly resilient and maintained for longer periods. Transcoding, rendering, and archiving on ingest and delivery are handled on extended online storage with no impact on online operations. Long-term projects and frequently used assets can also be securely maintained there for rapid repurposing.
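One way to picture the split between the two tiers is a placement rule keyed on whether an operation is realtime. The operation names and tier labels below are illustrative assumptions, not a fixed taxonomy.

```python
# Realtime operations stay on the small, predictable online tier;
# everything else runs against extended online storage.
REALTIME_OPS = {"edit", "color_correct", "playout"}

def tier_for(operation: str) -> str:
    """Illustrative placement rule for the two-tier model."""
    return "online" if operation in REALTIME_OPS else "extended_online"

assert tier_for("edit") == "online"
assert tier_for("transcode") == "extended_online"
assert tier_for("render") == "extended_online"
```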
Extended online storage delivers greater content protection than traditional RAID, along with higher capacity and a lower cost per TB, making it ideal for the non-realtime components of the workflow. It also makes long-term archiving of content and assets much easier. Instead of attaching data tape (either LTO or LTFS) directly to online storage, where it could degrade realtime performance, tape is attached to the extended online tier, and the media asset manager allows users to set policies that archive content automatically.
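An automatic archive policy of the kind described might look like the following sketch. The 30-day threshold and the asset records are made-up examples, not defaults of any particular media asset manager.

```python
from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=30)  # assumed policy threshold

def due_for_tape(assets: list[dict], now: datetime) -> list[dict]:
    """Select assets the policy would migrate from extended online to tape."""
    return [a for a in assets if now - a["last_accessed"] > ARCHIVE_AFTER]

assets = [
    {"name": "spot_a.mov", "last_accessed": datetime(2015, 3, 1)},
    {"name": "spot_b.mov", "last_accessed": datetime(2015, 5, 28)},
]
print([a["name"] for a in due_for_tape(assets, datetime(2015, 6, 1))])
# -> ['spot_a.mov']
```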
Of course, any modern workflow must be ready for the cloud. Cloud architectures can be difficult and confusing to deal with, and latencies can run to hours or days. Visibility and integration into the facility-based workflow are key to a successful cloud deployment, and this is an area where a media-optimized, specialized storage solution excels, making it possible to extend the archive, or even the workflow itself, to the cloud.
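Extending the archive tier to the cloud can be as simple as an object-store copy driven by the same policy engine. The sketch below uses Amazon S3 via boto3 purely as one example target; the bucket name and key scheme are assumptions.

```python
import boto3  # AWS SDK for Python, used here only as an example target

s3 = boto3.client("s3")

def archive_to_cloud(local_path: str, asset_name: str) -> None:
    """Copy an archived asset to an assumed cloud bucket.

    Retrieval latency from cloud archives can run to hours or days, so
    this call belongs behind the same policies that drive the tape tier.
    """
    s3.upload_file(local_path, "example-facility-archive", f"archive/{asset_name}")

# archive_to_cloud("/mnt/extended_online/spot_a.mov", "spot_a.mov")
```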
No matter what its production workflow looks like, any production facility can benefit from a specialized, media-optimized storage infrastructure. Integration and automation immediately improve workflow efficiency and add predictability while opening up new capabilities for today and the future. After all, the end product is what matters, and any solution that eliminates management headaches and potential problems can only help enable individual creativity.