How to transform your facility into a multimedia powerhouse
Bruce Devlin
Issue: October 1, 2011

As content owners face new challenges in providing content for a growing number of distribution platforms, they are looking for cost-effective systems to help manage this change. In this article, Bruce Devlin, AmberFin's CTO, identifies the challenges faced by media organizations that are tasked with creating content once and repurposing it often, and looks at how the latest technology can help them streamline their work while generating more revenue.

Today, video content is being produced and consumed in more ways and on more devices than ever before. Movies are shot on film and video and distributed on DVD and high-definition Blu-ray Disc. These same movies are screened in theaters on state-of-the-art projectors (both 2D and 3D) and watched on PCs and tablets. Movie trailers, promotional videos and commercials are shown in taxis, airplanes, on point-of-purchase displays, on giant screens in airports and sports arenas, and on the sides of high-rise buildings. Newscasts, YouTube videos and other user-generated video content are routinely watched on cell phones and other mobile devices.

While video content is undeniably increasing overall, content creators are finding that the revenue potential per distribution channel is generally shrinking. To maintain or surpass their revenue levels, they must move from their traditional one or two primary revenue streams to a multi-channel distribution model, in which many workflows are required to generate the revenue that a single workflow could just ten years ago. Moreover, each additional distribution platform generally increases the quantity and complexity of those workflows, posing new problems for operations to overcome.

Content creators and distributors face a daunting alphabet soup of options when choosing which video formats best suit their specific production (P2, HDCAM SR, XDCAM, SI, Red, Alexa, etc.), post-production (SD, HD, 2K, 4K, etc.) and archival needs (native, compressed, on DVD, Blu-ray, etc.). And they face an equally stupefying list of options when it comes to delivery formats, with an ever-growing list of mostly competing, incompatible format choices for broadcast playout, mobile devices, the Internet, digital television and other end-user destinations.

The process of converting one format to another typically involves taking the highest possible image and audio quality and modifying it to accommodate the characteristics of each delivery platform. Transcoding media files involves many complex tasks and steps, and not all algorithms are created equal. For instance, video-enabled cell phones and other mobile devices are far more bandwidth-limited than broadband Internet connections, which means they require special picture processing to maintain the entertainment experience. Because the screens of phones and mobile devices are smaller, graphic overlays that look great on-air may disappear on the screen of an iPod without careful image processing, and soundtracks with a wide dynamic range suitable for theatrical release may swing from barely audible to painfully loud on ear buds without careful audio processing.
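As a minimal sketch of the kind of adaptation involved, the following Python snippet drives ffmpeg to produce a small-screen deliverable. It assumes an ffmpeg build with libx264, AAC and the loudnorm filter is on the PATH; the file names, raster size and bit rates are purely illustrative, not a recommendation for any particular platform.

```python
# Minimal sketch: producing a mobile-friendly deliverable with ffmpeg.
# Assumes an ffmpeg build with libx264, AAC and the loudnorm filter;
# file names and target numbers below are illustrative only.
import subprocess

def transcode_for_mobile(src, dst):
    cmd = [
        "ffmpeg", "-i", src,
        # Downscale to a small-screen raster; lanczos scaling helps keep
        # fine detail such as graphic overlays legible after the resize.
        "-vf", "scale=640:360:flags=lanczos",
        "-c:v", "libx264", "-b:v", "600k",   # bandwidth-limited video target
        # Narrow the dynamic range so quiet dialogue and loud effects
        # both sit comfortably on ear buds.
        "-af", "loudnorm",
        "-c:a", "aac", "-b:a", "96k",
        dst,
    ]
    subprocess.run(cmd, check=True)

transcode_for_mobile("master_mezzanine.mov", "mobile_h264.mp4")
```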

High-quality transcoding feeds the file-based content delivery pipeline

Because of the multitude of formats and the inherent complexities of converting from one to another, high-quality transcoding is fast becoming the lynchpin of efficient file-based content delivery pipelines. 

File-based transcoding workflows comprise ingest, transcoding, quality control/review and the creation of deliverables. Performing these processes as separate stages, on separate systems and even with separate operators can be highly inefficient.
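To make the point concrete, here is a minimal sketch of a single controller walking a clip through all four stages. Every stage function is a hypothetical placeholder, not any particular product's API.

```python
# Minimal sketch of a unified file-based pipeline: one controller walks a
# clip through ingest, transcode, QC and delivery instead of handing it
# between separate systems. All stage functions are hypothetical stubs.

def ingest(source):
    # A real system would capture from tape or import a file and write a
    # high-quality mezzanine; this stub simply passes the path through.
    return source

def transcode(mezzanine, profile):
    # Placeholder: render one deliverable per target platform profile.
    return f"{mezzanine}.{profile}.mp4"

def quality_check(deliverable):
    # Placeholder: run automated checks; an empty list means no faults.
    return []

def process_clip(source, profiles):
    mezzanine = ingest(source)
    deliverables = []
    for profile in profiles:
        deliverable = transcode(mezzanine, profile)
        faults = quality_check(deliverable)
        if faults:
            # Fail fast: a fault caught here never reaches downstream copies.
            raise RuntimeError(f"{profile}: QC failed: {faults}")
        deliverables.append(deliverable)
    return deliverables

print(process_clip("news_package.mxf", ["web", "mobile", "broadcast"]))
```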

Wherever possible, these processes need to be streamlined, ideally using a single operator and a single system. And the result has to be faultless. That means maintaining quality at the ingest stage: if quality is lost during ingest, it cannot be recovered later. It also means outputting the right content, in the right format, with correct aspect ratio scaling and without technical errors such as black or corrupted frames, audio mutes and clicks, or wrong video or audio levels. If you are creating a master copy for a library, for example, and it is later found to be faulty, fixing all the downstream copies will be prohibitively expensive, and re-creating the master itself may be impossible. Maintaining ingest quality and identifying potential problems early (and, wherever possible, automatically) is crucial.
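Two of the simplest such automated checks, black-frame and audio-mute detection, can be sketched as follows. This assumes decoded luma planes arrive as 8-bit numpy arrays and audio as float samples in [-1, 1]; the thresholds are illustrative and would need tuning against real material.

```python
# Minimal sketch of two automated ingest-QC checks: black-frame and
# audio-mute detection. Assumes 8-bit luma planes and float audio in
# [-1, 1]; thresholds are illustrative, not production values.
import numpy as np

BLACK_LUMA_THRESHOLD = 16    # mean 8-bit luma below this looks black
MUTE_RMS_THRESHOLD = 1e-4    # RMS below this is effectively silence

def is_black_frame(luma):
    """luma: 2-D uint8 array holding one frame's luma plane."""
    return luma.mean() < BLACK_LUMA_THRESHOLD

def is_muted(samples):
    """samples: 1-D float array holding one block of audio."""
    rms = np.sqrt(np.mean(samples ** 2))
    return rms < MUTE_RMS_THRESHOLD

def find_black_runs(frames, min_run=25):
    # Flag sustained runs rather than single frames, so a legitimate
    # one-frame fade to black is not reported as a fault.
    run, faults = 0, []
    for i, frame in enumerate(frames):
        run = run + 1 if is_black_frame(frame) else 0
        if run == min_run:
            faults.append(i - min_run + 1)  # start index of the run
    return faults

# Demo: 30 all-black frames produce one fault starting at frame 0.
print(find_black_runs([np.zeros((360, 640), np.uint8)] * 30))
```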

When it comes to choosing the most appropriate technology to achieve these tasks, once again media companies face a variety of options, but few are ideally suited to the complex demands and required flexibility of file-based workflows.

Traditional hardware baseband video format and standards converters are not designed for file-based workflows. They lack necessary modern tools such as transcoding and rewrapping, and they have limited metadata support. They are typically simple input/output devices with no timeline control or clip annotation. They are designed to perform a limited range of tasks, sit idle between jobs, tie you to a single vendor's roadmap and are impossible to deploy dynamically.

Low-cost, standalone ingest/transcode systems do not perform optimally for the complex workflow needs of media organizations, largely because of their unsophisticated image processing and the fact that they typically lack quality-control functions. Achieving high levels of trust in their outputs is inefficient: manpower and system time are wasted performing manual QC and watching hours of content, time during which the ingest/transcode system cannot be doing its real job. And they are potentially very expensive, because faults can be missed. The temptation, of course, is to cut corners and perform only a quick QC spot check, or no QC at all. But this can be disastrous if faults are not spotted, as it can mean a complete re-work to do the job correctly. Worse still, if copies are made from a faulty master, all of those copies may need to be re-made. Putting a financial value on QC is always difficult because it involves putting a financial value on the risk of failure. We can learn from other industries, such as manufacturing, which have shown that cutting operational costs and improving operational efficiency is nearly always a better decision than shaving a few cents off the capital cost of new hardware and software.
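One back-of-the-envelope way to put a number on that risk is to compare the expected cost of reworking missed faults with and without full QC. Every figure in the sketch below is hypothetical; the point is the shape of the calculation, not the numbers.

```python
# Back-of-the-envelope sketch of valuing QC as risk reduction.
# Every figure below is hypothetical; substitute your own numbers.

fault_rate = 0.02            # share of masters carrying a latent fault
rework_cost = 5_000.0        # cost to remake a faulty master and its copies
qc_cost_per_asset = 40.0     # operator plus system time for full QC
catch_rate = 0.95            # share of faults that full QC actually catches

assets = 1_000

# Expected cost of skipping QC: every latent fault becomes a rework.
no_qc = assets * fault_rate * rework_cost

# Expected cost with QC: pay for QC, and rework only the missed faults.
with_qc = assets * (qc_cost_per_asset
                    + fault_rate * (1 - catch_rate) * rework_cost)

print(f"no QC:   ${no_qc:,.0f}")    # $100,000
print(f"with QC: ${with_qc:,.0f}")  # $45,000
```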

Open standards and a level playing field

What the industry needs is an open-standards, future-proof, software-based platform that can effectively digitize and transform new and archived content, delivering the best-quality pictures at smaller file sizes across multiple delivery platforms.

Content owners need a solution that can digitize new and archived media, breaking the bond between editing systems and storage, and enabling collaborative editing and content production, while ensuring quality throughout the process.  

Customers ask me every day how to achieve this, and it was these needs that drove the development of AmberFin's iCR software. With iCR, we developed a solution that covers almost every aspect of commercial content production, from ingest through to delivery, in a single, unified user interface. This frees costly video and audio post-production tools, and the creative staff who operate them, from heavy-lifting tasks such as bulk tape ingest or multi-version delivery. iCR's range of template and QC tools also frees staff from repetitive, labor-intensive manual processes and reduces expensive mistakes caused by operator error. The new Unified Quality Control features in iCR combine automatic and operator-controlled QC tools that bring new levels of trust and confidence in the quality of media assets.

To see how AmberFin iCR can help revitalize your media operations, visit us at www.amberfin.com.

Bruce Devlin is AmberFin's CTO. He holds several patents in the fields of compression and file formats and has written international standards as well as contributed to books on MPEG and file formats. He is co-author of the MXF File Format Specification and an active contributor to the work of SMPTE and the AMWA. Devlin is a fellow of SMPTE (the Society of Motion Picture and Television Engineers).