Flash Memory Summit HDR/UHD/4K panel
August 2016 – This year, the Flash Memory Summit featured an entire panel track focused on Media and Entertainment (M&E). One of the panels discussed the growing use of flash memory for HDR, UHD and 4K content capture and editing. The panel was moderated by Lidia Paulinska, Editor in Chief of Bright Blue Innovation Intl and host of the talk show Bright Blue Innovation, and featured Neil Smith of Lumaforge, Alex Grossman of Symply and Chris Haeffner of Other World Computing (OWC). The panel also covered the topic of 360-degree cameras.
Neil from Lumaforge discussed how Moore's Law is being applied to M&E. 80-90% of M&E content is now digital, which has made high-performance storage key for both capture and playback. The dominant form of flash in this space is the SSD. However, until the newly announced generation of products actually reaches the marketplace, SSDs have a capacity problem for professional-level video capture and editing. In many cases, flash simply enhances software-defined storage by serving as a fast upper tier in the configuration.
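To see why capacity was the sticking point, a back-of-the-envelope calculation helps. The drive size and data rate below are illustrative assumptions (a 1 TB SSD and a roughly 880 Mbps 4K mezzanine-codec stream), not figures from the panel:

```python
# Sketch: minutes of footage a drive can hold at a sustained data rate.
# The 1 TB capacity and ~880 Mbps rate are assumptions for illustration.

def recording_minutes(capacity_gb, data_rate_mbps):
    """Minutes of footage that fit on a drive at a sustained data rate."""
    capacity_bits = capacity_gb * 1e9 * 8        # drive size in bits
    seconds = capacity_bits / (data_rate_mbps * 1e6)
    return seconds / 60

# Assumed: 1 TB SSD, ~880 Mbps 4K mezzanine footage
print(f"{recording_minutes(1000, 880):.0f} minutes")  # ~152 minutes
```

Roughly two and a half hours per terabyte under these assumptions, which explains why 2016-era SSD capacities felt tight for a professional shoot day.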
Alex from Symply discussed multiuser workflow challenges. Higher resolutions and the larger file sizes that come with deeper color definition are increasing workflow complexity. The content is big and more complicated to deal with on both the content-creation and distribution sides. This puts pressure on storage and networks to deliver content point to point as the workflow demands. The content is also manipulated by transcoders that sit in the middle of the real-time path, so latency and throughput are issues, as is the capacity needed to manage the original content and its multiple versions. These high-throughput, low-latency tasks are central to media asset management and are the primary places flash memory is used. Workflows are built around the ability to locate, manipulate and move content, and flash is the storage of choice for in-application working memory.
Finally, Chris from OWC discussed video storage classification issues. Storage for archive, editing, playout and distribution is not all the same. M&E uses different file and data structures, so HD, 4K, 8K, HDR, 360 video and VR are not all the same either. As a result, all SSDs are not the same, and not all of them work with video in the workflow. There is a disconnect between the storage community in IT, those in the CDN (content delivery network) world and the video industry. While flash was quickly adopted at the start of the digital camera conversion era, with the CompactFlash Association, the SD Card Association and the camera vendors' proprietary flash memory packs, it has not spawned comparable standardization in storage tiering at the network level.
One of the capacity drivers for flash is the ability to capture 360-degree camera information and VR imagery. The frames are quite large, typically 3x-4x the size of standard frame information. The M&E space is challenged in addressing this, as VR is currently driven by the gaming marketplace, NOT the story-based content marketplace. VR is highly CGI-driven rather than capture-and-playout; it focuses on real-time rendering for display rather than storing the full display information on disk.
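The 3x-4x figure falls out of simple pixel-count arithmetic. The resolutions and bit depth below are assumptions for illustration (a UHD frame versus a stitched 8K equirectangular 360-degree frame at the same sampling), not figures from the panel:

```python
# Sketch: raw frame sizes for a standard UHD frame vs. an assumed
# 8K equirectangular 360-degree frame. Resolutions and the ~20 bits/pixel
# average (10-bit 4:2:2) are illustrative assumptions.

def raw_frame_bytes(width, height, bits_per_pixel):
    """Uncompressed frame size in bytes."""
    return width * height * bits_per_pixel // 8

uhd = raw_frame_bytes(3840, 2160, 20)           # standard UHD frame
equirect_360 = raw_frame_bytes(7680, 3840, 20)  # assumed 360 stitch

print(f"UHD frame: {uhd / 1e6:.1f} MB")          # ~20.7 MB
print(f"360 frame: {equirect_360 / 1e6:.1f} MB")  # ~73.7 MB
print(f"ratio: {equirect_360 / uhd:.2f}x")        # ~3.56x
```

Under these assumptions the 360-degree frame is about 3.6x the standard frame, squarely in the 3x-4x range cited on the panel.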
The need for high throughput to drive these higher-resolution, deeper-color applications will bring flash to a higher use model as more distributed players are created; however, the bulk of data at rest and in archive will still sit on rotating media (HDD and tape).