VidTrans19 - Making IP Video Production a Reality,
Annual Conference & Exposition
February 26 - 28, 2019
Los Angeles, California
Synopses of Presentations (alphabetical by presenter)
Innovative Networking/Video Transport: Who's Checking?
Sergio Ammirata, Ph.D. - DVEO
Streaming video travels a tortuous path from provider to consumer. It may originate at a content provider, pass to a rented cloud to be processed into multiple adaptive bit rate streams, be formatted into incompatible stream formats, and have incompatible DRMs applied. It is then forwarded to a CDN's cloud and distributed to regional clouds, from which it is finally delivered to the consumer.
How do we best do quality assurance on so many moving parts?
We foresee several trends: (1) a move of stream analysis tools to the cloud, where they can touch the stream at multiple stages; (2) a proliferation of analysis views of the individual, simultaneous components of ABR/CMAF streams; (3) hierarchical, simplified summaries and alerts to pull the most urgent facts from that proliferation; (4) increasing complexity of the quality assurance task, which will probably mean increased labor costs; and finally, (5) remote tools becoming a mandatory part of the quality assurance process, since pieces of the process will be distributed among different parts of the cloud.
We will also discuss how AI rules will enhance the stream analysis process.
HEVC 422 10-bit Ultra Low Delay CODEC
Kevin Ancelin - VITEC
High Efficiency Video Coding (HEVC) has been a well-established ITU and ISO/IEC standardized CODEC since January 2013. HEVC is perfectly positioned, with its Main 4:2:2 10-bit ultra-low-delay profile, to deliver contribution-caliber performance. In this presentation, we will outline HEVC and the tools available to deliver contribution Video Quality (VQ) at half the bit rate of AVC, and at significantly lower rates than J2K.
Applications of the ST 2110-41 Fast Metadata Standard
Paul Briscoe – Televisionary Consulting/Evertz
The ST 2110-41 Fast Metadata (FMX) Standard brings a new level of metadata capability to IP-based systems. Offering time-specific delivery using the PTP- and RTP-based timing mechanisms of the 2110 suite and payload-agnostic transport, many new applications are readily accommodated. This presentation will provide a brief overview of the general operation of FMX, and will then look at three specific applications. The first is the transport of stream-descriptive metadata, a prime use for FMX, under development as ST 2110-42. The second application looks at its use in transport of the new SMPTE TLX Time Label, and finally a third example is an innovative application outside the strict media space. In addition to these examples, several general proposed use cases are reviewed as a look toward future applications.
Military Grade Security for Video over IP
Jed Deame - Nextera Video
In this era of heightened security concerns about content protection, it becomes increasingly important to look closely at all aspects of securing video content on switched IP networks. Whether for a military command and control center or a broadcast distribution plant, it is crucial to ensure that your valuable content is protected against unauthorized access. This presentation will look at risk areas in current video over IP systems and present example solutions from military grade essence encryption to access control as proposed by the NMOS API Security working group. Details of the Transport Layer Security (TLS) and various cipher suites will be presented and turnkey solutions will be demonstrated.
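As a rough illustration of the transport-hardening side of this topic (not a solution from the talk itself), the sketch below builds a TLS client context pinned to TLS 1.2 or later with mandatory certificate verification, using only Python's standard library. The `ca_file` parameter is a placeholder for a deployment-specific trust anchor.

```python
# Minimal sketch: a hardened TLS client context of the kind a secured
# video-over-IP control plane might require. Illustrative only; cipher
# suite selection in a real plant follows the site security policy.
import ssl

def make_secure_context(ca_file=None):
    # create_default_context already enables certificate verification
    # and sane cipher defaults; we additionally reject pre-1.2 TLS.
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.check_hostname = True                     # verify server identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # no anonymous peers
    return ctx
```

Such a context would then be passed to whatever HTTPS or socket layer carries the control traffic, so that every API call is both encrypted and mutually verifiable.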
AMWA NMOS Interoperable Security project
Thomas Edwards - Fox
AMWA NMOS APIs such as IS-04 “Discovery & Registration API”, IS-05 “Device Connection Management API”, IS-06 “NMOS Network Control”, and IS-07 “NMOS Event & Tally API” provide important higher-level functionality to ST 2110 IP media installations. However, these APIs were originally developed without a security model. The AMWA NMOS Interoperable Security project seeks to specify mechanisms for security, authentication, and authorization of NMOS APIs in a fashion that is not only secure, but also meets the requirements of broadcast plants and is interoperable between multiple vendors. This group has already published drafts of specifications on GitHub, and these are being tested in workshops before being elevated to AMWA best current practices (BCP).
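To make the authorization idea concrete, here is a hedged sketch (not from the draft specifications) of how a client might present an OAuth-style bearer token when calling a secured IS-04 Query API. The registry URL and token value are placeholders, and obtaining the token is out of scope here.

```python
# Illustrative sketch: attaching a bearer token to an NMOS IS-04 Query
# API request. Endpoint path follows the published IS-04 URL scheme;
# everything else (URL, token) is a placeholder assumption.
import urllib.request

def authorized_nmos_request(registry_url, access_token):
    """Build an HTTPS request for the IS-04 Query API 'nodes' resource
    carrying an Authorization: Bearer header."""
    return urllib.request.Request(
        f"{registry_url}/x-nmos/query/v1.2/nodes",
        headers={"Authorization": f"Bearer {access_token}"},
    )

# A controller would then open this request over a verified TLS context,
# and the registry would accept or reject it based on the token's scopes.
```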
Update on AMWA IS-06
Thomas Edwards - Fox
AMWA NMOS IS-06 is a common network control API between broadcast controllers and software defined networking (SDN) controllers, designed to meet the needs of IP-based live networked media. It allows for the discovery of network and endpoint topology by the broadcast controller, supports authorization of endpoint senders, receivers and flows, and the creation, modification and deletion of flows. In essence, it lets a broadcast controller “build the reliable pipes” across a network fabric for IP media. This update will discuss the current publication of IS-06, as well as future work, about to commence, on items such as support for multiple broadcast controllers.
Game, set and match: Streaming Australia's Grand Slam tennis event
Jim Ewaskiew - IBM
Sportradar delivers content for more than 40,000 live events each year, including the NBA, NHL, International Tennis Federation and NASCAR, for leading sports and digital companies like Turner Sports, Twitter and Facebook.
To meet fast-growing demand for its Live Channel service, Sportradar needed a cost-efficient way to ensure fast, reliable and encrypted transmission of live video content from Australia's preeminent 2019 tennis tournament to audiences across the world. It needed a fail-safe solution that could maintain low latency and overcome packet loss over global networks, while preserving the content's quality.
This session will explore how Sportradar integrated Aspera's FASPStream technology into existing workflows for Australia's annual tennis event, enabling it to meet key production requirements and reduce costs. The speaker will discuss Sportradar's experiences streaming live content in near real time, and the lessons learned streaming broadcast-quality video content over commodity IP networks.
Continent-Wide IP Remote Production
Erling Hedkvist – LAWO
On March 10th, 2018, NEP Australia ushered in a new era for live sports broadcasting: the Andrews Hub in Sydney produced its first live broadcast for Fox Sports Australia, an A-League soccer match in Perth, 4,000 kilometers from Sydney. Uncompressed SMPTE ST 2110 was used for the IP transport of video, audio and control signals. The bulk of the production crew was in Sydney; only the cameras were in Perth. NEP's "Anyone, Anywhere" philosophy for the new Andrews Hub in Sydney and Melbourne means that while the traditional outside broadcast workflow involving commentators, camera operators and everything viewers see out in the field still takes place on location, the production team works out of a facility several thousand kilometers away from the venue.
Care was taken to provide a uniform, familiar user experience for all operators involved. The network is fully redundant, fully resilient and fully monitored for guaranteed uptime.
Only a month later, NEP Australia and Telstra Broadcast Services announced that they had delivered the world's first remote production across the Pacific, involving 30 HD camera feeds in Los Angeles with the production taking place in Sydney.
Why virtualized media processing is the real key to truly remote media production
David Herfert – Aperi
Since IP-driven technologies have been marketed to the media industry, they've been touted as the enabler of much more efficient, and less costly, production workflows. In this presentation, Aperi's David Herfert will explain how deploying an FPGA-driven virtualized media processing workflow enables content creators to achieve this goal by producing live programming completely remotely.
The presentation will explain the benefits of embracing service-oriented architecture (SOA) software design and virtualizing media functions which are dynamically piped between one another to create aggregate workflows. It will detail how Microservers (SDMPs) and their virtualized functions are effectively "strung together" within a single A1105 or between multiple racks using IP.
While CPU- and GPU-based applications are increasingly available on the market, David will explain how building video production and transport networks in this way with FPGAs can provide content producers with a truly remote production infrastructure.
Venturing with ST2110 into uncharted territories – How a technology carried out of the studio-centric environment is now going to help save lives.
Philippe Lemonnier – B-Com
The presentation explains the process of taking the basic principles of TR03 (yes, it began right there, before we even knew it would become ST2110) and introducing them into the DICOM (Digital Imaging and Communications in Medicine) standards community. This gave birth to the DICOM-RTV standard proposal, which uses ST2110 technology as a bearer service for medical imaging and data streams of various kinds. Significant work has been necessary to accommodate the vast spectrum of metadata that structures the medical space. The presentation will include disclosure of a world-premiere event from mid-January 2019: the first live surgery ever performed using this innovative approach.
An Overview of Compressed Video Transport Protocols over IP
Ciro Noronha, Ph.D. – Cobalt Digital
When setting up compressed video transport over IP, the system designer has a number of available protocol choices. There is no single "best" answer for the protocol selection: the most appropriate choice is a function of the networking environment, system requirements, and target decoders. Identifying this choice can be a challenge.
This talk presents a survey of the various standard IP transport protocols currently used for compressed video transport, and offers guidance on where to best apply each of them. We start by discussing the requirements for video transport over an IP network, and explore the fundamental latency/reliability tradeoff. We then proceed to a discussion of the various transport protocols, including UDP, RTP, SMPTE 2022 FEC, HTTP, RTMP, HLS, and the new VSF RIST, and indicate where they fit in the latency/reliability tradeoff. We also discuss what types of receivers/decoders are typically used for each protocol.
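The latency/reliability tradeoff mentioned above can be illustrated with a toy sketch of the NACK-based retransmission (ARQ) approach used by protocols in the RIST family: the receiver holds packets in a reorder buffer and requests retransmission of gaps, so a deeper buffer recovers more loss at the cost of more delay. All names and the buffer policy here are illustrative, not taken from any of the named specifications.

```python
# Toy model of ARQ-style recovery: buffer depth = added latency,
# and the depth bounds how late a retransmission can still be used.
class ArqReceiver:
    def __init__(self, buffer_depth):
        self.buffer_depth = buffer_depth  # packets of reorder/latency budget
        self.buffer = {}                  # seq -> payload awaiting delivery
        self.next_seq = 0                 # next packet due for delivery
        self.nacks = []                   # retransmission requests issued

    def receive(self, seq, payload):
        self.buffer[seq] = payload
        # Any gap below the highest sequence seen triggers a NACK.
        for missing in range(self.next_seq, max(self.buffer) + 1):
            if missing not in self.buffer and missing not in self.nacks:
                self.nacks.append(missing)

    def deliver(self):
        # Give up on a gap once it falls buffer_depth packets behind the
        # newest arrival: bounded latency at the cost of a lost packet.
        if self.buffer and self.next_seq not in self.buffer:
            if max(self.buffer) - self.next_seq >= self.buffer_depth:
                self.next_seq += 1
        out = []
        while self.next_seq in self.buffer:
            out.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return out
```

With a depth of 32 packets, losing packet 1 stalls delivery of packets 2 onward until the retransmission arrives, which is exactly the extra latency that pure UDP avoids and that HLS-style buffering takes to the extreme.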
Professional Adaptive Video Transport
Adi Rozenberg – VideoFlow
For years, the question of whether we can trust the IP network has loomed large for organizations and their video and IT groups. For more than 15 years, companies have presented solutions to overcome packet loss and provide link failover. But the question of how to adapt your transport to the network conditions has eluded them.
This presentation will outline three techniques for adding transport-adaptive capability to professional equipment, creating contribution and distribution applications built on a sender/receiver ecosystem that streams, probes the network behavior in real time, and changes the transport in reaction to bandwidth availability or error rate.
The presentation will include the following topics:
- The basic problem
- Point-to-point algorithm behavior
- Contribution application and ecosystem for contribution
- Distribution to multiple destinations with support for individual adaptation for SPTS distribution
- Distribution to multiple destinations with support for individual adaptation for MPTS distribution
The presentation will include slides on the ecosystem and live captures.
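As a generic illustration of the adaptation idea described above (this is not VideoFlow's algorithm), a sender can step the encoder bitrate down sharply when the receiver reports loss and probe back up slowly when the path is clean, in the spirit of AIMD congestion control. The thresholds and step sizes below are arbitrary assumptions.

```python
# One adaptation step of a hypothetical rate controller: multiplicative
# decrease on loss, additive increase when clean (AIMD-style).
def adapt_bitrate(current_kbps, loss_ratio,
                  floor_kbps=2_000, ceiling_kbps=20_000):
    if loss_ratio > 0.01:        # >1% loss reported: back off sharply
        new = current_kbps * 0.8
    elif loss_ratio > 0.001:     # mild loss: hold the current rate
        new = current_kbps
    else:                        # clean interval: probe upward gently
        new = current_kbps + 250
    # Clamp to the service's contractual floor and ceiling.
    return int(min(max(new, floor_kbps), ceiling_kbps))
```

The floor matters in distribution: each destination can adapt individually, but no stream is allowed to fall below the minimum rate the service commits to deliver.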
Uncompressed UHD over WAN and the shift to 100Gbps long distance
Alexander Sandström – Net Insight
Bandwidth demands are continuously increasing for contribution and production because of higher quality expectations and more content being produced. The shift to UHD is one strong driver of increased capacity needs, both in production facilities and in wide area networks. In fixed facilities, where bandwidth is relatively cheap, UHD-capable infrastructure has been the norm for some time. But in wide area networks bandwidth is still comparatively expensive, so for a number of years UHD solutions for the WAN have been all about compression. Today, however, we see more and more broadcasters looking at 100Gbps long distance connectivity for uncompressed UHD contribution and uncompressed UHD remote production links from, for example, live sports venues.
But in addition to just more bandwidth, what are other design considerations for Wide Area Networks built for uncompressed UHD? And what are the challenges?
Our recent experience designing several uncompressed-UHD-over-100Gbps solutions shows that the key challenges relate to combining IP and SDI, mixing UHD and HD, sharing capacity between live video and file transfers, and properly isolating services.
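A back-of-envelope calculation shows why uncompressed UHD pushes WANs toward 100Gbps. The figures below count active video samples only (as an ST 2110-20 style flow carries); RTP/UDP/IP overhead plus audio and ancillary flows add a few percent on top.

```python
# Rough bitrate of an uncompressed video flow: active samples only.
def uncompressed_video_gbps(width, height, fps, bit_depth, samples_per_pixel):
    bits_per_frame = width * height * samples_per_pixel * bit_depth
    return bits_per_frame * fps / 1e9

# UHD 2160p59.94, 10-bit 4:2:2 (2 samples per pixel: Y plus shared Cb/Cr)
uhd = uncompressed_video_gbps(3840, 2160, 60000 / 1001, 10, 2)
# roughly 9.9 Gbps per flow, so about eight such flows, before overhead
# and return feeds, already approach the capacity of a 100Gbps link
```

The same arithmetic for an HD 1080p60 flow gives about 2.5 Gbps, which is why mixing UHD and HD on one shared 100Gbps pipe becomes a capacity-planning and service-isolation exercise rather than a triviality.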