JPEG XS Workshop – Use cases for a low-latency lightweight image coding system
La Jolla, CA, USA – February 23rd, 2016 – 9-13h
Today’s industrial applications often involve transport and storage of uncompressed images and video. This is for instance the case in video links (SMPTE Serial Digital Interface), IP transport (SMPTE 2022-5/6 and proprietary uncompressed RTPs), Ethernet transport (IEEE/AVB), and memory buffers. In this context, a low-latency lightweight coding system makes it possible to increase resolution and frame rate while preserving visual quality and keeping power and bandwidth within a reasonable budget.
Hence, the JPEG Committee has launched a new activity called JPEG XS, aiming to standardize such a low-latency lightweight coding system and to provide a highly interoperable solution.
Since the JPEG Committee intends to interact closely with actors in this domain, a workshop is organised on February 23rd, 2016 during the WG1 meeting in La Jolla, CA, USA. The workshop will focus on understanding industry, user, and policy needs in terms of technology, requirements, and supported functionalities.
09h00 - Registration
09h30 - Touradj Ebrahimi (JPEG Convenor, EPFL), “JPEG XS - Introduction and Scope”
Prof. Touradj Ebrahimi received his M.Sc. and Ph.D., both in Electrical Engineering, from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, in 1989 and 1992 respectively. In 1993, he was a research engineer at the Corporate Research Laboratories of Sony Corporation in Tokyo, where he conducted research on advanced video compression techniques for storage applications. In 1994, he served as a research consultant at AT&T Bell Laboratories working on very low bitrate video coding. He is currently Professor at EPFL, heading its Multimedia Signal Processing Group. He was also adjunct Professor with the Center of Quantifiable Quality of Service at the Norwegian University of Science and Technology (NTNU) from 2008 to 2012. Since 2014, he has been Convenor of the JPEG committee.
09h45 - Chuck Meyer (CTO - Production with Grass Valley), “Scaling UHD live production workflow with mezzanine compression”
Ultra High Definition Television (UHDTV) bandwidth requirements are exceeding the capabilities of traditional serial video transports. HEVC technology is being deployed in smart TVs and next-generation set-top boxes as a way to deliver UHD to the home with the least bandwidth. Using this infrastructure, UHD TV provides compelling picture quality which scales across screen sizes and delivery methods. It is certainly poised to be the successor to HDTV. Live content production is essential to this transition. The challenge of managing variable input data rates combined with variable output formats requires a workflow which scales from today’s 3 Gbps infrastructure to the 96 Gbps capacity of tomorrow’s facility networks. Mezzanine compression is one clear option for providing this scalability. Four-to-one mezzanine compression provides an optimal balance between workflow and scalability. Additional requirements for low latency, low power, and minimal computing resources, which are essential to enable affordable workflows, can also be met with this technique, making it an excellent solution for the live production workflows of the future.
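The arithmetic behind the abstract above can be sketched with a back-of-envelope calculation. The formats and bit depths below are common illustrative examples (UHD 2160p60 with 4:2:2 chroma subsampling at 10 bits, i.e. 20 bits per pixel), not figures taken from the talk, and blanking overhead is ignored:

```python
# Rough uncompressed video bandwidth, illustrating why 4:1 mezzanine
# compression lets a UHD signal fit on today's 3 Gbps (3G-SDI) links.

def uncompressed_bitrate(width, height, fps, bits_per_pixel):
    """Raw video bitrate in bits per second (ignores blanking overhead)."""
    return width * height * fps * bits_per_pixel

# HD 1080p60, 4:2:2 at 10 bits -> 20 bits/pixel: fits a single 3G-SDI link
hd = uncompressed_bitrate(1920, 1080, 60, 20)
print(f"HD 1080p60 raw: {hd / 1e9:.2f} Gbps")   # ~2.49 Gbps

# UHD 2160p60, same sampling: roughly four times the HD rate
uhd = uncompressed_bitrate(3840, 2160, 60, 20)
print(f"UHD 2160p60 raw: {uhd / 1e9:.2f} Gbps")  # ~9.95 Gbps
print(f"UHD at 4:1:      {uhd / 4 / 1e9:.2f} Gbps")  # ~2.49 Gbps, fits 3G-SDI
```

This is why a roughly 4:1 mezzanine ratio is singled out: it maps one UHD signal onto the same link budget that carries one HD signal today.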
Mr. Meyer is the Chief Technology Officer - Production with Grass Valley where he is actively involved with IP technology for live production applications. He is involved in developing future IP based standards for the industry as part of the Video Services Forum and is actively participating with the Joint Task Force on Network Interoperability supported by SMPTE, VSF and the EBU. He holds 28 patents in the areas of IC design, opto-electronics, consumer products and packaging with focus on baseband data transport, signal conditioning, timing, routing and switching. He holds BSCS and MSCS degrees from the University of California at Berkeley where he was a graduate fellow.
10h15 - Jim DeFilippis (TMS Consulting), “Use cases and requirements of a mezzanine compression in live video production and post-production”
This presentation will review the use cases for mezzanine compression in video production and post-production. We will explore the impact on acquisition (cameras), recording/playback, distribution and routing, image processing, graphics, and chroma keying. It will also discuss SDI versus 10G IP interconnection with mezzanine compression, the requirements for mezzanine compression (low latency, multi-pass concatenation, and low complexity/memory requirements), and insights into extending mezzanine compression into the file-based workflow of video production (proxies, NLE, desktop editing/search).
Jim DeFilippis is one of the world’s foremost authorities on advanced broadcast media technologies and is at the forefront of the newest developments in emerging media technologies such as immersive 3D audio, UHDTV, High Dynamic Range (HDR), and High Frame Rate (HFR). Jim has worked on the delivery of video and audio content to home viewers as well as on mobile and OTT delivery of content.
Jim is a Fellow of SMPTE, was awarded the David Sarnoff Medal in 2012, and has received two technical Emmys for his work on ATSC as well as on MPEG splicing. He is a member of the AES as well as the IEEE. Jim has worked on six Olympics for the Host Broadcaster, as well as for FOX Television and ABC TV and Radio.
10h45 - Gary Sullivan (HEVC Co-Chairman, Microsoft), “HEVC latency and complexity: where it stands and what can be reached”
This talk will discuss the latency and complexity considerations for the HEVC video coding standard, toward clarifying its potential for use in high-quality ultra-low-delay applications – i.e., applications with an end-to-end latency in the range of one frame or less. Since HEVC has state-of-the-art compression capability and is likely to be supported in many products – including support in low-cost, low-power custom silicon devices – it is an important candidate technology to consider for a broad range of applications. The talk will discuss the technical design elements of HEVC that are most relevant to ultra-low-delay usage with high picture quality, and will explore some potential ways that the end-to-end delay can be minimized – with or without requiring modifications of the existing HEVC standard.
Gary J. Sullivan has been a longstanding Chairman or Co-Chairman of various video and image coding standardization activities in the ITU-T Video Coding Experts Group, ISO/IEC Moving Picture Experts Group, and ISO/IEC Joint Photographic Experts Group, and their joint collaborative teams, since 1996, including leading the standardization projects for H.263+, AVC, HEVC, and JPEG XR. He was also the originator and lead designer of the DirectX Video Acceleration video decoding feature of the Microsoft Windows operating system. He is currently a Video/Image Technology Architect with the Corporate Standardization Group, Microsoft Corporation, Redmond, WA, USA. His research interests include image and video compression, rate-distortion optimization, motion estimation and compensation, scalar and vector quantization, and loss-resilient video coding. Dr. Sullivan is a Fellow of the SPIE and IEEE. He has received the IEEE Masaru Ibuka Consumer Electronics Award, the IEEE Consumer Electronics Engineering Excellence Award, the INCITS Technical Excellence Award, the IMTC Leadership Award, and the University of Louisville J. B. Speed Professional Award in Engineering. The team efforts he has led have been recognized by an ATAS Primetime Emmy Engineering Award and a NATAS Technology and Engineering Emmy Award.
11h15 - Alexandre Willème (Researcher, UCL, Belgium), “Overview of existing standards for low-latency and lightweight compression, benchmarking tools and results”
Opentestbench is an open-source framework for assessing the performance of image compression schemes. It is an easy and modular solution for comparing the efficiency of several intra-frame codecs in terms of single- and multiple-generation quality preservation and error robustness. This presentation will explain the main features of opentestbench and will compare and discuss the results obtained on the JPEG XS anchors.
Alexandre Willème received his B.S. and M.Eng. degrees in electrical engineering from the Université catholique de Louvain (UCL, Belgium) in 2015. As part of his master’s thesis, he developed a High-Level Synthesis FPGA implementation of the DSC (Display Stream Compression) lightweight video codec. Currently preparing a PhD, he works on several topics related to video compression in Benoit Macq’s image processing group at UCL.
11h45 - Panel Discussion
12h30 - Wrap up and JPEG XS roadmap
12h45 - End
Participation in the workshop is free, but registration is required.
- Antonin Descampe (intoPIX, Belgium)
- Gaël Rouvroy (intoPIX, Belgium)
- Joachim Keinert (Fraunhofer, Germany)
- Peter Schelkens (VUB – iMinds, Belgium)
- Walt Husak (Dolby, US)