360-Degree Video Streaming Day

Date(s) - 30/10/2018
10:15 - 17:30

IMT Atlantique (Campus de Rennes -- ex-Télécom Bretagne)


You are invited to attend a day dedicated to Virtual Reality (VR), and more precisely to 360-Degree Video Streaming. Attendance is free!

For more information, contact Gwendal Simon (gwendal@adobe.com).

Morning PhD Defense
10h15 — Xavier Corbillon (IMT Atlantique) — Enabling the Next Generation Interactive Video Streaming

Afternoon Seminar
15h00 — Laura Toni (Univ College London) — Spherical clustering of users navigating 360° content
15h35 — Vincent Charvillat (ENSEEIHT) — DASH for 3D networked virtual environment
16h10 — Miska Hannuksela (Nokia Tech) — Viewport-dependent 360° video streaming
16h45 — Patrick Le Callet (Univ Nantes) — TBD


10h15 — Xavier Corbillon — Enabling the Next Generation Interactive Video Streaming

In this dissertation I present my contributions to enabling the streaming of highly immersive 360° videos over the Internet. These six contributions fall into three main topics.

First, we propose new streaming architectures. Our goal is to stay as close as possible to the existing HTTP Adaptive Streaming (HAS) architecture. We first propose a viewport-adaptive streaming architecture in which 360° videos are encoded with a Quality Emphasized Region (QER), so that the video is streamed at high quality in the direction the user is predicted to look a few seconds ahead, while keeping the traditional download-throughput adaptation of HAS. We then extend this architecture to stream a next generation of 360° videos, denoted Multi-ViewPoint (MVP) omnidirectional videos, in which users can perform not only rotational movements inside the content but also predefined translational movements.

Second, we carry out theoretical studies. We study the relationship between spherical pixel density and the viewport distortion observed by users, and we propose an extension to Facebook's offset cube-map projection. We also present a theoretical model that computes the optimal way to distribute the bit-rate inside a 360° video, based on viewing statistics, so as to satisfy a majority of users.

Finally, we propose practical tools to manipulate and study 360° videos. We developed a modular open-source C++ software tool, named 360Transformations, to manipulate projected omnidirectional videos, extract viewports, and compute objective quality metrics. We also recorded an openly available dataset of the head movements of users freely watching 360° videos.

15h00 — Laura Toni — Spherical clustering of users navigating 360-degree content

In this talk, we first provide a brief overview of our current research on user-centric (viewport-based) communication for 360-degree video. We then present in more detail our work on the analysis and prediction of users' behavior when interacting with 360-degree content. Specifically, we describe a clique-based clustering methodology to identify clusters that are actually meaningful within the VR context.

Laura Toni is a Lecturer in Learning & Signal Processing in the Electronic and Electrical Engineering Department at University College London. Before that, she held postdoctoral positions at EPFL and UCSD, and she graduated from the University of Bologna. Her major contributions are in the areas of coding and streaming technologies, machine learning for immersive communications, decision-making strategies under uncertainty, and large-scale signal processing.

15h35 — Vincent Charvillat — DASH for 3D networked virtual environment

DASH is now a widely deployed standard for streaming video content due to its simplicity, scalability, and ease of deployment. We explore the use of DASH for a different type of media content — the networked virtual environment (NVE) — with different properties and requirements. We organize a polygon soup with textures into a structure compatible with the DASH MPD (Media Presentation Description), with a minimal set of view-independent metadata allowing the client to make intelligent decisions about what data to download and at which resolution. We also present a DASH-based NVE client that uses a view-dependent and network-dependent utility metric to decide what to download, based only on the information in the MPD file. We show that DASH can be used for 3D content streaming in NVEs. Our work opens up the possibility of using DASH for highly interactive applications, beyond its current use in video streaming.

Vincent Charvillat received the Ph.D. degree in Computer Science from the National Polytechnic Institute of Toulouse in 1997. He is currently a full professor at the University of Toulouse (IRIT research lab, ENSEEIHT engineering school) and an associate member of the IPAL laboratory, Singapore. He is the head of the REVA research team at ENSEEIHT. His main research interests are visual processing and multimedia applications; his current topics include visual object processing, visual compositing, interactive streaming of visual content, and crowdsourcing in multimedia.

16h10 — Miska Hannuksela — Viewport-dependent 360° video streaming

The talk categorizes methods that have been proposed for viewport-dependent 360-degree video streaming and provides results from recent works by Nokia Technologies. The MPEG Omnidirectional MediA Format (OMAF) and its relation to the presented viewport-dependent streaming methods are discussed.

Miska Hannuksela is Nokia Bell Labs Fellow and the Head of Video Research in Nokia Technologies. He has published more than 160 conference and journal papers and hundreds of standardization contributions. He is or has been an editor in several video and systems standards, including H.264/AVC, H.265/HEVC, High Efficiency Image File Format (HEIF), ISO Base Media File Format, and Omnidirectional Media Format.

16h45 — Patrick Le Callet — TBD

Patrick Le Callet is currently a full professor at the École polytechnique de l'université de Nantes. He studied at the École Normale Supérieure de Cachan, where he passed the "Agrégation" (a French national credentialing exam) in electronics. He joined the Image & Video Communication group at CNRS IRCCyN in 1997, and has been the head of this group since 2006. He is mostly engaged in research on the application of human vision modeling to image and video processing. His current interests are 3D image and video quality assessment, watermarking techniques, and visual attention modeling and its applications.