360PI – 2018 ECCV WORKSHOP
360° Perception and Interaction
September 9th @ ECCV Workshop 2018 in Munich, Germany
INTRODUCTION
The 360° camera is a core building block of Virtual Reality (VR) and Augmented
Reality (AR) technology that bridges the real and digital worlds. By capturing
the entire visual world surrounding the camera simultaneously, it allows us to
easily build virtual environments for VR/AR applications from the real world.
With the rapid growth of VR/AR technology, the availability and popularity of
360° cameras are also growing faster than ever. Many camera manufacturers have
introduced new 360° camera models, both professional and consumer-level, in
the past few years. At the same time, content sharing sites like YouTube and
Facebook have enabled support for 360° images and videos, and content creators
such as the news and movie industries have started to exploit and deliver the
new medium. People now create, share, and watch 360° content in everyday life
just like any other media, and the amount of 360° content is increasing
rapidly.
Despite the popularity of 360° content, the new medium remains relatively
unexplored in many respects.
The differences between 360° images and traditional images introduce many new
challenges and opportunities, and the research community has only just started
to explore them.
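One concrete difference is the storage format: 360° content is most commonly stored as an equirectangular projection, which is heavily distorted near the poles. A minimal sketch of this (assuming a +z-forward, +y-up direction convention; not tied to any particular work at the workshop) illustrates the mapping and the distortion:

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a unit 3D view direction to (u, v) pixel coordinates in an
    equirectangular image of size width x height.
    Convention (an assumption of this sketch): +z is forward, +y is up."""
    lon = math.atan2(x, z)   # longitude in [-pi, pi]
    lat = math.asin(y)       # latitude in [-pi/2, pi/2]
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

def row_area_weight(v, height):
    """Relative spherical area covered by a pixel in row v. Rows near the
    poles cover far less of the sphere than rows at the equator, which is
    one source of the distortion that 360°-specific methods must handle."""
    lat = (0.5 - (v + 0.5) / height) * math.pi
    return math.cos(lat)
```

For example, the forward direction (0, 0, 1) lands at the image center, while a top row's pixels cover only a tiny fraction of the area that an equator row's pixels do; this is why standard CNNs and compression schemes designed for perspective images behave poorly on equirectangular input.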
We believe that a workshop for research centered around 360° content can greatly boost research in the field, and that this is the right time for it. The rapidly growing amount of 360° content incurs an unprecedented need for technologies to handle the new medium, yet we still don’t have satisfactory solutions to present, process, or even encode the new format. Researchers from various communities, including computer vision, HCI, multimedia, computer graphics, and machine learning, are working in this field in overlapping directions independently. A major goal of this workshop is to bring together researchers that are working on or interested in 360°-related topics. This will provide a forum to discuss current progress in the field and foster collaboration. It will also provide a good introduction for researchers who want to start working in the field.
SUBMISSION
Note to authors: posters must be in portrait orientation. The poster boards are 1.20 × 1.00 meters and
cannot hold landscape posters.
For the poster session, we invite extended abstracts of at most 4 pages, including references, describing relevant work that is unpublished, recently published, or presented in the main conference, allowing participants to share research ideas related to 360° vision. Abstracts should follow the ECCV format (cf. the main conference author guidelines). Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation. Submissions will be reviewed single-blind by our program committee. Authors of accepted extended abstracts are invited to present their work in the poster session of the workshop.
All papers should be submitted via the CMT website: https://cmt3.research.microsoft.com/360PI2018/.
Topics should be related to 360° content, including but not limited to:
- User attention / saliency prediction in 360° video
- Improving 360° video display
- 360° video stabilization
- 360° video summarization
- Learning visual recognition models in 360° content (e.g., object detection, semantic segmentation, etc.)
- Learning CNN for spherical data
- Visual features for 360° imagery
- Depth and surface normal prediction using 360° images
- Indoor localization / mapping using 360° camera
- Robot navigation using 360° camera
- Telepresence using 360° camera
- Smart TV system for 360° videos
- Video editing tool for 360° videos
- Projection model for 360° imagery
- 360° specific video compression
- 360° video streaming
- 3D reconstruction using 360° camera
- 360° camera model
- Novel applications for 360° imagery
- 360° image/video/audio dataset
IMPORTANT DATES
| Paper submission deadline: | July 27th, 2018 (CMT website) |
| Notification to Authors: | August 6th, 2018 |
| Workshop date: | September 9th, 2018 (afternoon) |
WORKSHOP PROGRAM
Please be aware that the workshops are held at TU München and not at the main conference venue.
Directions to TU München can be found here.
| September 9th, Half day, PM | Munich, Germany - TU München, Audimax 0980 |
| Time: | Description: | |
| 13:20 - 13:30 | Opening remarks | |
| 13:30 - 14:00 | Invited Talk | Speaker: Steve Seitz, Title: VR Video |
| 14:00 - 14:30 | Invited Talk | Speaker: Marc Pollefeys, Title: 360 Video for Robot Navigation |
| 14:30 - 15:00 | Coffee break | |
| 15:00 - 15:30 | Invited Talk | Speaker: Aaron Hertzmann, Title: VR Video Editing Tool |
| 15:30 - 16:00 | Invited Talk | Speaker: Shannon Chen, Title: Measurable 360 |
| 16:00 - 17:30 | Posters | Poster Lists |
POSTER LISTS
| Title: | Authors: | ||
| Saliency Detection in 360° Videos | Ziheng Zhang, Yanyu Xu, Jingyi Yu, Shenghua Gao | ||
| Gaze Prediction in Dynamic 360° Immersive Video | Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, Shenghua Gao | ||
| Self-Supervised Learning of Depth and Camera Motion from 360° Videos | Fu-En Wang, Hou-Ning Hu, Hsien-Tzu Cheng, Juan-Ting Lin, Shang-Ta Yang, Meng-Li Shih, James Hung-Kuo Chu, Min Sun | ||
| A Memory Network Approach for Story-based Temporal Summarization of 360° Videos | Sangho Lee, Jinyoung Sung, Youngjae Yu, Gunhee Kim | ||
| Deep Learning-based Human Detection on Fisheye Images | Hsueh-Ming Hang, Shao-Yi Wang | ||
| Eliminating the Blind Spot: Adapting 3D Object Detection and Monocular Depth Estimation to 360° Panoramic Imagery | Gregoire Payen de La Garanderie, Amir Atapour-Abarghouei, Toby Breckon | ||
| Towards 360° Show-and-Tell | Shih-Han Chou, Yi-Chun Chen, Cheng Sun, Kuo-Hao Zeng, Ching Ju Cheng, Jianlong Fu, Min Sun | ||
| 360D: A dataset and baseline for dense depth estimation from 360 images | Antonis Karakottas, Nikolaos Zioulis, Dimitrios Zarpalas, Petros Daras | ||
| Binocular Spherical Stereo Camera Disparity Map Estimation and 3D View-synthesis | Hsueh-Ming Hang, Tung-Ting Chiang, Wen-Hsiao Peng | ||
| PathGAN: Visual Scanpath Prediction with Generative Adversarial Networks | Marc Assens Reina, Kevin McGuinness, Xavier Giro-i-Nieto, Noel O'Connor | ||
| Labeling Panoramas With Spherical Hourglass Networks | Carlos Esteves, Kostas Daniilidis, Ameesh Makadia | ||
| The Effect of Motion Parallax and Binocular Stereopsis on Visual Comfort and Size Perception in Virtual Reality | Jayant Thatte, Bernd Girod | ||
INVITED SPEAKERS
| Steve Seitz is a Professor at the University of Washington. His research focuses on computer vision and computer graphics. He was twice awarded the Marr Prize and has received an NSF CAREER Award, an ONR Young Investigator Award, and an Alfred P. Sloan Fellowship. He is also a Director at Google and has led the development of the Google Jump camera and other VR projects. His webpage is at: https://homes.cs.washington.edu/~seitz/ | |
| Marc Pollefeys is a Full Professor and Head of the Institute for Visual Computing at the Department of Computer Science, ETH Zurich. He is known for his work on 3D computer vision, robotics, graphics, and machine learning problems. He was the first to develop a software pipeline to automatically turn photographs into 3D models. His personal webpage is at: https://www.inf.ethz.ch/personal/marc.pollefeys/index.html | |
| Aaron Hertzmann is a Principal Scientist at Adobe Research. He is an ACM Distinguished Scientist and IEEE Senior Member, and holds courtesy faculty appointments at the University of Washington and the University of Toronto. His research interests span computer graphics and computer vision. His recent work on virtual reality user interfaces studies in-headset VR video editing and review. His personal webpage is at: https://www.dgp.toronto.edu/~hertzman/index.html | |
| Shannon Chen is a Research Scientist on the 360 Media team at Facebook. He is a contributor to the open-source Transform360 project on GitHub and the inventor of the gravitational predictor (G-predictor) and pyramid projection in dynamic streaming. He is now contributing to dynamic streaming for Oculus Video and Facebook 360 videos. His personal webpage is at: https://research.fb.com/people/chen-shannon/ |