


16th ECCV 2020: Glasgow, UK - Volume 11
- Andrea Vedaldi, Horst Bischof, Thomas Brox, Jan-Michael Frahm (eds.): Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XI. Lecture Notes in Computer Science 12356, Springer 2020, ISBN 978-3-030-58620-1
- Zhongang Cai, Junzhe Zhang, Daxuan Ren, Cunjun Yu, Haiyu Zhao, Shuai Yi, Chai Kiat Yeo, Chen Change Loy: MessyTable: Instance Association in Multiple Camera Views. 1-16
- Anyi Rao, Jiaze Wang, Linning Xu, Xuekun Jiang, Qingqiu Huang, Bolei Zhou, Dahua Lin: A Unified Framework for Shot Type Classification Based on Subject Centric Lens. 17-34
- Samuel Albanie, Gül Varol, Liliane Momeni, Triantafyllos Afouras, Joon Son Chung, Neil Fox, Andrew Zisserman: BSL-1K: Scaling Up Co-articulated Sign Language Recognition Using Mouthing Cues. 35-53
- Neng Qian, Jiayi Wang, Franziska Mueller, Florian Bernard, Vladislav Golyanik, Christian Theobalt: HTML: A Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization. 54-71
- Zhongdao Wang, Jingwei Zhang, Liang Zheng, Yixuan Liu, Yifan Sun, Yali Li, Shengjin Wang: CycAs: Self-supervised Cycle Association for Learning Re-identifiable Descriptions. 72-88
- Xihui Liu, Zhe Lin, Jianming Zhang, Handong Zhao, Quan Tran, Xiaogang Wang, Hongsheng Li: Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions. 89-106
- Zhongdao Wang, Liang Zheng, Yixuan Liu, Yali Li, Shengjin Wang: Towards Real-Time Multi-Object Tracking. 107-122
- Jian Liang, Yunbo Wang, Dapeng Hu, Ran He, Jiashi Feng: A Balanced and Uncertainty-Aware Approach for Partial Domain Adaptation. 123-140
- Yang Li, Shichao Kan, Zhihai He: Unsupervised Deep Metric Learning with Transformed Attention Consistency and Contrastive Clustering Loss. 141-157
- Ali Athar, Sabarinath Mahadevan, Aljosa Osep, Laura Leal-Taixé, Bastian Leibe: STEm-Seg: Spatio-Temporal Embeddings for Instance Segmentation in Videos. 158-177
- Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Xiaolong Wang, Trevor Darrell: Hierarchical Style-Based Networks for Motion Synthesis. 178-194
- Benjamin Biggs, Oliver Boyne, James Charles, Andrew W. Fitzgibbon, Roberto Cipolla: Who Left the Dogs Out? 3D Animal Reconstruction with Expectation Maximization in the Loop. 195-211
- Vishwanath A. Sindagi, Rajeev Yasarla, Deepak Babu Sam, R. Venkatesh Babu, Vishal M. Patel: Learning to Count in the Crowd from Limited Labeled Data. 212-229
- Hongyuan Du, Linjun Li, Bo Liu, Nuno Vasconcelos: SPOT: Selective Point Cloud Voting for Better Proposal in Point Cloud Object Detection. 230-247
- Jonathan R. Williford, Brandon B. May, Jeffrey Byrne: Explainable Face Recognition. 248-263
- Hieu Le, Dimitris Samaras: From Shadow Segmentation to Shadow Removal. 264-281
- Seong Hyeon Park, Gyubok Lee, Jimin Seo, Manoj Bhat, Minseok Kang, Jonathan Francis, Ashwin R. Jadhav, Paul Pu Liang, Louis-Philippe Morency: Diverse and Admissible Trajectory Forecasting Through Multimodal Context Understanding. 282-298
- Marek Kowalski, Stephan J. Garbin, Virginia Estellers, Tadas Baltrusaitis, Matthew Johnson, Jamie Shotton: CONFIG: Controllable Neural Face Image Generation. 299-315
- Rui Zhu, Xingyi Yang, Yannick Hold-Geoffroy, Federico Perazzi, Jonathan Eisenmann, Kalyan Sunkavalli, Manmohan Chandraker: Single View Metrology in the Wild. 316-333
- Chien-Yi Chang, De-An Huang, Danfei Xu, Ehsan Adeli, Li Fei-Fei, Juan Carlos Niebles: Procedure Planning in Instructional Videos. 334-350
- Ningning Ma, Xiangyu Zhang, Jian Sun: Funnel Activation for Visual Recognition. 351-368
- Shuyang Gu, Jianmin Bao, Dong Chen, Fang Wen: GIQA: Generated Image Quality Assessment. 369-385
- Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, Marcus Rohrbach: Adversarial Continual Learning. 386-402
- Peng Su, Kun Wang, Xingyu Zeng, Shixiang Tang, Dapeng Chen, Di Qiu, Xiaogang Wang: Adapting Object Detectors with Conditional Domain Normalization. 403-419
- Tianjiao Li, Jun Liu, Wei Zhang, Lingyu Duan: HARD-Net: Hardness-AwaRe Discrimination Network for 3D Early Activity Prediction. 420-436
- Lokender Tiwari, Pan Ji, Quoc-Huy Tran, Bingbing Zhuang, Saket Anand, Manmohan Chandraker: Pseudo RGB-D for Self-improving Monocular SLAM and Depth Prediction. 437-455
- Shengcai Liao, Ling Shao: Interpretable and Generalizable Person Re-identification with Query-Adaptive Convolution and Temporal Lifting. 456-474
- Tongyao Pang, Yuhui Quan, Hui Ji: Self-supervised Bayesian Deep Learning for Image Recovery with Applications to Compressive Sensing. 475-491
- Jian Wang, Xiang Long, Yuan Gao, Errui Ding, Shilei Wen: Graph-PCNN: Two Stage Human Pose Estimation with Graph Pose Refinement. 492-508
- Minchul Shin: Semi-supervised Learning with a Teacher-Student Network for Generalized Attribute Prediction. 509-525
- Fang Zhao, Shengcai Liao, Guo-Sen Xie, Jian Zhao, Kaihao Zhang, Ling Shao: Unsupervised Domain Adaptation with Noise Resistible Mutual-Training for Person Re-identification. 526-544
- Dahlia Urbach, Yizhak Ben-Shabat, Michael Lindenbaum: DPDist: Comparing Point Clouds Using Deep Point Cloud Distance. 545-560
- Xiaokang Chen, Kwan-Yee Lin, Jingbo Wang, Wayne Wu, Chen Qian, Hongsheng Li, Gang Zeng: Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation. 561-577
- Zhijian Liu, Zhanghao Wu, Chuang Gan, Ligeng Zhu, Song Han: DataMix: Efficient Privacy-Preserving Edge-Cloud Inference. 578-595
- Kripasindhu Sarkar, Dushyant Mehta, Weipeng Xu, Vladislav Golyanik, Christian Theobalt: Neural Re-rendering of Humans from a Single Image. 596-613
- Filippo Aleotti, Fabio Tosi, Li Zhang, Matteo Poggi, Stefano Mattoccia: Reversing the Cycle: Self-supervised Deep Stereo Through Enhanced Monocular Distillation. 614-632
- Jinjin Gu, Haoming Cai, Haoyu Chen, Xiaoxing Ye, Jimmy S. Ren, Chao Dong: PIPAL: A Large-Scale Image Quality Assessment Dataset for Perceptual Image Restoration. 633-651
- Bryan A. Plummer, Mariya I. Vasileva, Vitali Petsiuk, Kate Saenko, David A. Forsyth: Why Do These Match? Explaining the Behavior of Image Similarity Models. 652-669
- Xuanhong Chen, Bingbing Ni, Naiyuan Liu, Ziang Liu, Yiliu Jiang, Loc Truong, Qi Tian: CooGAN: A Memory-Efficient Framework for High-Resolution Facial Attribute Editing. 670-686
- Ben Saunders, Necati Cihan Camgöz, Richard Bowden: Progressive Transformers for End-to-End Sign Language Production. 687-705
- Minghui Liao, Guan Pang, Jing Huang, Tal Hassner, Xiang Bai: Mask TextSpotter v3: Segmentation Proposal Network for Robust Scene Text Spotting. 706-722
- Daniel Barath, Michal Polic, Wolfgang Förstner, Torsten Sattler, Tomás Pajdla, Zuzana Kukelova: Making Affine Correspondences Work in Camera Geometry Computation. 723-740
- Jiankang Deng, Jia Guo, Tongliang Liu, Mingming Gong, Stefanos Zafeiriou: Sub-center ArcFace: Boosting Face Recognition by Large-Scale Noisy Web Faces. 741-757
- Chuang Gan, Deng Huang, Peihao Chen, Joshua B. Tenenbaum, Antonio Torralba: Foley Music: Learning to Generate Music from Videos. 758-775
- Yonglong Tian, Dilip Krishnan, Phillip Isola: Contrastive Multiview Coding. 776-794
- Yingwei Li, Song Bai, Cihang Xie, Zhenyu Liao, Xiaohui Shen, Alan L. Yuille: Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses. 795-813
