Shizhe Chen
2020 – today
2024
- [c62] Shizhe Chen, Ricardo Garcia, Ivan Laptev, Cordelia Schmid: SUGAR: Pre-training 3D Visual Representations for Robotics. CVPR 2024: 18049-18060
- [i41] Shizhe Chen, Ricardo Garcia, Ivan Laptev, Cordelia Schmid: SUGAR: Pre-training 3D Visual Representations for Robotics. CoRR abs/2404.01491 (2024)
- [i40] Qingrong He, Kejun Lin, Shizhe Chen, Anwen Hu, Qin Jin: Think-Program-reCtify: 3D Situated Reasoning with Large Language Models. CoRR abs/2404.14705 (2024)
- [i39] Zerui Chen, Shizhe Chen, Cordelia Schmid, Ivan Laptev: ViViDex: Learning Vision-based Dexterous Manipulation from Human Videos. CoRR abs/2404.15709 (2024)
- [i38] Shiyu Li, Yang Tang, Shizhe Chen, Xi Chen: Conan-embedding: General Text Embedding with More and Better Negative Samples. CoRR abs/2408.15710 (2024)
- [i37] Ricardo Garcia, Shizhe Chen, Cordelia Schmid: Towards Generalizable Vision-Language Robotic Manipulation: A Benchmark and LLM-guided 3D Policy. CoRR abs/2410.01345 (2024)
2023
- [j6] Chao Du, Shuang Zhao, Qiuyu Wang, Bin Jia, Mingzhe Zhao, Li Zhang, Liqin Cui, Shizhe Chen, Xiao Deng: A Seawater Salinity Sensor Based on Optimized Long Period Fiber Grating in the Dispersion Turning Point. Sensors 23(9): 4435 (2023)
- [c61] Anwen Hu, Shizhe Chen, Liang Zhang, Qin Jin: InfoMetIC: An Informative Metric for Reference-free Image Caption Evaluation. ACL (1) 2023: 3171-3185
- [c60] Shizhe Chen, Ricardo Garcia, Cordelia Schmid, Ivan Laptev: PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation. CoRL 2023: 1761-1781
- [c59] Zerui Chen, Shizhe Chen, Cordelia Schmid, Ivan Laptev: gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object Reconstruction. CVPR 2023: 12890-12900
- [c58] Anwen Hu, Shizhe Chen, Liang Zhang, Qin Jin: Explore and Tell: Embodied Visual Captioning in 3D Environments. ICCV 2023: 2482-2491
- [c57] Ricardo Garcia, Robin Strudel, Shizhe Chen, Etienne Arlaud, Ivan Laptev, Cordelia Schmid: Robust Visual Sim-to-Real Transfer for Robotic Manipulation. IROS 2023: 992-999
- [c56] Shizhe Chen, Thomas Chabal, Ivan Laptev, Cordelia Schmid: Object Goal Navigation with Recursive Implicit Maps. IROS 2023: 7089-7096
- [c55] Xu Gu, Yuchong Sun, Feiyue Ni, Shizhe Chen, Xihua Wang, Ruihua Song, Boyuan Li, Xiang Cao: TeViS: Translating Text Synopses to Video Storyboards. ACM Multimedia 2023: 4968-4979
- [i36] Xu Gu, Yuchong Sun, Feiyue Ni, Shizhe Chen, Ruihua Song, Boyuan Li, Xiang Cao: Translating Text Synopses to Video Storyboards. CoRR abs/2301.00135 (2023)
- [i35] Zerui Chen, Shizhe Chen, Cordelia Schmid, Ivan Laptev: gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object Reconstruction. CoRR abs/2304.11970 (2023)
- [i34] Anwen Hu, Shizhe Chen, Liang Zhang, Qin Jin: InfoMetIC: An Informative Metric for Reference-free Image Caption Evaluation. CoRR abs/2305.06002 (2023)
- [i33] Ricardo Garcia, Robin Strudel, Shizhe Chen, Etienne Arlaud, Ivan Laptev, Cordelia Schmid: Robust Visual Sim-to-Real Transfer for Robotic Manipulation. CoRR abs/2307.15320 (2023)
- [i32] Shizhe Chen, Thomas Chabal, Ivan Laptev, Cordelia Schmid: Object Goal Navigation with Recursive Implicit Maps. CoRR abs/2308.05602 (2023)
- [i31] Anwen Hu, Shizhe Chen, Liang Zhang, Qin Jin: Explore and Tell: Embodied Visual Captioning in 3D Environments. CoRR abs/2308.10447 (2023)
- [i30] Shizhe Chen, Ricardo Garcia, Cordelia Schmid, Ivan Laptev: PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation. CoRR abs/2309.15596 (2023)
2022
- [j5] Yuqing Song, Shizhe Chen, Qin Jin, Wei Luo, Jun Xie, Fei Huang: Enhancing Neural Machine Translation With Dual-Side Multimodal Awareness. IEEE Trans. Multim. 24: 3013-3024 (2022)
- [c54] Pierre-Louis Guhur, Shizhe Chen, Ricardo Garcia, Makarand Tapaswi, Ivan Laptev, Cordelia Schmid: Instruction-driven history-aware policies for robotic manipulations. CoRL 2022: 175-187
- [c53] Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev: Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation. CVPR 2022: 16516-16526
- [c52] Sipeng Zheng, Shizhe Chen, Qin Jin: VRDFormer: End-to-End Video Visual Relation Detection with Transformers. CVPR 2022: 18814-18824
- [c51] Sipeng Zheng, Shizhe Chen, Qin Jin: Few-Shot Action Recognition with Hierarchical Matching and Contrastive Learning. ECCV (4) 2022: 297-313
- [c50] Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev: Learning from Unlabeled 3D Environments for Vision-and-Language Navigation. ECCV (39) 2022: 638-655
- [c49] Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev: Language Conditioned Spatial Relation Reasoning for 3D Object Grounding. NeurIPS 2022
- [i29] Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev: Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation. CoRR abs/2202.11742 (2022)
- [i28] Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev: Learning from Unlabeled 3D Environments for Vision-and-Language Navigation. CoRR abs/2208.11781 (2022)
- [i27] Pierre-Louis Guhur, Shizhe Chen, Ricardo Garcia, Makarand Tapaswi, Ivan Laptev, Cordelia Schmid: Instruction-driven history-aware policies for robotic manipulations. CoRR abs/2209.04899 (2022)
- [i26] Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev: Language Conditioned Spatial Relation Reasoning for 3D Object Grounding. CoRR abs/2211.09646 (2022)
2021
- [j4] Miaomiao Song, Shixuan Liu, Wenqing Li, Shizhe Chen, Wenwen Li, Keke Zhang, Dingfeng Yu, Lin Liu, Xiaoyan Wang: A Continuous Space Location Model and a Particle Swarm Optimization-Based Heuristic Algorithm for Maximizing the Allocation of Ocean-Moored Buoys. IEEE Access 9: 32249-32262 (2021)
- [j3] Hui Chai, Shixuan Liu, Xianglong Yang, Xiaozheng Wan, Shizhe Chen, Jiming Zhang, Yushang Wu, Liang Zheng, Qiang Zhao: Development of Capacitive Rain Gauge for Marine Environment. J. Sensors 2021: 6639668:1-6639668:8 (2021)
- [c48] Chaorui Deng, Shizhe Chen, Da Chen, Yuan He, Qi Wu: Sketch, Ground, and Refine: Top-Down Dense Video Captioning. CVPR 2021: 234-243
- [c47] Yuqing Song, Shizhe Chen, Qin Jin: Towards Diverse Paragraph Captioning for Untrimmed Videos. CVPR 2021: 11245-11254
- [c46] Pierre-Louis Guhur, Makarand Tapaswi, Shizhe Chen, Ivan Laptev, Cordelia Schmid: Airbert: In-domain Pretraining for Vision-and-Language Navigation. ICCV 2021: 1614-1623
- [c45] Shizhe Chen, Dong Huang: Elaborative Rehearsal for Zero-shot Action Recognition. ICCV 2021: 13618-13627
- [c44] Bei Liu, Jianlong Fu, Shizhe Chen, Qin Jin, Alexander G. Hauptmann, Yong Rui: MMPT'21: International Joint Workshop on Multi-Modal Pre-Training for Multimedia Understanding. ICMR 2021: 694-695
- [c43] Yuqing Song, Shizhe Chen, Qin Jin, Wei Luo, Jun Xie, Fei Huang: Product-oriented Machine Translation with Cross-modal Cross-lingual Pre-training. ACM Multimedia 2021: 2843-2852
- [c42] Anwen Hu, Shizhe Chen, Qin Jin: Question-controlled Text-aware Image Captioning. ACM Multimedia 2021: 3097-3105
- [c41] Shizhe Chen, Pierre-Louis Guhur, Cordelia Schmid, Ivan Laptev: History Aware Multimodal Transformer for Vision-and-Language Navigation. NeurIPS 2021: 5834-5847
- [e1] Bei Liu, Jianlong Fu, Shizhe Chen, Qin Jin, Alexander G. Hauptmann, Yong Rui: MMPT@ICMR2021: Proceedings of the 2021 Workshop on Multi-Modal Pre-Training for Multimedia Understanding, Taipei, Taiwan, August 21, 2021. ACM 2021, ISBN 978-1-4503-8530-5 [contents]
- [i25] Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, Zongzheng Xi, Yueqian Yang, Anwen Hu, Jinming Zhao, Ruichen Li, Yida Zhao, Liang Zhang, Yuqing Song, Xin Hong, Wanqing Cui, Dan Yang Hou, Yingyan Li, Junyi Li, Peiyu Liu, Zheng Gong, Chuhao Jin, Yuchong Sun, Shizhe Chen, Zhiwu Lu, Zhicheng Dou, Qin Jin, Yanyan Lan, Wayne Xin Zhao, Ruihua Song, Ji-Rong Wen: WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training. CoRR abs/2103.06561 (2021)
- [i24] Yuqing Song, Shizhe Chen, Qin Jin: Towards Diverse Paragraph Captioning for Untrimmed Videos. CoRR abs/2105.14477 (2021)
- [i23] Ludan Ruan, Jieting Chen, Yuqing Song, Shizhe Chen, Qin Jin: Team RUC_AIM3 Technical Report at ActivityNet 2021: Entities Object Localization. CoRR abs/2106.06138 (2021)
- [i22] Anwen Hu, Shizhe Chen, Qin Jin: ICECAP: Information Concentrated Entity-aware Image Captioning. CoRR abs/2108.02050 (2021)
- [i21] Anwen Hu, Shizhe Chen, Qin Jin: Question-controlled Text-aware Image Captioning. CoRR abs/2108.02059 (2021)
- [i20] Shizhe Chen, Dong Huang: Elaborative Rehearsal for Zero-shot Action Recognition. CoRR abs/2108.02833 (2021)
- [i19] Pierre-Louis Guhur, Makarand Tapaswi, Shizhe Chen, Ivan Laptev, Cordelia Schmid: Airbert: In-domain Pretraining for Vision-and-Language Navigation. CoRR abs/2108.09105 (2021)
- [i18] Yuqing Song, Shizhe Chen, Qin Jin, Wei Luo, Jun Xie, Fei Huang: Product-oriented Machine Translation with Cross-modal Cross-lingual Pre-training. CoRR abs/2108.11119 (2021)
- [i17] Shizhe Chen, Pierre-Louis Guhur, Cordelia Schmid, Ivan Laptev: History Aware Multimodal Transformer for Vision-and-Language Navigation. CoRR abs/2110.13309 (2021)
2020
- [c40] Shizhe Chen, Qin Jin, Peng Wang, Qi Wu: Say As You Wish: Fine-Grained Control of Image Caption Generation With Abstract Scene Graphs. CVPR 2020: 9959-9968
- [c39] Shizhe Chen, Yida Zhao, Qin Jin, Qi Wu: Fine-Grained Video-Text Retrieval With Hierarchical Graph Reasoning. CVPR 2020: 10635-10644
- [c38] Sipeng Zheng, Shizhe Chen, Qin Jin: Skeleton-Based Interactive Graph Network For Human Object Interaction Detection. ICME 2020: 1-6
- [c37] Anwen Hu, Shizhe Chen, Qin Jin: ICECAP: Information Concentrated Entity-aware Image Captioning. ACM Multimedia 2020: 4217-4225
- [c36] Yida Zhao, Yuqing Song, Shizhe Chen, Qin Jin: RUC_AIM3 at TRECVID 2020: Ad-hoc Video Search & Video to Text Description. TRECVID 2020
- [i16] Shizhe Chen, Qin Jin, Peng Wang, Qi Wu: Say As You Wish: Fine-grained Control of Image Caption Generation with Abstract Scene Graphs. CoRR abs/2003.00387 (2020)
- [i15] Shizhe Chen, Yida Zhao, Qin Jin, Qi Wu: Fine-grained Video-Text Retrieval with Hierarchical Graph Reasoning. CoRR abs/2003.00392 (2020)
- [i14] Shizhe Chen, Weiying Wang, Ludan Ruan, Linli Yao, Qin Jin: YouMakeup VQA Challenge: Towards Fine-grained Action Understanding in Domain-Specific Videos. CoRR abs/2004.05573 (2020)
- [i13] Yuqing Song, Shizhe Chen, Yida Zhao, Qin Jin: Team RUC_AIM3 Technical Report at Activitynet 2020 Task 2: Exploring Sequential Events Detection for Dense Video Captioning. CoRR abs/2006.07896 (2020)
- [i12] Yinzheng Gu, Yihan Pan, Shizhe Chen: 2nd Place Solution to ECCV 2020 VIPriors Object Detection Challenge. CoRR abs/2007.08849 (2020)
- [i11] Samuel Albanie, Yang Liu, Arsha Nagrani, Antoine Miech, Ernesto Coto, Ivan Laptev, Rahul Sukthankar, Bernard Ghanem, Andrew Zisserman, Valentin Gabeur, Chen Sun, Karteek Alahari, Cordelia Schmid, Shizhe Chen, Yida Zhao, Qin Jin, Kaixu Cui, Hui Liu, Chen Wang, Yudong Jiang, Xiaoshuai Hao: The End-of-End-to-End: A Video Understanding Pentathlon Challenge (2020). CoRR abs/2008.00744 (2020)
2010 – 2019
2019
- [j2] Shizhe Chen, Qin Jin, Jia Chen, Alexander G. Hauptmann: Generating Video Descriptions With Latent Topic Guidance. IEEE Trans. Multim. 21(9): 2407-2418 (2019)
- [c35] Shizhe Chen, Qin Jin, Alexander G. Hauptmann: Unsupervised Bilingual Lexicon Induction from Mono-Lingual Multimodal Data. AAAI 2019: 8207-8214
- [c34] Jingjun Liang, Shizhe Chen, Qin Jin: Semi-supervised Multimodal Emotion Recognition with Improved Wasserstein GANs. APSIPA 2019: 695-703
- [c33] Weiying Wang, Yongcheng Wang, Shizhe Chen, Qin Jin: YouMakeup: A Large-Scale Domain-Specific Multimodal Dataset for Fine-Grained Semantic Comprehension. EMNLP/IJCNLP (1) 2019: 5132-5142
- [c32] Jingjun Liang, Shizhe Chen, Jinming Zhao, Qin Jin, Haibo Liu, Li Lu: Cross-culture Multimodal Emotion Recognition with Adversarial Learning. ICASSP 2019: 4000-4004
- [c31] Shizhe Chen, Qin Jin, Jianlong Fu: From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots. IJCAI 2019: 4932-4938
- [c30] Jinming Zhao, Shizhe Chen, Jingjun Liang, Qin Jin: Speech Emotion Recognition in Dyadic Dialogues with Attentive Interaction Modeling. INTERSPEECH 2019: 1671-1675
- [c29] Jinming Zhao, Ruichen Li, Jingjun Liang, Shizhe Chen, Qin Jin: Adversarial Domain Adaption for Multi-Cultural Dimensional Emotion Recognition in Dyadic Interactions. AVEC@MM 2019: 37-45
- [c28] Sipeng Zheng, Shizhe Chen, Qin Jin: Visual Relation Detection with Multi-Level Attention. ACM Multimedia 2019: 121-129
- [c27] Yuqing Song, Shizhe Chen, Yida Zhao, Qin Jin: Unpaired Cross-lingual Image Caption Generation with Self-Supervised Rewards. ACM Multimedia 2019: 784-792
- [c26] Shizhe Chen, Bei Liu, Jianlong Fu, Ruihua Song, Qin Jin, Pingping Lin, Xiaoyu Qi, Chunting Wang, Jin Zhou: Neural Storyboard Artist: Visualizing Stories with Coherent Image Sequences. ACM Multimedia 2019: 2236-2244
- [c25] Sipeng Zheng, Xiangyu Chen, Shizhe Chen, Qin Jin: Relation Understanding in Videos. ACM Multimedia 2019: 2662-2666
- [c24] Yuqing Song, Yida Zhao, Shizhe Chen, Qin Jin: RUC_AIM3 at TRECVID 2019: Video to Text. TRECVID 2019
- [i10] Shizhe Chen, Qin Jin, Alexander G. Hauptmann: Unsupervised Bilingual Lexicon Induction from Mono-lingual Multimodal Data. CoRR abs/1906.00378 (2019)
- [i9] Shizhe Chen, Qin Jin, Jianlong Fu: From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots. CoRR abs/1906.00872 (2019)
- [i8] Shizhe Chen, Yuqing Song, Yida Zhao, Qin Jin, Zhaoyang Zeng, Bei Liu, Jianlong Fu, Alexander G. Hauptmann: Activitynet 2019 Task 3: Exploring Contexts for Dense Captioning Events in Videos. CoRR abs/1907.05092 (2019)
- [i7] Yuqing Song, Shizhe Chen, Yida Zhao, Qin Jin: Unpaired Cross-lingual Image Caption Generation with Self-Supervised Rewards. CoRR abs/1908.05407 (2019)
- [i6] Shizhe Chen, Yida Zhao, Yuqing Song, Qin Jin, Qi Wu: Integrating Temporal and Spatial Attentions for VATEX Video Captioning Challenge 2019. CoRR abs/1910.06737 (2019)
- [i5] Shizhe Chen, Bei Liu, Jianlong Fu, Ruihua Song, Qin Jin, Pingping Lin, Xiaoyu Qi, Chunting Wang, Jin Zhou: Neural Storyboard Artist: Visualizing Stories with Coherent Image Sequences. CoRR abs/1911.10460 (2019)
2018
- [c23] Shuai Wang, Weiying Wang, Shizhe Chen, Qin Jin: RUC at MediaEval 2018: Visual and Textual Features Exploration for Predicting Media Memorability. MediaEval 2018
- [c22] Shizhe Chen, Jia Chen, Qin Jin, Alexander G. Hauptmann: Class-aware Self-Attention for Audio Event Recognition. ICMR 2018: 28-36
- [c21] Jinming Zhao, Ruichen Li, Shizhe Chen, Qin Jin: Multi-modal Multi-cultural Dimensional Continues Emotion Recognition in Dyadic Interactions. AVEC@MM 2018: 65-72
- [c20] Xiaozhu Lin, Qin Jin, Shizhe Chen, Yuqing Song, Yida Zhao: iMakeup: Makeup Instructional Video Dataset for Fine-Grained Dense Video Captioning. PCM (3) 2018: 78-88
- [c19] Jinming Zhao, Shizhe Chen, Qin Jin: Multimodal Dimensional and Continuous Emotion Recognition in Dyadic Video Interactions. PCM (1) 2018: 301-312
- [c18] Jia Chen, Shizhe Chen, Qin Jin, Alexander G. Hauptmann, Po-Yao Huang, Junwei Liang, Vaibhav, Xiaojun Chang, Jiang Liu, Ting-Yao Hu, Wenhe Liu, Wei Ke, Wayner Barrios, Haroon Idrees, Donghyun Yoo, Yaser Sheikh, Ruslan Salakhutdinov, Kris Kitani, Dong Huang: Informedia @ TRECVID 2018: Ad-hoc Video Search, Video to Text Description, Activities in Extended video. TRECVID 2018
- [i4] Shizhe Chen, Yuqing Song, Yida Zhao, Jiarong Qiu, Qin Jin, Alexander G. Hauptmann: RUC+CMU: System Report for Dense Captioning Events in Videos. CoRR abs/1806.08854 (2018)
2017
- [c17] Xinrui Li, Shizhe Chen, Qin Jin: Facial Action Units Detection with Multi-Features and -AUs Fusion. FG 2017: 860-865
- [c16] Shuai Wang, Wenxuan Wang, Jinming Zhao, Shizhe Chen, Qin Jin, Shilei Zhang, Yong Qin: Emotion recognition with multimodal features and temporal models. ICMI 2017: 598-602
- [c15] Shuai Wang, Shizhe Chen, Jinming Zhao, Wenxuan Wang, Qin Jin: RUC at MediaEval 2017: Predicting Media Interestingness Task. MediaEval 2017
- [c14] Shizhe Chen, Jia Chen, Qin Jin: Generating Video Descriptions with Topic Guidance. ICMR 2017: 5-13
- [c13] Shizhe Chen, Qin Jin, Jinming Zhao, Shuai Wang: Multimodal Multi-task Learning for Dimensional and Continuous Emotion Recognition. AVEC@ACM Multimedia 2017: 19-26
- [c12] Shizhe Chen, Jia Chen, Qin Jin, Alexander G. Hauptmann: Video Captioning with Guidance of Multimodal Latent Topics. ACM Multimedia 2017: 1838-1846
- [c11] Qin Jin, Shizhe Chen, Jia Chen, Alexander G. Hauptmann: Knowing Yourself: Improving Video Caption via In-depth Recap. ACM Multimedia 2017: 1906-1911
- [c10] Jia Chen, Junwei Liang, Jiang Liu, Shizhe Chen, Chenqiang Gao, Qin Jin, Alexander G. Hauptmann: Informedia @ TRECVID 2017. TRECVID 2017
- [i3] Shizhe Chen, Jia Chen, Qin Jin: Generating Video Descriptions with Topic Guidance. CoRR abs/1708.09666 (2017)
- [i2] Shizhe Chen, Jia Chen, Qin Jin, Alexander G. Hauptmann: Video Captioning with Guidance of Multimodal Latent Topics. CoRR abs/1708.09667 (2017)
- [i1] Shizhe Chen, Qin Jin: Multi-modal Conditional Attention Fusion for Dimensional Emotion Prediction. CoRR abs/1709.02251 (2017)
2016
- [c9] Shizhe Chen, Yujie Dian, Xinrui Li, Xiaozhu Lin, Qin Jin, Haibo Liu, Li Lu: Emotion Recognition in Videos via Fusing Multimodal Features. CCPR (2) 2016: 632-644
- [c8] Shizhe Chen, Xinrui Li, Qin Jin, Shilei Zhang, Yong Qin: Video emotion recognition in the wild based on fusion of multimodal features. ICMI 2016: 494-500
- [c7] Shizhe Chen, Yujie Dian, Qin Jin: RUC at MediaEval 2016: Predicting Media Interestingness Task. MediaEval 2016
- [c6] Shizhe Chen, Qin Jin: RUC at MediaEval 2016 Emotional Impact of Movies Task: Fusion of Multimodal Features. MediaEval 2016
- [c5] Shizhe Chen, Qin Jin: Multi-modal Conditional Attention Fusion for Dimensional Emotion Prediction. ACM Multimedia 2016: 571-575
- [c4] Qin Jin, Jia Chen, Shizhe Chen, Yifan Xiong, Alexander G. Hauptmann: Describing Videos using Multi-modal Fusion. ACM Multimedia 2016: 1087-1091
2015
- [j1] Qin Jin, Shizhe Chen, Xirong Li, Gang Yang, Jieping Xu: Speech Emotion Recognition Based on Acoustic Features (基于声学特征的语言情感识别). Computer Science (计算机科学) 42(9): 24-28 (2015)
- [c3] Qin Jin, Chengxin Li, Shizhe Chen, Huimin Wu: Speech emotion recognition with acoustic and lexical features. ICASSP 2015: 4749-4753
- [c2] Shizhe Chen, Qin Jin: Multi-modal Dimensional Emotion Recognition using Recurrent Neural Networks. AVEC@ACM Multimedia 2015: 49-56
2014
- [c1] Shizhe Chen, Qin Jin, Xirong Li, Gang Yang, Jieping Xu: Speech emotion classification using acoustic features. ISCSLP 2014: 579-583