7th HCOMP 2019: Stevenson, WA, USA
- Edith Law, Jennifer Wortman Vaughan:
Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2019, Stevenson, WA, USA, October 28-30, 2019. AAAI Press 2019, ISBN 978-1-57735-820-6
Technical Papers
- Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld, Eric Horvitz:
Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance. 2-11
- Shayan Doroudi, Ece Kamar, Emma Brunskill:
Not Everyone Writes Good Examples but Good Examples Can Come from Anywhere. 12-21
- Nikhil Garg, Lodewijk Gelauff, Sukolsak Sakshuwong, Ashish Goel:
Who Is in Your Top Three? Optimizing Learning in Elections with Many Candidates. 22-31
- Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin:
Interpretable Image Recognition with Hierarchical Prototypes. 32-40
- Shelby Heinecke, Lev Reyzin:
Crowdsourced PAC Learning under Classification Noise. 41-49
- Sowmya Karunakaran, Rashmi Ramakrishan:
Testing Stylistic Interventions to Reduce Emotional Impact of Content Moderation Workers. 50-58
- Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel J. Gershman, Finale Doshi-Velez:
Human Evaluation of Models Built for Interpretability. 59-67
- Tong Liu, Akash Venkatachalam, Pratik Sanjay Bongale, Christopher M. Homan:
Learning to Predict Population-Level Label Distributions. 68-76
- Chris Madge, Juntao Yu, Jon Chamberlain, Udo Kruschwitz, Silviu Paun, Massimo Poesio:
Progression in a Language Annotation Game with a Purpose. 77-85
- Vikram Mohanty, Kareem Abdol-Hamid, Courtney Ebersohl, Kurt Luther:
Second Opinion: Supporting Last-Mile Person Identification with Crowdsourcing and Face Recognition. 86-96
- Mahsan Nourani, Samia Kabir, Sina Mohseni, Eric D. Ragan:
The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems. 97-105
- Jahna Otterbacher, Pinar Barlas, Styliani Kleanthous, Kyriakos Kyriakou:
How Do We Talk about Other People? Group (Un)Fairness in Natural Language Image Descriptions. 106-114
- Junwon Park, Ranjay Krishna, Pranav Khadpe, Li Fei-Fei, Michael S. Bernstein:
AI-Based Request Augmentation to Increase Crowdsourcing Participation. 115-124
- Andi Peng, Besmira Nushi, Emre Kiciman, Kori Inkpen, Siddharth Suri, Ece Kamar:
What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring. 125-134
- Rehab K. Qarout, Alessandro Checco, Gianluca Demartini, Kalina Bontcheva:
Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks. 135-143
- Jorge Ramírez, Marcos Báez, Fabio Casati, Boualem Benatallah:
Understanding the Impact of Text Highlighting in Crowdsourcing Tasks. 144-152
- Arijit Ray, Yi Yao, Rakesh Kumar, Ajay Divakaran, Giedrius Burachas:
Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval. 153-161
- Yan Shvartzshnaider, Noah J. Apthorpe, Nick Feamster, Helen Nissenbaum:
Going against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis. 162-170
- Camelia Simoiu, Chiraag Sumanth, Alok Shankar Mysore, Sharad Goel:
Studying the "Wisdom of Crowds" at Scale. 171-179
- Colin Vandenhof:
A Hybrid Approach to Identifying Unknown Unknowns of Predictive Models. 180-187
- Andrew T. Walter, Benjamin Boskin, Seth Cooper, Panagiotis Manolios:
Gamification of Loop-Invariant Discovery from Code. 188-196
- Mark E. Whiting, Grant Hugh, Michael S. Bernstein:
Fair Work: Crowd Work Minimum Wage with One Line of Code. 197-206