FAccT 2022: Seoul, Korea
- FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 21 - 24, 2022. ACM 2022, ISBN 978-1-4503-9352-2
- Madalina Vlasceanu, Miroslav Dudík, Ida Momennejad: Interdisciplinarity, Gender Diversity, and Network Structure Predict the Centrality of AI Organizations. 1-10
- Junyuan Hong, Zhangyang Wang, Jiayu Zhou: Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent. 11-35
- Adriane Chapman, Philip Grylls, Pamela Ugwudike, David Gammack, Jacqui Ayling: A Data-driven analysis of the interplay between Criminological theory and predictive policing algorithms. 36-45
- Garfield Benjamin: #FuckTheAlgorithm: algorithmic imaginaries and political resistance. 46-57
- Lukas Struppek, Dominik Hintersdorf, Daniel Neider, Kristian Kersting: Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash. 58-69
- Priya Goyal, Adriana Romero-Soriano, Caner Hazirbas, Levent Sagun, Nicolas Usunier: Fairness Indicators for Systematic Assessments of Visual Feature Extractors. 70-88
- Katharina Simbeck: FAccT-Check on AI regulation: Systematic Evaluation of AI Regulation on the Example of the Legislation on the Use of AI in the Public Sector in the German Federal State of Schleswig-Holstein. 89-96
- Chiara Longoni, Andrey Fradkin, Luca Cian, Gordon Pennycook: News from Generative Artificial Intelligence Is Believed Less. 97-106
- Nicholas Asher, Julie Hunter: When learning becomes impossible. 107-116
- Xiuling Wang, Wendy Hui Wang: Providing Item-side Individual Fairness for Deep Recommender Systems. 117-127
- Severin Engelmann, Chiara Ullstein, Orestis Papakyriakopoulos, Jens Grossklags: What People Think AI Should Infer From Faces. 128-141
- Afroditi Papadaki, Natalia Martínez, Martín Bertrán, Guillermo Sapiro, Miguel R. D. Rodrigues: Minimax Demographic Group Fairness in Federated Learning. 142-159
- Anubha Singh, Tina Park: Automating Care: Online Food Delivery Work During the CoVID-19 Crisis in India. 160-172
- Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, Michelle Bao: The Values Encoded in Machine Learning Research. 173-184
- Henry Fraser, Rhyle Simcock, Aaron J. Snoswell: AI Opacity and Explainability in Tort Litigation. 185-196
- Ronen Gradwohl, Moshe Tennenholtz: Pareto-Improving Data-Sharing. 197-198
- Alexandra Sasha Luccioni, Frances Corry, Hamsini Sridharan, Mike Ananny, Jason Schultz, Kate Crawford: A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication. 199-212
- Nathan Kallus: Treatment Effect Risk: Bounds and Inference. 213
- Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac, Julia Haas, Sean Legassick, Geoffrey Irving, Iason Gabriel: Taxonomy of Risks posed by Language Models. 214-229
- Wiebke Toussaint Hutiri, Aaron Yi Ding: Bias in Automated Speaker Recognition. 230-247
- Andrew Bell, Ian Solano-Kamaiko, Oded Nov, Julia Stoyanovich: It's Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy. 248-266
- You Jeen Ha: South Korean Public Value Coproduction Towards 'AI for Humanity': A Synergy of Sociocultural Norms and Multistakeholder Deliberation in Bridging the Design and Implementation of National AI Ethics Guidelines. 267-277
- David Alexander Tedjopurnomo, Zhifeng Bao, Farhana Murtaza Choudhury, Hui Luo, A. Kai Qin: Equitable Public Bus Network Optimization for Social Good: A Case Study of Singapore. 278-288
- Sandipan Sikdar, Florian Lemmerich, Markus Strohmaier: GetFair: Generalized Fairness Tuning of Classification Models. 289-299
- Han-Yin Huang, Cynthia C. S. Liem: Social Inclusion in Curated Contexts: Insights from Museum Practices. 300-309
- Maurice Jakesch, Zana Buçinca, Saleema Amershi, Alexandra Olteanu: How Different Groups Prioritize Ethical Values for Responsible AI. 310-323
- Angelina Wang, Solon Barocas, Kristen Laird, Hanna M. Wallach: Measuring Representational Harms in Image Captioning. 324-335
- Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky: Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation. 336-349
- Jonathan Roth, Guillaume Saint-Jacques, YinYin Yu: An Outcome Test of Discrimination for Ranked Lists. 350-356
- Smitha Milli, Luca Belli, Moritz Hardt: Causal Inference Struggles with Agency on Online Platforms. 357-365
- Mikaela Meyer, Aaron Horowitz, Erica Marshall, Kristian Lum: Flipping the Script on Criminal Justice Risk Assessment: An actuarial model for assessing the risk the federal sentencing system poses to defendants. 366-378
- Kristian Lum, Yunfeng Zhang, Amanda Bower: De-biasing "bias" measurement. 379-389
- Trystan S. Goetze: Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement. 390-400
- Benjamin Laufer, Sameer Jain, A. Feder Cooper, Jon M. Kleinberg, Hoda Heidari: Four Years of FAccT: A Reflexive, Mixed-Methods Analysis of Research Contributions, Shortcomings, and Future Prospects. 401-426
- Anamaria Crisan, Margaret Drouhard, Jesse Vig, Nazneen Rajani: Interactive Model Cards: A Human-Centered Approach to Model Documentation. 427-439
- Hong Shen, Leijie Wang, Wesley H. Deng, Ciell Brusse, Ronald Velgersdijk, Haiyi Zhu: The Model Card Authoring Toolkit: Toward Community-centered, Deliberation-driven AI Design. 440-451
- William Boag, Harini Suresh, Bianca Lepe, Catherine D'Ignazio: Tech Worker Organizing for Power and Accountability. 452-463
- Tanya Chowdhury, Razieh Rahimi, James Allan: Equi-explanation Maps: Concise and Informative Global Summary Explanations. 464-472
- Wesley Hanwen Deng, Manish Nagireddy, Michelle Seng Ah Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu: Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits. 473-484
- Youjin Kong: Are "Intersectionally Fair" AI Algorithms Really Fair to Women of Color? A Philosophical Analysis. 485-494
- Marilyn Zhang: Affirmative Algorithms: Relational Equality as Algorithmic Fairness. 495-507
- Konrad Kollnig, Anastasia Shuba, Max Van Kleek, Reuben Binns, Nigel Shadbolt: Goodbye Tracking? Impact of iOS App Tracking Transparency and Privacy Labels. 508-520
- Nina Markl: Language variation and algorithmic bias: understanding algorithmic bias in British English automatic speech recognition. 521-534
- Mireia Yurrita, Dave Murray-Rust, Agathe Balayn, Alessandro Bozzon: Towards a multi-stakeholder value-based assessment framework for algorithmic systems. 535-563
- Serena Midha, Max L. Wilson, Sarah Sharples: Ethical Concerns and Perceptions of Consumer Neurotechnology from Lived Experiences of Mental Workload Tracking. 564-573
- Mingzi Niu, Sampath Kannan, Aaron Roth, Rakesh Vohra: Best vs. All: Equity and Accuracy of Standardized Test Score Reporting. 574-586
- Jessie J. Smith, Saleema Amershi, Solon Barocas, Hanna M. Wallach, Jennifer Wortman Vaughan: REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research. 587-597
- Joseph Donia: Normative Logics of Algorithmic Accountability. 598
- Anay Mehrotra, Bary S. R. Pradelski, Nisheeth K. Vishnoi: Selection in the Presence of Implicit Bias: The Advantage of Intersectional Constraints. 599-609
- Hortense Fong, Vineet Kumar, Anay Mehrotra, Nisheeth K. Vishnoi: Fairness for AUC via Feature Augmentation. 610
- Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu: Human Interpretation of Saliency-based Explanation Over Text. 611-636
- Avijit Ghosh, Matthew Jagielski, Christo Wilson: Subverting Fair Image Search with Generative Adversarial Perturbations. 637-650
- Jad Salem, Deven R. Desai, Swati Gupta: Don't let Ricci v. DeStefano Hold You Back: A Bias-Aware Legal Solution to the Hiring Paradox. 651-666
- Harini Suresh, Rajiv Movva, Amelia Lee Dogan, Rahul Bhargava, Isadora Cruxen, Angeles Martinez Cuba, Guilia Taurino, Wonyoung So, Catherine D'Ignazio: Towards Intersectional Feminist and Participatory ML: A Case Study in Supporting Feminicide Counterdata Collection. 667-678
- Chris Norval, Kristin Cornelius, Jennifer Cobbe, Jatinder Singh: Disclosure by Design: Designing information disclosures to support meaningful transparency and accountability. 679-690
- Katrina Geddes: The Death of the Legal Subject: How Predictive Algorithms Are (Re)constructing Legal Subjectivity. 691-701
- Harmanpreet Kaur, Eytan Adar, Eric Gilbert, Cliff Lampe: Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory. 702-714
- Preetam Nandy, Cyrus DiCiccio, Divya Venugopalan, Heloise Logan, Kinjal Basu, Noureddine El Karoui: Achieving Fairness via Post-Processing in Web-Scale Recommender Systems. 715-725
- A. Feder Cooper, Gili Vidan: Making the Unaccountable Internet: The Changing Meaning of Accounting in the Early ARPANET. 726-742
- Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, Matthew C. Gombolay: Robots Enact Malignant Stereotypes. 743-756
- Wanrong Zhang, Olga Ohrimenko, Rachel Cummings: Attribute Privacy: Framework and Mechanisms. 757-766
- Aaron Rieke, Vincent Southerland, Dan Svirsky, Mingwei Hsu: Imperfect Inferences: A Practical Assessment. 767-777
- Danish Contractor, Daniel McDuff, Julia Katherine Haines, Jenny Lee, Christopher Hines, Brent J. Hecht, Nicholas Vincent, Hanlin Li: Behavioral Use Licensing for Responsible AI. 778-788
- Camille Harris, Matan Halevy, Ayanna M. Howard, Amy S. Bruckman, Diyi Yang: Exploring the Role of Grammar and Word Choice in Bias Toward African American English (AAE) in Hate Speech Classification. 789-798
- J. D. Zamfirescu-Pereira, Jerry Chen, Emily Wen, Allison Koenecke, Nikhil Garg, Emma Pierson: Trucks Don't Mean Trump: Diagnosing Human Error in Image Analysis. 799-813
- Zhen Dai, Yury Makarychev, Ali Vakilian: Fair Representation Clustering with Several Protected Classes. 814-823
- Leijie Wang, Haiyi Zhu: How are ML-Based Online Content Moderation Systems Actually Used? Studying Community Size, Local Activity, and Disparate Treatment. 824-838
- Divya Shanmugam, Fernando Diaz, Samira Shabanian, Michèle Finck, Asia Biega: Learning to Limit Data Collection via Scaling Laws: A Computational Interpretation for the Legal Principle of Data Minimization. 839-849
- Emily Black, Manish Raghavan, Solon Barocas: Model Multiplicity: Opportunities, Concerns, and Solutions. 850-863
- A. Feder Cooper, Emanuel Moss, Benjamin Laufer, Helen Nissenbaum: Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning. 864-876
- Maximilian T. Fischer, Simon David Hirsbrunner, Wolfgang Jentner, Matthias Miller, Daniel A. Keim, Paula Helm: Promoting Ethical Awareness in Communication Analysis: Investigating Potentials and Limits of Visual Analytics for Intelligence Applications. 877-889
- Bryce McLaughlin, Jann Spiess, Talia Gillis: On the Fairness of Machine-Assisted Human Decisions. 890
- Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike von Luxburg: Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts. 891-905
- Isabel Chien, Nina Deliu, Richard E. Turner, Adrian Weller, Sofia S. Villar, Niki Kilbertus: Multi-disciplinary fairness considerations in machine learning for clinical trials. 906-924
- Sun-ha Hong: Prediction as Extraction of Discretion. 925-934
- Mona Sloane, Janina Zakrzewski: German AI Start-Ups and "AI Ethics": Using A Social Practice Lens for Assessing and Implementing Socio-Technical Innovation. 935-947
- Abeba Birhane, Elayne Ruane, Thomas Laurent, Matthew S. Brown, Johnathan Flowers, Anthony Ventresque, Christopher L. Dancy: The Forgotten Margins of AI Ethics. 948-958
- Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, Andrew D. Selbst: The Fallacy of AI Functionality. 959-972
- Jaspar Pahl, Ines Rieger, Anna Möller, Thomas Wittenberg, Ute Schmid: Female, white, 27? Bias Evaluation on Data and Algorithms for Affect Recognition in Faces. 973-987
- Wonyoung So, Pranay Lohia, Rakesh Pimplikar, Anette E. Hosoi, Catherine D'Ignazio: Beyond Fairness: Reparative Algorithms to Address Historical Injustices of Housing Discrimination in the US. 988-1004
- Christina Lu, Jackie Kay, Kevin R. McKee: Subverting machines, fluctuating identities: Re-learning human categorization. 1005-1015
- Jakob Mökander, Margi Sheth, David S. Watson, Luciano Floridi: Models for Classifying AI Systems: the Switch, the Ladder, and the Matrix. 1016
- Rui-Jie Yew, Alice Xiang: Regulating Facial Processing Technologies: Tensions Between Legal and Technical Considerations in the Application of Illinois BIPA. 1017-1027
- Yuhao Du, Stefania Ionescu, Melanie D. Sage, Kenneth Joseph: A Data-Driven Simulation of the New York State Foster Care System. 1028-1038
- Stephen Pfohl, Yizhe Xu, Agata Foryciarz, Nikolaos Ignatiadis, Julian Genkins, Nigam Shah: Net benefit, calibration, threshold selection, and training objectives for algorithmic fairness in healthcare. 1039-1052
- Alan Mishler, Edward H. Kennedy: FADE: FAir Double Ensemble Learning for Observable and Counterfactual Outcomes. 1053
- Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni: Counterfactual Shapley Additive Explanations. 1054-1070
- Maarten Buyl, Christina Cociancig, Cristina Frattone, Nele Roekens: Tackling Algorithmic Disability Discrimination in the Hiring Process: An Ethical, Legal and Technical Analysis. 1071-1082
- David S. Watson: Rational Shapley Values. 1083-1094
- Tasfia Mashiat, Xavier Gitiaux, Huzefa Rangwala, Patrick J. Fowler, Sanmay Das: Trade-offs between Group Fairness Metrics in Societal Resource Allocation. 1095-1105
- Ira Globus-Harris, Michael Kearns, Aaron Roth: An Algorithmic Framework for Bias Bounties. 1106-1124
- Jerry Lin, Carolyn Chen, Marc Chmielewski, Samia Zaman, Brandon Fain: Auditing for Gerrymandering by Identifying Disenfranchised Individuals. 1125-1135
- Lydia R. Lucchesi, Petra M. Kuhnert, Jenny L. Davis, Lexing Xie: Smallset Timelines: A Visual Representation of Data Preprocessing Decisions. 1136-1153
- Marietjie Wilhelmina Maria Botes: Brain Computer Interfaces and Human Rights: Brave new rights for a brave new world. 1154-1161
- Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Ken Holstein, Zhiwei Steven Wu, Haiyi Zhu: Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders. 1162-1177
- Avrim Blum, Kevin Stangl, Ali Vakilian: Multi Stage Screening: Enforcing Fairness and Maximizing Efficiency in a Pre-Existing Pipeline. 1178-1193
- Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi: The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations. 1194-1206
- Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi: Multiaccurate Proxies for Downstream Fairness. 1207-1239
- Aida Rahmattalabi, Phebe Vayanos, Kathryn Dullerud, Eric Rice: Learning Resource Allocation Policies from Observational Data with an Application to Homeless Services Delivery. 1240-1256
- Q. Vera Liao, S. Shyam Sundar: Designing for Responsible Trust in AI Systems: A Communication Perspective. 1257-1268
- Robert Wolfe, Aylin Caliskan: Markedness in Visual Semantic AI. 1269-1279
- Yusuke Hirota, Yuta Nakashima, Noa Garcia: Gender and Racial Bias in Visual Question Answering Datasets. 1280-1292
- Robert Wolfe, Mahzarin R. Banaji, Aylin Caliskan: Evidence for Hypodescent in Visual Semantic AI. 1293-1304
- Upol Ehsan, Ranjit Singh, Jacob Metcalf, Mark O. Riedl: The Algorithmic Imprint. 1305-1317
- Yongjie Wang, Hangwei Qian, Chunyan Miao: DualCF: Efficient Model Extraction Attack from Counterfactual Explanations. 1318-1329
- Ruoxi Shang, K. J. Kevin Feng, Chirag Shah: Why Am I Not Seeing It? Understanding Users' Needs for Counterfactual Explanations in Everyday Recommendations. 1330-1340
- Catalina Goanta, Thales Felipe Costa Bertaglia, Adriana Iamnitchi: The Case for a Legal Compliance API for the Enforcement of the EU's Digital Services Act on Social Media Platforms. 1341-1349
- Patrick Schramowski, Christopher Tauchmann, Kristian Kersting: Can Machines Help Us Answering Question 16 in Datasheets, and In Turn Reflecting on Inappropriate Content? 1350-1361
- Riccardo Fogliato, Shreya Chappidi, Matthew P. Lungren, Paul Fisher, Diane Wilson, Michael Fitzke, Mark Parkinson, Eric Horvitz, Kori Inkpen, Besmira Nushi: Who Goes First? Influences of Human-AI Workflow on Decision Making in Clinical Imaging. 1362-1374
- Meg Young, Michael A. Katell, P. M. Krafft: Confronting Power and Corporate Capture at the FAccT Conference. 1375-1386
- Lauren Thornton, Bran Knowles, Gordon S. Blair: The Alchemy of Trust: The Creative Act of Designing Trustworthy Socio-Technical Systems. 1387-1398
- Francois Buet-Golfouse, Islam Utyagulov: Towards Fair Unsupervised Learning. 1399-1409
- Daniel Susser: Decision Time: Normative Dimensions of Algorithmic Speed. 1410-1420
- Miriam Rateike, Ayan Majumdar, Olga Mineeva, Krishna P. Gummadi, Isabel Valera: Don't Throw it Away! The Utility of Unlabeled Data in Fair Decision Making. 1421-1433
- Marie-Therese Png: At the Tensions of South and North: Critical Roles of Global South Stakeholders in AI Governance. 1434-1445
- Lindsay Poirier: Accountable Data: The Politics and Pragmatics of Disclosure Datasets. 1446-1456
- Andrea Ferrario, Michele Loi: How Explainability Contributes to Trust in AI. 1457-1466
- William Cai, Ro Encarnacion, Bobbie Chern, Sam Corbett-Davies, Miranda Bogen, Stevie Bergman, Sharad Goel: Adaptive Sampling Strategies to Construct Equitable Training Datasets. 1467-1478
- Emily Black, Hadi Elzayn, Alexandra Chouldechova, Jacob Goldin, Daniel E. Ho: Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models. 1479-1503
- Terrence Neumann, Maria De-Arteaga, Sina Fazelpour: Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms. 1504-1515
- Nic Fishman, Leif Hancox-Li: Should attention be all we need? The epistemic and ethical implications of unification in machine learning. 1516-1527
- Goda Klumbyte, Claude Draude, Alex S. Taylor: Critical Tools for Machine Learning: Working with Intersectional Critical Concepts in Machine Learning Systems Design. 1528-1541
- Sushant Agarwal, Amit Deshpande: On the Power of Randomization in Fair Classification and Representation. 1542-1551
- Abdulaziz A. Almuzaini, Chidansh A. Bhatt, David M. Pennock, Vivek K. Singh: ABCinML: Anticipatory Bias Correction in Machine Learning Applications. 1552-1560
- Kristina Irion: Algorithms Off-limits?: If digital trade law restricts access to source code of software then accountability will suffer. 1561-1570
- Sasha Costanza-Chock, Inioluwa Deborah Raji, Joy Buolamwini: Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. 1571-1583
- Roel Dobbe: System Safety and Artificial Intelligence. 1584
- Pratik S. Sachdeva, Renata Barreto, Claudia von Vacano, Chris J. Kennedy: Assessing Annotator Identity Sensitivity via Item Response Theory: A Case Study in a Hate Speech Corpus. 1585-1603
- Gauri Kambhatla, Ian Stewart, Rada Mihalcea: Surfacing Racial Stereotypes through Identity Portrayal. 1604-1615
- Jakob Schoeffer, Niklas Kühl, Yvette Machowski: "There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making. 1616-1628
- Brian Brubach, Audrey Ballarin, Heeba Nazeer: Characterizing Properties and Trade-offs of Centralized Delegation Mechanisms in Liquid Democracy. 1629-1638
- Kate Donahue, Alexandra Chouldechova, Krishnaram Kenthapadi: Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness. 1639-1656
- Shikun Zhang, Yan Shvartzshnaider, Yuanyuan Feng, Helen Nissenbaum, Norman Sadeh: Stop the Spread: A Contextual Integrity Perspective on the Appropriateness of COVID-19 Vaccination Certificates. 1657-1670
- Rebecca Ann Johnson, Simone Zhang: What is the Bureaucratic Counterfactual? Categorical versus Algorithmic Prioritization in U.S. Social Policy. 1671-1682
- C. J. Barberan, Sina Alemmohammad, Naiming Liu, Randall Balestriero, Richard G. Baraniuk: NeuroView-RNN: It's About Time. 1683-1697
- Ziwei Wu, Jingrui He: Fairness-aware Model-agnostic Positive and Unlabeled Learning. 1698-1708
- McKane Andrus, Sarah Villeneuve: Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness. 1709-1721
- J. Nathan Matias, Eric Pennington, Zenobia Chan: Testing Concerns about Technology's Behavioral Impacts with N-of-one Trials. 1722-1732
- Rediet Abebe, Moritz Hardt, Angela Jin, John Miller, Ludwig Schmidt, Rebecca Wexler: Adversarial Scrutiny of Evidentiary Statistical Software. 1733-1746
- Deep Ganguli, Danny Hernandez, Liane Lovitt, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Scott Johnston, Andy Jones, Nicholas Joseph, Jackson Kernian, Shauna Kravec, Ben Mann, Neel Nanda, Kamal Ndousse, Catherine Olsson, Daniela Amodei, Tom B. Brown, Jared Kaplan, Sam McCandlish, Christopher Olah, Dario Amodei, Jack Clark: Predictability and Surprise in Large Generative Models. 1747-1764
- Lydia Reader, Pegah Nokhiz, Cathleen Power, Neal Patwari, Suresh Venkatasubramanian, Sorelle A. Friedler: Models for understanding and quantifying feedback in societal systems. 1765-1775
- Mahima Pushkarna, Andrew Zaldivar, Oddur Kjartansson: Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI. 1776-1826
- Lesia Semenova, Cynthia Rudin, Ronald Parr: On the Existence of Simpler Machine Learning Models. 1827-1858
- Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine A. Heller, Vinodkumar Prabhakaran: Evaluation Gaps in Machine Learning Practice. 1859-1876
- Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan: Measuring the Carbon Intensity of AI in Cloud Instances. 1877-1894
- Neel Patel, Reza Shokri, Yair Zick: Model Explanations with Differential Privacy. 1895-1904
- Przemyslaw A. Grabowicz, Nicholas Perello, Aarshee Mishra: Marrying Fairness and Explainability in Supervised Learning. 1905-1916
- Divya Ramesh, Vaishnav Kameswaran, Ding Wang, Nithya Sambasivan: 'We can't find fault with a friend': The Mediation of Accountability on Instant Loan Platforms in India. 1917-1928
- Gourab K. Patro, Lorenzo Porcaro, Laura Mitchell, Qiuyue Zhang, Meike Zehlike, Nikhil Garg: Fair ranking: a critical review, challenges, and future directions. 1929-1942
- Negar Rostamzadeh, Diana Mincu, Subhrajit Roy, Andrew Smart, Lauren Wilcox, Mahima Pushkarna, Jessica Schrouff, Razvan Amironesei, Nyalleng Moorosi, Katherine A. Heller: Healthsheet: Development of a Transparency Artifact for Health Datasets. 1943-1961
- Greg d'Eon, Jason d'Eon, James R. Wright, Kevin Leyton-Brown: The Spotlight: A General Method for Discovering Systematic Errors in Deep Learning Models. 1962-1981
- Ben Gansky, Sean McDonald: CounterFAccTual: How FAccT Undermines Its Organizing Principles. 1982-1992
- Michael Carl Tschantz: What is Proxy Discrimination? 1993-2003
- Cyrus Cousins: Uncertainty and the Social Planner's Problem: Why Sample Complexity Matters. 2004-2015
- Nikita Mehandru, Samantha Robertson, Niloufar Salehi: Reliable and Safe Use of Machine Translation in Medical Settings. 2016-2025
- Michele Loi, Christoph Heitz: Is calibration a fairness requirement?: An argument from the point of view of moral philosophy and decision theory. 2026-2034
- David Gray Widder, Dawn Nafus, Laura Dabbish, James D. Herbsleb: Limits and Possibilities for "Ethical AI" in Open Source: A Study of Deepfakes. 2035-2046
- Carolyn Ashurst, Emmie Hine, Paul Sedille, Alexis Carlier: AI Ethics Statements: Analysis and Lessons Learnt from NeurIPS Broader Impact Statements. 2047-2056
- Carolyn Ashurst, Solon Barocas, Rosie Campbell, Deborah Raji: Disentangling the Components of Ethical Research in Machine Learning. 2057-2068
- Karen Boyd: Designing Up with Value-Sensitive Design: Building a Field Guide for Ethical ML Development. 2069-2082
- Hannah Devinney, Jenny Björklund, Henrik Björklund: Theories of "Gender" in NLP Bias Research. 2083-2102
- Gabriel Lima, Nina Grgic-Hlaca, Jin Keun Jeong, Meeyoung Cha: The Conflict Between Explainable and Accountable Decision-Making Algorithms. 2103-2113
- Tim Draws, David La Barbera, Michael Soprano, Kevin Roitero, Davide Ceolin, Alessandro Checco, Stefano Mizzaro: The Effects of Crowd Worker Biases in Fact-Checking Tasks. 2114-2124
- Ulrike Kuhl, André Artelt, Barbara Hammer: Keep Your Friends Close and Your Counterfactuals Closer: Improved Learning From Closest Rather Than Plausible Counterfactual Explanations in an Abstract Setting. 2125-2137
- Kristen M. Scott, Sonja Mei Wang, Milagros Miceli, Pieter Delobelle, Karolina Sztandar-Sztanderska, Bettina Berendt: Algorithmic Tools in Public Employment Services: Towards a Jobseeker-Centric Perspective. 2138-2148
- Amr Sharaf, Hal Daumé III, Renkun Ni: Promoting Fairness in Learned Models by Learning to Active Learn under Parity Constraints. 2149-2156
- Nicolas Usunier, Virginie Do, Elvis Dohmatob: Fast online ranking with fairness of exposure. 2157-2167
- Karl-Emil Kjær Bilstrup, Magnus Høholt Kaspersen, Ira Assent, Simon Enni, Marianne Graves Petersen: From Demo to Design in Teaching Machine Learning. 2168-2178
- Pola Schwöbel, Peter Remmers: The Long Arc of Fairness: Formalisations and Ethical Discourse. 2179-2188
- Camila Laranjeira da Silva, João Macedo, Sandra Avila, Jefersson A. dos Santos: Seeing without Looking: Analysis Pipeline for Child Sexual Abuse Datasets. 2189-2205
- Yacine Jernite, Huu Nguyen, Stella Biderman, Anna Rogers, Maraim Masoud, Valentin Danchev, Samson Tan, Alexandra Sasha Luccioni, Nishant Subramani, Isaac Johnson, Gérard Dupont, Jesse Dodge, Kyle Lo, Zeerak Talat, Dragomir R. Radev, Aaron Gokaslan, Somaieh Nikpoor, Peter Henderson, Rishi Bommasani, Margaret Mitchell: Data Governance in the Age of Large-Scale Data-Driven Language Technology. 2206-2222
- Samantha Robertson, Mark Díaz: Understanding and Being Understood: User Strategies for Identifying and Recovering From Mistranslations in Machine Translation-Mediated Chat. 2223-2238
- Timo Speith: A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods. 2239-2250
- Juniper L. Lovato, Antoine Allard, Randall Harp, Jeremiah Onaolapo, Laurent Hébert-Dufresne: Limits of Individual Consent and Models of Distributed Consent in Online Social Networks. 2251-2262
- Azin Ghazimatin, Matthäus Kleindessner, Chris Russell, Ziawasch Abedjan, Jacek Golebiowski: Measuring Fairness of Rankings under Noisy Sensitive Information. 2263-2279
- Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr: What Does it Mean for a Language Model to Preserve Privacy? 2280-2292
- Eleonora Viganò, Corinna Hertweck, Christoph Heitz, Michele Loi: People are not coins: Morally distinct types of predictions necessitate different fairness constraints. 2293-2301
- Ioannis Pastaltzidis, Nikolaos Dimitriou, Katherine Quezada-Tavarez, Stergios Aidinlis, Thomas Marquenie, Agata Gurzawska, Dimitrios Tzovaras: Data augmentation for fairness-aware machine learning: Preventing algorithmic bias in law enforcement systems. 2302-2314
- Joachim Baumann, Anikó Hannák, Christoph Heitz: Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency. 2315-2326
- Efrén Cruz Cortés, Sarah Rajtmajer, Debashis Ghosh: Locality of Technical Objects and the Role of Structural Interventions for Systemic Change. 2327-2341
- Mark Díaz, Ian D. Kivlichan, Rachel Rosen, Dylan K. Baker, Razvan Amironesei, Vinodkumar Prabhakaran, Emily Denton: CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation. 2342-2351