AEQUITAS@ECAI 2024: Santiago de Compostela, Spain
- Roberta Calegari, Virginia Dignum, Barry O'Sullivan: Proceedings of the 2nd Workshop on Fairness and Bias in AI, co-located with the 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20th, 2024. CEUR Workshop Proceedings 3808, CEUR-WS.org 2024.
- Bastiaan Bruinsma, Annika Fredén, Kajsa Hansson, Moa Johansson, Pasko Kisic-Merino, Denitsa Saynova: Setting the AI Agenda - Evidence from Sweden in the ChatGPT Era.
- Jingyu Hu, Jun Hong, Mengnan Du, Weiru Liu: ProxiMix: Enhancing Fairness with Proximity Samples in Subgroups.
- Lea Cohausz, Jakob Kappenberger, Heiner Stuckenschmidt: Combining Fairness and Causal Graphs to Advance Both.
- Moisés Santos, André C. P. L. F. de Carvalho, Carlos Soares: Enhancing Algorithm Performance Understanding through tsMorph: Generating Semi-Synthetic Time Series for Robust Forecasting Evaluation.
- Mayra Russo, Maria-Esther Vidal: Leveraging Ontologies to Document Bias in Data.
- Manh Khoi Duong, Stefan Conrad: Measuring and Mitigating Bias for Tabular Datasets with Multiple Protected Attributes.
- Guillermo Villate-Castillo, Borja Sanz, Javier Del Ser: Mitigating Toxicity in Dialogue Agents through Adversarial Reinforcement Learning.
- Matteo Magnini, Giovanni Ciatto, Roberta Calegari, Andrea Omicini: Enforcing Fairness via Constraint Injection with FaUCI.
- Gabriele La Malfa, Jie M. Zhang, Michael Luck, Elizabeth Black: Using Protected Attributes to Consider Fairness in Multi-Agent Systems.
- Aashutosh Ganesh, Mirela Popa, Daan Odijk, Nava Tintarev: Does spatio-temporal information benefit the video summarization task?
- Luca Giuliani, Eleonora Misino, Roberta Calegari, Michele Lombardi: Long-Term Fairness Strategies in Ranking with Continuous Sensitive Attributes.
- Camilla Quaresmini, Giuseppe Primiero: Data Quality Dimensions for Fair AI.
- Federico Sabbatini, Roberta Calegari: Unmasking the Shadows: Leveraging Symbolic Knowledge Extraction to Discover Biases and Unfairness in Opaque Predictive Models.
- Thierry Poibeau: Bias, Subjectivity and Norm in Large Language Models.
- Marion Bartl, Susan Leavy: From Inclusive Language to Inclusive AI: A Proof-of-Concept Study into Pre-Trained Models.
- Md. Fahim Sikder, Resmi Ramachandranpillai, Daniel de Leng, Fredrik Heintz: FairX: A Comprehensive Benchmarking Tool for Model Analysis Using Fairness, Utility, and Explainability.
- Angel S. Marrero, Gustavo A. Marrero, Carlos Bethencourt, Liam James, Roberta Calegari: AI-Fairness and Equality of Opportunity: A Case Study on Educational Achievement.