


AISafety@IJCAI 2019: Macao, China
- Huáscar Espinoza, Han Yu, Xiaowei Huang, Freddy Lécué, Cynthia Chen, José Hernández-Orallo, Seán Ó hÉigeartaigh, Richard Mallah:
Proceedings of the Workshop on Artificial Intelligence Safety 2019 co-located with the 28th International Joint Conference on Artificial Intelligence, AISafety@IJCAI 2019, Macao, China, August 11-12, 2019. CEUR Workshop Proceedings 2419, CEUR-WS.org 2019
Invited Talk
- John A. McDermid, Yan Jia, Ibrahim Habli:
Towards a Framework for Safety Assurance of Autonomous Systems.
Session 1: Safe Learning
- Hossein Aboutalebi, Doina Precup, Tibor Schuster:
Learning Modular Safe Policies in the Bandit Setting with Application to Adaptive Clinical Trials.
- Andrea Loreggia, Nicholas Mattei, Francesca Rossi, Kristen Brent Venable:
Metric Learning for Value Alignment.
Session 2: Reinforcement Learning Safety
- Victoria Krakovna, Laurent Orseau, Miljan Martic, Shane Legg:
Penalizing Side Effects using Stepwise Relative Reachability.
- Alexander Matt Turner, Dylan Hadfield-Menell, Prasad Tadepalli:
Conservative Agency.
- Jason Mancuso, Tomasz Kisielewski, David Lindner, Alok Singh:
Detecting Spiky Corruption in Markov Decision Processes.
- Tom Everitt, Ramana Kumar, Victoria Krakovna, Shane Legg:
Modeling AGI Safety Frameworks with Causal Influence Diagrams.
Session 3: Safe Autonomous Vehicles
- Mesut Ozdag, Sunny Raj, Steven Lawrence Fernandes, Alvaro Velasquez, Laura Pullum, Sumit Kumar Jha:
On the Susceptibility of Deep Neural Networks to Natural Perturbations.
- Maximilian Henne, Adrian Schwaiger, Gereon Weiss:
Managing Uncertainty of AI-based Perception for Autonomous Systems.
- Lukas Heinzmann, Sina Shafaei, Mohd Hafeez Osman, Christoph Segler, Alois C. Knoll:
A Framework for Safety Violation Identification and Assessment in Autonomous Driving.
Session 4: AI Value Alignment, Ethics and Bias
- Andrea Aler Tubella, Virginia Dignum:
The Glass Box Approach: Verifying Contextual Adherence to Values.
- Nadisha-Marie Aliman, Leon Kester:
Requisite Variety in Ethical Utility Functions for AI Value Alignment.
- Holly Wilson, Andreas Theodorou:
Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas.
- Ramya Srinivasan, Ajay Chander:
Understanding Bias in Datasets using Topological Data Analysis.
Poster Papers
- Qi Zhang, Edmund H. Durfee, Satinder Singh:
Computational Strategies for the Trustworthy Pursuit and the Safe Modeling of Probabilistic Maintenance Commitments.
- Arushi Majha, Sayan Sarkar, Davide Zagami:
Categorizing Wireheading in Partially Embedded Agents.
- Vahid Behzadan, William H. Hsu:
Adversarial Exploitation of Policy Imitation.
- Muhammad Aurangzeb Ahmad, Carly Eckert, Ankur Teredesai:
The Challenge of Imputation in Explainable Artificial Intelligence Models.
- Franz Wotawa:
On the Importance of System Testing for Assuring Safety of AI Systems.
- Bart Bussmann, Jacqueline Heinerman, Joel Lehman:
Towards Empathic Deep Q-Learning.
- Vahid Behzadan, William H. Hsu:
Watermarking of DRL Policies with Sequential Triggers.
