17th AISec@CCS 2024: Salt Lake City, UT, USA
- Maura Pintor, Xinyun Chen, Matthew Jagielski:
  Proceedings of the 2024 Workshop on Artificial Intelligence and Security, AISec 2024, Salt Lake City, UT, USA, October 14-18, 2024. ACM 2024, ISBN 979-8-4007-1228-9
- Maor Biton Dor, Yisroel Mirsky:
  Efficient Model Extraction via Boundary Sampling. 1-11
- Ryan Swope, Amol Khanna, Philip Doldo, Saptarshi Roy, Edward Raff:
  Feature Selection from Differentially Private Correlations. 12-23
- Meenatchi Sundaram Muthu Selva Annamalai:
  It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss. 24-30
- Nadav Gat, Mahmood Sharif:
  Harmful Bias: A General Label-Leakage Attack on Federated Learning from Bias Gradients. 31-41
- Camila Roa, Maria Mahbub, Sudarshan Srinivasan, Edmon Begoli, Amir Sadovnik:
  Semantic Stealth: Crafting Covert Adversarial Patches for Sentiment Classifiers Using Large Language Models. 42-52
- Jiankai Jin, Olga Ohrimenko, Benjamin I. P. Rubinstein:
  Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness. 53-64
- Yuxuan Zhu, Michael Mandulak, Kerui Wu, George M. Slota, Yuseok Jeon, Ka-Ho Chow, Lei Yu:
  On the Robustness of Graph Reduction Against GNN Backdoor. 65-76
- Qi Zhao, Christian Wressnegger:
  Adversarially Robust Anti-Backdoor Learning. 77-88
- Dario Pasquini, Martin Strohmeier, Carmela Troncoso:
  Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks. 89-100
- Leo Hyun Park, Jaeuk Kim, Myung Gyo Oh, Jaewoo Park, Taekyoung Kwon:
  Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training. 101-112
- Zebin Yun, Achi-Or Weingarten, Eyal Ronen, Mahmood Sharif:
  The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations. 113-124
- Behrad Tajalli, Stefanos Koffas, Gorka Abad, Stjepan Picek:
  ELMs Under Siege: A Study on Backdoor Attacks on Extreme Learning Machines. 125-136
- Coen Schoof, Stefanos Koffas, Mauro Conti, Stjepan Picek:
  EmoBack: Backdoor Attacks Against Speaker Identification Using Emotional Prosody. 137-148
- Giovanni Apruzzese, Aurore Fass, Fabio Pierazzi:
  When Adversarial Perturbations meet Concept Drift: An Exploratory Analysis on ML-NIDS. 149-160
- Christian Bungartz, Felix Boes, Michael Meier, Marc Ohm:
  Towards Robust, Explainable, and Privacy-Friendly Sybil Detection. 161-172
- Shashwat Kumar, Francis Hahn, Stuart Millar, Xinming Ou:
  Using LLM Embeddings with Similarity Search for Botnet TLS Certificate Detection. 173-183
- Alberto Castagnaro, Mauro Conti, Luca Pajola:
  Offensive AI: Enhancing Directory Brute-forcing Attack with the Use of Language Models. 184-195
- Sayed Erfan Arefin, Abdul Serwadda:
  Music to My Ears: Turning GPU Sounds into Intellectual Property Gold. 196-207