Jinyuan Jia 0001
Person information
- affiliation: Penn State University, State College, PA, USA
- affiliation (former): University of Illinois Urbana-Champaign, IL, USA
- affiliation (former): Duke University, USA
Other persons with the same name
- Jinyuan Jia — disambiguation page
- Jinyuan Jia 0002 — Tongji University, Shanghai, China
2020 – today
- 2024
- [j3] Lingyu Du, Jinyuan Jia, Xucong Zhang, Guohao Lan: PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8(3): 99:1-99:28 (2024)
- [c57] Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu: Jailbreak Open-Sourced Large Language Models via Enforced Decoding. ACL (1) 2024: 5475-5493
- [c56] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, Radha Poovendran: SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding. ACL (1) 2024: 5587-5605
- [c55] Fengqing Jiang, Zhangchen Xu, Luyao Niu, Boxin Wang, Jinyuan Jia, Bo Li, Radha Poovendran: POSTER: Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications. AsiaCCS 2024
- [c54] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Radha Poovendran: Poster: Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning. AsiaCCS 2024
- [c53] Yuxin Yang, Qiang Li, Jinyuan Jia, Yuan Hong, Binghui Wang: Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses. CCS 2024: 2829-2843
- [c52] Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: Data Poisoning Based Backdoor Attacks to Contrastive Learning. CVPR 2024: 24357-24366
- [c51] Yanting Wang, Hongye Fu, Wei Zou, Jinyuan Jia: MMCert: Provable Defense Against Adversarial Attacks to Multi-Modal Models. CVPR 2024: 24655-24664
- [c50] Yuan Xiao, Shiqing Ma, Juan Zhai, Chunrong Fang, Jinyuan Jia, Zhenyu Chen: Towards General Robustness Verification of MaxPool-Based Convolutional Neural Networks via Tightening Linear Approximation. CVPR 2024: 24766-24775
- [c49] Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Jinyuan Jia, Neil Zhenqiang Gong: Certifiably Robust Image Watermark. ECCV (77) 2024: 427-443
- [c48] Zaishuo Xia, Han Yang, Binghui Wang, Jinyuan Jia: GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations. ICLR 2024
- [c47] Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang: Graph Neural Network Explanations are Fragile. ICML 2024
- [c46] Zhuowen Yuan, Wenbo Guo, Jinyuan Jia, Bo Li, Dawn Song: SHINE: Shielding Backdoors in Deep Reinforcement Learning. ICML 2024
- [c45] Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Bo Li, Dawn Song: TextGuard: Provable Defense against Backdoor Attacks on Text Classification. NDSS 2024
- [c44] Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. SP (Workshops) 2024: 144-156
- [c43] Yanting Wang, Wei Zou, Jinyuan Jia: FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models. SP 2024: 2939-2957
- [c42] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong: Formalizing and Benchmarking Prompt Injection Attacks and Defenses. USENIX Security Symposium 2024
- [c41] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, Radha Poovendran: ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning. USENIX Security Symposium 2024
- [i56] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Radha Poovendran: Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning. CoRR abs/2401.05562 (2024)
- [i55] Wenjie Qu, Dong Yin, Zixin He, Wei Zou, Tianyang Tao, Jinyuan Jia, Jiaheng Zhang: Provably Robust Multi-bit Watermarking for AI-generated Text via Error Correction Code. CoRR abs/2401.16820 (2024)
- [i54] Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia: PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models. CoRR abs/2402.07867 (2024)
- [i53] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, Radha Poovendran: SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding. CoRR abs/2402.08983 (2024)
- [i52] Yanting Wang, Hongye Fu, Wei Zou, Jinyuan Jia: MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models. CoRR abs/2403.19080 (2024)
- [i51] Yanting Wang, Wei Zou, Jinyuan Jia: FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models. CoRR abs/2404.08631 (2024)
- [i50] Yuzhou Nie, Yanting Wang, Jinyuan Jia, Michael J. De Lucia, Nathaniel D. Bastian, Wenbo Guo, Dawn Song: TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models. CoRR abs/2405.16783 (2024)
- [i49] Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, Radha Poovendran: ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning. CoRR abs/2405.20975 (2024)
- [i48] Yuan Xiao, Shiqing Ma, Juan Zhai, Chunrong Fang, Jinyuan Jia, Zhenyu Chen: Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation. CoRR abs/2406.00699 (2024)
- [i47] Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, Binghui Wang: Graph Neural Network Explanations are Fragile. CoRR abs/2406.03193 (2024)
- [i46] Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Jinyuan Jia, Neil Zhenqiang Gong: Certifiably Robust Image Watermark. CoRR abs/2407.04086 (2024)
- [i45] Lingyu Du, Jinyuan Jia, Xucong Zhang, Guohao Lan: PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services. CoRR abs/2408.00950 (2024)
- [i44] Yupei Liu, Yuqi Jia, Jinyuan Jia, Neil Zhenqiang Gong: Evaluating Large Language Model based Personal Information Extraction and Countermeasures. CoRR abs/2408.07291 (2024)
- 2023
- [c40] Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. CVPR 2023: 9496-9505
- [c39] Hangfan Zhang, Jinghui Chen, Lu Lin, Jinyuan Jia, Dinghao Wu: Graph Contrastive Backdoor Attacks. ICML 2023: 40888-40910
- [c38] Hanting Ye, Guohao Lan, Jinyuan Jia, Qing Wang: Screen Perturbation: Adversarial Attack and Defense on Under-Screen Camera. MobiCom 2023: 64:1-64:16
- [c37] Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. NDSS 2023
- [c36] Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, Jinghui Chen: IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI. NeurIPS 2023
- [c35] Jinyuan Jia, Zhuowen Yuan, Dinuka Sahabandu, Luyao Niu, Arezoo Rajabi, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran: FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning. NeurIPS 2023
- [c34] Hangfan Zhang, Jinyuan Jia, Jinghui Chen, Lu Lin, Dinghao Wu: A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning. NeurIPS 2023
- [c33] Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong: FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. SP 2023: 1366-1383
- [c32] Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong: PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. USENIX Security Symposium 2023: 1703-1720
- [i43] Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. CoRR abs/2301.02905 (2023)
- [i42] Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees. CoRR abs/2303.01959 (2023)
- [i41] Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong: PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. CoRR abs/2303.14601 (2023)
- [i40] Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu: On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused? CoRR abs/2310.01581 (2023)
- [i39] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong: Prompt Injection Attacks and Defenses in LLM-Integrated Applications. CoRR abs/2310.12815 (2023)
- [i38] Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, Jinghui Chen: IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI. CoRR abs/2310.19248 (2023)
- [i37] Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Bo Li, Dawn Song: TextGuard: Provable Defense against Backdoor Attacks on Text Classification. CoRR abs/2311.11225 (2023)
- [i36] Fengqing Jiang, Zhangchen Xu, Luyao Niu, Boxin Wang, Jinyuan Jia, Bo Li, Radha Poovendran: Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications. CoRR abs/2311.16153 (2023)
- 2022
- [b1] Jinyuan Jia: Privacy Protection via Adversarial Examples. Duke University, Durham, NC, USA, 2022
- [j2] Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong: FLCert: Provably Secure Federated Learning Against Poisoning Attacks. IEEE Trans. Inf. Forensics Secur. 17: 3691-3705 (2022)
- [c31] Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks. AAAI 2022: 9575-9583
- [c30] Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning. CCS 2022: 2115-2128
- [c29] Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong: Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations. ICLR 2022
- [c28] Aritra Ray, Jinyuan Jia, Sohini Saha, Jayeeta Chaudhuri, Neil Zhenqiang Gong, Krishnendu Chakrabarty: Deep Neural Network Piration without Accuracy Loss. ICMLA 2022: 1032-1038
- [c27] Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. KDD 2022: 2545-2555
- [c26] Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. NeurIPS 2022
- [c25] Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong: BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. SP 2022: 2043-2059
- [c24] Yongji Wu, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. USENIX Security Symposium 2022: 519-536
- [c23] Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. USENIX Security Symposium 2022: 3629-3645
- [i35] Yupei Liu, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: StolenEncoder: Stealing Pre-trained Encoders. CoRR abs/2201.05889 (2022)
- [i34] Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. CoRR abs/2205.06401 (2022)
- [i33] Zaixi Zhang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. CoRR abs/2207.09209 (2022)
- [i32] Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong: FLCert: Provably Secure Federated Learning against Poisoning Attacks. CoRR abs/2210.00584 (2022)
- [i31] Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples. CoRR abs/2210.01111 (2022)
- [i30] Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong: FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. CoRR abs/2210.10936 (2022)
- [i29] Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning. CoRR abs/2211.08229 (2022)
- [i28] Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong: Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning. CoRR abs/2212.03334 (2022)
- 2021
- [c22] Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Provably Secure Federated Learning against Malicious Clients. AAAI 2021: 6885-6893
- [c21] Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks. AAAI 2021: 7961-7969
- [c20] Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong: Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks. AAAI 2021: 10093-10101
- [c19] Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes. AsiaCCS 2021: 2-13
- [c18] Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary. AsiaCCS 2021: 14-25
- [c17] Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning. CCS 2021: 2081-2095
- [c16] Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PointGuard: Provably Robust 3D Point Cloud Classification. CVPR 2021: 6186-6195
- [c15] Jinyuan Jia, Zheng Dong, Jie Li, Jack W. Stokes: Detection Of Malicious DNS and Web Servers using Graph-Based Approaches. ICASSP 2021: 2625-2629
- [c14] Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: On the Intrinsic Differential Privacy of Bagging. IJCAI 2021: 2730-2736
- [c13] Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation. KDD 2021: 1645-1653
- [c12] Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Backdoor Attacks to Graph Neural Networks. SACMAT 2021: 15-26
- [c11] Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Data Poisoning Attacks to Local Differential Privacy Protocols. USENIX Security Symposium 2021: 947-964
- [c10] Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Stealing Links from Graph Neural Networks. USENIX Security Symposium 2021: 2669-2686
- [i27] Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Provably Secure Federated Learning against Malicious Clients. CoRR abs/2102.01854 (2021)
- [i26] Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: PointGuard: Provably Robust 3D Point Cloud Classification. CoRR abs/2103.03046 (2021)
- [i25] Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong: BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. CoRR abs/2108.00352 (2021)
- [i24] Hongbin Liu, Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong: EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning. CoRR abs/2108.11023 (2021)
- [i23] Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong: 10 Security and Privacy Problems in Self-Supervised Learning. CoRR abs/2110.15444 (2021)
- [i22] Yongji Wu, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data. CoRR abs/2111.11534 (2021)
- 2020
- [c9] Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong: Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing. ICLR 2020
- [c8] Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. USENIX Security Symposium 2020: 1605-1622
- [c7] Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing. WWW 2020: 2718-2724
- [p1] Jinyuan Jia, Neil Zhenqiang Gong: Defending Against Machine Learning Based Inference Attacks via Adversarial Examples: Opportunities and Challenges. Adaptive Autonomous Secure Cyber Systems 2020: 23-40
- [i21] Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing. CoRR abs/2002.03421 (2020)
- [i20] Binghui Wang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: On Certifying Robustness against Backdoor Attacks via Randomized Smoothing. CoRR abs/2002.11750 (2020)
- [i19] Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang: Stealing Links from Graph Neural Networks. CoRR abs/2005.02131 (2020)
- [i18] Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Backdoor Attacks to Graph Neural Networks. CoRR abs/2006.11165 (2020)
- [i17] Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks. CoRR abs/2008.04495 (2020)
- [i16] Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong: On the Intrinsic Differential Privacy of Bagging. CoRR abs/2008.09845 (2020)
- [i15] Binghui Wang, Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation. CoRR abs/2008.10715 (2020)
- [i14] Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes. CoRR abs/2010.13751 (2020)
- [i13] Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong: Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations. CoRR abs/2011.07633 (2020)
- [i12] Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong: Certified Robustness of Nearest Neighbors against Data Poisoning Attacks. CoRR abs/2012.03765 (2020)
- [i11] Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong: Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks. CoRR abs/2012.13085 (2020)
2010 – 2019
- 2019
- [j1] Binghui Wang, Jinyuan Jia, Le Zhang, Neil Zhenqiang Gong: Structure-Based Sybil Detection in Social Networks via Local Rule-Based Propagation. IEEE Trans. Netw. Sci. Eng. 6(3): 523-537 (2019)
- [c6] Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong: MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples. CCS 2019: 259-274
- [c5] Jinyuan Jia, Neil Zhenqiang Gong: Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge. INFOCOM 2019: 2008-2016
- [c4] Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong: Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation. NDSS 2019
- [i10] Jinyuan Jia, Neil Zhenqiang Gong: Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges. CoRR abs/1909.08526 (2019)
- [i9] Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong: MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples. CoRR abs/1909.10594 (2019)
- [i8] Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: IPGuard: Protecting the Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary. CoRR abs/1910.12903 (2019)
- [i7] Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Data Poisoning Attacks to Local Differential Privacy Protocols. CoRR abs/1911.02046 (2019)
- [i6] Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong: Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. CoRR abs/1911.11815 (2019)
- [i5] Jinyuan Jia, Xiaoyu Cao, Binghui Wang, Neil Zhenqiang Gong: Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing. CoRR abs/1912.09899 (2019)
- 2018
- [c3] Jinyuan Jia, Neil Zhenqiang Gong: AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning. USENIX Security Symposium 2018: 513-529
- [i4] Binghui Wang, Jinyuan Jia, Le Zhang, Neil Zhenqiang Gong: Structure-based Sybil Detection in Social Networks via Local Rule-based Propagation. CoRR abs/1803.04321 (2018)
- [i3] Jinyuan Jia, Neil Zhenqiang Gong: AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning. CoRR abs/1805.04810 (2018)
- [i2] Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong: Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation. CoRR abs/1812.01661 (2018)
- [i1] Jinyuan Jia, Neil Zhenqiang Gong: Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge. CoRR abs/1812.02055 (2018)
- 2017
- [c2] Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong: Random Walk Based Fake Account Detection in Online Social Networks. DSN 2017: 273-284
- [c1] Jinyuan Jia, Binghui Wang, Le Zhang, Neil Zhenqiang Gong: AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields. WWW 2017: 1561-1569