Paper
13 October 2022
Label-only membership inference attack based on the transferability of adversarial sample
Xiaowei Hao, Dong Zhang, Hanwei Wu, Jing Duan, Long An, Xiu Liu
Proceedings Volume 12287, International Conference on Cloud Computing, Performance Computing, and Deep Learning (CCPCDL 2022); 122870D (2022) https://doi.org/10.1117/12.2640751
Event: International Conference on Cloud Computing, Performance Computing, and Deep Learning (CCPCDL 2022), 2022, Wuhan, China
Abstract
Machine learning (ML) is widely used in applications such as face recognition, autonomous driving, and image recognition. In these applications, attackers can exploit vulnerabilities in machine learning models to extract private information, including information about the model's training and the underlying private data. Attacks against machine learning training data chiefly take the form of membership inference attacks (MIA), in which an attacker, given a data sample and access to the target model, determines whether that sample belongs to the target model's training set. Most existing membership inference attacks rely on the confidence scores returned by the model, and researchers have found that hiding these scores effectively defends against them. Recent work therefore proposed label-only attacks, but these require hundreds of queries per sample, and such abnormal query patterns are easily detected by the target model's operator. In this paper, we propose a membership inference attack based on the transferability of adversarial samples: adversarial samples generated on different models trained for the same task transfer between those models, so the attacker can craft adversarial samples on a shadow model and thereby reduce the attack cost. Our evaluation shows that, even under a small query budget, label-only outputs still leak private information, and in most cases the membership inference attack based on adversarial-sample transferability outperforms confidence-based attacks.
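The mechanism described in the abstract can be illustrated with a minimal sketch (not the authors' implementation; the FGSM attack, the eps grid, and the threshold calibration below are illustrative assumptions): adversarial samples are crafted on a white-box shadow model, transferred to the label-only target, and the smallest perturbation that flips the target's predicted label serves as a membership score.

```python
import torch
import torch.nn.functional as F

def fgsm_on_shadow(shadow_model, x, y, eps):
    # Craft an FGSM perturbation of size eps on the white-box shadow model.
    # x: input batch of shape (1, ...), y: 1-element label tensor.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(shadow_model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def transfer_distance(shadow_model, query_target_label, x, y, eps_grid):
    # Smallest eps in the grid whose shadow-crafted adversarial example flips
    # the *target* model's label; each eps costs one label-only target query.
    for eps in eps_grid:
        x_adv = fgsm_on_shadow(shadow_model, x, y, eps)
        if query_target_label(x_adv) != y.item():
            return eps
    return eps_grid[-1]

def is_member(shadow_model, query_target_label, x, y, threshold,
              eps_grid=(0.01, 0.05, 0.1, 0.2, 0.5)):
    # Training points tend to lie farther from the decision boundary, so a
    # larger transfer distance is taken as evidence of membership.  The
    # threshold is assumed to be calibrated on the shadow model's own
    # train/test split (an assumption, not a detail given in the abstract).
    return transfer_distance(shadow_model, query_target_label, x, y, eps_grid) >= threshold
```

Because each candidate perturbation costs a single label query, the number of target-model queries is bounded by the length of the eps grid, which is what keeps the query budget small compared with label-only attacks that perform a boundary search directly on the target.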
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Xiaowei Hao, Dong Zhang, Hanwei Wu, Jing Duan, Long An, and Xiu Liu "Label-only membership inference attack based on the transferability of adversarial sample", Proc. SPIE 12287, International Conference on Cloud Computing, Performance Computing, and Deep Learning (CCPCDL 2022), 122870D (13 October 2022); https://doi.org/10.1117/12.2640751
KEYWORDS
Data modeling, Statistical modeling, Machine learning, Binary data, Defense and security, Distance measurement, Facial recognition systems