Membership inference attacks in deep learning aim to determine whether a given sample belongs to the training dataset of a target model. Because training datasets often contain privacy-sensitive information, defending against membership inference attacks is crucial for privacy protection. This paper first defines membership inference attacks and analyzes their underlying causes. It then comprehensively reviews existing defense algorithms. Finally, it proposes a novel defense mechanism and details the defensive approach adopted. Compared with state-of-the-art defenses against membership inference attacks, the proposed method achieves a superior trade-off between preserving member privacy and maintaining model utility. Detailed explanations of the employed techniques are provided to aid understanding of membership inference attacks and their defenses, offering insights for mitigating privacy risks in training datasets while balancing model utility and privacy security.
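To make the attack definition concrete, the following is a minimal, hypothetical sketch of a confidence-threshold membership inference attack (a generic baseline, not the defense proposed in this paper): the attacker predicts "member" when the target model's confidence on the true label exceeds a threshold, exploiting the overconfidence models often exhibit on memorized training data. All names and numeric values below are illustrative assumptions.

```python
def infer_membership(confidence_on_true_label: float, threshold: float = 0.9) -> bool:
    """Predict 'member' when the model is highly confident on the true label.

    This is a toy baseline attack; real attacks may use full confidence
    vectors, losses, or shadow models.
    """
    return confidence_on_true_label >= threshold

# Toy illustration: members typically receive higher confidence than non-members.
member_confidences = [0.99, 0.97, 0.95]      # hypothetical outputs on training data
nonmember_confidences = [0.62, 0.71, 0.55]   # hypothetical outputs on unseen data

predictions = [infer_membership(c) for c in member_confidences + nonmember_confidences]
labels = [True, True, True, False, False, False]  # ground-truth membership
# Attack accuracy on this toy set: fraction of correct member/non-member guesses.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
```

Defenses such as the one proposed in this paper aim to drive this kind of attack's accuracy toward random guessing (0.5) while keeping the model's predictive utility largely intact.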
Defense method for membership inference attacks based on diffusion model and mixed samples. Journal of Guangzhou University (Natural Science Edition). 2024, 23(5): 76-75