Recent Progresses in Transfer-based Attack for Image Recognition
Xiaosen Wang, Huawei Singularity Security Lab

CONTENTS
1. Preliminaries
2. Gradient-based Attacks
3. Input Transformation-based Attacks
4. Model-related Attacks
5. Advanced Objective Functions
6. Further Discussion & Conclusion

Preliminaries

DNNs are everywhere in our life! They power image classification, object detection, autonomous driving, medical diagnostics, facial scan payment, voice recognition, and more.

Adversarial examples are crafted by adding small perturbations that keep them indistinguishable from legitimate inputs, yet lead to incorrect model predictions.
Goodfellow et al. Explaining and Harnessing Adversarial Examples. ICLR 2015.
Wei et al. Adversarial Sticker: A Stealthy Attack Method in the Physical World. TPAMI 2022.
Eykholt et al. Robust Physical-World Attacks on Deep Learning Visual Classification. CVPR 2018.

Adversarial examples pose a serious threat to AI applications.
How to generate adversarial examples?

Training a network: $\min_{\theta} \mathbb{E}_{(x, y) \sim \mathcal{D}} \, J(x, y; \theta)$

Generating an adversarial example: $\max_{\|x^{adv} - x\|_p \leq \epsilon} J(x^{adv}, y; \theta)$

Untargeted attack: the victim model predicts the generated adversarial example as any incorrect category.

Targeted attack: the victim model predicts the generated adversarial example as a specific, attacker-chosen category.

Notation: $\mathcal{D}$: training dataset; $J(\cdot)$: loss function; $x$: clean input; $y$: ground-truth label; $x^{adv}$: adversarial example; $\epsilon$: perturbation budget.
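To make the maximization above concrete, here is a minimal PyTorch sketch of the classic one-step method, FGSM (Goodfellow et al., ICLR 2015), covering both the untargeted and targeted settings; the function name, the $\epsilon = 8/255$ budget, and the $[0, 1]$ pixel range are illustrative assumptions, not part of the slides.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255, targeted=False):
    """One-step L_inf attack (FGSM).

    Untargeted: ascend J(x_adv, y) where y is the ground-truth label.
    Targeted:   descend J(x_adv, y) where y is the attacker-chosen label.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Move along the sign of the gradient: up the loss for untargeted
    # attacks, down the loss toward the target class for targeted ones.
    direction = -grad.sign() if targeted else grad.sign()
    x_adv = x_adv.detach() + eps * direction
    # Project back into the eps-ball around x and the valid pixel range.
    x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()
```

Iterative variants (e.g., I-FGSM/PGD) repeat the same signed-gradient step with a smaller step size while projecting back into the $\epsilon$-ball after each update.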
White-box Attack: the attacker can access any information about the victim model, e.g., architecture, weights, gradients, etc.

Black-box Attack: the attacker can access only limited information about the victim model.
- Score-based Attack: the attacker can obtain the model's prediction probabilities.
- Decision-based Attack: the attacker can obtain only the predicted label.
- Transfer-based Attack: adversarial examples generated on one model are used to mislead other victim models.
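As a concrete illustration of the transfer setting, the sketch below crafts adversarial examples on a white-box surrogate and checks whether they fool an unseen victim. The torchvision model choices, the random placeholder batch, and the omission of input normalization are assumptions for brevity; fgsm() refers to the sketch above.

```python
import torch
from torchvision import models

# Surrogate is fully accessible; the victim is never queried for gradients.
surrogate = models.resnet50(weights="IMAGENET1K_V1").eval()
victim = models.vgg16(weights="IMAGENET1K_V1").eval()

# Random placeholder batch standing in for ImageNet images in [0, 1].
x = torch.rand(8, 3, 224, 224)
y = torch.randint(0, 1000, (8,))

x_adv = fgsm(surrogate, x, y, eps=8/255)  # crafted on the surrogate only

with torch.no_grad():
    clean_acc = (victim(x).argmax(dim=1) == y).float().mean().item()
    adv_acc = (victim(x_adv).argmax(dim=1) == y).float().mean().item()

# The accuracy drop on the victim measures transferability: the attack
# never accessed the victim's architecture, weights, or gradients.
print(f"victim accuracy: clean {clean_acc:.2%} -> adversarial {adv_acc:.2%}")
```

Transfer-based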