Security for Machine Learning
- Integrity
  - Training
  - Deployment/Prediction
- Confidentiality
  - Users: private training and testing data
  - Service providers: confidential algorithms, models, and hyperparameters

Training a Machine Learning Model
Training dataset --> Algorithms for learning hyperparameters --> Hyperparameters --> Algorithms for learning model parameters --> Model parameters

Compromising Integrity at Training
- Goal: a model with bad prediction accuracy, or a trojaned model
- Approach: poisoning the training data, or supplying a malicious learning algorithm

Recommender Systems Are Vulnerable to Training Data Poisoning Attacks
- Recommender systems are an important component of the Internet: videos, products, news, etc.
- Common belief: they recommend users items matching their interests
- Our work: injecting fake training data to make recommendations as an attacker desires

Guolei Yang, Neil Zhenqiang Gong, and Ying Cai. "Fake Co-visitation Injection Attacks to Recommender Systems". In NDSS, 2017.
Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, and Jia Liu. "Poisoning Attacks to Graph-Based Recommender Systems". In ACSAC, 2018.

Co-visitation Recommender Systems
- Key idea: items that were frequently visited together in the past are likely to be visited together in the future
[Figure: a user views Video A and Video B together in the past; later, when another user views Video A, Video B is shown among the recommended videos.]

Our Attacks
- Goal: promoting a target item
- Approach: injecting fake co-visitations between the target item and some carefully selected items, so that the target item appears in those items' recommendation lists
- Can attack YouTube, Amazon, eBay, LinkedIn, etc.

Security for Machine Learning
- Integrity
  - Training
  - Deployment/Prediction: adversarial examples
- Confidentiality
  - Users: private training and