Lessons from red teaming 100 generative AI products

Authored by: Microsoft AI Red Team

Authors
Blake Bullwinkel, Amanda Minnich, Shiven Chawla, Gary Lopez, Martin Pouliot, Whitney Maxwell, Joris de Gruyter, Katherine Pratt, Saphir Qi, Nina Chikanov, Roman Lutz, Raja Sekhar Rao Dheekonda, Bolor-Erdene Jagdagdorj, Eugenia Kim, Justin Song, Keegan Hines, Daniel Jones, Giorgio Severi, Richard Lundeen, Sam Vaughan, Victoria Westerhoff, Pete Bryan, Ram Shankar Siva Kumar, Yonatan Zunger, Chang Kawaguchi, Mark Russinovich

Table of contents

Abstract
Introduction
AI threat model ontology
Red teaming operations
Lesson 1: Understand what the system can do and where it is applied
Lesson 2: You don't have to compute gradients to break an AI system
Case study #1: Jailbreaking a vision language model to generate hazardous content
Lesson 3: AI red teaming is not safety benchmarking
Case study #2: Assessing how an LLM could be used to automate scams
Lesson 4: Automation can help cover more of the risk landscape
Lesson 5: The human element of AI red teaming is crucial
Case study #3: Evaluating how a chatbot responds to a user in distress
Lesson 6: Responsible AI harms are pervasive but difficult to measure
Case study #4: Probing a text-to-image generator for gender bias
Lesson 7: LLMs amplify existing security risks and introduce new ones
Case study #5: SSRF in a video-processing GenAI application
Lesson 8: The work of securing AI systems will never be complete
Conclusion

Abstract

In recent years, AI red teaming has emerged as a practice for probing the safety and security of generative AI systems. Due to the nascency of the field, there are many open questions about how red teaming operations should be conducted. Bas