1、New Security Explorations in the Era of LLMsLyutoon IIE&Nu1LContent LLM Security Security of LLM-Integrated Ecosystem LLM+Security=?LLM SecurityHow Security Manifests in LLMs?LLMs can generate misleading or harmful content.source:Belgian Man Commits Suicide After AI Chatbot Urges Him To Sacrifice Hi
2、mself For Climate Change( Attacks:Jailbreak LLM in the“Jail”LLMs are not inherently safe.Content safety is imposed through specialized fine-tuning processes.LLM Jailbreak Attackers may craft special prompt sequences to bypass the safety-alignment.Existing Attacks:Jailbreak How to Jailbreak LLMs?GCG
3、GPTFuzz DRA Why DRA?My work on USENIX Security24:D Identified the bias in LLM safety-alignment Blackbox attack Still works on ChatGPT 4o,4,3.5,4o-mini DRA:https:/ Making Them Ask and Answer:Jailbreaking Large Language Models in Few Queries via Disguise and Reconstruction https:/www.usenix.org/confer
4、ence/usenixsecurity24/presentation/liu-tong Existing Attacks:Prompt Leaking LLMs System Prompt Leaking Prompt leaking represents an attack that asks the model to show its own(system)prompt.Sensitive information IP Copyright Existing Attacks:Prompt Injection Taken from Learning Prompt website:https:/
5、learnprompting.org/docs/prompt_hacking/injectionTaken from paper:Prompt Injection attack against LLM-integrated Applications Prompt Injection Inspired by SQL injection.Affect:Manipulate models output Maybe RCE!(Talk it later)Security of LLM-Integrated EcosystemLLM-integrated System Taken from Learni
6、ng Prompt website:https:/learnprompting.org/docs/prompt_hacking/injectionLLM-integrated Frameworks:Toolkit or abstractions to interact with LLMs easily for some tasks.LLM-integrated Apps:Apps built upon LLM-integrated frameworks Question:Is this system safe?Answer:Definitely not!Motivation Example:L