Issue Brief
November 2024

Cybersecurity Risks of AI-Generated Code

Authors
Jessica Ji
Jenny Jun
Maggie Wu
Rebecca Gelles

Center for Security and Emerging Technology | 1

Executive Summary

Recent developments have improved the ability of large language models (LLMs) and other AI systems to generate computer code. While this is promising for the field of software development, these models can also pose direct and indirect cybersecurity risks. In this paper, we identify three broad categories of risk associated with AI code generation models: 1) models generating insecure code, 2) models themselves being vulnerable to attack and manipulation, and 3) downstream cybersecurity impacts such as feedback loops in training future AI systems.

Existing research has shown that, under experimental conditions, AI code generation models frequently output insecure code. However, the process of evaluating the security of AI-generated code is highly complex and contains many interdependent variables. To further explore the risk of insecure AI-written code, we evaluated generated code from five LLMs. Each model was given the same set of prompts, which were designed to test likely scenarios where buggy or insecure code might be produced. Our evaluation results show that almost half of the code snippets produced by these five different models contain bugs that are often impactful and could potentially lead to malicious exploitation. These results are limited to the narrow scope of our evaluation, but we hope they can contribute to the larger body of research surrounding the impacts of AI code generation models.

Given both code generation models' current utility and the likelihood that their capabilities will continue to improve, it is import