Issue Brief
October 2025

The Mechanisms of AI Harm
Lessons Learned from AI Incidents

Author: Mia Hoffmann

Executive Summary

With recent advancements in artificial intelligence, particularly powerful generative models, private and public sector actors have heralded the benefits of incorporating AI more prominently into our daily lives. Frequently cited benefits include increased productivity, efficiency, and personalization. The harm caused by AI, however, remains less well understood. As AI deployment and use have widened, the number of AI harm incidents has surged in recent years, suggesting that current approaches to harm prevention may be falling short. This report argues that this shortfall stems from a limited understanding of how AI risks materialize in practice. Leveraging AI incident reports from the AI Incident Database, it analyzes how AI deployment results in harm and identifies six key mechanisms that describe this process (Table 1).

Table 1: The Six AI Harm Mechanisms
Intentional Harm: Harm by design; AI misuse; Attacks on AI systems
Unintentional Harm: AI failures; Failures of human oversight; Integration harm

A review of AI incidents associated with these mechanisms leads to several key takeaways that should inform future AI governance approaches.

1. A one-size-fits-all approach to harm prevention will fall short. This report illustrates the diverse pathways to AI harm and the wide range of actors involved. Effective mitigation requires an equally diverse response strategy that includes sociotechnical approaches. Adopting model-based approaches alone would be especially likely to neglect integration harms and failures of human oversight.

2. To date, risk of harm correlates only