Policy Brief
July 2024

Enabling Principles for AI Governance

Authors
Owen J. Daniels
Dewey Murdick

Center for Security and Emerging Technology

Introduction

The question of how to govern artificial intelligence (AI) is rightfully top of mind for U.S. lawmakers and policymakers alike. Strides in the development of high-powered large language models (LLMs) like ChatGPT/GPT-4o, Claude, Gemini, and Microsoft Copilot have demonstrated the potentially transformative impact that AI could have on society, replete with opportunities and risks. At the same time, international partners in Europe and competitors like China are taking their own steps toward AI governance.1

In the United States and abroad, public analyses and speculation about AI's potential impact generally lie along a spectrum ranging from utopian at one end (AI as enormously beneficial for society) to dystopian at the other (an existential risk that could lead to the end of humanity), with many nuanced positions in between. LLMs grabbed public attention in 2023 and sparked concern about AI risks, but other models and applications, such as prediction models, natural language processing (NLP) tools, and autonomous navigation systems, could also lead to myriad harms and benefits today. Challenges include discriminatory model outputs based on bad or skewed input data, risks from AI-enabled military weapon systems, and accidents involving AI-enabled autonomous systems.

Given AI's multifaceted potential, a flexible approach to AI governance offers the United States the most likely path to success. The different development trajectories, risks, and harms of various AI systems make a one-size-fits-all regulatory approach implausible, if not impossible. Regulators should begin to build strength through the heavy lifting of addressing today's challenges.