DeepSeek: DeepSeek-V2 Technical Report (2024): An Economical and Efficient Mixture-of-Experts Language Model (English edition, 52 pages, PDF)

DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model

DeepSeek-AI

Abstract

We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236B total parameters, of which 21B are activated for each token, and supports a context length of 128K tokens. DeepSeek-V2 adopts innovative architectures including Multi-head Latent Attention (MLA) and DeepSeekMoE. MLA guarantees efficient inference through significantly compressing the Key-Value (KV) cache into a latent vector, while DeepSeekMoE enables training strong models at an economical cost through sparse computation.
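To make the MLA idea concrete, here is a minimal PyTorch sketch of caching a low-rank latent instead of full keys and values. All names and dimensions (LatentKVAttention, d_latent, the 1024/128 sizes) are illustrative assumptions, and the sketch omits parts of the actual design such as the decoupled RoPE path and causal masking:

```python
# Minimal sketch of the latent KV-cache idea: cache one small latent vector
# per token and up-project it to keys/values at attention time.
# Dimensions and module names are illustrative, not DeepSeek-V2's config.
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        # Down-projection: the only thing cached per token is this latent.
        self.w_down_kv = nn.Linear(d_model, d_latent, bias=False)
        # Up-projections recover full keys and values from the cached latent.
        self.w_up_k = nn.Linear(d_latent, d_model, bias=False)
        self.w_up_v = nn.Linear(d_latent, d_model, bias=False)
        self.w_o = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x, latent_cache=None):
        b, t, d = x.shape
        c_kv = self.w_down_kv(x)                       # (b, t, d_latent)
        if latent_cache is not None:                   # grow the latent cache
            c_kv = torch.cat([latent_cache, c_kv], dim=1)
        q = self.w_q(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k = self.w_up_k(c_kv).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        v = self.w_up_v(c_kv).view(b, -1, self.n_heads, self.d_head).transpose(1, 2)
        # Plain scaled dot-product attention (causal mask omitted for brevity).
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head**0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.w_o(out), c_kv                     # cache only the latent
```

Under these toy sizes, a standard attention layer would cache 2 × 1024 values per token (keys plus values) where this sketch caches 128, a 16× reduction; the reported 93.3% figure reflects DeepSeek-V2's own dimensions.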

Compared with DeepSeek 67B, DeepSeek-V2 achieves significantly stronger performance, and meanwhile saves 42.5% of training costs, reduces the KV cache by 93.3%, and boosts the maximum generation throughput to 5.76 times. We pretrain DeepSeek-V2 on a high-quality and multi-source corpus consisting of 8.1T tokens, and further perform Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) to fully unlock its potential. Evaluation results show that, even with only 21B activated parameters, DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models.
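The "21B activated out of 236B total" arithmetic comes from sparse expert routing: each token passes through only a few experts, so most parameters sit idle on any single forward pass. Below is a minimal top-k routing sketch; the expert count, sizes, and k are illustrative assumptions, and the actual DeepSeekMoE design additionally uses fine-grained and shared experts:

```python
# Minimal sketch of top-k sparse expert routing: every token is sent through
# only k of the experts, so only a fraction of parameters are activated.
# Expert sizes and k are illustrative, not DeepSeek-V2's configuration.
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=1024, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                              # x: (n_tokens, d_model)
        scores = self.router(x).softmax(dim=-1)
        topv, topi = scores.topk(self.k, dim=-1)       # k experts per token
        topv = topv / topv.sum(dim=-1, keepdim=True)   # renormalize gates
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (topi == e)                         # tokens routed to e
            rows = mask.any(dim=-1)
            if rows.any():                             # run only chosen tokens
                gate = (topv * mask).sum(dim=-1, keepdim=True)[rows]
                out[rows] += gate * expert(x[rows])
        return out
```

With k = 2 of 8 experts, only a quarter of the expert parameters run per token in this toy; at DeepSeek-V2's scale the same principle gives 21B / 236B ≈ 8.9% of parameters activated per token.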

The model checkpoints are available at https:/

[Figure 1: (a) MMLU performance versus activated parameters (billions) for DeepSeek-V2, DeepSeek 67B, and models from the LLaMA 1/2/3, Mistral/Mixtral, Command R, and Qwen1.5 families, plus Grok-1 and DBRX; (b) bar charts comparing DeepSeek-V2 with DeepSeek 67B on training costs (K GPU hours/T tokens, saving 42.5%) and KV cache for generation (KB/token, reducing the KV cache by 93.3%).]
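A back-of-the-envelope calculation shows how a latent cache yields a reduction of this magnitude. The layer count, head count, head width, and latent width below are illustrative assumptions, not DeepSeek-V2's published configuration:

```python
# Per-token KV cache under fp16, with illustrative dimensions only:
# standard multi-head attention caches full keys and values per layer,
# while an MLA-style layer caches one small latent vector per token.
n_layers, n_heads, d_head, d_latent, bytes_per = 60, 32, 128, 512, 2

mha_cache = n_layers * 2 * n_heads * d_head * bytes_per  # keys + values
mla_cache = n_layers * d_latent * bytes_per              # latent only

print(f"MHA: {mha_cache / 1024:.0f} KB/token")           # 960 KB/token
print(f"MLA: {mla_cache / 1024:.0f} KB/token")           # 60 KB/token
print(f"reduction: {1 - mla_cache / mha_cache:.1%}")     # 93.8%
```

The 93.8% here is an artifact of the assumed sizes; the paper reports 93.3% for DeepSeek-V2's actual architecture.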
