DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model

DeepSeek-AI

Abstract

We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236B total parameters, of which 21B are activated for each token, and supports a context length of 128K tokens. DeepSeek-V2 adopts innovative architectures including Multi-head Latent Attention (MLA) and DeepSeekMoE. MLA guarantees efficient inference by significantly compressing the Key-Value (KV) cache into a latent vector, while DeepSeekMoE enables training strong models at economical cost through sparse computation. Compared with DeepSeek 67B, DeepSeek-V2 achieves significantly stronger performance while saving 42.5% of training costs, reducing the KV cache by 93.3%, and boosting the maximum generation throughput to 5.76 times. We pretrain DeepSeek-V2 on a high-quality, multi-source corpus consisting of 8.1T tokens, and further perform Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) to fully unlock its potential. Evaluation results show that, even with only 21B activated parameters, DeepSeek-V2 and its chat versions still achieve top-tier performance among open-source models. The model checkpoints are available at https://github.com/deepseek-ai/DeepSeek-V2.
Figure 1: (a) MMLU performance vs. activated parameters (billions) for DeepSeek-V2, DeepSeek 67B, and other open-source models (LLaMA 1/2/3, Mistral/Mixtral, Command R, Grok-1, DBRX, and Qwen1.5 families). (b) Training costs (K GPU hours per trillion tokens): DeepSeek-V2 saves 42.5% of training costs relative to DeepSeek 67B. (c) KV cache for generation (KB per token): reduced by 93.3%.
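To make the KV-cache reduction concrete, below is a minimal, illustrative sketch of the latent-compression idea behind MLA: instead of caching full per-head keys and values for every token, the cache stores a single shared low-dimensional latent per token, and keys and values are reconstructed from it via up-projections at attention time. All dimensions and weight names here (d_model, n_heads, d_head, d_latent, W_down, W_up_k, W_up_v) are hypothetical placeholders rather than DeepSeek-V2's actual configuration, and details of the paper's full formulation (e.g., its treatment of positional embeddings) are omitted.

    import torch

    # Illustrative sketch only: toy dimensions, not DeepSeek-V2's real sizes.
    d_model, n_heads, d_head, d_latent = 1024, 16, 64, 128
    seq_len = 512

    # Joint down-projection to a shared KV latent, plus per-role up-projections.
    W_down = torch.randn(d_model, d_latent) / d_model ** 0.5
    W_up_k = torch.randn(d_latent, n_heads * d_head) / d_latent ** 0.5
    W_up_v = torch.randn(d_latent, n_heads * d_head) / d_latent ** 0.5

    hidden = torch.randn(seq_len, d_model)  # token hidden states

    # Standard multi-head attention caches K and V for every head:
    # 2 * n_heads * d_head floats per token. The MLA-style cache stores
    # only the shared latent: d_latent floats per token.
    kv_latent_cache = hidden @ W_down  # (seq_len, d_latent)

    # At decoding time, K and V are reconstructed from the cached latent
    # (in practice the up-projections can be folded into the attention matmuls).
    k = (kv_latent_cache @ W_up_k).view(seq_len, n_heads, d_head)
    v = (kv_latent_cache @ W_up_v).view(seq_len, n_heads, d_head)

    full_cache_floats = 2 * n_heads * d_head
    latent_cache_floats = d_latent
    print(f"per-token cache: {full_cache_floats} -> {latent_cache_floats} floats "
          f"({100 * (1 - latent_cache_floats / full_cache_floats):.1f}% smaller)")

In this toy configuration the per-token cache shrinks from 2 * n_heads * d_head = 2048 floats to d_latent = 128 floats; the 93.3% reduction reported above comes from the paper's actual architecture choices, which differ from these placeholder numbers.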