DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence

Qihao Zhu*, Daya Guo*, Zhihong Shao*, Dejian Yang*, Peiyi Wang, Runxin Xu, Y. Wu, Yukun Li, Huazuo Gao, Shirong Ma, Wangding Zeng, Xiao Bi, Zihui Gu, Hanwei Xu, Damai Dai, Kai Dong, Liyue Zhang, Yishi Piao, Zhibin Gou, Zhenda Xie, Zhewen Hao, Bingxuan Wang, Junxiao Song, Deli Chen, Xin Xie, Kang Guan, Yuxiang You, Aixin Liu, Qiushi Du, Wenjun Gao, Xuan Lu, Qinyu Chen, Yaohui Wang, Chengqi Deng, Jiashi Li, Chenggang Zhao, Chong Ruan, Fuli Luo, Wenfeng Liang

DeepSeek-AI

https://github.com/deepseek-ai/DeepSeek-Coder-V2

Abstract

We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K. In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks.
[Figure: Accuracy (%) of DeepSeek-Coder-V2 and GPT-4-Turbo on HumanEval, MBPP+, MATH, and GSM8K (left) and on Aider, LiveCodeBench, and SWE-Bench (right).]
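Because the DeepSeek-Coder-V2 checkpoints are released openly, a reader can reproduce completions locally. Below is a minimal sketch of running the model for code generation with Hugging Face transformers; the model ID "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct", the prompt, and the generation settings are illustrative assumptions, not details specified in this report.

    # Minimal sketch (assumed setup): code completion with an open
    # DeepSeek-Coder-V2 checkpoint via Hugging Face transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed Hugging Face model ID; any released DeepSeek-Coder-V2
    # checkpoint could be substituted here.
    model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # half precision to fit the MoE weights
        device_map="auto",           # spread layers across available GPUs
        trust_remote_code=True,
    )

    prompt = "# Write a function that checks whether a number is prime.\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same interface applies across the released model sizes; only the memory footprint changes with the number of activated and total parameters.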