DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

Zhihong Shao1,2, Peiyi Wang1,3, Qihao Zhu1,3, Runxin Xu1, Junxiao Song1, Xiao Bi1, Haowei Zhang1, Mingchuan Zhang1, Y.K. Li1, Y. Wu1, Daya Guo1
1 DeepSeek-AI, 2 Tsinghua University, 3 Peking University
{zhihongshao, wangpeiyi, zhuqh}@deepseek.com
https://github.com/deepseek-ai/DeepSeek-Math

Abstract

Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors. First, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline. Second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO) that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO.
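As context for the GRPO claim above: GRPO drops PPO's learned value function and instead baselines each sampled solution against the other solutions drawn for the same question, which is where the memory saving comes from. The sketch below illustrates only that group-relative advantage step, assuming a reward tensor of shape (num_questions, group_size); the function name and shapes are illustrative assumptions, not the authors' code.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Baseline each sampled solution's reward against its own group.

    rewards: shape (num_questions, group_size) -- for every question, the
    old policy samples group_size candidate solutions, each scored by a
    reward model. Normalizing within the group replaces PPO's learned
    value-function baseline, so no critic network is kept in memory.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Toy example: 2 questions, 4 sampled solutions each.
rewards = torch.tensor([[0.1, 0.9, 0.4, 0.6],
                        [0.0, 0.0, 1.0, 0.5]])
print(group_relative_advantages(rewards))
```

These per-sample advantages then weight a PPO-style clipped surrogate objective, which the paper formulates in full later on.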
Figure 1 | Top-1 accuracy of open-source models on the competition-level MATH benchmark (Hendrycks et al., 2021) without the use of external toolkits and voting techniques.
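The caption above stresses accuracy without voting techniques; the 60.9% figure quoted in the abstract, by contrast, uses self-consistency, i.e., majority voting over sampled solutions. A minimal sketch of that decoding step follows; the function name and answer strings are hypothetical.

```python
from collections import Counter

def self_consistency(final_answers: list[str]) -> str:
    """Majority vote over final answers extracted from sampled solutions.

    "Self-consistency over 64 samples" means: sample 64 chain-of-thought
    solutions per problem at nonzero temperature, extract each final
    answer, and keep the most frequent one.
    """
    return Counter(final_answers).most_common(1)[0][0]

# Toy example: 5 sampled answers to one problem; the modal answer wins.
print(self_consistency(["42", "41", "42", "42", "13"]))  # -> 42
```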
Core contributors. Work done during internship at DeepSeek-AI.

arXiv:2402.03300v3 [cs.CL] 27 Apr 2024

1. Introduction

Large language models (LLMs) have revolutionized the approach to mathematical reasoning in artificial intelligence, spurring significant advancements in both the quantitative reasoning benchmark (Hendrycks et al., 2021