A Technical Primer on DeepSeek

Table of Contents

Overview
First Take Summary
Claim
DeepSeek Models
    Architecture
DeepSeek-R1 Reinforcement Learning with Human Feedback
    Pipeline
    Reinforcement Learning
Distillation and Smaller Models
Computation
    Cost
    Scheduling
    Combination of Efforts
Performance
Assessment of Technical Claims
    Training Costs
    Benchmarks
Allegations of Data Theft and the Bigger Picture
Conclusion
Authors

Overview
DeepSeek is a China-based AI startup that has led a well-funded effort to develop advanced large language models (LLMs) using a large team (100+) of experienced developers. Public interest stems from its newest models being released for free with what the company claims is performance comparable to OpenAI, Anthropic, and Meta LLMs at a fraction of the price and training time. “DeepSeek” is conflated with multiple algorithms of the same namesake, but it is the DeepSeek-R1 LLM, a 671B-parameter model, that is the focus of media attention. It has been trained with a multi-stage pipeline of Reinforcement Learning (RL), Supervised Fine-Tuning (SFT), and possibly distillation methods to learn from a larger teacher model.
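If distillation was part of that pipeline, the mechanics would presumably resemble the standard knowledge-distillation recipe from the published literature, in which a smaller student model is trained against the temperature-softened output distribution of a larger teacher. The sketch below is a minimal, hypothetical illustration of that loss in PyTorch; it is not DeepSeek’s code, and the names and hyperparameters in it are assumptions for illustration only.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        """Classic knowledge-distillation loss (hypothetical sketch, not DeepSeek's code).

        student_logits, teacher_logits: (batch, vocab) raw model outputs
        labels: (batch,) ground-truth token ids
        T: temperature used to soften both distributions
        alpha: weight on the soft (teacher) term vs. the hard (label) term
        """
        # Soft term: KL divergence between the temperature-softened student and
        # teacher distributions, rescaled by T^2 to keep gradient magnitudes comparable.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard term: ordinary cross-entropy against the ground-truth tokens.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

For what it is worth, DeepSeek’s own account of its smaller “distilled” models describes supervised fine-tuning on R1-generated outputs rather than logit matching, so this block should be read as background on the general technique rather than as their method.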
The cost to train DeepSeek is publicized as $6 million, a figure derived from the older DeepSeek-V3 base model. That cost is not easy to verify, and, at face value, it is likely a snapshot of a single, pristine training run. Their paper makes this explicit, but it has been overlooked in reactions that fail to account for significant experimentation, prior development, and infrastructure costs.
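As a sanity check on that headline number: the DeepSeek-V3 technical report attributes it to roughly 2.788 million H800 GPU-hours priced at an assumed rental rate of $2 per GPU-hour. The arithmetic is straightforward; the snippet below is purely illustrative, with the inputs taken from the report and the code our own.

    # Back-of-the-envelope check of the publicized training cost,
    # using the inputs stated in the DeepSeek-V3 technical report.
    gpu_hours = 2_788_000      # reported H800 GPU-hours for the final training run
    rate_usd_per_hour = 2.00   # assumed GPU rental rate stated in the report
    cost = gpu_hours * rate_usd_per_hour
    print(f"${cost:,.0f}")     # ~$5,576,000, commonly rounded to "$6 million"

By construction, this figure covers only the final run; it excludes the experimentation, prior development, and infrastructure costs noted above.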
Their training process applies a variety of artificial intelligence (AI), optimization, and hardware innovations derived from non-DeepSeek published research to train an LLM with less computational infrastr