Quantitative Analysis and Best Practices for Distributed Training of Large Language Models: A GPT-175B Case Study

IN-DEPTH ANALYSIS OF THE PERFORMANCE FOR GPT-3
颜子杰 (NVIDIA), LLM Training Techs

LARGER MODEL IS THE TREND

CHALLENGES FOR TRAINING LARGE MODEL: High Compute Costs
Lower bound on the compute cost of each training iteration (refer to https://arxiv.org/abs/2104.04473):

    F = 96 * B * S * l * h^2 * (1 + S/(6*h) + V/(16*l*h))

where B is the batch size, S the sequence length, l the number of transformer layers, h the hidden size, and V the vocabulary size.

For GPT-175B trained on 1.5T tokens this totals about 2150 ZettaFLOPs (1 ZettaFLOP = 1000 ExaFLOPs). On 128 DGX A100 systems (1024 GPUs), training takes roughly 170 days at about 50% computing efficiency.
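To make the arithmetic concrete, here is a minimal Python sketch of the lower-bound formula above, applied to GPT-175B. The architecture numbers (96 layers, hidden size 12288, sequence length 2048) are the published GPT-3 values; the padded vocabulary size of 51200 is an assumption following Megatron-LM conventions.

```python
# Sketch: per-iteration FLOPs lower bound from https://arxiv.org/abs/2104.04473,
# applied to GPT-175B. Vocabulary size 51200 is an assumed padded value.

def iteration_flops(B, S, l, h, V):
    """Lower bound on FLOPs for one forward+backward training iteration."""
    return 96 * B * S * l * h**2 * (1 + S / (6 * h) + V / (16 * l * h))

l, h, V, S = 96, 12288, 51200, 2048   # GPT-175B architecture
tokens_total = 1.5e12                 # training tokens

flops_per_token = iteration_flops(B=1, S=S, l=l, h=h, V=V) / S
total_flops = flops_per_token * tokens_total
print(f"total training FLOPs ~= {total_flops / 1e21:.0f} ZettaFLOPs")  # ~2150

# Time on 128 DGX A100 (1024 GPUs), BF16 peak 312 TFLOPS per GPU
sustained = 1024 * 312e12 * 0.5       # at exactly 50% efficiency
days = total_flops / sustained / 86400
print(f"days ~= {days:.0f}")          # ~156; the deck's ~170 days implies ~46%
```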

CHALLENGES FOR TRAINING LARGE MODEL: High Memory Costs
Memory costs (mixed precision, native implementation). Model states, 3.5 TB in total:
- Parameters: 350 GB (175B * 2 bytes)
- Gradients: 350 GB (175B * 2 bytes)
- Optimizer states: 2800 GB (175B * 16 bytes: FP32 master weights, FP32 gradients, momentum, and variance)
- Activations: ?

The model cannot fit in a single GPU, or even in a single GPU server (3.5 TB of model states vs. 80 GB per A100). Model parallelism is a MUST across multiple nodes.
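A minimal sketch of that breakdown, assuming Adam-style mixed-precision training in which the optimizer keeps four FP32 quantities per parameter (the 16 bytes implied by the slide's 2800 GB figure):

```python
# Sketch: mixed-precision model-state memory for GPT-175B, following the
# slide's breakdown (native implementation, no sharding). The 16 bytes of
# optimizer state per parameter assumes Adam with FP32 master weights,
# FP32 gradients, momentum, and variance (4 bytes each).

n_params = 175e9

param_bytes = n_params * 2    # FP16/BF16 weights
grad_bytes  = n_params * 2    # FP16/BF16 gradients
optim_bytes = n_params * 16   # FP32 optimizer state (4 tensors * 4 bytes)

total = param_bytes + grad_bytes + optim_bytes
print(f"parameters: {param_bytes / 1e9:.0f} GB")  # 350 GB
print(f"gradients : {grad_bytes / 1e9:.0f} GB")   # 350 GB
print(f"optimizer : {optim_bytes / 1e9:.0f} GB")  # 2800 GB
print(f"total     : {total / 1e12:.1f} TB")       # 3.5 TB
print(f"A100-80GB GPUs needed just for model states: {total / 80e9:.0f}")  # ~44
```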

CHALLENGES FOR TRAINING LARGE MODEL: Summary
- The model cannot fit in a single GPU or even a single GPU server (3.5 TB vs. A100-80G).
- Extremely large computing power: about 160K A100*days of computing (not considering efficiency).

What we need:
- An efficient framework with model parallelism
- Careful co-design of software and system

NEMO AND MEGATRON
NeMo and Megatron-LM are NVIDIA's frameworks for efficiently training the world's largest transformer-based models:

- Train transformer models with billions of parameters
- Achieve high utilization and scaling to thousands of GPUs

OVERVIEW OF LARGE TRANSFORMER TRAINING TECHNIQUES
Parallelisms:
- Pipeline Parallelism
- Tensor Parallelism
- Sequence Parallelism
- Expert Parallelism
Memory optimizations:

- Distributed optimizer (DeepSpeed ZeRO-1)
- Checkpoint activations
- Selective activation checkpointing
Others:
- FP16/BF16 training, optimized kernels, etc.
- Communication overlapping for PP and TP
(In the original slides, blue marks Megatron v2 features and green marks new Megatron v3 features.)
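Before the GPT-3 case study, here is a minimal sketch of how the techniques listed above combine to shrink per-GPU model-state memory. The 1024-GPU layout (tensor parallel 8, pipeline parallel 16, data parallel 8) is an illustrative assumption, not a configuration prescribed by the slides.

```python
# Sketch: per-GPU model-state memory for GPT-175B once the techniques above
# are applied. TP=8, PP=16, DP=8 (8 * 16 * 8 = 1024 GPUs) is an assumed
# layout. Activations, temporary buffers, and fragmentation are ignored.

n_params = 175e9
TP, PP, DP = 8, 16, 8

# Tensor + pipeline parallelism shard the weights (and their gradients).
params_per_gpu = n_params / (TP * PP)

weights_gb = params_per_gpu * 2 / 1e9   # FP16/BF16 weights
grads_gb   = params_per_gpu * 2 / 1e9   # FP16/BF16 gradients

# Distributed optimizer (ZeRO-1 style): each data-parallel rank keeps only
# 1/DP of the 16-byte-per-parameter Adam state for its model shard.
optim_gb = params_per_gpu * 16 / DP / 1e9

print(f"weights   : {weights_gb:.1f} GB/GPU")  # ~2.7 GB
print(f"gradients : {grads_gb:.1f} GB/GPU")    # ~2.7 GB
print(f"optimizer : {optim_gb:.1f} GB/GPU")    # ~2.7 GB
# versus 3.5 TB of model states on a single unsharded GPU
```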
THE DISTRIBUTED TRAINING OF THE GPT-3 MODEL
Model Parallelism