DataFunSummit #2023

The Fourth Realm of Large Model Distributed Training (大模型分布式训练的第四种境界)
段石石, Technical Expert, 壁仞科技 (Biren Technology)

CONTENT
01 Historical Background
02 Distributed Training Challenges
03 Distributed Training Techniques
04 Future Challenges

01 Historical Background

Large Language Models; LLM Infra

02 Distributed Training Challenges

LLMs need huge FLOPs.
Transformer FLOPs equation (https:/ … source link truncated): FLOPs ≈ 6ND, where N is the number of model parameters and D is the number of tokens the model is trained on.

LLM          #parameters    #tokens       FLOPs
GPT-3        1.75×10^11     3×10^11       3.15×10^23
LLaMA-65B    6.5×10^10      1.4×10^12     5.46×10^23
PaLM         5.4×10^11      7.8×10^11     2.53×10^24
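The FLOPs column is just the 6ND rule applied row by row; a quick Python check, using the parameter and token counts from the table:

```python
# Training compute from the Transformer FLOPs rule: FLOPs ~= 6 * N * D,
# with N = parameter count and D = number of training tokens.
models = {
    "GPT-3":     (1.75e11, 3.0e11),
    "LLaMA-65B": (6.5e10,  1.4e12),
    "PaLM":      (5.4e11,  7.8e11),
}

for name, (n_params, n_tokens) in models.items():
    print(f"{name:10s} 6*N*D = {6 * n_params * n_tokens:.2e} FLOPs")

# GPT-3: 3.15e+23, LLaMA-65B: 5.46e+23, PaLM: 2.53e+24 -- matches the table.
```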
[Figure: Model Parameters vs. Memory. The memory needed just to load parameters grows from 358 MB to 5 GB to 618 GB to 2 TB across model scales; the gap between model size and single-device memory has become huge.]

Distributed ML System
A single device cannot close that gap in compute either: for PaLM, 2.53×10^24 FLOPs / (312×10^12 FLOPS) / 86400 s/day ≈ 254 years on one GPU.
[Diagram: a distributed ML system built from a Network, a Data Loader, and a Cluster.]
Estimated training time: T ≈ 6ND / (peak FLOPS × #GPU × R), where R is the achieved utilization ratio. On 2048 GPUs this works out to about 46/R days (sketched below).
Training loop on the cluster: Forward → Backward → Optimizer, i.e. Weight = F(Weight, Grad).
Key cluster design dimensions: topology, compute hardware, transmission medium.
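Both headline numbers on this slide fall out of the same time formula; a minimal sketch, assuming the 312 TFLOPS per-GPU peak quoted above (A100-class):

```python
SECONDS_PER_DAY = 86400

def train_days(total_flops, peak_flops, n_gpus, r):
    """Wall-clock training time in days; r is the achieved utilization (MFU)."""
    return total_flops / (peak_flops * n_gpus * r) / SECONDS_PER_DAY

palm_flops = 2.53e24   # 6*N*D for PaLM, from the table above
peak = 312e12          # per-GPU peak, in FLOPS

# Single GPU at perfect utilization: ~9.4e4 days, i.e. ~257 years
# (the slide rounds this to ~254 years).
print(train_days(palm_flops, peak, n_gpus=1, r=1.0) / 365)

# 2048 GPUs: ~45.8/R days, i.e. the slide's "46/R days"; ~92 days at R = 0.5.
print(train_days(palm_flops, peak, n_gpus=2048, r=1.0))
print(train_days(palm_flops, peak, n_gpus=2048, r=0.5))
```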
03 Distributed Training Techniques

Brief History
2012-2016: DistBelief; Parameter Server (limu); Bosen; GeePS.
2018: TF allreduce (Baidu); Horovod; DDP; compute graph and placement; Transformer and its variants; pipeline parallelism; large-scale model parallelism.
2020: large language models with few-shot learning (FSL); PaLM: Pathways (Google); CLIP (OpenAI), connecting images and text; Ilya Sutskever.

ZeRO-DP: the Data Parallelism Family
[1] S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He, "ZeRO: Memory Optimizations Toward Training Trillion Parameter Models." arXiv, May 13, 2020. doi:10.48550/arXiv.1910.02054.
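The core of the ZeRO-DP family is partitioning the three kinds of model state (optimizer states, gradients, parameters) across data-parallel ranks. A minimal sketch of the cited paper's per-GPU memory accounting for mixed-precision Adam; the 7.5B-parameter / 64-GPU numbers below are the paper's own running example:

```python
# Per-GPU memory for model states under ZeRO-DP, following the ZeRO paper's
# mixed-precision Adam accounting: 2 bytes/param FP16 weights + 2 bytes/param
# FP16 gradients + K = 12 bytes/param optimizer states (FP32 weights,
# momentum, variance).
def zero_state_bytes(n_params, n_gpus, stage, k=12):
    params, grads, optim = 2 * n_params, 2 * n_params, k * n_params
    if stage >= 1:            # ZeRO-1 (P_os): shard optimizer states
        optim /= n_gpus
    if stage >= 2:            # ZeRO-2 (P_os+g): also shard gradients
        grads /= n_gpus
    if stage >= 3:            # ZeRO-3 (P_os+g+p): also shard parameters
        params /= n_gpus
    return params + grads + optim

GB = 1e9  # decimal GB, matching the paper's figures
for stage in range(4):
    print(f"ZeRO stage {stage}: "
          f"{zero_state_bytes(7.5e9, n_gpus=64, stage=stage) / GB:.1f} GB/GPU")
# stage 0: 120.0, stage 1: 31.4, stage 2: 16.6, stage 3: 1.9  (paper, Fig. 1)
```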
Recompute (see the sketch below)
[1] T. Chen, B. Xu, C. Zhang, and C. Guestrin, "Training Deep Nets with Sublinear Memory Cost." arXiv, Apr. 22, 2016. Available: http://arxiv.org/abs/1604.06174.

Offload (Memory/NVMe) (see the configuration sketch below)
[1] S. Rajbhandari, …
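For the Recompute slide: checkpointing drops activations inside a segment during the forward pass and recomputes them during backward, which is how Chen et al. reach sublinear (O(√n)) activation memory at the cost of roughly one extra forward pass. A minimal sketch with PyTorch's built-in utility, assuming a recent PyTorch; the toy model is illustrative:

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint_sequential

# A toy deep stack; without checkpointing, every intermediate activation
# stays resident until the backward pass.
model = nn.Sequential(
    *[nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(24)]
)
x = torch.randn(32, 1024, requires_grad=True)

# Checkpoint in 4 segments: only segment-boundary activations are kept;
# each segment's internals are recomputed on the fly during backward.
out = checkpoint_sequential(model, 4, x, use_reentrant=False)
out.sum().backward()
```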
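For the Offload slide: offload pushes partitioned state one level further down the memory hierarchy, to host RAM or NVMe, as in the ZeRO-Offload/ZeRO-Infinity line of work behind the truncated reference above. A hedged sketch of what this looks like as a DeepSpeed ZeRO-3 configuration; the keys follow DeepSpeed's documented config schema as I understand it, and the path and batch size are placeholders:

```python
# Illustrative DeepSpeed ZeRO-3 config with optimizer and parameter state
# offloaded to NVMe; paths, batch size, and precision choice are placeholders.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "nvme", "nvme_path": "/local_nvme"},
        "offload_param":     {"device": "nvme", "nvme_path": "/local_nvme"},
    },
}

# Typical wiring (model defined elsewhere):
# engine, opt, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config)
```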