Whale: A Distributed Deep Learning Framework Unifying Multiple Parallelization Strategies


Alibaba Cloud | Worldwide Cloud Services Partner

Whale: A Unified Distributed Training Framework
Ang Wang, PAI, Alibaba Cloud
15/12/2020
WWW.ALIBABACLOUD.COM

#page# Motivation: Models are getting larger

[Figure: WebText validation perplexity vs. epoch for models with 345M, 775M, 2.5B, and 8.3B parameters; GPU memory (GB) of P4, P100, V100, and A100.]

Larger models lead to better results, with lower validation perplexities.
Models are getting larger and more complex; model size grows far beyond the pace of hardware upgrades.

#page# Motivation: Data parallelism becomes less optimal for lots of distributed workloads

Data parallelism (DP) is widely used in distributed training, as it is simple and easy to implement.
DP is not always optimal for every distributed training workload.
It is necessary to find an efficient parallel strategy that can make full use of the resources and speed up the training.

[Figure: GPU0 and GPU1 each hold a full model replica and synchronize gradients via AllReduce, distributing the training workload with data parallelism.]

#page# Motivation: Data parallelism becomes less optimal for lots of distributed workloads

E.g. BertLarge: It is difficult to increase the batch size on a single GPU device due to the limitation of the GPU device memory capacity. It is hard to overlap computation with communication, which leads to poor scalability.
E.g. VGG16: Some layers contribute most of the parameters but account for a small proportion of the computation, such as the FC layers in VGG16. The large weight size and long communication time lead to poor scalability.
E.g. T5 / GPT-3: The model size is far larger than the memory size of a single GPU device. The model cannot be trained at all unless model parallelism is adopted.

#page# Distributed Model Training Approach

[Figure: four strategies laid out across GPU0 and GPU1: Data Parallelism (replicas with gradient AllReduce), Pipeline Parallelism, Operator Sharding, and Hybrid Parallelism.]

#page# Whale A
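The data-parallel scheme above (each GPU computes gradients on its own data shard, then a gradient AllReduce keeps the replicas in sync) can be sketched in plain Python. This is an illustrative stand-in, not Whale's API; the model, function names, and shard layout are all made up for the example.

```python
# Sketch of synchronous data parallelism with a gradient AllReduce.
# A toy linear model y = w * x with squared loss stands in for a real
# network; each element of `shards` plays the role of one GPU's batch.

def local_gradient(w, shard):
    # Gradient of the mean squared error over this replica's shard.
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def allreduce_mean(grads):
    # AllReduce (mean): every replica receives the same averaged gradient.
    avg = sum(grads) / len(grads)
    return [avg] * len(grads)

def train_step(w, shards, lr=0.05):
    grads = [local_gradient(w, s) for s in shards]  # one gradient per "GPU"
    synced = allreduce_mean(grads)                  # exchange via AllReduce
    # All replicas apply the identical update, so weights stay in sync.
    return w - lr * synced[0]
```

With equal-sized shards, averaging the per-replica gradients reproduces the full-batch gradient exactly, which is why synchronous DP matches single-device training step for step.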
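Two of the alternative strategies on this slide, pipeline parallelism and operator sharding, reduce to simple transformations of a forward pass. The sketch below simulates both in plain Python (lists stand in for tensors, functions for per-device stages); the names are illustrative and do not reflect Whale's interface.

```python
# Pipeline parallelism: the model is cut into stages (contiguous layer
# blocks), one per device, and the batch is split into micro-batches.
# This simulation runs sequentially; on real hardware, stage i can start
# micro-batch k+1 while stage i+1 is still processing micro-batch k.
def pipeline_forward(stages, micro_batches):
    outputs = []
    for mb in micro_batches:
        h = mb
        for stage in stages:  # each stage lives on its own device
            h = stage(h)
        outputs.append(h)
    return outputs

# Operator sharding: a single large operator is split across devices.
def matvec(rows, x):
    # Dense matrix-vector product, the un-sharded reference.
    return [sum(w * xi for w, xi in zip(row, x)) for row in rows]

def sharded_matvec(row_shards, x):
    # The weight matrix is split by rows across devices; each device
    # computes its slice of the output, and the slices are gathered.
    out = []
    for shard in row_shards:
        out.extend(matvec(shard, x))
    return out
```

Hybrid parallelism composes these building blocks, for example data parallelism across pipeline replicas, with some operators sharded inside a stage.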
