Unlocking LLM Performance with eBPF: Optimizing Training and Inference Pipelines
Yang Xiang, Yunshan Networks

Outline
1. Background: Challenges in Training and Inference Efficiency
2. Status Quo: Issues with Traditional Solutions and Tools
3. Approach: Building Zero-Code Observability with eBPF
4. Practical Case: Full-Stack Profiling and D-Tracing in PyTorch

LLM Training: High Costs and Low Efficiency

          GPT-4       Llama-3.1
Size      1.8T        405B
GPUs      25K A100    16K H100
Days      90-100      54
MFU       32%-36%     38%-43%

Sources: Everything We Know About GPT-4 (Klu.ai); GPT-4: All Details Leaked; The Llama 3 Herd of Models.
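MFU (Model FLOPs Utilization) is the fraction of the cluster's peak FLOP/s that the training run actually sustains. A minimal sketch of how such a figure is derived, using the standard ~6 x params FLOPs-per-token approximation for transformer training; the throughput and peak-FLOPs numbers below are illustrative assumptions, not measured values:

    # Estimate Model FLOPs Utilization (MFU) for a training run.
    # All concrete figures below are illustrative assumptions.

    def training_mfu(params, tokens_per_sec, num_gpus, peak_flops_per_gpu):
        """Achieved FLOP/s over hardware peak FLOP/s, using the common
        ~6 * params FLOPs-per-token approximation (forward + backward)."""
        achieved = 6 * params * tokens_per_sec
        return achieved / (num_gpus * peak_flops_per_gpu)

    # A Llama-3.1-405B-style setup: 16,384 H100s, ~989 TFLOP/s dense BF16 peak.
    mfu = training_mfu(params=405e9,
                       tokens_per_sec=2.7e6,   # assumed cluster throughput
                       num_gpus=16384,
                       peak_flops_per_gpu=989e12)
    print(f"MFU: {mfu:.0%}")  # ~40%, in line with the 38%-43% in the table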

Training duration: months
GPU count: 10K
GPU MFU: 40%
Annual GPU failure rate: 6%-11%
Model size: trillions of params

148 / 54 * 365 / 16384 ≈ 6%
(148 + 72 + 19 + 17 + 6 + 6) / 54 * 365 / 16384 ≈ 11%
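The two percentages annualize the interruption counts reported for the 54-day Llama-3.1 run on 16,384 GPUs. The same arithmetic in code:

    # Annualized per-GPU failure rate, reproducing the slide's arithmetic.
    GPUS, DAYS = 16384, 54
    faulty_gpus = 148                    # interruptions attributed to faulty GPUs
    other_hardware = [72, 19, 17, 6, 6]  # further hardware categories (e.g. HBM3)

    def annual_rate(failures):
        return failures / DAYS * 365 / GPUS

    print(f"{annual_rate(faulty_gpus):.0%}")                        # ~6%
    print(f"{annual_rate(faulty_gpus + sum(other_hardware)):.0%}")  # ~11%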

Causes of Training Inefficiency Beyond Failures
- GPU kernels
- Memory copy
- Network transmission

Yanjie Gao (Microsoft Research) et al., ACM ICSE 2024, An Empirical Study on Low GPU Utilization of Deep Learning Jobs.
Yanjie Gao (Microsoft Research) et al., ACM ICSE 2023, An Empirical Study on Quality Issues of Deep Learning Platform.

How can you determine if your training job has these inefficiencies?
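One conventional answer, before reaching for the eBPF approach this talk builds toward, is instrumenting the training loop with PyTorch's built-in profiler. A minimal sketch; train_step and batch are hypothetical placeholders for your own loop:

    # Profile one training iteration; kernel time, memcpy, and communication
    # ops (e.g. aten::copy_, NCCL kernels) show up in the summary table.
    import torch
    from torch.profiler import profile, ProfilerActivity

    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        train_step(batch)  # hypothetical: one iteration of your training loop

    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=15))

Note the catch: this requires modifying and rerunning the job, which is exactly what a zero-code approach avoids.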

LLM Inference: High Costs and Latency

LLM Memory Requirements

          Llama 8B    Llama 70B    Llama 405B
FP32      36GB        267GB        1.48TB
FP16      20GB        135GB        758GB
INT8      12GB        70GB         382GB
INT4      8GB         37GB         193GB

Source: LLM Inference Performance Engineering: Best Practices.

80GB: 1 GPU
640GB: 1 node x 8 GPUs
1.28TB: 2 nodes x 8 GPUs

Time To First Token (TTFT)
Time Per Output Token (TPOT)
Model Bandwidth Utilization (MBU)
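A back-of-the-envelope sketch of where these numbers come from: weight memory scales with parameter count times bytes per parameter, and MBU (per the cited Best Practices post) is the memory bandwidth a decode step actually achieves over the hardware peak. The bandwidth, KV-cache, and TPOT figures below are illustrative assumptions:

    # Weight memory by precision (weights only; published tables include some
    # runtime overhead, so they differ slightly from this first-order estimate).
    BYTES_PER_PARAM = {"FP32": 4, "FP16": 2, "INT8": 1, "INT4": 0.5}

    def weight_gb(params, dtype):
        return params * BYTES_PER_PARAM[dtype] / 1e9

    for params, name in [(8e9, "8B"), (70e9, "70B"), (405e9, "405B")]:
        print(name, {d: f"{weight_gb(params, d):.0f}GB" for d in BYTES_PER_PARAM})

    # MBU: bytes moved per decode step (weights + KV cache) / TPOT, over peak.
    def mbu(model_bytes, kv_cache_bytes, tpot_s, peak_bw_bytes):
        return ((model_bytes + kv_cache_bytes) / tpot_s) / peak_bw_bytes

    # e.g. a 70B model at FP16 on one 8x H100 node (~3.35 TB/s HBM per GPU),
    # with an assumed 10GB KV cache and 14ms per output token:
    print(f"MBU: {mbu(140e9, 10e9, tpot_s=0.014, peak_bw_bytes=8 * 3.35e12):.0%}")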

Fewer GPUs? Each GPU must then hold a larger share of the model parameters.
More GPUs? Collective communication becomes more complex, and memory fragmentation increases.

No silver bullet: observability is the prerequisite for optimization.

Challenges in Troubleshooting Memory Consumption during
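As a flavor of the zero-code eBPF observability the talk builds toward, here is a minimal sketch (not the speaker's actual tooling) using BCC to hook cudaMalloc in the CUDA runtime and total the device memory each process requests, without touching the training job. The libcudart path, and running as root, are assumptions about your system:

    # Minimal BCC sketch: uprobe on cudaMalloc, totaling requested bytes per PID.
    # Illustrative only; adjust the libcudart path for your installation.
    from bcc import BPF

    PROG = r"""
    #include <uapi/linux/ptrace.h>

    BPF_HASH(bytes_by_pid, u32, u64);

    // cudaMalloc(void **devPtr, size_t size): accumulate requested bytes.
    int on_cuda_malloc(struct pt_regs *ctx, void **dev_ptr, size_t size) {
        u32 pid = bpf_get_current_pid_tgid() >> 32;
        bytes_by_pid.increment(pid, size);
        return 0;
    }
    """

    b = BPF(text=PROG)
    b.attach_uprobe(name="/usr/local/cuda/lib64/libcudart.so",  # assumed path
                    sym="cudaMalloc", fn_name="on_cuda_malloc")

    import time
    while True:
        time.sleep(5)
        for pid, total in b["bytes_by_pid"].items():
            print(f"pid {pid.value}: {total.value / 2**20:.1f} MiB via cudaMalloc")

Because PyTorch's caching allocator requests large blocks from cudaMalloc and reuses them, this traces allocator growth rather than per-tensor allocations, which is often the view you want when a job's device memory keeps climbing.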
