Unlocking LLM Performance with eBPF: Optimizing Training and Inference Pipelines
Yang Xiang, Yunshan Networks

Outline
1. Background: Challenges in Training and Inference Efficiency
2. Status Quo: Issues with Traditional Solutions and Tools
3. Approach: Building Zero-Code Observability with eBPF
4. Practical Case: Full-Stack Profiling and Distributed Tracing in PyTorch

LLM Training: High Costs and Low Efficiency

          GPT-4       Llama-3.1
Size      1.8T        405B
GPUs      25K A100    16K H100
Days      90-100      54
MFU       32%-36%     38%-43%

Sources: "Everything We Know About GPT-4" (Klu.ai); "GPT-4: All Details Leaked"; "The Llama 3 Herd of Models".
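MFU (Model FLOPs Utilization) is the fraction of the hardware's peak FLOPs that the training job actually sustains. A minimal sketch of the estimate, assuming the common ~6 FLOPs-per-parameter-per-token approximation for transformer training; the tokens-per-second figure below is a placeholder, and 989 TFLOPS is the dense BF16 peak of an H100-class GPU:

```python
# Rough MFU (Model FLOPs Utilization) estimate for a training run.
# Assumes ~6 FLOPs per parameter per trained token; peak_flops_per_gpu is the
# accelerator's dense BF16/FP16 peak. The throughput number is a placeholder.

def estimate_mfu(params: float, tokens_per_sec: float,
                 num_gpus: int, peak_flops_per_gpu: float) -> float:
    achieved_flops = 6 * params * tokens_per_sec   # FLOPs/s actually sustained
    peak_flops = num_gpus * peak_flops_per_gpu     # FLOPs/s the cluster could deliver
    return achieved_flops / peak_flops

# Hypothetical aggregate throughput on a 16,384-GPU cluster (not a measured value):
print(f"MFU ≈ {estimate_mfu(405e9, 2.6e6, 16384, 989e12):.0%}")
```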
Training duration: months
GPU count: 10K
GPU MFU: 40%
Annual GPU failure rate: 6%-11%
Model size: trillions of params

Annualized GPU failure rate, from the hardware-interruption counts reported for the 54-day, 16,384-GPU Llama 3 run:
148 / 54 * 365 / 16384 ≈ 6% (faulty GPUs only)
(148 + 72 + 19 + 17 + 6 + 6) / 54 * 365 / 16384 ≈ 11% (including HBM and other GPU-related hardware failures)
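The same annualization written out as a short calculation; the counts, run length, and cluster size are the figures quoted above, while the grouping of the failure categories is an assumption:

```python
# Annualize a per-GPU failure probability from counts observed during a run:
# failures / run_days * 365 / num_gpus.
# 148 faulty GPUs and the additional counts come from the figures above.

def annualized_failure_rate(failures: int, run_days: float, num_gpus: int) -> float:
    failures_per_year = failures / run_days * 365
    return failures_per_year / num_gpus

gpu_only = annualized_failure_rate(148, 54, 16384)
gpu_and_memory = annualized_failure_rate(148 + 72 + 19 + 17 + 6 + 6, 54, 16384)
print(f"{gpu_only:.0%}, {gpu_and_memory:.0%}")   # ≈ 6%, ≈ 11%
```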
Causes of Training Inefficiency Beyond Failures
- GPU kernels
- Memory copies
- Network transmission

Yanjie Gao (Microsoft Research) et al., "An Empirical Study on Low GPU Utilization of Deep Learning Jobs", ACM ICSE 2024.
Yanjie Gao (Microsoft Research) et al., "An Empirical Study on Quality Issues of Deep Learning Platform", ACM ICSE 2023.

How can you determine if your training job has these inefficiencies?
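One conventional answer, in contrast to the zero-code eBPF approach this talk builds toward, is PyTorch's built-in profiler, which attributes step time to CUDA kernels, memory copies, and collective communication. A minimal sketch, assuming a CUDA device is available; the model and training loop are placeholders:

```python
# torch.profiler sketch: break a training step down into CUDA kernels,
# memcpy operations, and communication. Model, data, and loop are placeholders.
import torch
from torch.profiler import (profile, schedule, ProfilerActivity,
                            tensorboard_trace_handler)

model = torch.nn.Linear(4096, 4096).cuda()            # placeholder model
optimizer = torch.optim.AdamW(model.parameters())
data = torch.randn(64, 4096, device="cuda")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    schedule=schedule(wait=1, warmup=1, active=3),
    on_trace_ready=tensorboard_trace_handler("./prof"),
    profile_memory=True,
) as prof:
    for step in range(5):
        loss = model(data).square().mean()             # placeholder forward pass
        loss.backward()
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
        prof.step()                                    # advance the profiler schedule

# Top operators by GPU time; Memcpy/Memset and nccl:* entries indicate
# copy-bound or communication-bound steps.
print(prof.key_averages().table(sort_by="self_cuda_time_total", row_limit=20))
```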
LLM Inference: High Costs and Latency

LLM Memory Requirements
        Llama 8B    Llama 70B    Llama 405B
FP32    36 GB       267 GB       1.48 TB
FP16    20 GB       135 GB       758 GB
INT8    12 GB       70 GB        382 GB
INT4    8 GB        37 GB        193 GB

Source: "LLM Inference Performance Engineering: Best Practices".

80 GB: 1 GPU
640 GB: 1 node x 8 GPUs
1.28 TB: 2 nodes x 8 GPUs

Key latency and efficiency metrics: Time To First Token (TTFT), Time Per Output Token (TPOT), Model Bandwidth Utilization (MBU).
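The table and the GPU counts both follow from simple arithmetic: weight memory is roughly parameter count times bytes per parameter plus runtime overhead, and MBU compares the memory bandwidth an inference step actually achieves against the hardware peak. A rough sketch in the spirit of "LLM Inference Performance Engineering: Best Practices"; the 20% overhead factor, the 80 GB per-GPU capacity, the 3.35 TB/s peak bandwidth, and the KV-cache and latency numbers are illustrative assumptions, not figures from the slides:

```python
# Back-of-the-envelope inference sizing and MBU.
# Overhead factor, GPU capacity/bandwidth, KV-cache size, and TPOT are assumptions.
import math

def weight_memory_gb(params: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Approximate weight memory in GB, with a fudge factor for runtime overhead."""
    return params * bytes_per_param * overhead / 1e9

def gpus_needed(total_gb: float, gb_per_gpu: float = 80.0) -> int:
    return math.ceil(total_gb / gb_per_gpu)

def mbu(model_bytes: float, kv_cache_bytes: float, tpot_s: float, peak_bw: float) -> float:
    """Model Bandwidth Utilization: bytes moved per token / TPOT, divided by peak bandwidth."""
    achieved_bw = (model_bytes + kv_cache_bytes) / tpot_s
    return achieved_bw / peak_bw

fp16_405b = weight_memory_gb(405e9, 2)   # ≈ 972 GB here; the slide's 758 GB uses different overhead/unit conventions
print(f"405B FP16 ≈ {fp16_405b:.0f} GB -> {gpus_needed(fp16_405b)} x 80 GB GPUs")

# Hypothetical single-GPU 8B case: FP16 weights, small KV cache, 15 ms/token, H100-class HBM.
print(f"MBU ≈ {mbu(8e9 * 2, 2e9, 0.015, 3.35e12):.0%}")
```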
Fewer GPUs? With fewer GPUs, each GPU needs to load more model parameters.
More GPUs? With more GPUs, collective communication becomes more complex, and memory fragmentation increases.

No silver bullet: observability is the prerequisite for optimization.

Challenges in Troubleshooting Memory Consumption during