Thermal Techniques for Data Center Compute Density
Tom Garvens, VP Hardware Solutions, Supermicro
Better Faster Greener 2024

Agenda
- GenAI LLM Era
- Data Center Power and Cooling Challenges
- Solutions and TCO
- Future Trends
GenAI LLM Era
The GenAI LLM era combines three ingredients: LLMs, petascale data sets, and massive GPU compute. Examples: ChatGPT, Gemini.
- LLM parameters (175B for GPT-3) are like adjustable dials in a complex machine. More adjustments mean more optimization; for LLMs, that means more nuanced text.
- GPT-3 was trained on 300 billion tokens (100 tokens is roughly 75 words).
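To make that scale concrete, here is a quick back-of-the-envelope conversion in Python using the slide's 100-tokens-to-75-words rule of thumb; the exact ratio varies by tokenizer, so treat this as illustrative only:

```python
# Rule of thumb from the slide: 100 tokens ~= 75 words.
WORDS_PER_TOKEN = 75 / 100  # varies by tokenizer; this is the deck's ratio

def tokens_to_words(tokens: float) -> float:
    """Approximate word count implied by a token count."""
    return tokens * WORDS_PER_TOKEN

# GPT-3's 300 billion training tokens correspond to roughly 225 billion words.
print(f"{tokens_to_words(300e9):.3g} words")  # -> 2.25e+11 words
```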
Tensor Processing
- Heavy linear algebra: matrix multiplication with bulk data transfers between GPUs.
- Partition size is defined by the GPU memory available in the coherent domain.
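To illustrate how such a matmul is partitioned, here is a minimal NumPy sketch of column-parallel matrix multiplication, with the per-GPU shards simulated on CPU. The device count, matrix shapes, and the concatenation standing in for an all-gather are illustrative assumptions, not details from the deck:

```python
import numpy as np

# Tensor (column) parallelism sketch: the weight matrix W is split
# column-wise across "devices"; each device computes a partial result,
# and an all-gather concatenates the pieces. Devices are simulated here
# with plain NumPy arrays on CPU.
rng = np.random.default_rng(0)
n_devices = 4                       # stands in for 4 GPUs in one coherent domain
x = rng.standard_normal((8, 512))   # activations, replicated on every device
W = rng.standard_normal((512, 2048))

# Shard: each device holds only 2048 / 4 = 512 columns of W, so
# per-device memory is what bounds the partition size.
W_shards = np.split(W, n_devices, axis=1)

# Each device multiplies the full input by its own shard independently.
partials = [x @ W_k for W_k in W_shards]

# "All-gather": concatenate the partial outputs along the column axis.
y = np.concatenate(partials, axis=1)

assert np.allclose(y, x @ W)  # matches the unsharded matmul exactly
```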
4、allelism&Model PipelinesGPU Server 1GPU Server 2Models and Data Sets must be subModels and Data Sets must be sub-divided to fit into divided to fit into GPU memory for performance(time)optimizationGPU memory for performance(time)optimizationGPTGPT-3 3:175 billion parameters and 300 billion tokens.On
- On 1024 A100 GPUs (80 GB HBM each) sustaining 140 TFLOPS per GPU, the time required to train is about 34 days.
- Tensor parallelism reduces the required pipeline depth and enables matrix operations to span GPUs.
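The 34-day figure is consistent with the common estimate of roughly 8 × parameters × tokens total training FLOPs when activation recomputation is used; that factor is my assumption about how the slide's number was derived, not something the deck states:

```python
# Reproducing the slide's ~34-day estimate. total_flops ~= 8 * N * D
# (the factor 8 rather than 6 assumes activation recomputation).
params = 175e9          # GPT-3 parameters (from the slide)
tokens = 300e9          # training tokens (from the slide)
gpus = 1024             # A100 GPUs (from the slide)
flops_per_gpu = 140e12  # sustained FLOPS per GPU, i.e. 140 TFLOPS

total_flops = 8 * params * tokens
seconds = total_flops / (gpus * flops_per_gpu)
print(f"{seconds / 86400:.1f} days")  # -> ~33.9 days
```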
Data Center Challenges: Rise of the GPU
- xPU thermal design points are increasing: CPUs are getting hotter (500 W+ in H2 2024), and GPUs and AI accelerators are significantly hotter and more power hungry (1,000 W+ in H2 2024).
- AI GPU training servers consume 10 kW+ per server.
- Silicon maximum temperature specs are decreasing.
- Thermal density continues to compress at the silicon and system level.
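To put these numbers in cooling terms, here is a rough sketch of what a rack of such servers implies for heat removal. Only the 10 kW per-server figure comes from the slide; the servers-per-rack count and the coolant temperature rise are assumptions for illustration:

```python
# Illustrative rack-level heat-removal arithmetic.
server_power_w = 10_000    # 10 kW+ per AI GPU training server (from the slide)
servers_per_rack = 8       # assumed rack density, not from the deck
rack_power_w = server_power_w * servers_per_rack   # 80 kW per rack

# Water flow needed to carry that heat away: Q = m_dot * c_p * dT.
cp_water = 4186            # J/(kg*K), specific heat of water
delta_t = 10.0             # K, assumed coolant temperature rise
m_dot = rack_power_w / (cp_water * delta_t)        # kg/s; ~1 L/s per kg/s
print(f"{rack_power_w/1000:.0f} kW rack -> {m_dot:.2f} kg/s "
      f"(~{m_dot*60:.0f} L/min) of water at a {delta_t:.0f} K rise")
```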