Boundaryless Computing: Optimizing LLM Performance, Cost, and Efficiency in Multi-cloud Architecture

Who we are?
Kai Zhang, Senior Staff Engineer, Alibaba Cloud Intelligence
Jian Zhu, Senior Software Engineer, Red Hat

Agenda
- Challenges and solution of running LLMs across clouds/regions
- Accelerating LLMs from the data perspective: Fluid
- Managing multiple clusters in the K8s way: OCM (Open Cluster Management)
- Demo: deploy and scale an LLM inference service across clouds quickly and easily
- Future works

The Challenges & Solution

Challenges to infrastructure brought by LLMs
- GPU resources in a single data center or cloud region cannot meet the resource requirements of LLM workloads.
- Distributing and synchronizing models, and managing their consistency and data security across multiple geographies, is a challenge in both efficiency and complexity.

Model       Parameters   GPU count    Training days
Llama       7B           80 x A100    42
GPT-3       175B         1K x A100    30
Llama 3.1   405B         16K x H100   54

The emergence of AIGC/LLMs has led to a significant increase in GPU resource consumption, especially during the pre-training phase of foundation models. Microsoft, for example, has hundreds of thousands of GPUs deployed in more than 60 data centers in the Azure cloud for serving ChatGPT.
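To make the scale concrete, here is a quick back-of-the-envelope aggregation of the table above. The GPU-day figure is our own illustration derived from the slide's numbers, not something stated on the slides:

```python
# Rough scale of pre-training GPU demand, using the figures from the table above.
# "GPU-days" (GPUs x training days) is our own illustrative aggregate.
jobs = {
    "Llama 7B": (80, 42),
    "GPT-3 175B": (1_000, 30),
    "Llama 3.1 405B": (16_000, 54),
}
for name, (gpus, days) in jobs.items():
    print(f"{name}: {gpus * days:,} GPU-days")
# Llama 3.1 405B alone comes to ~864,000 GPU-days: 16K H100s held for ~2 months,
# more spare capacity than a single data center or cloud region typically offers.
```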
Challenges to infrastructure brought by LLMs
- Large models make the inference service start very slowly, which seriously hurts elasticity and user experience.
- Regional inference services repeatedly pull models from remote storage, rapidly driving up bandwidth costs.

Model (FP16)          Size        Startup time (loading model from OSS)
Llama-2-7b-chat-hf    12.55 GiB   100 s
Qwen1.5-32B-Chat      60.57 GiB   266 s
Qwen1.5-72B-Chat      134 GiB     635 s

[Chart: bandwidth cost ($) per model for repeated pulls from remote storage]
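The startup times in the table are consistent with a simple bandwidth-bound view of cold start: load time is roughly model size divided by effective read throughput from object storage. A minimal sketch of that estimate follows; the ~0.2 GiB/s throughput is back-solved from the table above (the implied rates range from about 0.13 to 0.21 GiB/s), so treat it as an assumption rather than a benchmark:

```python
# Cold-start model for an LLM inference replica: time to pull FP16 weights
# from remote object storage (e.g. OSS) before serving can begin.
# The default throughput (~0.2 GiB/s) is back-solved from the table above;
# it is an assumption, not a measured figure.

def cold_start_seconds(model_size_gib: float, throughput_gib_s: float = 0.2) -> float:
    """Estimated seconds to download model weights at the given throughput."""
    return model_size_gib / throughput_gib_s

for name, size_gib in [
    ("Llama-2-7b-chat-hf", 12.55),
    ("Qwen1.5-32B-Chat", 60.57),
    ("Qwen1.5-72B-Chat", 134.0),
]:
    print(f"{name}: ~{cold_start_seconds(size_gib):.0f}s just to load weights")

# Every new replica in every region repeats this download, so slow scale-out
# and rising bandwidth cost grow together with model size.
```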
Optimization of LLM efficiency in multi-clouds and multi-regions
- Optimize GPU resource scheduling
- Optimize data/model access performance
- Optimize the ease of use of multi-region model services

1. Schedule