Boundaryless Computing: Optimizing LLM Performance, Cost, and Efficiency in Multi-cloud Architecture

Who we are?
Kai Zhang, Senior Staff Engineer, Alibaba Cloud Intelligence
Jian Zhu, Senior Software Engineer, Red Hat

Agenda
- Challenges and solution of running LLMs across clouds/regions
- Accelerating LLMs from the data perspective - Fluid
- Managing multiple clusters in the K8s way - OCM
- Demo: deploy and scale an LLM inference service across clouds quickly and easily
- Future works

The Challenges & Solution

Challenges to infrastructure brought by LLM
- GPU resources in a single data center or cloud region cannot meet the resource requirements of LLM workloads.
- Distributing, synchronizing, and managing model consistency and data security across multiple geographies is a challenge of efficiency and complexity.

Model       Parameters   GPU count    Training days
Llama       7B           80*A100      42
GPT-3       175B         1K*A100      30
Llama 3.1   405B         16K*H100     54

- The emergence of AIGC/LLM has led to a significant increase in GPU resource consumption, especially during the pre-training phase of foundation models.
- Microsoft has hundreds of thousands of GPUs deployed in more than 60 data centers in the Azure cloud for serving ChatGPT.

Challenges to infrastructure brought by LLM
- The large model causes the inference service to start very slowly, which seriously affects elasticity and the user experience.
- Regional inference services repeatedly pull models from remote storage, rapidly driving up bandwidth costs.

Startup time (loading the model from OSS):

Model                 Model size (FP16)   Startup time
Llama-2-7b-chat-hf    12.55 GiB           100 s
Qwen1.5-32B-Chat      60.57 GiB           266 s
Qwen1.5-72B-Chat      134 GiB             635 s

(A companion chart on the slide plots the corresponding bandwidth cost, in $, for the same three models.)
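
To put these numbers in perspective: the measured startup times imply an effective end-to-end pull-and-load throughput of only about 0.13-0.23 GiB/s per replica from remote OSS. A minimal sketch of that arithmetic (only the model sizes and startup times come from the slide; the 10 GiB/s cache bandwidth in the what-if is a purely illustrative assumption):

```python
# Effective throughput implied by the startup times quoted above,
# plus a what-if for reading the same weights from a nearby cache.
SLIDE_DATA = {
    # model name: (FP16 size in GiB, observed cold-start seconds)
    "Llama-2-7b-chat-hf": (12.55, 100),
    "Qwen1.5-32B-Chat": (60.57, 266),
    "Qwen1.5-72B-Chat": (134.0, 635),
}

ASSUMED_CACHE_GIBPS = 10.0  # hypothetical local-cache read bandwidth, not from the slide

for model, (size_gib, startup_s) in SLIDE_DATA.items():
    effective = size_gib / startup_s  # GiB/s, download + load combined
    cached = size_gib / ASSUMED_CACHE_GIBPS
    print(f"{model}: {effective:.2f} GiB/s effective; "
          f"~{cached:.0f} s if read at {ASSUMED_CACHE_GIBPS:.0f} GiB/s from a local cache")
```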

Optimization of LLM efficiency in multi-clouds and multi-regions
- Optimize GPU resource scheduling
- Optimize data/model access performance (a Fluid-based caching sketch follows below)
- Optimize the ease of use of multi-geographic model services (an OCM-based multi-cluster sketch follows below)

1. Schedule …
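
To make the "data/model access performance" direction concrete, the sketch below registers a model bucket in OSS as a Fluid Dataset backed by an AlluxioRuntime cache, so regional inference pods read weights from a nearby cache instead of repeatedly pulling them from remote object storage. This is a minimal sketch under stated assumptions, not the presenters' exact setup: the bucket, endpoint, namespace, replica count, and cache quota are placeholders, credentials are omitted, and the Fluid v1alpha1 field names should be verified against the installed Fluid release.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a cluster with the Fluid operator installed
api = client.CustomObjectsApi()

# Fluid Dataset describing where the model weights live (all values are placeholders).
dataset = {
    "apiVersion": "data.fluid.io/v1alpha1",
    "kind": "Dataset",
    "metadata": {"name": "qwen-weights", "namespace": "default"},
    "spec": {
        "mounts": [{
            "name": "qwen",
            "mountPoint": "oss://my-model-bucket/qwen1.5-72b-chat/",
            "options": {"fs.oss.endpoint": "oss-cn-beijing-internal.aliyuncs.com"},
        }]
    },
}

# Cache runtime colocated with the GPU nodes; sizing here is illustrative only.
runtime = {
    "apiVersion": "data.fluid.io/v1alpha1",
    "kind": "AlluxioRuntime",
    "metadata": {"name": "qwen-weights", "namespace": "default"},
    "spec": {
        "replicas": 3,
        "tieredstore": {
            "levels": [{"mediumtype": "MEM", "path": "/dev/shm", "quota": "150Gi"}]
        },
    },
}

for plural, body in (("datasets", dataset), ("alluxioruntimes", runtime)):
    api.create_namespaced_custom_object(
        group="data.fluid.io", version="v1alpha1",
        namespace="default", plural=plural, body=body,
    )
```

Fluid then exposes the cached dataset as a PersistentVolumeClaim named after the Dataset, so inference pods can mount it like any other volume, and warm restarts in the same region no longer go back to OSS.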
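
For the multi-cluster side, one way to picture the "K8s way" of managing clusters is OCM's ManifestWork API: the hub writes a ManifestWork into each managed cluster's namespace, and that cluster's work agent applies the embedded manifests locally. A minimal sketch, assuming two managed clusters registered to the hub as cluster-beijing and cluster-singapore (the cluster names, image, and resources are placeholders; a real setup would normally use OCM Placement to select clusters by GPU availability instead of hard-coding them):

```python
from kubernetes import client, config

config.load_kube_config()  # assumes the kubeconfig points at the OCM hub cluster
api = client.CustomObjectsApi()

# The inference workload to fan out; image and resources are placeholders.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "llm-inference", "namespace": "default"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "llm-inference"}},
        "template": {
            "metadata": {"labels": {"app": "llm-inference"}},
            "spec": {"containers": [{
                "name": "server",
                "image": "registry.example.com/llm-serving:latest",
                "resources": {"limits": {"nvidia.com/gpu": "1"}},
            }]},
        },
    },
}

# One ManifestWork per managed cluster; the work agent on each cluster applies it.
for cluster in ("cluster-beijing", "cluster-singapore"):
    manifest_work = {
        "apiVersion": "work.open-cluster-management.io/v1",
        "kind": "ManifestWork",
        "metadata": {"name": "llm-inference", "namespace": cluster},
        "spec": {"workload": {"manifests": [deployment]}},
    }
    api.create_namespaced_custom_object(
        group="work.open-cluster-management.io", version="v1",
        namespace=cluster, plural="manifestworks", body=manifest_work,
    )
```

Scaling out to an additional region then amounts to registering one more cluster to the hub and creating one more ManifestWork, which is the kind of "deploy and scale across clouds quickly and easily" flow the demo targets.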
