
Empower Large Language Models (LLMs) in Production With Cloud Native AI Technologies
Lize Cai, Senior Software Engineer, SAP
Yang Che, Senior Engineer, Alibaba Cloud

About us
- Lize Cai: Senior Software Engineer at SAP
- Yang Che: Senior Engineer at Alibaba Cloud

Agenda
- Introduction
- LLM Challenges in Production
- Manage LLM lifecycle in the K8s way - KServe

- Accelerate LLM scaling from the data perspective - Fluid
- Demo
- Future Works
- Q&A

Introduction
(Figure credit: maximelabonne)
It is a common use case to provide a playground to try out different models.
But it is not so easy.

LLM Challenges in Production

New requirements on serving LLMs
- New inference APIs such as text generation and embeddings.
- Streaming responses are required for a real-time user experience (a minimal streaming sketch follows this list of challenges).

Variety of models and runtimes
- Runtimes: TGI, vLLM, TRT-LLM, etc.
- Models: Llama, Mistral, Phi, Qwen, etc.

LLM services from cloud providers
- Different providers have their own specs (API and token calculation), which leads to a poor user experience and increased maintenance effort.

High computing cost
- Expensive hardware, high energy consumption, and associated infrastructure expenses.

Data privacy
- Model and request data can be sensitive and private for inference.
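The streaming requirement above usually means consuming tokens as server-sent events instead of waiting for the full completion. Below is a minimal client sketch, assuming an OpenAI-compatible chat endpoint; the base URL, API key, and model name are illustrative placeholders, not values from the talk.

```python
import json
import requests

# Hypothetical OpenAI-compatible endpoint; replace with your own deployment.
BASE_URL = "http://llm-playground.example.com/v1"
API_KEY = "sk-placeholder"

def stream_completion(prompt: str, model: str = "llama-3-8b-instruct"):
    """Yield generated text chunks as they arrive over server-sent events."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,  # ask the server to emit partial deltas
        },
        stream=True,  # keep the HTTP connection open and iterate over chunks
        timeout=300,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        # SSE lines look like: data: {...json chunk...}  or  data: [DONE]
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        yield delta.get("content", "")

if __name__ == "__main__":
    for chunk in stream_completion("Why serve LLMs on Kubernetes?"):
        print(chunk, end="", flush=True)
```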

Manage LLM lifecycle in the K8s way - KServe

What is KServe?
A highly scalable and standards-based cloud-native model inference platform on Kubernetes for trusted AI that encapsulates the complexity of deploying AI models to production.
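As a rough illustration of what "deploying a model to production" looks like in KServe, the sketch below creates a single InferenceService custom resource with the kserve Python SDK (pip install kserve). The namespace, service name, model format, storage URI, and resource limits are illustrative assumptions, not values from the talk.

```python
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1ModelSpec,
    V1beta1ModelFormat,
)

# Placeholder names and storage location; adjust to your cluster and model store.
NAMESPACE = "llm-serving"
ISVC_NAME = "llama3-demo"

isvc = V1beta1InferenceService(
    api_version="serving.kserve.io/v1beta1",
    kind="InferenceService",
    metadata=client.V1ObjectMeta(name=ISVC_NAME, namespace=NAMESPACE),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            model=V1beta1ModelSpec(
                # The model format is matched against installed ServingRuntimes.
                model_format=V1beta1ModelFormat(name="huggingface"),
                storage_uri="s3://models/llama-3-8b-instruct",  # placeholder
                resources=client.V1ResourceRequirements(
                    limits={"cpu": "8", "memory": "32Gi", "nvidia.com/gpu": "1"},
                ),
            ),
        ),
    ),
)

# Submit the custom resource; KServe handles rollout, routing, and autoscaling.
KServeClient().create(isvc)
```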

What is KServe?

Core Inference
- Transformer / Predictor
- Serving Runtimes
- Custom Runtime SDK
- Open Inference Protocol (a request sketch appears at the end of this section)
- Serverless Autoscaling
- Cloud / PVC Storage

Advanced Inference
- ModelMesh for Multi-Model Serving
- Inference Graph
- Payload Logging
- Request Batching
- Canary Rollout

Model Explainability & Monitoring
- Text, Image, Tabular Explainer
- Bias Detector
- Adversarial Detector
- Outlier Detector
- Drift Detector

What is KServe? - Serving Runtimes
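The Open Inference Protocol listed above defines a runtime-agnostic REST/gRPC contract for prediction requests. The sketch below shows the shape of a v2 REST inference call; the host, model name, and the single BYTES input named "text" are illustrative assumptions (generative runtimes typically layer dedicated text-generation or OpenAI-style endpoints on top of this protocol).

```python
import requests

# Hypothetical KServe endpoint and model name; both are placeholders.
HOST = "http://llama3-demo.llm-serving.example.com"
MODEL = "llama3-demo"

# Open Inference Protocol (v2) request: named, typed tensors in, tensors out.
payload = {
    "inputs": [
        {
            "name": "text",            # illustrative input tensor name
            "shape": [1],
            "datatype": "BYTES",
            "data": ["Why serve LLMs on Kubernetes?"],
        }
    ]
}

resp = requests.post(f"{HOST}/v2/models/{MODEL}/infer", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["outputs"][0]["data"])
```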
