Optimizing Deep Learning Inference Deployment at Scale with Triton

NVIDIA
Optimizing Deep Learning Inference Deployment at Scale with Triton
徐添豪、张雪萌、黄孟迪

AGENDA
- Triton Overview
- Inference Server Pipeline
- A100 Multi-Instance GPU (MIG)
- Deployment on Kubernetes
- Integration with KFServing
- Metrics for Monitoring and Autoscaling
- Performance Analyzer: Optimization Guidance
- Customer Case Studies

Triton Overview

Inefficiency Limits Innovation: Difficulties with Deploying Data Center Inference
- Single framework only: solutions can only support models from one framework (TensorFlow, PyTorch, MXNet, Chainer, theano, ...).
- Single model only: some systems are overused while others are underutilized across workloads (RecSys, NLP, ASR).
- Custom development: developers need to reinvent the plumbing for every application.

NVIDIA Triton Inference Server: Production Inference Server on GPU and CPU
- Maximize real-time inference performance of GPUs and CPUs.
- Quickly deploy and manage multiple models per GPU per node.
- Easily scale to heterogeneous GPUs and multi-GPU nodes (NVIDIA T4, Tesla V100, NVIDIA A100, CPU).
- Integrates with orchestration systems and autoscalers via latency and health metrics.
- Open source for seamless customization and integration.

Triton Inference Server Architecture (previously "TensorRT Inference Server")
- Support for multiple frameworks
- Concurrent model execution
- CPU and multi-GPU support
- Dynamic batching
- Sequence batching for stateful models
- HTTP/REST, gRPC, and shared-library interfaces
- Health and status metrics (Prometheus) reporting
- Model ensembling and pipelining
- Shared-memory API (system and CUDA)
- GCS and S3 support
- Open source: monthly releases on NGC and GitHub

Features (Utilization, Usability, Customization, Performance)
- Utilization: Concurrent Model Execution. Multiple models (or multiple instances of ...)
- Usability: Multiple Model Format Support (TensorRT, ...)
- Customization: Model Ensemble. Pipeline of one or more models and ...
- Performance: System/CUDA Shared Memory. Inputs/outputs needed to be passed ...
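Several of the architecture features above (dynamic batching, concurrent instances of a model) are driven by a per-model `config.pbtxt` in the Triton model repository. The fragment below is a minimal sketch; the model name, tensor names, and the batch-size/delay values are illustrative assumptions, not taken from this talk.

```
name: "resnet50"            # hypothetical model name
platform: "tensorrt_plan"
max_batch_size: 32
input [
  { name: "input", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "output", data_type: TYPE_FP32, dims: [ 1000 ] }
]
# Two concurrent instances of the model on each GPU ("concurrent model execution")
instance_group [
  { count: 2, kind: KIND_GPU }
]
# Let Triton coalesce individual requests into server-side batches
dynamic_batching {
  preferred_batch_size: [ 8, 16 ]
  max_queue_delay_microseconds: 100
}
```

The `max_queue_delay_microseconds` value trades a small amount of added latency for larger, better-utilized batches; the right setting depends on the model's latency budget.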

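The "Metrics for Monitoring and Autoscaling" agenda item refers to the Prometheus-format metrics Triton exports (by default on port 8002). As a minimal sketch of how an autoscaler might consume them, the snippet below parses a small sample of exposition-format text and derives an average-latency signal; the sample values are made up, and label handling is simplified to keep the example short.

```python
def parse_metrics(text):
    """Parse Prometheus exposition text into {metric_name: summed_value}.

    Labels are ignored for brevity; values of the same metric are summed.
    """
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        name_part, _, value = line.rpartition(" ")
        metric = name_part.split("{", 1)[0]  # strip the {label="..."} suffix
        values[metric] = values.get(metric, 0.0) + float(value)
    return values


# Illustrative payload in the shape of Triton's metrics endpoint output.
SAMPLE = """\
# HELP nv_inference_count Number of inferences performed
nv_inference_count{model="resnet50",version="1"} 1200
# HELP nv_inference_request_duration_us Cumulative request duration (us)
nv_inference_request_duration_us{model="resnet50",version="1"} 3600000
"""

metrics = parse_metrics(SAMPLE)
# Cumulative duration / request count -> mean latency, converted to ms.
avg_latency_ms = (metrics["nv_inference_request_duration_us"]
                  / metrics["nv_inference_count"] / 1000.0)
print(avg_latency_ms)  # 3.0
```

A real deployment would scrape the endpoint periodically and feed a signal like this (or GPU utilization) into the autoscaler mentioned on the Triton overview slide.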