NVIDIA
Optimizing Deployment of Deep Learning Inference at Scale with Triton
Speakers: 徐添豪, 张雪萌, 黄孟迪

AGENDA
- Triton Overview
- Inference Server Pipeline
- A100 Multi-Instance GPU (MIG)
- Deployment on Kubernetes
- Integration with KFServing
- Metrics for Monitoring and Autoscaling
- Performance Analyzer: Optimization Guidance
- Customer Case Studies

Triton Overview
Inefficiency Limits Innovation
Difficulties with Deploying Data Center Inference

- Single Framework Only: solutions can only support models from one framework (e.g. TensorFlow, PyTorch, Chainer, Theano).
- Single Model Only: some systems are overused while others are underutilized (e.g. separate serving stacks per workload such as RecSys, NLP, ASR).
- Custom Development: developers need to reinvent the plumbing for every application.

NVIDIA Triton Inference Server
Production Inference Server on GPU and CPU

- Maximize real-time inference performance of CPUs and GPUs
- Quickly deploy and manage multiple models per GPU per node
- Easily scale to heterogeneous GPUs and multi-GPU nodes
- Integrates with orchestration systems and auto-scalers via latency and health metrics (a client sketch follows below)
- Open source for seamless customization and integration

(Figure: Triton serving a heterogeneous fleet of NVIDIA T4, Tesla V100, and NVIDIA A100 GPUs as well as CPUs.)
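As a concrete illustration of the health and metrics integration mentioned above (not part of the slides), here is a minimal sketch that probes Triton's liveness and readiness endpoints with the official Python HTTP client. The server address and the model name "my_model" are assumptions for illustration only:

```python
# Minimal sketch: how an orchestrator or auto-scaler might probe Triton's
# health endpoints via the tritonclient Python package.
# "localhost:8000" and "my_model" are illustrative assumptions.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# These calls map to Triton's HTTP health endpoints
# (/v2/health/live, /v2/health/ready), which Kubernetes probes can also hit directly.
print("server live: ", client.is_server_live())
print("server ready:", client.is_server_ready())
print("model ready: ", client.is_model_ready("my_model"))
```

In addition to these health endpoints, Triton serves Prometheus-format metrics on a separate port (8002 by default, at /metrics), which monitoring and autoscaling systems can scrape.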
Triton Inference Server Architecture
Previously "TensorRT Inference Server"

- Support for multiple frameworks
- Concurrent model execution
- CPU and multi-GPU support
- Dynamic batching
- Sequence batching for stateful models
- HTTP/REST, gRPC, shared library (see the client sketch after this list)
- Health and status metrics (Prometheus) reporting
- Model ensembling and pipelining
- Shared-memory API (system and CUDA)
- GCS and S3 support
- Open source: monthly releases on NGC and GitHub
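To make the HTTP/REST and gRPC entry points concrete, below is a minimal sketch of a synchronous inference request using the tritonclient Python package. The model name "my_model" and its tensor names, shapes, and datatype ("INPUT0", "OUTPUT0", [1, 16], FP32) are assumptions for illustration, not taken from the slides:

```python
# Minimal sketch of a synchronous inference request over Triton's HTTP/REST API.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Describe the input tensor and attach the request data.
input0 = httpclient.InferInput("INPUT0", [1, 16], "FP32")
input0.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

# Request the output tensor by name.
output0 = httpclient.InferRequestedOutput("OUTPUT0")

# Synchronous inference call; the response carries the output tensors.
result = client.infer(model_name="my_model", inputs=[input0], outputs=[output0])
print(result.as_numpy("OUTPUT0"))
```

The gRPC client (tritonclient.grpc) exposes an equivalent API, and both clients also provide async_infer for overlapping multiple in-flight requests.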
Features

- Utilization: Concurrent Model Execution. Multiple models (or multiple instances of ...
- Usability: Multiple Model Format Support. TensorRT, ...
- Customization: Model Ensemble. Pipeline of one or more models and ...
- Performance: System/CUDA Shared Memory. Inputs/outputs needed to be passed ... (a shared-memory client sketch follows below)
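The System/CUDA Shared Memory feature can be exercised from the Python client as sketched below. This is an illustrative sketch (the model name "my_model" and its tensor layout are assumptions) of the system shared-memory path, which avoids copying input bytes through the HTTP/gRPC body when client and server run on the same machine; the CUDA variant uses the analogous tritonclient.utils.cuda_shared_memory module.

```python
# Minimal sketch: passing an input tensor to Triton through system shared memory.
import numpy as np
import tritonclient.http as httpclient
import tritonclient.utils.shared_memory as shm

client = httpclient.InferenceServerClient(url="localhost:8000")
client.unregister_system_shared_memory()  # clear any stale registrations

input_data = np.arange(16, dtype=np.float32).reshape(1, 16)
byte_size = input_data.size * input_data.itemsize

# Create a system shared-memory region and copy the input into it.
shm_handle = shm.create_shared_memory_region("input0_data", "/my_model_input0", byte_size)
shm.set_shared_memory_region(shm_handle, [input_data])

# Register the region with the server so Triton can read the input directly from it.
client.register_system_shared_memory("input0_data", "/my_model_input0", byte_size)

# The input references the shared-memory region instead of carrying data in the request.
input0 = httpclient.InferInput("INPUT0", [1, 16], "FP32")
input0.set_shared_memory("input0_data", byte_size)

result = client.infer(model_name="my_model", inputs=[input0])
print(result.as_numpy("OUTPUT0"))

# Clean up the registration and the region.
client.unregister_system_shared_memory("input0_data")
shm.destroy_shared_memory_region(shm_handle)
```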