NVIDIA MERLIN HUGECTR: DEEP DIVE INTO PERFORMANCE OPTIMIZATION
Minseok Lee, December 17th, 2020
#page#
NVIDIA MERLIN OPEN BETA
Democratizing Large-Scale Deep Learning Recommenders
[Figure: the Merlin open beta stack: ETL (NVTabular, RAPIDS) over a data lake; data loading and training (HugeCTR, TensorFlow, PyTorch, cuDNN) producing embeddings; inference (Triton) serving user queries; annotated with scales O(10), O(1000), and O(Billions).]
#page#
RELATED SESSIONS IN GTC CHINA
Learn More About NVIDIA Merlin
Merlin: A GPU-Accelerated Recommender System Framework, CNS20590 - 王泽豪, AI Developer Technology Solutions Manager for APAC, NVIDIA
NVIDIA Merlin NVTabular: Best Practices for GPU-Accelerated Feature Engineering in Recommender Systems, CNS20624 - 黄孟迪, Deep Learning Engineer, NVIDIA
Accelerating CTR Inference with a GPU Embedding Cache, CNS20626 - 都凡, GPU Computing Expert, NVIDIA
Integrating HugeCTR Embedding into TensorFlow, CNS20377 - 董建兵, GPU Computing Expert, NVIDIA
GPU-Accelerated Data Processing in Recommender Systems, CNS20813 - 魏英灿, GPU Computing Expert, NVIDIA
#page#
HUGECTR OVERVIEW
#page#
HUGECTR: SCALABLE, ACCELERATED TRAINING
https://
A highly efficient GPU framework and reference design dedicated to Click-Through-Rate (CTR) estimation training
Designed for distributed training with model-parallel embedding tables and data-parallel neural networks
Covers common and recent architectures and their variants, such as the Deep Learning Recommendation Model (DLRM), Wide and Deep, Deep Cross Network, and DeepFM
#page#
HUGECTR PIPELINE
To Train Large-Scale Recommender Models
[Figure: two nodes with four GPUs each; every GPU runs its own copy of the dense neural network (data parallel), while the embedding is partitioned across all GPUs (model parallel).]
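To make the split concrete, here is a minimal single-GPU CUDA sketch of the model-parallel half. The shard layout, sizes, and the gather_from_shard kernel are illustrative assumptions, not HugeCTR's actual code: the table is sharded by key % kShards, and each shard gathers only the rows it owns. In the real system each shard lives on a different GPU, the per-shard gathers run concurrently, and the gathered vectors are exchanged over NCCL before the data-parallel dense networks consume them.

```cuda
// Hypothetical sketch of model-parallel embedding lookup (not HugeCTR code).
// All shards sit on one device so the sketch runs anywhere; HugeCTR would
// place one shard per GPU and exchange results over NCCL.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int kShards = 4, kRowsPerShard = 1024, kDim = 8, kKeys = 6;

// Gather the embedding vectors that this shard owns (key % kShards == shard).
__global__ void gather_from_shard(const float* shard_tbl, int shard,
                                  const int* keys, int n_keys, float* out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= n_keys * kDim) return;
  int k = i / kDim, d = i % kDim;
  if (keys[k] % kShards != shard) return;  // owned by another shard/GPU
  int local_row = keys[k] / kShards;       // row index inside this shard
  out[k * kDim + d] = shard_tbl[local_row * kDim + d];
}

int main() {
  // Build kShards shard tables; each row stores its global key for checking.
  float* d_shards[kShards];
  for (int s = 0; s < kShards; ++s) {
    std::vector<float> h(kRowsPerShard * kDim);
    for (int r = 0; r < kRowsPerShard; ++r)
      for (int d = 0; d < kDim; ++d) h[r * kDim + d] = float(r * kShards + s);
    cudaMalloc(&d_shards[s], h.size() * sizeof(float));
    cudaMemcpy(d_shards[s], h.data(), h.size() * sizeof(float),
               cudaMemcpyHostToDevice);
  }
  int h_keys[kKeys] = {0, 7, 42, 1001, 3, 513};
  int* d_keys; float* d_out;
  cudaMalloc(&d_keys, sizeof(h_keys));
  cudaMalloc(&d_out, kKeys * kDim * sizeof(float));
  cudaMemcpy(d_keys, h_keys, sizeof(h_keys), cudaMemcpyHostToDevice);
  // Each shard gathers its own keys; on real hardware these launches run on
  // different GPUs and an inter-GPU exchange assembles the full output.
  for (int s = 0; s < kShards; ++s)
    gather_from_shard<<<(kKeys * kDim + 127) / 128, 128>>>(d_shards[s], s,
                                                           d_keys, kKeys, d_out);
  std::vector<float> h_out(kKeys * kDim);
  cudaMemcpy(h_out.data(), d_out, h_out.size() * sizeof(float),
             cudaMemcpyDeviceToHost);
  for (int k = 0; k < kKeys; ++k)  // each output row should echo its key
    printf("key %4d -> vec[0] = %.0f\n", h_keys[k], h_out[k * kDim]);
  return 0;
}
```

Because lookup work and table storage are both divided by the modulo hash, capacity and gather bandwidth scale with the number of GPUs, while the dense network stays data parallel.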
#page#
PERFORMANCE OPTIMIZATION
#page#
HUGECTR DATA READER
Prefetching & Latency Hiding
[Figure: a timeline of overlapped stages per batch. While batch N trains on the GPU, batch N+1 is copied to the GPU and batches N+2 through N+4 are read into CPU memory; three worker threads prefetch three batches ahead, hiding read and copy latency behind compute.]
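Below is a minimal CUDA sketch of that timeline. The buffer sizes, stream setup, and the train_step kernel are assumptions for illustration, not HugeCTR's reader: pinned host buffers plus separate copy and compute streams keep three batches in flight, so the read, copy, and train stages of consecutive batches overlap.

```cuda
// Hypothetical sketch of the prefetch pipeline (not HugeCTR's actual reader).
#include <cstdio>
#include <cuda_runtime.h>

constexpr int kBatches = 8, kDepth = 3, kElems = 1 << 20;

__global__ void train_step(const float* batch, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) { float v = batch[i] * 2.0f; (void)v; }  // stand-in for real work
}

int main() {
  float* h_buf[kDepth]; float* d_buf[kDepth];
  cudaStream_t copy_s, compute_s;
  cudaEvent_t copied[kDepth], trained[kDepth];
  cudaStreamCreate(&copy_s);
  cudaStreamCreate(&compute_s);
  for (int i = 0; i < kDepth; ++i) {
    cudaMallocHost(&h_buf[i], kElems * sizeof(float));  // pinned: required for a truly async H2D copy
    cudaMalloc(&d_buf[i], kElems * sizeof(float));
    cudaEventCreate(&copied[i]);
    cudaEventCreate(&trained[i]);
  }
  for (int b = 0; b < kBatches; ++b) {
    int slot = b % kDepth;  // kDepth batches in flight: "prefetch 3 batches"
    // "Read to CPU": refill the pinned buffer once its previous copy is done.
    // (HugeCTR uses several worker threads here; a serial fill keeps this short.)
    cudaEventSynchronize(copied[slot]);
    for (int i = 0; i < kElems; ++i) h_buf[slot][i] = float(b);
    // "Copy to GPU" on a dedicated stream, after the older train step that
    // still reads this device buffer has finished.
    cudaStreamWaitEvent(copy_s, trained[slot], 0);
    cudaMemcpyAsync(d_buf[slot], h_buf[slot], kElems * sizeof(float),
                    cudaMemcpyHostToDevice, copy_s);
    cudaEventRecord(copied[slot], copy_s);
    // "Train on GPU": waits only on this batch's copy, so training batch N
    // overlaps the copy of N+1 and the CPU read of N+2.
    cudaStreamWaitEvent(compute_s, copied[slot], 0);
    train_step<<<(kElems + 255) / 256, 256, 0, compute_s>>>(d_buf[slot], kElems);
    cudaEventRecord(trained[slot], compute_s);
  }
  cudaDeviceSynchronize();
  printf("pipelined %d batches\n", kBatches);
  return 0;
}
```

The events enforce just the per-slot dependencies, so with enough buffers the GPU's compute stream never stalls on I/O: that is the "active time" the slide highlights.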
#page#
CHALLENGES IN EMBEDDING LAYER
How to Mitigate Memory Demands and Communication Overhead
The embedding table may not fit in a single GPU's memory
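A back-of-envelope computation shows why this is a real constraint; the row count and embedding width below are assumed for illustration, not numbers from the talk.

```cuda
// Back-of-envelope check (hypothetical sizes): why a CTR embedding table can
// exceed one GPU's memory.
#include <cstdio>
int main() {
  const double rows = 1e9;   // assumed: 1B unique categorical values
  const double dim  = 128;   // assumed: embedding vector width
  const double gib  = rows * dim * sizeof(float) / (1024.0 * 1024.0 * 1024.0);
  printf("fp32 table: %.0f GiB\n", gib);  // ~477 GiB
  // Far larger than a single GPU's memory (e.g. 32 GB V100, 40 GB A100),
  // which is why the table is distributed across GPUs and nodes.
  return 0;
}
```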