Large-Scale Embedding Training in TensorFlow & PyTorch
Hui Kang, Shijie Liu, NVIDIA DevTech software engineers | 2024/06/22

Agenda
- Introduction
- Sparse Operation Kit for TensorFlow
- TorchRec Optimization for PyTorch

Introduction

Multi-stage Recommender System Pipeline and Scope of This Presentation
(Figure: the multi-stage recommender system pipeline, with the stage covered by this presentation marked "Our Focus".)

Deep Learning Recommendation Models (DLRM): Simplified View
- Bottom MLPs
  o Process dense input features, often independent of the embedding lookups
- Embedding tables
  o Map one-hot/multi-hot categorical features into embedding vectors
- Interaction
  o Learn effective feature crosses
- Top MLPs
  o Output the final recommendation result, e.g., click probability
(Figure: DLRM architecture: categorical features 0..N feed embedding tables 0..N, numerical features feed the bottom MLPs, both meet in the interaction layer, and the top MLPs produce the output.)
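This four-part structure maps directly onto code. Below is a minimal single-GPU TensorFlow sketch; all sizes are hypothetical, the lookups are one-hot for brevity, and a production DLRM would keep only the unique pairwise interactions rather than the full matrix.

```python
import tensorflow as tf

# Hypothetical sizes, chosen only for illustration.
NUM_TABLES, VOCAB, EMB_DIM = 26, 100_000, 16

class SimpleDLRM(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Bottom MLPs: process dense features, independent of embedding lookups.
        self.bottom_mlp = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(EMB_DIM),
        ])
        # Embedding tables: one table per categorical feature.
        self.tables = [tf.keras.layers.Embedding(VOCAB, EMB_DIM)
                       for _ in range(NUM_TABLES)]
        # Top MLPs: map interacted features to a click probability.
        self.top_mlp = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])

    def call(self, dense, categorical):
        d = self.bottom_mlp(dense)                           # [B, EMB_DIM]
        embs = [tbl(categorical[:, i]) for i, tbl in enumerate(self.tables)]
        feats = tf.stack([d] + embs, axis=1)                 # [B, T+1, EMB_DIM]
        # Interaction: pairwise dot products learn feature crosses.
        inter = tf.matmul(feats, feats, transpose_b=True)    # [B, T+1, T+1]
        flat = tf.reshape(inter, [tf.shape(dense)[0], -1])
        return self.top_mlp(tf.concat([d, flat], axis=1))    # click probability

model = SimpleDLRM()
prob = model(tf.random.normal([32, 13]),
             tf.random.uniform([32, NUM_TABLES], maxval=VOCAB, dtype=tf.int32))
```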
Goals of This Presentation
- Introduce Sparse Operation Kit, a TensorFlow plugin to accelerate large-scale embedding training
- Introduce our optimization on TorchRec
- Share the achieved multi-GPU training performance
Sparse Operation Kit for TensorFlow

Sparse Operation Kit
- Sparse Operation Kit (SOK) is a Python package that wraps GPU-accelerated operations dedicated to sparse training/inference cases.
- SOK, as a plugin for TensorFlow, provides model-parallelism functionality for the embedding part.
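As a rough sketch of how SOK's model-parallel embedding plugs into TensorFlow, based on the experiment API shown in the sparse_operation_kit examples; the exact names and signatures of sok.init, sok.DynamicVariable, and sok.lookup_sparse are assumptions to verify against the SOK documentation:

```python
import tensorflow as tf
import horovod.tensorflow as hvd
import sparse_operation_kit as sok  # import name as used in the SOK examples

hvd.init()  # SOK is typically launched with one process per GPU via Horovod
gpus = tf.config.list_physical_devices("GPU")
tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")
sok.init()  # set up SOK's collective-communication context (assumed API)

# A GPU-resident embedding table sharded across all ranks; DynamicVariable
# grows on demand, so hashed indices need no pre-sized dense table.
table = sok.DynamicVariable(dimension=16)

# Multi-hot categorical IDs, one ragged row per sample.
ids = tf.ragged.constant([[0, 1, 2], [3, 4]], dtype=tf.int64)

# Model-parallel lookup: each ID is routed to the rank that owns its shard,
# and multi-hot vectors are reduced with the given combiner
# (assumed to return one tensor per table).
emb, = sok.lookup_sparse([table], [ids], combiners=["sum"])
```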
What are the challenges in the embedding part of model parallelism?

Embedding table storage
The embedding table is very large and requires a significant amount of storage space. Because a single GPU's memory is limited, it cannot hold a large embedding table on its own.
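To put the storage claim in concrete terms, a back-of-envelope calculation with purely hypothetical sizes:

```python
# Hypothetical table: 1 billion IDs x 128-dim float32 embedding vectors.
rows, dim, bytes_per_elem = 1_000_000_000, 128, 4
table_gib = rows * dim * bytes_per_elem / 1024**3
print(f"weights alone: {table_gib:.0f} GiB")  # ~477 GiB
# An optimizer with per-parameter state (e.g., Adam's two moments) roughly
# triples this footprint, while a single GPU offers only tens of GiB of HBM.
```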
Many users therefore choose TensorFlow's parameter server (PS) architecture to train large embedding tables. However, when training with the PS architecture, network bandwidth often becomes the bottleneck that restricts training speed. The indices of the embedding table are generally hashed, requiring the embedding table to