Last Mile Data Processing for LLM at Pinterest
Zhenxiao Luo (罗震霄), Sr. Staff Engineer, Pinterest

Agenda
- What is Pinterest
- How Pinterest leverages Large Language Models
- Legacy data processing pipelines
- Pain points
- Ray introduction
- Batch inference using Ray
- Multi-model inference
- CarryOver columns
- Accumulators
- LLM inference results
- Pinterest integration
- Next steps

Speaker Bio
Zhenxiao Luo, Sr. Staff Software Engineer, Pinterest
- Presto Committer & Technical Steering Committee member since 2019
- Worked on data at Uber, Twitter, Facebook, Netflix, Cloudera, Vertica, etc.
- Bachelor's degree from Fudan University; Ph.D. (on leave) from the University of Wisconsin-Madison

What is Pinterest
- #1 image-sharing social network
- MAU: 500+ million
- Publish and discover recipes, home, style, motivation, and inspiration on the Internet
- Double-digit growth in both MAU and revenue

What is a Large Language Model (LLM)
- Large-scale transformer models
- Ability to recognize input patterns and generate text output
- Billions of parameters

Pinterest has an in-house LLM
- Data privacy
- Cost
- Service availability
Through batch inference, we leverage OSS LLMs at Pinterest to:
- Enable new use cases
- Serve as an alternative to the OpenAI ChatGPT API

LLM Batch Inference = Platform + Inference Backend
- Platform: Ray batch inference
- Inference optimization: Flash Attention, vLLM

Inference Optimization: Flash Attention
- Good for non-sequence-generation use cases, e.g., embedding extraction
- Flash Attention accelerates the model forward pass during inference
- Transformers consist of attention operations; attention is memory-bandwidth-bound, and Flash Attention reduces its memory complexity from quadratic to linear in sequence length

vLLM
- A community open-source project for efficient LLM sequence generation
- Fast: PagedAttention for the KV cache, continuous batching, quantization, optimized CUDA kernels
- Flexible: Python-based, Hu
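The quadratic-to-linear memory claim on the Flash Attention slide can be illustrated with a small NumPy sketch (not the real fused CUDA kernel): instead of materializing the full N x N score matrix, keys and values are processed in blocks with an "online" softmax that keeps only a running row max and denominator. All names here are illustrative; only the math mirrors the technique.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Materializes the full N x N score matrix: O(N^2) extra memory."""
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def tiled_attention(Q, K, V, block=16):
    """Flash-Attention-style pass: K/V are consumed in blocks with an
    online softmax, so peak extra memory is O(N * block), not O(N^2)."""
    N, d = Q.shape
    out = np.zeros((N, d))
    m = np.full((N, 1), -np.inf)   # running row max
    l = np.zeros((N, 1))           # running softmax denominator
    scale = 1.0 / np.sqrt(d)
    for j in range(0, K.shape[0], block):
        Kj, Vj = K[j:j + block], V[j:j + block]
        S = Q @ Kj.T * scale                       # only an (N, block) tile
        m_new = np.maximum(m, S.max(axis=-1, keepdims=True))
        P = np.exp(S - m_new)
        correction = np.exp(m - m_new)             # rescale earlier blocks
        l = l * correction + P.sum(axis=-1, keepdims=True)
        out = out * correction + P @ Vj
        m = m_new
    return out / l

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((64, 32)) for _ in range(3))
assert np.allclose(naive_attention(Q, K, V), tiled_attention(Q, K, V))
```

Both paths compute the same result; the tiled path never holds more than one score tile at a time, which is why attention's memory-bandwidth bottleneck shrinks.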
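PagedAttention's core idea can be sketched without vLLM itself: the KV cache is carved into fixed-size physical blocks, and each sequence keeps a "block table" of block ids, allocated one block at a time instead of one contiguous max-length slab. The class and block size below are hypothetical, purely to show the bookkeeping.

```python
BLOCK_SIZE = 16  # tokens per physical KV-cache block (illustrative value)

class BlockKVCache:
    """Toy PagedAttention-style KV-cache manager: blocks are allocated
    on demand and returned to the free pool when a sequence finishes."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))   # pool of physical block ids
        self.tables = {}                      # seq_id -> [block ids]
        self.lengths = {}                     # seq_id -> tokens written

    def append_token(self, seq_id):
        table = self.tables.setdefault(seq_id, [])
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:               # current block full: grab a new one
            table.append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        # a finished sequence frees its blocks for other sequences immediately
        self.free.extend(self.tables.pop(seq_id))
        del self.lengths[seq_id]

cache = BlockKVCache(num_blocks=8)
for _ in range(20):                # 20 tokens -> ceil(20 / 16) = 2 blocks
    cache.append_token("seq-A")
assert len(cache.tables["seq-A"]) == 2
cache.release("seq-A")
assert len(cache.free) == 8
```

The payoff is that memory is wasted only within the last partially filled block per sequence, rather than reserving the maximum sequence length up front.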
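Continuous batching, the other vLLM feature listed above, can likewise be shown with a toy scheduler (not vLLM's actual implementation): a finished sequence's slot is refilled from the waiting queue on the very next decode step, instead of idling until the whole batch drains. Request ids and lengths below are made up.

```python
from collections import deque

def continuous_batching(requests, max_batch=4):
    """Toy decode loop: `requests` maps request id -> tokens to generate
    (a stand-in for hitting EOS). Returns total decode steps taken."""
    waiting = deque(requests.items())
    running = {}                       # request id -> tokens still to generate
    steps = 0
    while waiting or running:
        # refill any free slots before every step
        while waiting and len(running) < max_batch:
            rid, tokens = waiting.popleft()
            running[rid] = tokens
        # one decode step emits one token for every running sequence
        for rid in list(running):
            running[rid] -= 1
            if running[rid] == 0:      # done: slot freed immediately
                del running[rid]
        steps += 1
    return steps

# One long and three short requests on two slots: the short ones cycle
# through the second slot while the long one keeps decoding.
steps = continuous_batching({"a": 8, "b": 2, "c": 2, "d": 2}, max_batch=2)
assert steps == 8   # static batching of [a,b], [c,d] would take 8 + 2 = 10
```

This is the scheduling trick that keeps GPU slots busy when batched sequences finish at very different lengths.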