He Pujiang, AI Software Architect, Intel
The Era of Large Language Models: Optimization Strategies to Maximize CPU Value
QCON 2023 SHANGHAI

Agenda
01 Background (why?)
02 How to optimize large language models on CPU?
03 Maximizing CPU value
04 Summary

Background (why consider maximizing CPU value?)

Computing Needs in LLM

[Figure: GPT-J model structure. Input embedding feeds a stack of decoder blocks, each with Layer Norm, a QKV MatMul, masked multi-head attention (BMM -> SoftMax -> BMM), an output MatMul in MHA, and two FFN MatMuls; a final MatMul and SoftMax yield the probability of the next token. The same structure serves the 1st token (prefill) and the next tokens (decode).]
MatMul shapes in GPT-J (assume prompt token size = 2048, batch size = 1, greedy search)

                       1st token                        Next tokens
QKV MatMul in MHA      A: 2048x4096    B: 4096x12288    A: 1x4096     B: 4096x12288
MHA (1st BMM)          A: 16x2048x256  B: 16x2048x256   A: 16x1x256   B: 16x2048x256
Output MatMul in MHA   A: 2048x4096    B: 4096x4096     A: 1x4096     B: 4096x4096
1st MatMul in FFN      A: 2048x4096    B: 4096x16384    A: 1x4096     B: 4096x16384
2nd MatMul in FFN      A: 2048x16384   B: 16384x4096    A: 1x16384    B: 16384x4096

This block repeats x 28, once per decoder layer. The 1st token (prefill) is compute bound; the next tokens (decode) are memory read bandwidth bound.
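To make the two regimes concrete, here is a minimal PyTorch sketch (my illustration, not code from the talk) that issues the five matmuls of one GPT-J-style decoder layer with the prefill (S=2048) and decode (S=1) activation shapes from the table above; the sizes (4096 hidden, 16 heads x 256 head dim, 16384 FFN width) are GPT-J 6B's, and details such as rotary embeddings and KV-cache bookkeeping are elided.

```python
import torch
import torch.nn.functional as F

H, HEADS, DH, FFN = 4096, 16, 256, 16384   # GPT-J 6B sizes from the table

# Per-layer weights in bf16 (matching the AMX analysis on the next slide)
w_qkv = torch.randn(H, 3 * H, dtype=torch.bfloat16)
w_out = torch.randn(H, H, dtype=torch.bfloat16)
w_ffn1 = torch.randn(H, FFN, dtype=torch.bfloat16)
w_ffn2 = torch.randn(FFN, H, dtype=torch.bfloat16)

def layer_matmuls(S: int, kv_len: int = 2048) -> torch.Tensor:
    """The five matmuls of one decoder layer, for S query tokens."""
    x = torch.randn(S, H, dtype=torch.bfloat16)
    qkv = x @ w_qkv                                       # A: SxH         B: Hx3H
    q = qkv[:, :H].reshape(S, HEADS, DH).transpose(0, 1)  # 16 x S x 256
    k = torch.randn(HEADS, kv_len, DH, dtype=torch.bfloat16)  # stands in for the KV cache
    v = torch.randn(HEADS, kv_len, DH, dtype=torch.bfloat16)
    scores = q @ k.transpose(1, 2)                        # 1st BMM: 16xSx256 @ 16x256x2048
    attn = torch.softmax(scores.float(), -1).to(torch.bfloat16) @ v   # 2nd BMM
    out = attn.transpose(0, 1).reshape(S, H) @ w_out      # A: SxH         B: HxH
    return F.gelu(out @ w_ffn1) @ w_ffn2                  # FFN: Hx16384, then 16384xH

layer_matmuls(S=2048)  # 1st token (prefill): large GEMMs, compute bound
layer_matmuls(S=1)     # next tokens (decode): GEMV-like, bandwidth bound
```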
GPT Series Model Analysis

Parameters visited during one inference pass (l layers, hidden size h):
P ≈ l x (4h^2 + 2 x 4h^2) + embedding parameters
(4h^2 for the QKV and output matmuls in attention, 2 x 4h^2 for the two FFN matmuls)

Memory latency and compute latency per step:
t_mem = 2P / BW       (bf16: 2 bytes per parameter, every weight read once)
t_comp = 2 x B x S x P / FLOPS   (a multiply and an add per parameter per token)

Arithmetic intensity:
AI = FLOPs / bytes = (2 x B x S x P) / (2P) = B x S FLOPs/byte

Peak AI for SPR-SP with BF16 with AMX: 123.2 / (307.2 / 1000) = 401 FLOPs/byte
(123.2 TFLOPS peak BF16 AMX compute; 307.2 GB/s peak DDR5 bandwidth)

Compute bound (bf16): B x S > 401
Memory bound (bf16): B x S < 401

SPR memory hierarchy (per socket):
- 64GB HBM2e: 1GB/core HBM memory capacity, 1TB/s memory BW
- up to 112.5MB shared LLC
- DDR5, 8 channels per CPU, 4800 MT/s (1DPC), 16 DIMMs per socket
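The crossover can be checked with a few lines of arithmetic. This sketch (mine, using the slide's SPR-SP numbers) reproduces the 401 FLOPs/byte machine balance and classifies the two inference phases:

```python
# Roofline arithmetic for bf16 GEMMs on SPR-SP (numbers from the slide).
peak_flops = 123.2e12   # peak BF16 AMX compute, FLOPs/s
peak_bw = 307.2e9       # peak DDR5 bandwidth (8ch x 4800 MT/s x 8B), bytes/s

peak_ai = peak_flops / peak_bw          # machine balance, FLOPs/byte
print(f"peak AI = {peak_ai:.0f} FLOPs/byte")   # ~401

# For a weight matrix with P bf16 parameters processed over B*S tokens:
#   FLOPs = 2 * B * S * P   (multiply + add per parameter per token)
#   bytes = 2 * P           (each bf16 weight read once)
# so arithmetic intensity AI = B * S.
def bound(batch: int, seq: int) -> str:
    ai = batch * seq
    return "compute bound" if ai > peak_ai else "memory bandwidth bound"

print(bound(batch=1, seq=2048))  # prefill: compute bound
print(bound(batch=1, seq=1))     # decode:  memory bandwidth bound
```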
CPU is NOT Fully Utilized!

[Chart: metric_CPU utilization% over time (x-axis 0 to 3,000; y-axis up to 70%) during LLM inference, staying well below full utilization.]

LLM Inference Pipeline: a REST/gRPC API front-end, text embedding stages (Text Emb1, Text Emb2), a Vector DB, a Context Retriever, and the pre-trained/fine-tuned LLM model; e.g., "Should I attend QCON?" -> "Yes."
Pre-processing and post-processing in LLM inference are relatively simple and do not need much CPU resource.

CPU Utilization in LLM Training (offload mode)
Even for offload LLM training, the CPU is still not fully utilized.

How to optimize large language models on CPU?

Optimization: leverage high-performance kernels (e.g., oneDNN)
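One common way to pick up such kernels (a minimal sketch, not the talk's code): PyTorch on x86 CPUs already dispatches many ops to oneDNN, and Intel Extension for PyTorch (IPEX) extends that coverage; `ipex.optimize` is IPEX's documented entry point, while the GPT-J checkpoint and prompt here are just illustrative.

```python
import torch
import intel_extension_for_pytorch as ipex  # routes heavy ops to oneDNN/AMX kernels
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # the model analyzed in the earlier slides
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# ipex.optimize applies operator fusion and weight layout changes so the
# matmuls run on oneDNN's bf16 (AMX) kernels.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    ids = tok("Should I attend QCON?", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=16)

print(tok.decode(out[0], skip_special_tokens=True))
```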