AArch64-Optimized Matrix Computation Kernels in llama.cpp
How the Arm Ecosystem Supports Sustainable AI in the Era of Large Models
Li Tianyu (李天羽), Principal Software Engineer, Arm
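Before the market framing, it may help to see concretely what "AArch64-optimized matrix computation kernel" refers to. Below is a minimal sketch, not llama.cpp's actual code, of an int8 dot-product micro-kernel using the ARMv8.2-A dotprod extension (the SDOT instruction, exposed as the vdotq_s32 intrinsic). The ggml kernels in llama.cpp build on the same instruction family, adding quantization-block handling, scaling, and tiling; the function name dot_i8 and the build line here are illustrative.

```c
// Build (illustrative): gcc -O3 -march=armv8.2-a+dotprod -c dot_i8.c
#include <arm_neon.h>
#include <stdint.h>

// Dot product of two int8 vectors of length n, accumulated in int32.
// vdotq_s32 (SDOT) multiplies 16 int8 pairs and accumulates them into
// 4 int32 lanes in a single instruction.
int32_t dot_i8(const int8_t *a, const int8_t *b, int n) {
    int32x4_t acc = vdupq_n_s32(0);
    int i = 0;
    for (; i + 16 <= n; i += 16) {
        int8x16_t va = vld1q_s8(a + i);
        int8x16_t vb = vld1q_s8(b + i);
        acc = vdotq_s32(acc, va, vb);            // 16 MACs per instruction
    }
    int32_t sum = vaddvq_s32(acc);               // horizontal add of the 4 lanes
    for (; i < n; i++) {                         // scalar tail for n % 16 leftovers
        sum += (int32_t)a[i] * (int32_t)b[i];
    }
    return sum;
}
```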
Generative AI Adoption

Training frontier models: 15-20% of AI workloads
- Led by a handful of hyperscalers
- Will remain cost- and power-intensive for now

Inferencing: 80-85% of AI workloads
- Customized to industry verticals and enterprises
- Hundreds of startups launched since 2023
- An evolving AI software stack
- Needs to be done at low TCO (perf/watt), at scale

Rise of Smaller Specialized LLMs

Democratizes LLMs by bringing them to a wider set of developers.

"Small" LLMs: 2-70B parameters
- Typically open-source
- Efficient at focused tasks and data sets
- Can be easily fine-tuned and augmented
- Runs on a wide variety of platforms: CPUs and GPUs
- Lower security risk, better privacy

"Large" LLMs: 180B-1T+ parameters
- Typically closed-source
- Efficient at a variety of generic tasks
- Limited fine-tuning and augmentation
- Require large clusters of GPUs/accelerators
- Privacy and security can be a challenge

Examples: LLaMa3, Qwen, Baichuan, Phi3, GLM, GPT4, ERNIE, Claude... and 1000s of others on e.g. Hugging Face (just a handful shown here).
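Since "runs on a wide variety of platforms" is the key claim for small LLMs, here is a hedged sketch of CPU-only inference through llama.cpp's C API (llama.h in roughly its mid-2024 shape; signatures drift between releases, so treat this as a sketch rather than a reference). The model file name and prompt are made up; setting n_gpu_layers to 0 keeps every layer on the AArch64 cores, where the optimized matmul kernels this talk covers do the work.

```c
// Sketch: CPU-only next-token prediction with llama.cpp's C API.
// Assumes llama.h as of roughly mid-2024; the API evolves across releases.
#include "llama.h"
#include <stdio.h>
#include <string.h>

int main(void) {
    llama_backend_init();

    struct llama_model_params mp = llama_model_default_params();
    mp.n_gpu_layers = 0;  // CPU-only: all layers run on the AArch64 cores

    // Hypothetical file name -- any small quantized GGUF model works here.
    struct llama_model *model =
        llama_load_model_from_file("qwen2-1_5b-instruct-q4_0.gguf", mp);
    if (!model) { fprintf(stderr, "failed to load model\n"); return 1; }

    struct llama_context_params cp = llama_context_default_params();
    cp.n_ctx = 512;
    struct llama_context *ctx = llama_new_context_with_model(model, cp);

    // Tokenize the prompt (buffer sized generously for this sketch).
    const char *prompt = "What is AArch64?";
    llama_token toks[128];
    int n = llama_tokenize(model, prompt, (int)strlen(prompt),
                           toks, 128, /*add_special*/ true, /*parse_special*/ false);

    // Run the prompt through the model, then greedily pick the next token.
    llama_decode(ctx, llama_batch_get_one(toks, n, 0, 0));
    const float *logits = llama_get_logits_ith(ctx, -1);
    int n_vocab = llama_n_vocab(model), best = 0;
    for (int i = 1; i < n_vocab; i++)
        if (logits[i] > logits[best]) best = i;
    printf("next token id: %d\n", best);

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```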
CPUs vs. GPUs: Cars vs. Trains/Planes

GPUs: compute-heavy and expensive
- Require a certain threshold of ML inferencing to justify the cost and integration
- Incur an accessibility tax: added latency

CPUs: compute-decent and scalable
- Can scale with ML inferencing needs
- Great for on-demand inferencing

Adoption trends
- Large-scale ML users start with GPUs for offline work, then mix and match with CPUs for on-demand operations.
- Entry-level ML users start with CPUs, and then decide whether scale justifies GPUs.

[Chart: CPUs vs. GPUs plotted on cost/usage ($, on-demand vs. $, offline) against perf/batch size (100s at batch 1-10 vs. 1000s at batch 100s), showing the crossover.]
*The chart above is based on LLaMa2-7B performance; other models have similar characteristics but different crossover points.

CPUs are like cars: any time, any place, any distance.
GPUs are like trains/planes: …