Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores


Shaobo Ma, Chao Fang, Haikuo Shao, Zhongfeng Wang
ICAIS Lab, Nanjing University, China
Jan 23, 2025

Outline
01  Background & Motivation
02  Our Works
03  Experiments
04  Conclusion

01  Background & Motivation

1.1.1 Background: Quantization of LLMs

Challenges brought by the growth in size of LLMs:
- More memory (storage)
- More computational power and time (inference)

[Figure: Growth in Size of Transformer Models, 2017-2024. Data points: GPT-1 (117M), BERT (340M), GPT-2 (1.5B), GPT-3 (175B), Gopher (280B), PaLM (540B), GPT-4 (1000+B); vertical axis: 0-1200 billion parameters]

One effective method: model quantization
- Lowers the storage requirement
- Lowers the computational overhead

Representative quantization works: GPTQ (3-4 bit) [1], TSLD (2 bit) [2], OneBit (1 bit) [3]. A minimal sketch of the underlying idea follows.
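These methods differ in how they choose the low-bit weights, but all of them replace FP16 weights with narrow integer codes plus floating-point scale factors. The sketch below is a generic symmetric round-to-nearest quantizer, not the GPTQ, TSLD, or OneBit procedure; the function names and the per-row scale granularity are illustrative assumptions.

```python
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int):
    """Generic symmetric round-to-nearest quantization (illustrative).

    Stores each weight as an integer in [-2^(bits-1), 2^(bits-1)-1]
    plus one FP scale per row, so an n x k FP16 matrix shrinks from
    2*n*k bytes to roughly n*k*bits/8 bytes of codes.
    """
    qmax = 2 ** (bits - 1) - 1
    # One scale per output row; production quantizers use finer groups
    # (and, for GPTQ, error compensation) to reduce quantization loss.
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_rtn(w, bits=3)
print("3-bit codes span", q.min(), "to", q.max(),
      "| mean abs error:", float(np.abs(w - dequantize(q, s)).mean()))
```

Dequantization here is only used to measure the error; the point of the kernels presented later is to consume such low-bit codes directly on Tensor Cores.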

Storage Reduction Brought by Model Quantization:

Models       FP16 (GB)   GPTQ 3-bit (GB)   TSLD 2-bit (GB)   OneBit 1-bit (GB)
LLaMA-7B        13.5           2.5               1.7               1.3
LLaMA-13B       26.0           4.9               3.3               2.2
LLaMA-30B       65.1          12.2               8.1               4.9
LLaMA-65B      130.6          24.5              16.3               9.2
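The FP16, GPTQ, and TSLD columns follow a simple back-of-the-envelope rule: storage ≈ parameters × bits / 8 bytes. The sketch below reproduces that estimate. The parameter counts are the published LLaMA sizes (an assumption, not stated on the slide), and the OneBit column sits above the naive 1-bit estimate because the method keeps some components in higher precision.

```python
# Back-of-the-envelope checkpoint size: params * bits / 8 bytes.
# Parameter counts below are the published LLaMA sizes (assumed here,
# not taken from the slide). Real quantized models add scales and
# other FP16 pieces, which is why e.g. OneBit (1.3-9.2 GB in the
# table) exceeds the naive 1-bit estimate printed below.
PARAMS = {
    "LLaMA-7B": 6.7e9,
    "LLaMA-13B": 13.0e9,
    "LLaMA-30B": 32.5e9,
    "LLaMA-65B": 65.2e9,
}

def storage_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1e9  # decimal gigabytes, as in the table

for name, n in PARAMS.items():
    fp16, b3, b2, b1 = (storage_gb(n, b) for b in (16, 3, 2, 1))
    print(f"{name:9s}  FP16 {fp16:6.1f}  3-bit {b3:5.1f}  "
          f"2-bit {b2:5.1f}  1-bit {b1:5.1f}  (GB)")
```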

References
[1] Frantar, Elias, et al. "GPTQ: Accurate post-training quantization for generative pre-trained transformers." arXiv preprint arXiv:2210.17323 (2022).
[2] Kim, Minsoo, et al. "Token-scaled logit distillation for ternary weight generative language models." Advances in Neural Information Processing Systems 36 (2023).
[3] Xu, Yuzhuang, et al. "OneBit: Towards extremely low-bit large language models." arXiv preprint arXiv:2402.11295 (2024).
