The INT8 Quantization Implementation of the Faster Transformer 3.0 Encoder


NVIDIA - THE INT8 QUANTIZATION OF FASTER TRANSFORMER 3.0 ENCODER
Yu Chen, 2020/12

AGENDA
- What is Faster Transformer: introduce Faster Transformer and its encoder.
- What is INT8 quantization: introduce the INT8 quantization technique used in the Faster Transformer 3.0 encoder (a generic sketch of the technique follows this list).
- How to do INT8 quantization with cuBLASLt: introduce how to use cuBLASLt to implement INT8 quantization.
- INT8 quantization of the Faster Transformer encoder: demonstrate the performance of the Faster Transformer INT8 encoder.
- Further improvement: INT8 output GEMM.
- Summary.
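As a rough illustration of the INT8 quantization technique named in the agenda, the sketch below shows plain symmetric per-tensor quantization: a float tensor is mapped to int8 with a single scale derived from its maximum absolute value. The helper names are made up for this example, and the actual scaling scheme used inside Faster Transformer 3.0 may differ (e.g. per-channel scales or calibrated ranges).

```cpp
// Minimal sketch of symmetric per-tensor INT8 quantization (illustrative only).
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Choose a scale so that the largest |x| maps to 127.
float compute_scale(const std::vector<float>& x) {
    float max_abs = 0.0f;
    for (float v : x) max_abs = std::max(max_abs, std::fabs(v));
    return max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
}

// q = clamp(round(x / scale), -127, 127)
std::vector<int8_t> quantize(const std::vector<float>& x, float scale) {
    std::vector<int8_t> q(x.size());
    for (size_t i = 0; i < x.size(); ++i) {
        float r = std::round(x[i] / scale);
        r = std::min(127.0f, std::max(-127.0f, r));
        q[i] = static_cast<int8_t>(r);
    }
    return q;
}

// x' = q * scale recovers the input up to rounding error.
std::vector<float> dequantize(const std::vector<int8_t>& q, float scale) {
    std::vector<float> x(q.size());
    for (size_t i = 0; i < q.size(); ++i) x[i] = static_cast<float>(q[i]) * scale;
    return x;
}
```

With scales of this kind, an FP32 GEMM can be replaced by an INT8 GEMM on the quantized inputs followed by a rescale with scale_A * scale_B, which is the kind of computation the cuBLASLt topic above refers to.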
WHAT IS FASTER TRANSFORMER

Faster Transformer version history:
- Faster Transformer 1.0 (2019/08): provides a highly optimized BERT encoder.
- Faster Transformer 2.0 (2020/02): provides a highly optimized OpenNMT-tf based decoder and decoding.
- Faster Transformer 2.1 (2020/06): adds the Effective Transformer idea into the transformer layer.
- Faster Transformer 3.0 (2020/09): provides an INT8 quantized encoder and a bert-tf-quantization tool.
FasterTransformer encoder
- Built on top of CUDA + cuBLAS + cuBLASLt.
- APIs: C++, TensorRT plugin, TensorFlow OP, PyTorch OP.
- Batch size (B): smaller than or equal to 512.
- Sequence length (S): smaller than or equal to 1024. For the INT8 data type, the sequence length should be a multiple of 32.
- Head number (H) and size per head (N), supported combinations:
  - 16 heads * 64 per head (BERT large, 24 layers)
  - 12 heads * 64 per head (BERT base, 12 layers)
  - 4 heads * 32 per head
  - 8 heads * 96 per head
- Data type: FP32, FP16 and INT8 (INT8 is only supported on T4).
- Any number of layers (N_l) if the memory is enough.
These shape constraints are collected in the sketch after this list.
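The sketch below simply restates the limits above as a validation helper. It is illustrative only: EncoderConfig and is_supported are hypothetical names, not part of the Faster Transformer API; only the numeric limits come from the list above.

```cpp
// Hypothetical helper that checks the encoder shape limits listed above.
// Not part of Faster Transformer; it only restates the documented constraints.
#include <cstdio>

enum class DataType { FP32, FP16, INT8 };

struct EncoderConfig {
    int batch_size;     // B, at most 512
    int seq_len;        // S, at most 1024 (multiple of 32 for INT8)
    int head_num;       // H
    int size_per_head;  // N
    DataType dtype;
};

bool is_supported(const EncoderConfig& c) {
    if (c.batch_size > 512) return false;
    if (c.seq_len > 1024) return false;
    if (c.dtype == DataType::INT8 && c.seq_len % 32 != 0) return false;
    // (H, N) combinations named on the slide.
    const int supported[][2] = {{16, 64}, {12, 64}, {4, 32}, {8, 96}};
    for (const auto& hn : supported) {
        if (c.head_num == hn[0] && c.size_per_head == hn[1]) return true;
    }
    return false;
}

int main() {
    // BERT-base-like shape with INT8: batch 32, sequence length 128.
    EncoderConfig cfg{32, 128, 12, 64, DataType::INT8};
    std::printf("supported: %s\n", is_supported(cfg) ? "yes" : "no");
    return 0;
}
```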
FasterTransformer decoder and decoding
- Built on top of CUDA + cuBLAS.
- APIs: C++, TensorFlow OP, PyTorch OP.
- The decoder is the model that contains some transformer layers. On the other hand, decoding refers to the whole translating process, including the embedding table lookup, position encoding, a decoder, and ...
