NVIDIA
THE INT8 QUANTIZATION OF FASTER TRANSFORMER 3.0 ENCODER
Yu Chen, 2020/12

#page#

AGENDA
- What is Faster Transformer: introduce Faster Transformer and its encoder.
- What is INT8 quantization: introduce the INT8 quantization technique used in the Faster Transformer 3.0 encoder.
- How to do INT8 quantization with cuBLASLt: introduce how to use cuBLASLt to implement INT8 quantization.
- INT8 quantization of the Faster Transformer encoder.
- The performance of the Faster Transformer INT8 encoder: demonstrate the performance.
- Further improvement: INT8-output GEMM.
- Summary.

#page#

WHAT IS FASTER TRANSFORMER

#page#

WHAT IS FASTER TRANSFORMER
Timeline:
- Faster Transformer 1.0 (2019/08): provides a highly optimized BERT-equivalent transformer layer.
- Faster Transformer 2.0 (2020/02): provides a highly optimized OpenNMT-tf based decoder and decoding.
- Faster Transformer 2.1 (2020/06): adds the Effective Transformer idea into the encoder.
- Faster Transformer 3.0 (2020/09): provides an INT8 quantized encoder and a bert-tf-quantization tool.

#page#

WHAT IS FASTER TRANSFORMER
Faster Transformer encoder
- Built on top of CUDA + cuBLAS + cuBLASLt.
- APIs: C++, TensorRT plugin, TensorFlow OP, PyTorch OP.
- Batch size (B): smaller than or equal to 512.
- Sequence length (S): smaller than or equal to 1024. For the INT8 data type, the sequence length should be a multiple of 32.
- Head number (H) and size per head (N):
  - 16 heads * 64 per head (BERT large, 24 layers)
  - 12 heads * 64 per head (BERT base, 12 layers)
  - 4 heads * 32 per head
  - 8 heads * 96 per head
- Data type: FP32, FP16 and INT8 (INT8 is only supported on T4).
- Any number of layers (N_l) if the memory is enough.

#page#

WHAT IS FASTER TRANSFORMER
Faster Transformer decoder and decoding
- Built on top of CUDA + cuBLAS.
- APIs: C++, TensorFlow OP, PyTorch OP.
- The decoder is the model that contains some transformer layers. On the other hand, decoding refers to the whole translating process, including the embedding-table lookup, position encoding, a decoder a
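Since the agenda introduces INT8 quantization before the cuBLASLt material, the core idea can be sketched in a few lines: map the largest absolute value of a tensor onto 127, round to 8-bit integers, and divide by the same scale to recover an approximation. This is a minimal sketch of generic per-tensor symmetric quantization, not FasterTransformer's actual implementation; the helper names (`amax_scale`, `quantize`, `dequantize`) are illustrative only.

```python
import numpy as np

def amax_scale(x: np.ndarray) -> float:
    """Per-tensor scale: map the largest absolute value onto 127."""
    return 127.0 / np.max(np.abs(x))

def quantize(x: np.ndarray, scale: float) -> np.ndarray:
    """FP32 -> INT8: scale, round, clip to the symmetric range [-127, 127]."""
    return np.clip(np.round(x * scale), -127, 127).astype(np.int8)

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """INT8 -> FP32: divide by the same scale."""
    return q.astype(np.float32) / scale

if __name__ == "__main__":
    act = np.array([0.02, -1.6, 0.7, 3.0], dtype=np.float32)  # fake activations
    scale = amax_scale(act)
    q = quantize(act, scale)
    dq = dequantize(q, scale)
    # The round trip is lossy but bounded by half a quantization step.
    assert np.all(np.abs(act - dq) <= 0.5 / scale + 1e-6)
```

The symmetric range [-127, 127] (rather than the full [-128, 127]) keeps the mapping sign-symmetric, which is the common convention for INT8 GEMM inputs.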