Machine Intelligence of DAMO
Accelerating CNN INT8 Fixed-Point Training with Tensor Cores
李英晗 赵康 顾震宇 张迎亚 潘攀
Alibaba DAMO Academy, Machine Intelligence Technology

Contents
1. Background and Goals
2. Quantization and Dequantization for CNN INT8 Training
3. Tensor Core INT8 Implicit GEMM Convolution Implementation
4. Experimental Results

Background and Goals
- For data-privacy reasons, some training has to run on user-side inference machines.
- On Turing GPUs, INT8 Tensor Core throughput is twice the FP16 Tensor Core throughput.
- The Tensor Cores of the next-generation training GPU (A100) also support INT8.

                     Tesla T4     Tesla V100 PCIe   Tesla A100
  FP16 Tensor Core   65 TFlops    112 TFlops        312 TFlops
  INT8 Tensor Core   130 Tops     NA                624 Tops

Quantization and Dequantization for CNN INT8 Training
Per-layer pipeline: FP16 to INT8 quantize -> Conv_INT8 -> INT32 to FP16 dequantize -> BN (FP16) -> ReLU (FP16).
Symmetric quantization maps the range [-|max|, |max|] onto [-127, 127], i.e. |max| * scale = 127 with scale = 127 / |max|.
The same scheme covers both INT8 quantized training and INT8 quantized inference.
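The symmetric scheme above (|max| * scale = 127) can be sketched as host-side C++. The helper names compute_scale, quantize, and dequantize are illustrative, not from the talk:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Per-tensor symmetric quantization: map [-|max|, |max|] onto [-127, 127].
float compute_scale(const std::vector<float>& x) {
    float amax = 0.f;
    for (float v : x) amax = std::max(amax, std::fabs(v));
    return amax > 0.f ? 127.f / amax : 1.f;
}

std::vector<int8_t> quantize(const std::vector<float>& x, float scale) {
    std::vector<int8_t> q(x.size());
    for (size_t i = 0; i < x.size(); ++i) {
        float r = std::round(x[i] * scale);
        q[i] = static_cast<int8_t>(std::min(127.f, std::max(-127.f, r)));
    }
    return q;
}

// Dequantize the INT32 accumulator output of an INT8 conv back to floating
// point: the combined scale is the product of the input and weight scales.
float dequantize(int32_t acc, float scale_x, float scale_w) {
    return static_cast<float>(acc) / (scale_x * scale_w);
}
```

In the training pipeline the dequantize step runs fused after the INT8 convolution, converting the INT32 accumulators back to FP16 before BN and ReLU.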
Fused conv-dequantize kernels (forward and both backward passes):
- Forward: X_F16 -> quantize -> X_S8; W_F16 -> quantize -> W_S8; conv_S8(X_S8, W_S8) -> Y_S32 -> dequantize -> Y_F16 (conv-dequantize fused).
- Weight gradient: dY_F16 -> quantize_dw -> dY_w_S8; conv_dw_S8(dY_w_S8, X_S8) -> dW_S32 -> dequantize -> dW_F16 (conv_dw-dequantize fused).
- Data gradient: dY_F16 -> quantize_dx -> dY_S8; conv_dx_S8(dY_S8, W_S8) -> dX_S32 -> dequantize -> dX_F16 (conv_dx-dequantize fused).

Tensor Core INT8 Implicit GEMM Convolution Implementation

Convolution as matrix multiplication (Forward)
- img2col: the filter (K x R x S x C) becomes matrix B, column-major, with R*S*C rows and K columns.
- The feature map (NHWC) becomes matrix A, row-major, with N*P*Q rows and C*R*S columns.
- Global-memory accesses are 16B/8B packed; rows of the img2col matrix are not always aligned.

INT8 NCHW to NHWC
- PRMT (prmt.b32 d, a, b, index) extracts any four selected bytes from the eight bytes of the two 32-bit source registers and stores them into the destination register.
- Four 32-bit registers (int32_t s8_4[4]) hold a 4x4 tile of int8 values; two rounds of PRMT transpose the tile in registers.
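To see why two rounds of PRMT with selectors 0x5140/0x7362 and then 0x5410/0x7632 transpose a 4x4 int8 tile, the byte-selection semantics can be emulated on the CPU. The helpers prmt and transpose4x4 below are illustrative, not the talk's kernel code:

```cpp
#include <cstdint>

// CPU emulation of PTX prmt.b32 (default mode): the i-th hex digit of the
// selector picks one of the 8 source bytes (0-3 from a, 4-7 from b) and
// places it into byte i of the result.
uint32_t prmt(uint32_t a, uint32_t b, uint32_t sel) {
    uint64_t src = (static_cast<uint64_t>(b) << 32) | a;
    uint32_t d = 0;
    for (int i = 0; i < 4; ++i) {
        uint32_t idx = (sel >> (4 * i)) & 0x7;        // source byte index 0..7
        d |= ((src >> (8 * idx)) & 0xFFu) << (8 * i); // place into result byte i
    }
    return d;
}

// Two-pass 4x4 int8 transpose: first interleave row pairs, then gather
// the interleaved halves into columns.
void transpose4x4(uint32_t r[4]) {
    uint32_t ra = prmt(r[0], r[1], 0x5140);  // m00 m10 m01 m11
    uint32_t rb = prmt(r[2], r[3], 0x5140);  // m20 m30 m21 m31
    uint32_t rc = prmt(r[0], r[1], 0x7362);  // m02 m12 m03 m13
    uint32_t rd = prmt(r[2], r[3], 0x7362);  // m22 m32 m23 m33
    r[0] = prmt(ra, rb, 0x5410);             // column 0
    r[1] = prmt(ra, rb, 0x7632);             // column 1
    r[2] = prmt(rc, rd, 0x5410);             // column 2
    r[3] = prmt(rc, rd, 0x7632);             // column 3
}
```

Eight PRMT instructions thus transpose 16 int8 values entirely in registers, which is what makes the NCHW-to-NHWC conversion cheap.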
The 4x4 transpose as inline PTX (four 32-bit registers s8_4[0..3], each holding four int8 values):

    asm volatile(
        "{\n\t"
        ".reg .u32 r_a, r_b, r_c, r_d;\n\t"
        "prmt.b32 r_a, %0, %1, 0x5140;\n\t"
        "prmt.b32 r_b, %2, %3, 0x5140;\n\t"
        "prmt.b32 r_c, %0, %1, 0x7362;\n\t"
        "prmt.b32 r_d, %2, %3, 0x7362;\n\t"
        "prmt.b32 %0, r_a, r_b, 0x5410;\n\t"
        "prmt.b32 %1, r_a, r_b, 0x7632;\n\t"
        "prmt.b32 %2, r_c, r_d, 0x5410;\n\t"
        "prmt.b32 %3, r_c, r_d, 0x7632;\n"
        "}\n"
        : "+r"(s8_4[0]), "+r"(s8_4[1]), "+r"(s8_4[2]), "+r"(s8_4[3]));

Tiled GEMM (Threadblock Tile)
For an m_tile x n_tile threadblock tile over a reduction length k:
  memory_throughput = (m_tile + n_tile) * k
  math_throughput   = m_tile * n_tile * k * 2
  math_throughput / memory_throughput = 2 * m_tile * n_tile / (m_tile + n_tile)
The larger m_tile and n_tile are, the higher the compute-to-memory ratio; their maximum values are limited by the shared memory and register capacity available to a threadblock.
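The tile-size trade-off above can be checked numerically; the function name is illustrative:

```cpp
// Compute-to-memory ratio of a threadblock tile, from the formulas above:
//   math   = m_tile * n_tile * k * 2   (multiply and add per output element)
//   memory = (m_tile + n_tile) * k     (elements loaded from global memory)
// The k factor cancels, so the ratio depends only on the tile shape.
double compute_to_memory_ratio(int m_tile, int n_tile) {
    return 2.0 * m_tile * n_tile / (m_tile + n_tile);
}
```

For example, a 128x128 tile reaches 128 multiply-adds per element loaded, four times the ratio of a 32x32 tile, which is why larger tiles are preferred up to the shared-memory and register limits.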