1. KAPLA: Scalable NN Accelerator Dataflow Design Space Structuring and Fast Exploring
Zhiyao Li and Mingyu Gao
ASP-DAC 2025
Tsinghua University; Shanghai Artificial Intelligence Lab; Shanghai Qi Zhi Institute

2. Background & Motivation
Domain-specific accelerators (DSAs) for neural networks (NNs):
- Abundant parallelism of PEs
- Localized memory accesses
- Reduced control overheads
[Figure: Google TPU v1 (ISCA'17), ~70x efficiency; block diagram with PE array, accumulators, activation, normalize/pool units, scratchpad, and HBM]

3. Background & Motivation
DSA hardware architectures and dataflow schedulers:
- The algorithm and use case drive both the architecture and the scheduler
- Architecture (e.g., the TPU accelerator): computation parallelism, data access patterns
- Scheduler (e.g., the XLA compiler): hyper-parameters, cost model, dataflow
- Together they determine performance and flexibility

4. A Large Design Space of Dataflow Scheduling
Complex hardware structures:
- Diverse PE array structures: 1D, 2D, adder tree, ...
- Deep hierarchical buffer storage: register file, global buffer, main memory, ...
- Numerous constraints:
  PE dataflow, buffer capacity, ...
Various algorithm designs:
- Complex model topologies
- 3D/4D tensor dimensions
- Abundant layer types
Flexible use scenarios:
- Offline: traditional compilers
- Online: MLaaS, NAS, ...
Takeaway message: time-consuming and frequently-occurring scheduling can no longer be ignored!

5. Hierarchical NN Dataflow Taxonomy
1. Segment Slicing
2. Layer Pipelining
3. Node Parallelism
4. Loop Blocking
5. PE Mapping
[Figure: example schedule — a model DAG of layers L1-L7 is sliced into segments; within a segment, layers run pipelined across nodes Node0-Node3; each layer's I/O/W tensors (e.g., ranges 0:5, 5:10, 10:15, 15:20 of 0:20) are partitioned across nodes and blocked across MEM and GLB over time]

6. Hierarchical NN Dataflow Taxonomy (cont.)
Segment slicing: the model DAG is sliced into multiple segments, each containing one or more layers; segments are scheduled on the hardware one after another.
Layer pipelining: layers in a segment are processed in a pipelined style; intermediate data are forwarded on-chip without being spilled to off-chip memory.
Node parallelism: each layer is partitioned across the parallel nodes.
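As a concrete illustration of segment slicing, the sketch below greedily packs consecutive layers of a linear model DAG into segments under an on-chip buffer budget. This is only a minimal toy, not KAPLA's actual slicing algorithm: the per-layer footprints, the budget, and the greedy rule are all assumptions for illustration.

```python
# Illustrative sketch of segment slicing (NOT KAPLA's actual algorithm):
# greedily pack consecutive layers of a linear model DAG into segments
# whose intermediate data fit in an assumed on-chip buffer budget.

def slice_segments(layer_footprints, buffer_budget):
    """layer_footprints: on-chip bytes each layer's intermediates need."""
    segments, current, used = [], [], 0
    for layer, footprint in enumerate(layer_footprints):
        if current and used + footprint > buffer_budget:
            segments.append(current)   # close this segment; it will be
            current, used = [], 0      # scheduled on hardware by itself
        current.append(layer)
        used += footprint
    if current:
        segments.append(current)
    return segments

# Five layers whose intermediates need 40/30/50/20/60 KB, 100 KB buffer:
print(slice_segments([40, 30, 50, 20, 60], 100))  # [[0, 1], [2, 3], [4]]
```

Each returned segment would then be scheduled on the hardware one after another, with only the data crossing segment boundaries spilled to off-chip memory.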
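Loop blocking, level 4 of the taxonomy, can be illustrated with a plain tiled matrix multiply: the outer loops pick which data tile is resident on-chip, and the inner loops compute within it. The tile sizes `TI`/`TJ`/`TK` below are arbitrary assumptions standing in for the blocking factors a dataflow scheduler would actually search over.

```python
# Illustrative loop blocking: tile the i/j/k loops of C = A @ B so that a
# (TI x TK) tile of A and a (TK x TJ) tile of B fit in a small buffer.
# Tile sizes TI/TJ/TK stand in for the blocking factors a scheduler picks.

def tiled_matmul(A, B, TI=2, TJ=2, TK=2):
    n, k = len(A), len(B)
    m = len(B[0])
    C = [[0] * m for _ in range(n)]
    for i0 in range(0, n, TI):         # blocked loops: choose which
        for j0 in range(0, m, TJ):     # tiles are resident on-chip
            for k0 in range(0, k, TK):
                # intra-tile loops: compute on the resident tiles
                for i in range(i0, min(i0 + TI, n)):
                    for j in range(j0, min(j0 + TJ, m)):
                        for kk in range(k0, min(k0 + TK, k)):
                            C[i][j] += A[i][kk] * B[kk][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(tiled_matmul(A, B))  # [[19, 22], [43, 50]]
```

The blocking order and tile sizes change how often each tensor is re-fetched from the next buffer level, which is exactly the trade-off the scheduler's cost model has to evaluate.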