Boqueria
Robert Beachler, VP of Product/Hardware Engineering
Dr. Martin Snelgrove, Co-founder and CTO
Copyright 2022 UNTETHER AI Corp.

A Brief History of the Current AI Summer
Timeline, 2012-2022: DeepMind acquired by Google (2014); AlphaGo beats Lee Sedol (2016); Untether AI founded in Toronto (2018); runAI200 introduced (2020): first at-memory inference accelerator, 500 INT8 TOPs, 200 MB SRAM, 8 TOPs/W, TSMC 16nm; Boqueria introduced (2022): 2 PetaFLOPs FP8, 238 MB SRAM, 30 TFLOPs/W, TSMC 7nm.

AI Inference Presents 3 Key Challenges to Chip Makers
Increasing computational and power requirements
Scalability and flexibility for a changing NN landscape
Accuracy loss costs $millions and risks lives
Sources: "The Computational Limits of Deep Learning," Neil C. Thompson, Kristjan Greenewald, Keeheon Lee, Gabriel F. Manso; "First-Generation Inference Accelerator Deployment at Facebook"; NHTSA report, June 2022, covering July 2021 to May 2022.
Model Category | Model Name | Model Size (Mparams)
Recommendation | Less complex | 70,000
Recommendation | More complex | 100,000
Computer Vision | ResNeXt101-32x4-48 | 44
Computer Vision | RegNetY | 700
Computer Vision | FBNetV3-based model | 28.6
Video Understanding | ResNeXt3D-based | 58
NLP | XLM-R | 558

Architecting an AI Inference Accelerator
Power-efficient throughput is required to meet NN compute demand.
Data movement is the costliest part of inference, accounting for roughly 90% of energy consumption, and its pattern differs between training and inference. Optimizing the compute architecture to minimize the distance data travels leads to inference-specific AI accelerators (a back-of-envelope energy estimate follows below).
Choose the proper level of granularity to create a scalable compute architecture: strike the right balance between coarse-grained and fine-grained approaches, and don't over-fit for a particular application or NN.
Utilize the most efficient datatype for a given application and its accuracy requirements; a mixture of datatypes provides the best results (an FP8 quantization sketch follows below).
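The roughly 90% figure can be sanity-checked with a back-of-envelope estimate. The sketch below compares arithmetic energy against data-movement energy for a ResNet-50-sized inference, using order-of-magnitude per-operation energies of the kind popularized by Horowitz (ISSCC 2014); the specific energy values, the traffic estimate, and the inference_energy helper are illustrative assumptions, not Untether AI figures.

```python
# Back-of-envelope estimate: arithmetic energy vs. data-movement energy for one
# inference. Per-operation energies are illustrative, order-of-magnitude values
# (roughly the figures popularized by Horowitz, ISSCC 2014), not vendor numbers.

PJ = 1e-12  # joules per picojoule

ENERGY_PJ = {
    "int8_mac":  0.2,    # one 8-bit multiply-accumulate
    "sram_byte": 1.2,    # read one byte from a small local SRAM
    "dram_byte": 160.0,  # read one byte from off-chip DRAM
}

def inference_energy(macs, bytes_moved, memory):
    """Return (compute_joules, movement_joules) for a workload with the given
    MAC count and operand traffic served from the chosen memory level."""
    compute = macs * ENERGY_PJ["int8_mac"] * PJ
    movement = bytes_moved * ENERGY_PJ[memory] * PJ
    return compute, movement

# Illustrative ResNet-50-scale workload: ~4e9 MACs and ~50 MB of weights and
# activations streamed per inference (assumed traffic figure).
macs, traffic_bytes = 4e9, 50e6

for memory in ("dram_byte", "sram_byte"):
    compute, movement = inference_energy(macs, traffic_bytes, memory)
    share = movement / (compute + movement)
    print(f"{memory:9s}: compute {compute * 1e3:5.2f} mJ, "
          f"movement {movement * 1e3:5.2f} mJ ({share:.0%} of total)")
```

With these assumptions, operands streamed from DRAM put data movement at roughly 90% of the energy budget, while keeping them in local SRAM drops that share to single digits, which is the motivation for the at-memory architecture of runAI200 and Boqueria.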
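To make the mixture-of-datatypes point concrete, here is a minimal NumPy sketch comparing FP8 and INT8 quantization error on two synthetic tensors. It assumes the commonly published E4M3 parameters (4 exponent bits, 3 mantissa bits, bias 7, max 448), which are not necessarily the FP8 variant Boqueria implements; the tensor distributions, the per-tensor INT8 scaling, and the helper names (quantize_e4m3, quantize_int8, rel_rmse) are illustrative choices, not part of any vendor toolchain.

```python
# Toy comparison of FP8 (E4M3) and INT8 quantization error on two kinds of
# tensors: tightly clustered weights and heavy-tailed activations.
import numpy as np


def quantize_e4m3(x):
    """Round values onto a simplified E4M3 grid: max 448, smallest normal 2**-6,
    subnormal step 2**-9 (NaN/Inf handling omitted for brevity)."""
    x = np.clip(np.asarray(x, dtype=np.float64), -448.0, 448.0)
    mantissa, exponent = np.frexp(x)  # x = mantissa * 2**exponent, 0.5 <= |mantissa| < 1
    normal = np.ldexp(np.round(mantissa * 16.0) / 16.0, exponent)  # keep 1+3 significant bits
    subnormal = np.round(x / 2.0**-9) * 2.0**-9                    # fixed step below 2**-6
    return np.where(np.abs(x) < 2.0**-6, subnormal, normal)


def quantize_int8(x, scale):
    """Symmetric per-tensor INT8 quantization with the given scale."""
    return np.clip(np.round(x / scale), -127, 127) * scale


def rel_rmse(x, q):
    """Quantization error relative to the tensor's RMS magnitude."""
    return np.sqrt(np.mean((x - q) ** 2)) / np.sqrt(np.mean(x ** 2))


rng = np.random.default_rng(0)
tensors = {
    "narrow weights":        rng.normal(0.0, 0.05, 100_000),    # tightly clustered
    "long-tail activations": rng.lognormal(0.0, 1.0, 100_000),  # heavy right tail
}

for name, x in tensors.items():
    int8_scale = np.max(np.abs(x)) / 127.0
    print(f"{name:22s}  FP8 E4M3 rel. RMSE: {rel_rmse(x, quantize_e4m3(x)):.4f}   "
          f"INT8 rel. RMSE: {rel_rmse(x, quantize_int8(x, int8_scale)):.4f}")
```

On this toy data the tightly clustered weights favor INT8's fine uniform steps, while the heavy-tailed activations favor FP8's wider dynamic range, which is why mixing datatypes per tensor can preserve accuracy at a given energy budget.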

Designing Energy-Efficient Co