Wafer-Scale AI: GPU Impossible Performance
Sean Lie, Co-founder and CTO, Cerebras Systems
Hot Chips 2024

Cerebras Systems
- Founded in 2016
- 400 employees
- Offices: Silicon Valley | San Diego | Toronto | Bangalore
- Customers: North America | Asia | Europe

The largest chip ever produced: the Cerebras Wafer-Scale Engine (WSE-3)
- 46,225 mm² of silicon
- 4 trillion transistors
- 900,000 AI cores
- 125 petaflops of AI compute
- 44 gigabytes of on-chip memory
- 21 PByte/s memory bandwidth
- 214 Pbit/s fabric bandwidth
- 5 nm TSMC process

Cerebras Wafer-Scale Engine versus the H100
                 Cerebras WSE-3     Largest GPU (H100)
  Transistors    4 trillion         80 billion
  Silicon area   46,225 mm²         814 mm²

Cerebras CS-3 (the system built around the WSE-3, successor to the CS-2)

Condor Galaxy AI supercomputers
- Condor Galaxy 1: 4 exaflops, Santa Clara, California
- Condor Galaxy 2: 4 exaflops, Stockton, California
- Condor Galaxy 3-5: 20 exaflops, Dallas, Texas
- Condor Galaxy 6-9: 32 exaflops, Minneapolis, MN
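To put that comparison in perspective, a quick back-of-the-envelope calculation from the figures quoted on the slide (plain Python, illustrative arithmetic only):

```python
# Rough WSE-3 vs. largest-GPU ratios, using the figures quoted above.
wse3_transistors = 4e12      # 4 trillion transistors
gpu_transistors  = 80e9      # 80 billion transistors
wse3_area_mm2    = 46_225    # wafer-scale die area
gpu_area_mm2     = 814       # reticle-limited GPU die area

print(f"transistors:  {wse3_transistors / gpu_transistors:.0f}x")  # ~50x
print(f"silicon area: {wse3_area_mm2 / gpu_area_mm2:.0f}x")        # ~57x
```

Both ratios land in the neighborhood of 50x, which is the point of the slide: the WSE-3 packs roughly a wafer's worth of GPU-sized dies into a single device.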
Co-designed cluster architecture to scale using data parallelism only
- WSE is large enough to run even the largest models on a single chip
- Avoids hybrid model-parallelism complexity
- MemoryX weight store streams weights to the CS-3s
- SwarmX fabric performs broadcast/reduce
- Multi-system scaling with the same execution model as a single system

The only architecture with exaflop-scale training performance that programs like a single device. Designed end-to-end for large-scale training.

[Diagram: MemoryX (optimizer compute, weight memory) streams weights through the SwarmX broadcast/reduce fabric to the CS-3 systems; gradients flow back the same way.]
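As a rough illustration of the execution model described above, here is a minimal data-parallel training step in plain Python. All names (MemoryXStore, swarmx_broadcast, swarmx_reduce) are hypothetical stand-ins for this sketch, not Cerebras APIs: a MemoryX-like store holds the weights and optimizer state, a SwarmX-like fabric broadcasts weights out and reduces gradients back, and each system only ever computes on its own data shard.

```python
import numpy as np

# Hypothetical sketch of weight-streaming data parallelism; names are
# illustrative, not the Cerebras software stack.

NUM_SYSTEMS = 4   # number of CS-3 systems in the cluster
LR = 1e-3         # learning rate for the toy SGD optimizer

class MemoryXStore:
    """Holds the weights and runs the optimizer (MemoryX's role)."""
    def __init__(self, shape):
        self.weights = np.random.randn(*shape).astype(np.float32)

    def apply_gradients(self, grad):
        self.weights -= LR * grad  # toy SGD update

def swarmx_broadcast(weights, num_systems):
    """Broadcast one copy of the weights to every system."""
    return [weights.copy() for _ in range(num_systems)]

def swarmx_reduce(grads):
    """Average per-system gradients into one gradient (an all-reduce)."""
    return np.mean(grads, axis=0)

def compute_gradient(weights, batch):
    """Stand-in for a forward/backward pass on one data shard."""
    return np.zeros_like(weights) + batch.mean()  # dummy gradient

store = MemoryXStore(shape=(8, 8))
data_shards = [np.random.randn(16) for _ in range(NUM_SYSTEMS)]

# One data-parallel training step: every system sees the same weights
# but a different data shard, exactly as if it were a single device.
replicas = swarmx_broadcast(store.weights, NUM_SYSTEMS)
grads = [compute_gradient(w, shard) for w, shard in zip(replicas, data_shards)]
store.apply_gradients(swarmx_reduce(grads))
```

Because the weights live in one logical place and only the data is sharded, adding systems scales throughput without changing the program, which is what "programs like a single device" means here.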
Training SOTA large models every day
- Sample of open-source models trained on Cerebras
- From multilingual LLMs to healthcare chatbots to code models

But also designed for inference.

The Generative Inference Problem
Generative inference today is really slow.
Source: Artificial Analysis, https://artificialanalysis.ai/
[Chart: inference speed by model, axis 0-2000; models include Claude 2, Claude 3 Haiku, Claude 3 ...]