Mpache: Interaction-Aware Multi-level Cache Bypassing on GPUs
Mengyue Xi, Tianyu Guo, Xuanteng Huang, Zejia Lin, Xianwei Zhang
ASPDAC 2025, Session: Adaptive and Flexible Memory Architecture
Time: 13:40-14:05, January 23, 2025

GPUs
- Designed for parallel computing: thousands of cores executing thousands of threads simultaneously
- Support diverse applications: artificial intelligence (AI), high-performance computing (HPC), graphics rendering

GPU Memory Hierarchy (using an Nvidia GPU as an example)
[Diagram: each SM contains cores plus control logic, registers, a constant cache, shared memory, and an L1 cache; all SMs share an L2 cache backed by global memory]

Cache Becomes Bottleneck
- Limited cache hit rates: cache conflicts caused by thousands of threads; irregular memory access patterns reduce cache efficiency
- Cache pollution: streaming data occupies cache space
[Diagram: 100,000s of threads contending for the GPU's L1 and L2 caches]

Trend 1: Enlarged Cache Capacity
- Nvidia: 30x L2 increase, from 2MB (2016) to 72MB (2022)
- AMD: 12x L2 increase, from 512KB (2010) to 6MB (2022)
[Figure: bar charts of L2 cache size by year for Nvidia and AMD]

Trend 2: Deepened Cache Levels
- AMD introduces three cache levels in its RDNA architectures: a per-CU L0 cache, a per-shader-array L1 cache, and a shared L2 cache
[Diagram: each CU (SM) holds control logic, registers, a constant cache, shared memory, and an L0 cache; CUs within a shader array share an L1 cache; shader arrays share the L2 cache, backed by global memory]

Optimization Opportunity: Bypass
- Memory load requests can bypass a specific cache level: bypass L1, bypass L2, or bypass all (go directly to DRAM)