Accelerating Apache Spark 3.0 with RAPIDS
Alessandro Bellina and Jason Lowe, 10/6/2020
Allen Xu and Liangcai Li, 12/15/2020
#page#
AGENDA
RAPIDS Accelerator for Apache Spark
How it works
Accelerating shuffle
Highlights of the 0.2 release
Plans for the 0.3 release
#page#
RAPIDS ACCELERATOR FOR APACHE SPARK 3.0
#page#
Accelerated ETL?
Can GPUs make the elephant run faster?
#page#
Yes
TPCx-BB-like benchmark results (10 TB dataset, 2-node DGX-2 cluster)

Query time, GPU vs CPU (minutes):

Query     CPU    GPU
Query#21  25.95  1.31
Query#5   6.16   1.16
Query#16  7.13   0.56
Query#22  3.80   0.14

Environment: 2x DGX-2 (96 CPU cores, 1.5 TB host memory, 16 V100 GPUs, 512 GB GPU memory per node)
Unofficial TPCx-BB-derived test (ETL only)
#page#
End-to-end DLRM on the Criteo dataset (1 TB)
Spark ETL + training
160x faster than the original implementation
48x faster than the pure-CPU approach (at 4% of the cost)
10x faster than the traditional approach (at 1/6 of the cost)
Time (hours) per configuration:

Configuration                                     ETL    Training
Original CPU (1 core for Spark ETL & Training)    144.0  45.0
Spark CPU (96 cores for ETL & Training)           12.1   45.0
Spark CPU (96-core ETL) & 1-V100 Training         12.1   0.7
Spark GPU (8-V100 ETL) & 1-V100 Training          0.5    0.7
#page#
"The more you buy, the more you save."
Jensen Huang, GTC 2020
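As a sanity check on the three headline speedups, a short calculation from the per-stage hours above (the times are copied from the chart; the dictionary keys are invented labels):

```python
# Per-stage times in hours (ETL, training), read off the DLRM chart.
configs = {
    "original_cpu_1core": (144.0, 45.0),  # 1 CPU core for Spark ETL & training
    "spark_cpu_96core":   (12.1, 45.0),   # 96-core CPU for ETL & training
    "cpu_etl_gpu_train":  (12.1, 0.7),    # 96-core CPU ETL, 1x V100 training
    "gpu_etl_gpu_train":  (0.5, 0.7),     # 8x V100 ETL, 1x V100 training
}

def total(name):
    """End-to-end hours: ETL plus training."""
    etl, train = configs[name]
    return etl + train

gpu = total("gpu_etl_gpu_train")  # 1.2 hours end to end
for name in ("original_cpu_1core", "spark_cpu_96core", "cpu_etl_gpu_train"):
    print(f"{name}: {total(name) / gpu:.1f}x")  # roughly 157.5x, 47.6x, 10.7x
```

The three ratios line up with the slide's rounded claims of 160x, 48x, and 10x.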
#page#
No code changes required
The same SQL and DataFrame code:

spark.conf.set("spark.rapids.sql.enabled", "true")
start = time.time()
spark.sql("""
    select o_orderpriority, count(*) as order_count
    from orders
    where date '1993-07-01' <= o_orderdate
      and o_orderdate < date '1993-07-01' + interval '3' month
      and exists (
        select * from lineitem
        where l_orderkey = o_orderkey
          and l_commitdate < l_receiptdate)
""")
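The slide's point is that only configuration changes, not the query itself. For completeness, a hedged sketch of how the accelerator is typically attached at application launch; the jar names and versions are placeholders for whatever release is downloaded, while `spark.plugins=com.nvidia.spark.SQLPlugin` is the plugin's documented entry point:

```shell
# Sketch: launching a Spark 3.0 application with the RAPIDS Accelerator.
# <version> placeholders stand for the release actually in use.
spark-submit \
  --master yarn \
  --jars rapids-4-spark_2.12-<version>.jar,cudf-<version>.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  app.py
```

With `spark.rapids.sql.enabled` settable at runtime (as in the code above), the same session can flip between CPU and GPU plans for comparison.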
#page#
Scala UDF compiled to Catalyst
Experimental feature in RAPIDS Accelerator for Apache Spark 0.2; more support planned.

A Scala UDF such as

  val myUDF = (userName: String) => if (userName.equals("Foo")) "hello" else "goodbye"

is normally opaque to Spark: SELECT myUDF(col) FROM table can only be planned as a
black-box Project that calls back into the JVM for every row. The NVIDIA UDF plugin
compiles the UDF body into Catalyst expressions (equality comparisons, conditionals,
type casts, bitwise operations, String ops, scala.math functions), so the projection
myUDF(value#106) AS myUDF(value)#131 can be planned as a GpuProject and run on the GPU.

scala> spark.sql("select value, myUDF(value) from demo").collect
res57: Array[org.apache.spark.sql.Row] = Array([Foo,hello], [Bar,goodbye])
#page#
Accelerating Pandas UDFs
Two aspects:
Spark Pandas UDFs: GPU resource management for the Python processes
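Spark aside, the essence of the UDF-compilation idea above can be illustrated in plain Python: the conditional extracted from the closure becomes a whole-column expression, which is the shape a GPU can execute in one pass. All names here are invented for illustration:

```python
# A row-at-a-time "UDF": a black box that the engine must invoke per row.
def my_udf(user_name):
    return "hello" if user_name == "Foo" else "goodbye"

# What a UDF compiler produces, conceptually: the same logic as an expression
# over an entire column, evaluated in a single pass (one GPU kernel launch
# instead of millions of per-row calls).
def columnar_my_udf(column):
    return ["hello" if v == "Foo" else "goodbye" for v in column]

column = ["Foo", "Bar"]
assert [my_udf(v) for v in column] == columnar_my_udf(column) == ["hello", "goodbye"]
```

The translation is only possible when the UDF body is built from operations the compiler recognizes, which is why the 0.2 feature is experimental and limited to the expression categories listed above.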