Build a Streaming Data Lake Using Apache Flink and Apache Hudi
Danny Chan / Apache Hudi Committer
刘大龙 / Alibaba Engineer

Agenda
#1 Apache Hudi 101
#2 Flink Hudi Integration
#3 Flink Hudi Use Case
#4 Apache Hudi Roadmap
#1 Apache Hudi 101

Why Data Lake?

Analytics storage has evolved through three generations (roughly 2015, 2018, 2021):
- Traditional data warehouse (Teradata, Vertica): MPP databases that couple computation and storage.
- Cloud-native data warehouse (Redshift, Snowflake, BigQuery): transactional, but built on private formats.
- Data lake (Hudi, Delta Lake, Iceberg): transactions on top of open formats.
Apache Hudi: Data Lake Platform

Hudi is not just a table format!

Timeline Service

Every operation on the table is recorded as an instant, defined by three fields:
- Action: commit, delta_commit, clean, compaction, rollback, savepoint
- Time: the instant time, a monotonically increasing timestamp
- State: requested, inflight, completed

All actions performed on the table at different instants of time construct instantaneous views of the table.
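To make the instant model concrete, here is a minimal, self-contained Java sketch of a timeline. The class and method names are illustrative only, not Hudi's actual API (which lives in org.apache.hudi.common.table.timeline):

```java
import java.util.List;
import java.util.stream.Collectors;

// Toy model of Hudi's timeline: every table operation is recorded as an
// Instant = (action, time, state). Illustrative names, not Hudi classes.
public class TimelineSketch {

    enum Action { COMMIT, DELTA_COMMIT, CLEAN, COMPACTION, ROLLBACK, SAVEPOINT }
    enum State  { REQUESTED, INFLIGHT, COMPLETED }

    record Instant(Action action, String time, State state) {}

    /** A read view of the table "as of" some time: only instants that have
     *  completed at or before that time contribute data. */
    static List<Instant> viewAsOf(List<Instant> timeline, String asOfTime) {
        return timeline.stream()
                .filter(i -> i.state() == State.COMPLETED)
                .filter(i -> i.time().compareTo(asOfTime) <= 0)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Instant> timeline = List.of(
                new Instant(Action.DELTA_COMMIT, "20211201103000", State.COMPLETED),
                new Instant(Action.DELTA_COMMIT, "20211201104500", State.COMPLETED),
                new Instant(Action.COMPACTION,   "20211201110000", State.INFLIGHT));
        // The inflight compaction is invisible to readers: only completed
        // instants form the instantaneous view of the table.
        System.out.println(viewAsOf(timeline, "20211201110000"));
    }
}
```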
File Format

A base path holds several partitions. Each partition contains multiple file groups with unique IDs, and each file group contains several file slices. A file slice is one base file (*.parquet) plus the log files (*.log.*) written against it.
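The hierarchy can be sketched as plain data structures. The file names below are made up for illustration and do not follow Hudi's exact naming scheme:

```java
import java.util.*;

// Illustrative model of Hudi's storage layout:
// base path -> partitions -> file groups (unique ID) -> file slices,
// where each slice pairs a base *.parquet file with its *.log.* files.
public class LayoutSketch {

    record FileSlice(String baseInstant, String baseParquet, List<String> logFiles) {}

    public static void main(String[] args) {
        // partition -> fileGroupId -> slices (newest base instant last)
        Map<String, Map<String, List<FileSlice>>> table = new TreeMap<>();

        table.computeIfAbsent("dt=2021-12-01", p -> new TreeMap<>())
             .computeIfAbsent("fg-1", g -> new ArrayList<>())
             .add(new FileSlice("20211201103000",
                     "fg-1_20211201103000.parquet",
                     List.of(".fg-1_20211201103000.log.1")));

        // A compaction produces a new slice in the SAME file group:
        table.get("dt=2021-12-01").get("fg-1")
             .add(new FileSlice("20211201110000",
                     "fg-1_20211201110000.parquet", List.of()));

        table.forEach((partition, groups) -> {
            System.out.println(partition);
            groups.forEach((fg, slices) ->
                    slices.forEach(s -> System.out.printf("  %s slice@%s -> %s + %s%n",
                            fg, s.baseInstant(), s.baseParquet(), s.logFiles())));
        });
    }
}
```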
Copy On Write

The incremental data set is cached as an in-memory index. While scanning the base file, the writer looks up the index and merges records where possible (scan and merge). When the latest file slice hits the size threshold, the writer rolls over to a new file group.

Merge On Read

The incremental data set (new data) is always appended to the latest version of the log file. When the log file size hits the threshold (default 1 GB), the write handle automatically rolls over to a new file group (FG-1 rolls over to FG-2).
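The difference between the two table types can be seen in a toy sketch of the write paths; everything here is illustrative Java, not Hudi's implementation:

```java
import java.util.*;

// Toy contrast of the two write paths.
public class WritePathSketch {

    // COPY_ON_WRITE: cache incoming records as an in-memory index, scan the
    // base file, merge matching keys, and rewrite the whole file slice.
    static List<String[]> cowWrite(List<String[]> baseFile,
                                   Map<String, String> index) {
        Map<String, String> pending = new HashMap<>(index);
        List<String[]> rewritten = new ArrayList<>();
        for (String[] rec : baseFile) {               // scan the base file
            String update = pending.remove(rec[0]);   // look up the index
            rewritten.add(update != null ? new String[]{rec[0], update} : rec);
        }
        pending.forEach((k, v) -> rewritten.add(new String[]{k, v})); // new inserts
        return rewritten;
    }

    // MERGE_ON_READ: simply append to the latest log file; the merge is
    // deferred to readers or to a later compaction.
    static void morWrite(List<String> latestLogFile, Map<String, String> index) {
        index.forEach((k, v) -> latestLogFile.add(k + "=" + v));
    }

    public static void main(String[] args) {
        List<String[]> base = List.of(new String[]{"k1", "v1"},
                                      new String[]{"k2", "v2"});
        Map<String, String> incoming = Map.of("k2", "v2'", "k3", "v3");

        cowWrite(base, incoming)
                .forEach(r -> System.out.println("COW " + r[0] + "=" + r[1]));

        List<String> log = new ArrayList<>();
        morWrite(log, incoming);
        System.out.println("MOR log appends: " + log);
    }
}
```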
#2 Flink Hudi Integration

Flink Writer Pipeline

The streaming write pipeline chains four stages:
rowdata to hoodie -> BucketAssigner -> Stream Writer (with coordinator) -> Cleaner
Incoming RowData is converted into Hudi records, each record is assigned to a bucket, the Stream Writer flushes data while its coordinator manages the write instants, and the Cleaner removes obsolete file versions.
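A minimal end-to-end example of this pipeline from Flink SQL, assuming the hudi-flink bundle is on the classpath; the table names, schema, and path are placeholders:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// A continuous INSERT INTO a Hudi table runs the whole writer pipeline
// (rowdata-to-hoodie conversion, bucket assignment, stream writer with
// coordinated commits, cleaning) inside one Flink job.
public class StreamIntoHudi {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Unbounded demo source using Flink's built-in datagen connector.
        tEnv.executeSql(
                "CREATE TABLE orders_src (" +
                "  order_id STRING, amount DOUBLE, ts TIMESTAMP(3)" +
                ") WITH ('connector' = 'datagen', 'rows-per-second' = '10')");

        // 'table.type' selects COPY_ON_WRITE (the default) or MERGE_ON_READ.
        tEnv.executeSql(
                "CREATE TABLE hudi_orders (" +
                "  order_id STRING, amount DOUBLE, ts TIMESTAMP(3)," +
                "  PRIMARY KEY (order_id) NOT ENFORCED" +
                ") WITH (" +
                "  'connector'  = 'hudi'," +
                "  'path'       = 'file:///tmp/hudi_orders'," +
                "  'table.type' = 'MERGE_ON_READ')");

        // Each successful checkpoint completes one delta_commit instant
        // on the table's timeline.
        tEnv.executeSql("INSERT INTO hudi_orders SELECT * FROM orders_src");
    }
}
```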