Scaling Kubernetes: Best Practices for Managing Large-Scale Batch Jobs with Spark and Argo Workflow (Yu Zhuang and Jiaxu Liu, Alibaba Cloud)


Yu Zhuang, Alibaba Cloud
Jiaxu Liu, Alibaba Cloud

About Us
Alibaba Cloud Container Service for Kubernetes (ACK)
Yu Zhuang: 9 years on Kubernetes, ACK Architect, focused on offline batch jobs and multi-cluster.
Jiaxu Liu: 7 years on Kubernetes, ACK Senior Engineer, focused on observability and large-scale cluster management.

Agenda
1. Why run offline batch jobs on Kubernetes
2. Spark on Kubernetes
3. Argo Workflow on Kubernetes
4. Best practices for large-scale offline batch jobs on Kubernetes

Cloud Native OS: Kubernetes
Kubernetes runs stateless applications, stateful applications, and batch jobs (data processing, AI training/inference, scientific computing).

Why run batch jobs on Kubernetes
- Single infrastructure: manages both online and offline workloads.
- Scalability: scales your offline jobs horizontally.
- Cost optimization: shares resources on premises and runs on serverless in the cloud.
- Multi-tenancy: isolates tenants with namespaces, resource quotas, and Kueue (a minimal quota sketch follows this list).
- Portability: lets offline jobs migrate between cloud providers.
- Ecosystem: provides operational capabilities, e.g. monitoring, logging, and security.
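To make the multi-tenancy bullet concrete, here is a minimal sketch of namespace-level isolation with a Kubernetes ResourceQuota. The namespace name and the quota values are illustrative assumptions, not taken from the deck; Kueue would add job queueing and admission on top of such quotas, but its resources are omitted here.

    # Hypothetical tenant namespace for one batch team; names and limits are examples only.
    kubectl create namespace batch-team-a

    # Cap the total resources the tenant's batch pods (e.g. Spark executors) can request.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: batch-team-a-quota
      namespace: batch-team-a
    spec:
      hard:
        requests.cpu: "200"
        requests.memory: 800Gi
        pods: "500"
    EOF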

Spark
Apache Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools, including Spark SQL for SQL and structured data processing, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for incremental computation and stream processing.

[Architecture diagram: Spark SQL, MLlib, Streaming, and GraphX are built on Spark Core, which can run on the Standalone, Hadoop YARN, or Kubernetes cluster managers.]

Spark on Kubernetes

    ./bin/spark-submit \
        --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
        --deploy-mode cluster \
        --name spark-pi \
        --class org.apache.spark.examples.SparkPi ...
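The preview truncates the command; the sketch below completes it along the lines of the standard SparkPi example in the Spark-on-Kubernetes documentation, with the executor count, namespace, container image, and jar path as placeholder assumptions rather than values from the deck. In cluster deploy mode, spark-submit creates the Spark driver as a pod inside the cluster, and the driver then requests executor pods from the Kubernetes API server and schedules tasks onto them.

    # Illustrative completion; substitute your API server address, namespace, image, and jar path.
    ./bin/spark-submit \
        --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
        --deploy-mode cluster \
        --name spark-pi \
        --class org.apache.spark.examples.SparkPi \
        --conf spark.executor.instances=5 \
        --conf spark.kubernetes.namespace=batch-team-a \
        --conf spark.kubernetes.container.image=<spark-image> \
        local:///opt/spark/examples/jars/spark-examples.jar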
