Compute-Efficient and Scaled GenAI on Kubernetes with OCI AI Blueprints
Vishnu Kammari, Principal Product Manager, OCI
Dennis Kennetz, Sr. Machine Learning Engineer, OCI

Agenda
1. Enterprise Pain Points
2. Best Practices
3. OCI Solutions & Demo
4. Customer Stories
5. Panel Discussion

Enterprises self-host LLMs on GPUs for a variety of reasons.

Security & Compliance
- Keep sensitive data in-house.
- Meet regulatory or contractual obligations (e.g., healthcare, public sector).

Customization & Control
- Fine-tune models with proprietary data.
- Avoid API rate limits.
- Control over model upgrades.

Performance & Cost Efficiency
- Deploy close to enterprise data sources.
- Minimize latency.
- Reduce per-token costs at scale.

When enterprises self-host LLMs, driving compute efficiency and scale introduces three key challenges.
1. Onboarding & Infra Choices
2. Software and Framework Choices
3. Integration, MLOps, and Infra Monitoring

Challenge #1: Enterprises spend months right-sizing and configuring infrastructure to ensure performance and compliance.

Onboarding & Infra Choices (see the deployment sketch after this list)
- Optimize network setup and integrate storage (e.g., local NVMe, object storage, and Oracle network file storage service integration with tiering) to minimize latency.
- Estimate the right number and size of GPUs (e.g., H100 vs. H200 for inference workloads).
- Auto-provision RDMA networking for clustered GPU nodes (e.g., Llama-405B on two H100 nodes).
- Configure secure GPU access and compliance (e.g., IAM policies, network security rules).
- Install GPU drivers, CUDA, and other libraries while avoiding compatibility and performance issues.
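To make the list above concrete, here is a minimal sketch of the end state this onboarding work produces for a single-node inference serving setup on a Kubernetes cluster such as OKE. The model name, image tag, shape label, and PVC name are illustrative assumptions, not OCI AI Blueprints defaults:

```yaml
# Hypothetical single-node LLM inference Deployment. Shape label, model,
# image tag, and claim name are assumptions for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-inference
  template:
    metadata:
      labels:
        app: llm-inference
    spec:
      nodeSelector:
        # Pin to an H100 bare-metal shape; the label key/value depend on
        # how the cluster's GPU node pool is configured.
        node.kubernetes.io/instance-type: BM.GPU.H100.8
      containers:
      - name: vllm
        # Image ships the CUDA userspace; the host still needs a matching
        # NVIDIA driver, typically installed via the GPU operator or node image.
        image: vllm/vllm-openai:latest
        args:
        - "--model"
        - "meta-llama/Llama-3.1-70B-Instruct"   # assumed model for sizing
        - "--tensor-parallel-size"
        - "8"                                   # shard across all 8 GPUs
        resources:
          limits:
            nvidia.com/gpu: 8   # whole node, exposed by the NVIDIA device plugin
        volumeMounts:
        - name: model-cache
          mountPath: /root/.cache/huggingface
      volumes:
      - name: model-cache
        persistentVolumeClaim:
          # Backing the cache with block or file storage avoids re-downloading
          # weights on every pod restart (the storage-integration bullet above).
          claimName: model-cache-pvc
      # Multi-node cases (e.g., Llama-405B across two H100 nodes) would
      # additionally request RDMA NIC resources; the resource name depends on
      # the cluster's RDMA/SR-IOV device plugin setup.
```

Each field maps back to a bullet above: the node selector and GPU limit encode the GPU-sizing decision, the PVC encodes the storage integration, the image choice interacts with the driver/CUDA stack, and the closing comment marks where RDMA provisioning enters for clustered nodes.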
Challenge #2: Enterprises