Building Cloud-Native End-to-End Generative AI Applications
Amazon Web Services Solutions Architect / 肖元君

Agenda
- Challenges in large language model (LLM) inference
- Reducing LLM inference complexity with Amazon SageMaker and Amazon Bedrock
- Building an end-to-end generative AI application, with a demo

Challenges in LLM inference
- Complexity: large model size; model parallelism; model-serving infrastructure setup
- Cost: model compilation; model hosting cost; operational overhead; number of models to deploy and manage
- Performance: model compilation; model compression; latency; throughput; availability

Multi-GPU parallel inference (a minimal sketch follows this list)
- Tensor parallelism: intra-layer
- Pipeline parallelism: inter-layer
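The intra- vs. inter-layer distinction can be made concrete with a short sketch. The following illustrative PyTorch snippet (not from the talk; all tensor names and sizes are assumptions) splits one linear layer column-wise, as tensor parallelism does:

```python
# Illustrative only: column-wise tensor parallelism for one linear layer.
# Each "device" would hold one shard of the weight matrix; shard outputs
# are concatenated (an all-gather in a real multi-GPU setup).
import torch

batch, d_in, d_out, n_shards = 4, 8, 6, 2
x = torch.randn(batch, d_in)
w = torch.randn(d_in, d_out)

# Tensor parallel (intra-layer): split w along the output dimension.
shards = torch.chunk(w, n_shards, dim=1)   # one shard per device
partials = [x @ s for s in shards]         # each computed on its own device
y_tp = torch.cat(partials, dim=1)          # gather the shard outputs

assert torch.allclose(y_tp, x @ w, atol=1e-5)

# Pipeline parallelism (inter-layer) would instead place whole layers on
# different devices and stream micro-batches through them stage by stage.
```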
Model compression
- Pruning
- Distillation
- Quantization: BitsandBytes, GPTQ, SmoothQuant (see the sketch below)

Attention-layer compute optimization
- FlashAttention
- PagedAttention
- Continuous batching
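As a concrete example of quantization, this hedged sketch loads a model with 8-bit bitsandbytes weights through the Hugging Face transformers API; the model id is only a placeholder, and a CUDA GPU with bitsandbytes installed is assumed:

```python
# Hedged sketch: 8-bit (LLM.int8()) weight quantization via bitsandbytes.
# Assumes a CUDA GPU; "facebook/opt-1.3b" is only an example model.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-1.3b"
bnb_config = BitsAndBytesConfig(load_in_8bit=True)   # int8 weight storage

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",        # place layers across available GPUs
)

inputs = tokenizer("Quantization reduces memory by", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```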
LLM inference optimization frameworks
- DeepSpeed Inference: model quantization; tensor parallelism (TP); MoE inference (model distillation)
- Hugging Face TGI: TP; attention compute optimization (FlashAttention and PagedAttention); continuous batching; model quantization
- NVIDIA FasterTransformer: model compression; TP/PP; model quantization
- vLLM: TP; continuous batching; PagedAttention (see the sketch after this list)
- Hugging Face Accelerate: pipeline parallelism (PP); model quantization
- TensorRT-LLM
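vLLM applies PagedAttention and continuous batching automatically inside its engine; a minimal, hedged usage sketch (the model id and parameters are illustrative) looks like this:

```python
# Hedged vLLM sketch: the engine handles PagedAttention and continuous
# batching internally; "facebook/opt-125m" is only an example model.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1)
params = SamplingParams(temperature=0.8, max_tokens=64)

# Requests of different lengths are scheduled together (continuous batching).
prompts = ["What is continuous batching?", "Explain PagedAttention briefly."]
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```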
Agenda: reducing LLM inference complexity with Amazon SageMaker and Amazon Bedrock

SageMaker Large Model Inference (LMI) container
- Model server: DJL Serving
- Engines: DeepSpeed, Hugging Face Accelerate, FasterTransformer, transformers-neuronx
- Framework: PyTorch
- Base image: GPU (cuDNN, cuBLAS, NCCL, CUDA toolkit); AWS Inferentia (Neuron); CPU (MKL)
- Zero-code setup: built-in handlers for DeepSpeed and Hugging Face, FasterTransformer, and transformers-neuronx
- Supported instance types: G4dn, G5, P3, P4, P5, Inf2
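The LMI container is typically configured through a serving.properties file. The following is a hedged sketch; the model id, parallel degree, and dtype are illustrative and depend on your model and instance:

```properties
# Hedged serving.properties sketch for the LMI (DJL Serving) container.
engine=DeepSpeed                      # or Python / MPI, depending on the engine
option.model_id=tiiuae/falcon-7b      # example HF hub id (an S3 URI also works)
option.tensor_parallel_degree=4       # shard the model across 4 GPUs
option.dtype=bf16                     # LMI's DeepSpeed build supports bf16
```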
Using the Amazon SageMaker LMI container
- Supports multiple inference engines: Hugging Face Accelerate, DeepSpeed, FasterTransformer, and others
- The built-in s5cmd command enables fast downloads of large models
- Static, dynamic, and rolling batching
- Streaming generation of new tokens
- The DeepSpeed engine in LMI supports bf16 models (open-source DeepSpeed Inference does not)
- FlashAttention / PagedAttention
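Deploying the LMI container to a SageMaker endpoint can be sketched with the SageMaker Python SDK as below; the S3 path and instance type are placeholders, and the container version should be adjusted to what image_uris supports in your SDK release:

```python
# Hedged deployment sketch with the SageMaker Python SDK.
import sagemaker
from sagemaker import image_uris
from sagemaker.model import Model

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # assumes a SageMaker execution role

# LMI (DJL Serving) container image with the DeepSpeed engine stack.
image_uri = image_uris.retrieve(
    framework="djl-deepspeed",
    region=session.boto_region_name,
    version="0.25.0",                   # DJLServing version from the slide
)

model = Model(
    image_uri=image_uri,
    model_data="s3://my-bucket/my-model/model.tar.gz",  # placeholder artifact
    role=role,
    sagemaker_session=session,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",     # 4 GPUs, matching tensor_parallel_degree=4
)
```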
Large Model Inference (LMI) container: new features
- Now available on DLC with DJLServing 0.25.0
- Integration with vLLM (DeepSpeed container, MPI engine, rolling_batch set to vllm)
- Integration with TensorRT-LLM
- TensorRT-LLM SmoothQuant (75% latency reduction compared with TGI INT8)
- Very fast FP16/BF16 inference speed
- Rolling batch withou