Beyond AI Accuracy: Building responsible AI applications in the era of agentic AI through the Mosaic AI framework
Ananya Roy, 11-June-2025

Takeaways
- Shift thinking towards responsible agentic AI
- Build custom metrics; iterate to impact
- Demo of an end-to-end eval metric with Mosaic AI
- Summary: design for trust and control

Responsible AI in the era of agents: the pillars for building agentic (or any) AI applications
- Safety & Guardrails
- Human-centric design
- Governance
- Observability & Transparency
- Accuracy & Evaluation
- Security
- Application Monitoring

Responsible AI in the era of agents: why it is important and what we will cover
- Same pillars as above; today's session focuses on Accuracy & Evaluation (custom).

Why Evaluation is Important: rationale for implementing an evaluation system
(*RAG: retrieval-augmented generation)
Evaluating LLM models vs. evaluating AI agents
- LLM model evaluation: static metrics that only validate the output against a standard benchmark focused on language.
- Agentic app evaluation: dynamic metrics with multiple components; apps perform "actions" instead of just generating "outputs".

Where do the metrics fall short?
- LLM benchmarks assume the environment is constant, with no variability; agent output changes with the scenario and environment, so no single benchmark can be applied.
- Purpose-built agents can't be tested against generalized benchmarks.
- For LLMs, cost is deterministic, based on input/output; for agents, cost is non-deterministic and varies with the execution flow.

Let's understand evaluation: how have we been evaluating LLM applications?
- Use case types: summarization, knowledge-base RAG, agentic AI, classification (sentiment analysis, etc.)
- Metrics: statistical metrics (BLEU/ROUGE/BE…), LLM-judge metrics (context relevance, groundedness, relevancy), human-in-the-loop
- Standard metrics are not always enough.
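The statistical metrics named above (BLEU, ROUGE) score a generated output against a fixed reference text, which is exactly why they are "static". A minimal, illustrative re-implementation of ROUGE-1 F1 (unigram overlap) makes this concrete; this is a sketch for intuition, not the metric implementation used in the talk's demo:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a candidate output and a single fixed reference."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Each word counts toward the overlap at most as often as it
    # appears in BOTH texts (multiset intersection).
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat is on the mat")
# 5 overlapping unigrams out of 6 on each side -> F1 = 5/6
```

Note the dependence on a single reference string: the metric cannot reward a correct answer phrased differently, which is one reason standard metrics are not always enough.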
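LLM-judge metrics such as groundedness instead ask a second model to assess the answer against the retrieved context. The sketch below stubs the judge with a trivial token-containment heuristic so it stays runnable offline; the function name and return schema are invented for illustration, and a real setup would call a judge model rather than this heuristic:

```python
def judge_groundedness(answer: str, context: str) -> dict:
    """Toy stand-in for an LLM judge: flags the answer as 'grounded'
    only if every word in it appears in the retrieved context.
    A real judge model reasons semantically; this heuristic merely
    keeps the sketch executable without an API call."""
    context_words = set(context.lower().split())
    missing = [w for w in answer.lower().split() if w not in context_words]
    return {
        "grounded": not missing,
        "unsupported_words": missing,  # evidence for the verdict
    }

verdict = judge_groundedness(
    answer="the warranty lasts two years",
    context="Our product warranty lasts two years from the purchase date.",
)
```

Returning evidence (`unsupported_words` here, a rationale string in a real judge) alongside the boolean verdict is what makes such metrics auditable by a human reviewer.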
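Agentic evaluation, by contrast, has to score the trajectory, not just the final text: which actions ran, and what they cost, both of which vary per execution. A minimal sketch of a trajectory-level check follows; the `Step`/`Trace` schema, field names, and budget threshold are all assumptions made for illustration, not a Mosaic AI API:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    tool: str        # which tool/action the agent invoked
    cost_usd: float  # cost of this step; varies run to run

@dataclass
class Trace:
    steps: list = field(default_factory=list)
    final_answer: str = ""

def evaluate_trace(trace: Trace, required_tools: set, budget_usd: float) -> dict:
    """Dynamic, trajectory-level checks: did the agent take the expected
    actions, and did its non-deterministic cost stay within budget?"""
    tools_used = {s.tool for s in trace.steps}
    total_cost = sum(s.cost_usd for s in trace.steps)
    return {
        "required_tools_called": required_tools <= tools_used,
        "total_cost_usd": round(total_cost, 4),
        "within_budget": total_cost <= budget_usd,
        "num_steps": len(trace.steps),
    }

trace = Trace(
    steps=[Step("retriever", 0.002), Step("calculator", 0.001), Step("retriever", 0.002)],
    final_answer="42",
)
report = evaluate_trace(trace, {"retriever", "calculator"}, budget_usd=0.01)
```

Unlike the static metrics above, nothing here is a fixed benchmark: the same agent on the same question can produce a different step count and cost, so the checks are expressed as constraints over whatever trajectory actually occurred.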