2024 Databricks Inc. All rights reserved

Model Alignment at Scale using RL from AI Feedback on Databricks
Michael Shtelma
Ryuta Yoshimatsu
Alex Miller

Team
Michael Shtelma, Lead Specialist Solutions Architect at Databricks
Ryuta Yoshimatsu, Specialist Solutions Architect at Databricks
Alex Miller, Specialist Solutions Architect at Databricks

Agenda
- What is Model Alignment and why do we need it?
- Using RLHF to align models
- RLAIF: Using an LLM as a Reward Model
- DPO: Direct Preference Optimization
- Model Alignment Solution Accelerator
- Implementation details
- Test results
- Important metrics

What is Model Alignment?
What does the typical LLM project look like?
Define use case → Define evaluation harness → Choose base model → Adapt model → Test & Deploy

What is Model Alignment? The Model Adaptation phase
Model Adaptation may include:
- Continued pre-training
- Supervised instruction fine-tuning
- Model alignment
We need model alignment when it is hard to express the business requirements using simple instructions.

RLHF was introduced by OpenAI as a tool to align LLMs with human preferences in their InstructGPT paper (https://arxiv.org/pdf/2203.02155). RLHF usually consist