2025-07-04

Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy

Chris Yuhao Liu, Liang Zeng, Yuzhen Xiao, Jujie He, Jiacai Liu, Chaojie Wang, Rui Yan, Wei Shen, Fuxiang Zhang, Jiacheng Xu, Yang Liu, Yahui Zhou

2050 Research, Skywork AI

https://huggingface.co/Skywork

Abstract

Despite the critical role of reward models (RMs) in reinforcement learning from human feedback (RLHF), current state-of-the-art open RMs perform poorly on most existing evaluation benchmarks, failing to capture the spectrum of nuanced and sophisticated human preferences. Even approaches that incorporate advanced training techniques have not yielded meaningful performance improvements. We hypothesize that this brittleness stems primarily from limitations in preference datasets, which are often narrowly scoped, synthetically labeled, or lack rigorous quality control. To address these challenges, we present a large-scale preference dataset comprising 40 million preference pairs, named SynPref-40M. To enable data curation at scale, we design a human-AI synergistic two-stage pipeline that leverages the complementary strengths of human annotation quality and AI scalability. In this pipeline, humans provide verified annotations, while large language models perform automatic curation based on human guidance. Training on this preference mixture, we introduce Skywork-Reward-V2, a suite of eight reward models ranging from 0.6B to 8B parameters, trained on a carefully curated subset of 26 million preference pairs from SynPref-40M. We demonstrate that Skywork-Reward-V2 is
versatile across a wide range of capabilities, including alignment with human preferences, objective correctness, safety, resistance to stylistic biases, and best-of-N scaling, achieving state-of-the-art performance across seven major reward model benchmarks. Ablation studies confirm that the effectiveness of our approach stems not only from data scale but also from high-quality curation.
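The abstract describes the curation pipeline only at a high level: humans verify a small pool of annotations and distill guidance, and large language models then curate the bulk of the data under that guidance. The sketch below is our own illustrative rendering of that division of labor under those stated assumptions, not the paper's implementation; human_verify and llm_curate are hypothetical placeholder functions.

```python
# Illustrative sketch only: a two-stage human-AI curation loop following the
# division of labor described in the abstract. human_verify and llm_curate
# are hypothetical placeholders, not APIs from the paper or any library.

def curate_preferences(pairs, seed_size=10_000):
    seed, rest = pairs[:seed_size], pairs[seed_size:]

    # Stage 1 (human): annotators verify preference labels on a small seed
    # set and distill their decisions into explicit written guidance.
    verified_seed, guidance = human_verify(seed)

    # Stage 2 (AI): an LLM labels and filters the remaining pairs at scale,
    # conditioned on the human guidance produced in stage 1.
    curated_rest = [pair for pair in rest if llm_curate(pair, guidance)]

    return verified_seed + curated_rest
```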
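Because the abstract highlights best-of-N scaling as one of the evaluated capabilities, a short example of how a discriminative reward model is typically applied to it may help. This is a minimal sketch assuming the standard Hugging Face sequence-classification interface used by the Skywork-Reward series; the checkpoint name is illustrative and should be replaced with an actual release from https://huggingface.co/Skywork.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint name; substitute a real Skywork-Reward-V2 release.
MODEL_NAME = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
rm = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, num_labels=1
).to(DEVICE).eval()

def score(prompt: str, response: str) -> float:
    """Score a single (prompt, response) pair with the reward model."""
    conv = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]
    input_ids = tokenizer.apply_chat_template(
        conv, tokenize=True, return_tensors="pt"
    ).to(DEVICE)
    with torch.no_grad():
        # A single-label classification head yields one scalar reward.
        return rm(input_ids).logits[0][0].item()

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Best-of-N selection: return the highest-scoring candidate."""
    return max(candidates, key=lambda c: score(prompt, c))
```

In this setup, a policy model generates N candidate responses per prompt and the reward model reranks them; reported best-of-N results measure how the quality of the selected response improves as N grows.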