Self-improvement and Self-evolving of Large Language Models

LIU Qun (刘群)
Huawei Noah's Ark Lab

RLChina 2023: Large Models and AI Agents
2023-11-24, Suzhou

Content
- Introduction
- SELF: Language-Driven Self-Evolution for LLMs
- Gaining Wisdom from Setbacks: Aligning LLMs via Mistake Analysis
- Related Work and Discussion

Training Data for LLMs
- GPT-3 (OpenAI, 2020.5): 500 Billion tokens
- PaLM (Google, 2022.4): 780 Billion tokens
- Chinchilla (DeepMind): 1.4 Trillion tokens
- Llama (Meta): 1.5 Trillion tokens
- Llama 2 (Meta): 2 Trillion tokens
- GPT-4 (OpenAI): 13 Trillion tokens (text ×2 + code ×4) + 2 Trillion tokens (image)¹

¹ Total: 33 Trillion tokens.

Will we run out of data?

[Fig. 1: Projections of data usage. Panels: (a) projections for low-quality language data; (b) projections for high-quality language data; (c) projections for vision data. Each graph shows two extrapolations of data usage, one from past trends and one from compute availability estimations plus scaling laws. Both projections are constrained to be lower than the estimated data stock. In all three cases, this constraint causes a slowdown in data usage growth.]
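The caption describes two projection curves per panel, both capped by the available data stock. One schematic way to read it (my own formalization for illustration, not the paper's exact equations; the symbols $D_{\text{hist}}$, $D_{\text{comp}}$, $g$, $C(t)$, $f_{\text{scaling}}$, and $S$ are introduced here) is:

\[
D_{\text{hist}}(t) = D(t_0)\,g^{\,t-t_0},\qquad
D_{\text{comp}}(t) = f_{\text{scaling}}\bigl(C(t)\bigr),\qquad
\hat{D}_{\bullet}(t) = \min\bigl\{D_{\bullet}(t),\,S\bigr\},\ \bullet\in\{\text{hist},\text{comp}\},
\]

where $D(t_0)$ is the largest dataset size to date, $g$ the historical growth factor, $C(t)$ the compute projected to be available at time $t$, $f_{\text{scaling}}$ the dataset size that scaling laws prescribe for that compute budget, and $S$ the estimated data stock. Capping both extrapolations at $S$ is what produces the slowdown visible in the figure.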
III. METHODS

A. Projecting growth in training dataset sizes

Previous work compiled historical trends of dataset sizes for different application domains [21]. Our definition of dataset size is the number of unique datapoints on which the model is trained. The definition of datapoint is different for each domain. In particular, for language data we define a datapoint as a word, and for image data we define a datapoint as an image. Additional details on this choice of dataset size metric can be found in [1]. Using the historical trend, together with the size of the largest datasets used to date, we can estimate the future growth of training dataset sizes.
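To make the projection idea concrete, here is a minimal sketch in Python (my own toy illustration, not the paper's code; the starting size, growth rate, and data-stock numbers are placeholder assumptions): extrapolate the historical exponential trend in dataset size, measured in datapoints (words for language data), and cap it at the estimated stock of unique datapoints.

```python
# Toy projection of training dataset size, capped at the estimated data stock.
# All numbers are illustrative placeholders, not estimates from the paper.

def project_dataset_size(d0: float, growth_per_year: float, years: int, data_stock: float) -> list[float]:
    """Extrapolate dataset size (in datapoints) from a historical growth rate,
    constraining every projected value to stay below the available data stock."""
    sizes = []
    size = d0
    for _ in range(years + 1):
        sizes.append(min(size, data_stock))  # projection cannot exceed the stock
        size *= growth_per_year              # exponential historical trend
    return sizes

if __name__ == "__main__":
    # Example: start at 2e12 words, ~1.5x growth per year, 5e14 words of total stock.
    for year, d in enumerate(project_dataset_size(2e12, 1.5, 20, 5e14)):
        print(f"year +{year:2d}: {d:.2e} datapoints")
```

Once the projected size reaches the stock it flattens out, which is the "slowdown in data usage growth" referred to in the figure caption.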