In-Context Learning in Multimodal Large Language Models
Speaker: Yang Xu, Associate Professor, Southeast University

About the speaker: Dr. Yang Xu received his Ph.D. from the School of Computer Science and Engineering, Nanyang Technological University, in June 2021, advised by Prof. Jianfei Cai and Prof. Hanwang Zhang. He is currently an Associate Professor at the School of Computer Science and Engineering, the School of Software, and the School of Artificial Intelligence at Southeast University, and serves as Deputy Director of the Key Laboratory of New Generation Artificial Intelligence Technology and Interdisciplinary Applications (Ministry of Education) at Southeast University. His research focuses on applications of vision-language multimodal large models and on a new training-and-deployment paradigm for large models: the learning gene (Learngene).

CONTENTS
1. Background
2. Heuristic-based configuration strategies
3. Learning-based configuration strategies
PART 01 Background
"Why do we need In-Context Learning?"

The Development of GPT
• GPT (2018)
• GPT-2 (2019): 1.5B parameters, prompt engineering
• GPT-3 (2020): 175B parameters, in-context learning
• GPT-4 (2023): multimodal (image, text, video)
[Figure: timeline of GPT models, showing the shift from the pre-training and fine-tuning paradigm to prompting and in-context examples, extended to multimodal data.]
GPT-2's Capability of Prompt Engineering
• GPT-2 exhibits a distinctive capability known as "prompt engineering".
• This can be compared to the architecture of modern computers, where both data and commands exist as 0s and 1s; likewise, the task instruction (the prompt) and the input text are expressed in the same form, as natural-language tokens.
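To make the analogy above concrete, a zero-shot prompt packs the "command" (the task instruction) and the "data" (the input) into a single text sequence handed to the model. The task, instruction wording, and helper name below are illustrative assumptions rather than anything prescribed in the talk; a minimal sketch:

```python
# Minimal sketch of prompt engineering (zero-shot): the task instruction plays the
# role of the "command" and the input text plays the role of the "data"; both are
# given to the language model as one token sequence.
# The instruction wording and example sentence are illustrative only.

def build_zero_shot_prompt(instruction: str, input_text: str) -> str:
    """Concatenate the instruction (command) and the input (data) into one prompt."""
    return f"{instruction}\n\nInput: {input_text}\nOutput:"

prompt = build_zero_shot_prompt(
    instruction="Translate the following English sentence into French.",
    input_text="The weather is nice today.",
)
print(prompt)
# The resulting string would be sent to a language model (e.g. GPT-2) for completion.
```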
GPT-3's Capability of In-Context Learning
• GPT-3 possesses a unique capability known as "in-context learning" (ICL).
• It learns a representation of the task from the in-context examples provided in the prompt.
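To illustrate, an in-context prompt prepends a few demonstration pairs to the query, so the frozen model can infer the task from them without any parameter update. The sentiment-classification task, the formatting, and the helper name below are illustrative assumptions, not part of the slides; a minimal sketch:

```python
# Minimal sketch of few-shot in-context learning: k demonstration pairs are placed
# before the query, and the frozen LLM infers the task from these examples alone.
# The sentiment-classification task and the formatting are illustrative assumptions.

from typing import List, Tuple

def build_icl_prompt(demonstrations: List[Tuple[str, str]], query: str) -> str:
    """Format k (input, label) demonstrations followed by the unlabeled query."""
    blocks = [f"Review: {x}\nSentiment: {y}" for x, y in demonstrations]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I left the theater halfway through.", "negative"),
]
prompt = build_icl_prompt(demos, "A beautifully shot but forgettable film.")
print(prompt)
# The model completes the final "Sentiment:" field; no gradients are computed,
# so the adaptation comes entirely from the in-context examples.
```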
Why In-Context Learning?
• Prompt engineering: crafting prompts to yield precise responses and unlock the potential of LLMs.
• In-context learning: a specialized form of prompt engineering that adapts the model to a task using only a few examples (few-shot).

References:
Liu, Pengfei, et al. "Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing."
Dong, Qingxiu, et al. "A Survey for In-context Learning."
Why In-Context Learning? (Pros of ICL)
• ICL offers outside-in methodologies to unravel the inner properties of LLMs.
• Example: asked "How many meters does a 1-kilogram object fall in 1 second?", the model answers 4.9 m; asked "What about a 10-kilogram object?", it still answers 4.9 m, because objects fall with a constant acceleration due to gravity regardless of their mass (for a drop from rest, s = ½gt² ≈ ½ × 9.8 m/s² × (1 s)² = 4.9 m).
• Providing incorrect examples does not affect the LLM's abi