A Tutorial on Large Multi-Modal Generative Models
Fan Bao, Tsinghua University; CTO, ShengShu

What is Multi-Modality?
- Modality: a way to organize information
  - Visual information: images, videos
  - Spatial information: 3D
  - Abstract information: text
- Large multi-modal models should:
  - sufficiently understand interleaved inputs of various modalities;
  - smartly choose a proper modality as the output, i.e., a proper way to output information.
- Each modality carries its own special knowledge.

Paradigms of Large Models

|              | Large Language Model                                     | Large Multi-Modal Model                   |
|--------------|----------------------------------------------------------|-------------------------------------------|
| Architecture | Converged to the transformer                             | Many solutions, no absolutely optimal one |
| Scaling law  | Big data + trillions of parameters → emergent ability    | Early stage of verification               |
| Alignment    | Instruction tuning + RLHF → friendly assistant of humans | Early stage of verification               |

Schemes for Large Multi-Modal Models
- Extend large language models
- Extend diffusion models
Extend Large Language Models
- Adapter mode: add learnable modules to the LLM decoder (e.g., Flamingo)
- Feature alignment mode: align features of other modalities to the embedding space of language tokens
  - Freeze the LLM: ClipCap, BLIP-2, PaLM-E
  - Learn all parameters: KOSMOS, Emu

Adapter Mode: Flamingo
- Freeze the self-attention layers of the LLM
- Inject learnable cross-attention layers
- Image embeddings serve as key & value; text embeddings serve as query

Adapter Mode: Flamingo
- Shows some simple in-context learning abilities over interleaved images and language
- The ability comes from the training data:
  - 43M webpages consisting of interleaved image-text data
  - 1.8B image-text pairs
- Only outputs the single language modality
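A minimal PyTorch sketch of this adapter idea (module names, sizes, and the zero-initialized tanh gating are illustrative assumptions, not Flamingo's exact implementation): text hidden states act as the query of a learnable cross-attention layer whose keys and values come from image embeddings, inserted alongside the frozen LLM layers.

```python
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    """Learnable adapter inserted before a frozen LLM layer (Flamingo-style sketch).

    Text hidden states attend to image embeddings: text -> query,
    image -> key & value. Zero-initialized tanh gates make the block
    an identity map at the start of training, so the frozen LLM's
    behavior is preserved initially.
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.attn_gate = nn.Parameter(torch.zeros(1))  # tanh(0) = 0
        self.ffn_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # text: (batch, text_len, d_model), image: (batch, img_len, d_model)
        attn_out, _ = self.cross_attn(query=text, key=image, value=image)
        text = text + torch.tanh(self.attn_gate) * attn_out
        text = text + torch.tanh(self.ffn_gate) * self.ffn(text)
        return text

# Usage sketch with stand-in tensors:
block = GatedCrossAttentionBlock(d_model=512, n_heads=8)
text = torch.randn(2, 16, 512)   # text hidden states from the frozen LLM
image = torch.randn(2, 64, 512)  # image embeddings from a vision encoder
out = block(text, image)         # (2, 16, 512), passed on to the frozen layer
```

Only such injected layers (plus the vision-side modules) receive gradients; the pretrained self-attention and FFN weights of the LLM stay frozen throughout.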
Feature Alignment Mode: ClipCap
- Learn a mapping network to convert visual features into the language embedding space
- The converted visual features serve as the prefix; the language model generates the subsequent text
- Only supports the captioning task
- Work before the burst of large models: data size = 100M i
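A sketch of the mapping-network idea (the MLP shape, dimensions, and prefix length are assumptions for illustration, not ClipCap's exact configuration): a single CLIP image feature is expanded into a short sequence of prefix embeddings, which is concatenated in front of the caption's token embeddings before being fed to a frozen language model.

```python
import torch
import torch.nn as nn

class ClipCapMapper(nn.Module):
    """Maps a CLIP image feature to k 'prefix' tokens in the LLM embedding space."""

    def __init__(self, clip_dim: int = 512, llm_dim: int = 768, prefix_len: int = 10):
        super().__init__()
        self.prefix_len = prefix_len
        self.llm_dim = llm_dim
        # A simple MLP mapper; a transformer mapper is another option.
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, llm_dim * prefix_len // 2),
            nn.Tanh(),
            nn.Linear(llm_dim * prefix_len // 2, llm_dim * prefix_len),
        )

    def forward(self, clip_feat: torch.Tensor) -> torch.Tensor:
        # clip_feat: (batch, clip_dim) -> prefix: (batch, prefix_len, llm_dim)
        return self.mlp(clip_feat).view(-1, self.prefix_len, self.llm_dim)

# Usage sketch: prepend the mapped prefix to the caption's token embeddings
# and feed the concatenation to a frozen language model.
mapper = ClipCapMapper()
clip_feat = torch.randn(2, 512)     # stand-in for CLIP image features
text_emb = torch.randn(2, 20, 768)  # stand-in for caption token embeddings
inputs = torch.cat([mapper(clip_feat), text_emb], dim=1)  # (2, 30, 768)
```

Only the mapping network is trained; the language model stays frozen, which is what limits this setup to conditioning tasks such as captioning.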