Infinity: A New Route for Visual Autoregressive Generation
Speaker: Jian Han (韩剑)
CVPR 2025 Oral

Agenda
01 Autoregressive Models and the Scaling Law
02 Visual Autoregression vs. Diffusion Models
03 Infinity: A New Route for Visual Autoregressive Generation
04 Analysis and Reflections

01 Autoregressive Models and the Scaling Law

AutoRegressive Models (credit: "Autoregressive Models in Vision: A Survey")
- Autoregressive sequence modeling
- Sequence representation
- Scaling law: "Scaling Laws for Neural Language Models" (2020)
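For reference, the two ideas this slide leans on can be written compactly. The factorization is the standard one that next-token prediction optimizes; the power-law form and exponent are the ones reported in "Scaling Laws for Neural Language Models":

```latex
% Autoregressive sequence modeling: the joint distribution of a
% sequence x = (x_1, ..., x_T) factorizes over next-token predictions.
p(x) = \prod_{t=1}^{T} p(x_t \mid x_1, \dots, x_{t-1})

% Scaling law (Kaplan et al., 2020): test loss decays as a power law
% in parameter count N; analogous laws hold for data and compute.
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}, \qquad \alpha_N \approx 0.076
```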
A brief timeline of autoregressive visual generation:
- 2017 VQVAE: tokenizes images into discrete token indices [2]
- 2020 iGPT: Generative Pretraining from Pixels [1]
- 2021 VQGAN: image tokenizer + autoregressive transformer [3]
- 2022 Parti: scaling up the autoregressive transformer [4]
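To make "tokenize images into discrete token indices" concrete, here is a minimal sketch of the VQ quantization step shared by VQVAE and VQGAN: each continuous latent vector is mapped to the index of its nearest codebook entry. The shapes and sizes below are illustrative, not the papers' actual configurations:

```python
import numpy as np

def quantize(latents: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each latent vector to the index of its nearest codebook entry.

    latents:  (H, W, D) continuous encoder outputs for one image
    codebook: (K, D) learned embedding vectors
    returns:  (H, W) integer token indices in [0, K)
    """
    # Squared Euclidean distance from every latent to every code vector.
    flat = latents.reshape(-1, latents.shape[-1])                 # (H*W, D)
    d = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (H*W, K)
    return d.argmin(axis=1).reshape(latents.shape[:2])            # (H, W)

# Toy usage: a 16x16 latent grid of 32-dim vectors, 512-entry codebook.
rng = np.random.default_rng(0)
tokens = quantize(rng.normal(size=(16, 16, 32)), rng.normal(size=(512, 32)))
print(tokens.shape, tokens.min(), tokens.max())  # (16, 16), indices in [0, 512)
```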
Challenges:
1. Autoregressive models perform significantly worse than SOTA diffusion models in high-resolution image synthesis.
2. Autoregressive models have not demonstrated the scaling-law properties in image generation that LLMs exhibit in text generation.
3. Because of raster-order prediction, autoregressive models suffer from very slow decoding; see the step count after this list.
4. The raster-scan order is not the most natural order for images, as it discards the global information crucial for visual modeling.
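The speed gap in point 3 is easy to quantify: raster-order decoding needs one sequential forward pass per token, whereas next-scale prediction (covered in Section 02) needs one per scale. A back-of-the-envelope count, with an illustrative token-map size and scale schedule of my own choosing:

```python
# Sequential forward passes needed to decode one image.
# Raster-order AR: one pass per token.
h = w = 32                        # e.g. a 32x32 token map
raster_steps = h * w              # 1024 sequential passes

# Next-scale AR: one pass per scale; all tokens within a scale are
# predicted in parallel. This schedule is illustrative only.
scales = [1, 2, 4, 8, 16, 32]     # side lengths of the token maps
scale_steps = len(scales)         # 6 sequential passes

print(raster_steps, scale_steps)  # 1024 vs 6
```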
References:
1. Mark Chen et al., "Generative Pretraining From Pixels", ICML 2020.
2. Aaron van den Oord et al., "Neural Discrete Representation Learning", NeurIPS 2017.
3. Patrick Esser et al., "Taming Transformers for High-Resolution Image Synthesis", CVPR 2021.
4. Jiahui Yu et al., "Scaling Autoregressive Models for Content-Rich Text-to-Image Generation", arXiv 2022.

02 Visual Autoregression vs. Diffusion Models

Visual Autoregressive Models
When humans perceive images or paint, they often start with a holistic overview before delving into finer details. This coarse-to-fine approach, grasping the overall context before refining local details, is very natural.
Next-scale prediction vs. next-token prediction

Visual Autoregressive Models
- Stage 1: Train a multi-scale image tokenizer that quantizes images into discrete token indices.
- Stage 2: Train a GPT-style (autoregressive) transformer with next-scale prediction.
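To make the two-stage recipe concrete, here is a minimal sketch of the Stage 2 generation loop under next-scale prediction: each step predicts the full token map of the next scale conditioned on all coarser maps. The `transformer` callable, the scale schedule, and the vocabulary size are hypothetical placeholders, not the actual VAR or Infinity interfaces:

```python
import numpy as np

def generate_next_scale(transformer, scales=(1, 2, 4, 8, 16), vocab=1024):
    """Coarse-to-fine sampling: one sequential step per scale, not per token.

    `transformer(prefix_maps, s)` is a hypothetical model call returning
    logits of shape (s, s, vocab) for scale s, conditioned on all coarser
    token maps generated so far.
    """
    rng = np.random.default_rng(0)
    token_maps = []
    for s in scales:
        logits = transformer(token_maps, s)                # (s, s, vocab)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        # All s*s tokens of this scale are sampled in parallel: one step.
        flat = np.array([rng.choice(vocab, p=p)
                         for p in probs.reshape(-1, vocab)])
        token_maps.append(flat.reshape(s, s))
    return token_maps  # Stage 1's decoder would turn these into pixels

# Dummy stand-in model with uniform logits, just to show the loop runs.
dummy = lambda prefix_maps, s: np.zeros((s, s, 1024))
print([m.shape for m in generate_next_scale(dummy)])
# [(1, 1), (2, 2), (4, 4), (8, 8), (16, 16)]
```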