KIMI-VL TECHNICAL REPORT

Kimi Team

ABSTRACT

We present Kimi-VL, an efficient open-source Mixture-of-Experts (MoE) vision-language model (VLM) that offers advanced multimodal reasoning, long-context understanding, and strong agent capabilities, all while activating only 2.8B parameters in its language decoder (Kimi-VL-A3B). Kimi-VL demonstrates strong performance across challenging domains: as a general-purpose VLM, Kimi-VL excels in multi-turn agent tasks (e.g., OSWorld), matching flagship models. Furthermore, it exhibits remarkable capabilities across diverse challenging vision-language tasks, including college-level image and video comprehension, OCR, mathematical reasoning, and multi-image understanding. In comparative evaluations, it effectively competes with cutting-edge efficient VLMs such as GPT-4o-mini, Qwen2.5-VL-7B, and Gemma-3-12B-IT, while surpassing GPT-4o in several key domains. Kimi-VL also advances in processing long contexts and perceiving clearly. With a 128K extended context window, Kimi-VL can process diverse long inputs, achieving impressive scores of 64.5 on LongVideoBench and 35.1 on MMLongBench-Doc. Its native-resolution vision encoder, MoonViT, further allows it to see and understand ultra-high-resolution visual inputs, achieving 83.2 on InfoVQA and 34.5 on ScreenSpot-Pro, while maintaining lower computational cost for common tasks. Building upon Kimi-VL, we introduce an advanced long-thinking variant: Kimi-VL-Thinking. Developed through long chain-of-thought (CoT) supervised fine-tuning (SFT) and reinforcement learning (RL), this model exhibits strong long-horizon reasoning capabilities. It achieves scores of 61.7 on MMMU, 36.8 on MathVision, and 71.3 on MathVista while maintaining the compact 2.8B activated LLM parameters, setting a new standard for efficient yet multimodal thinking models. Code and models