Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models

Qiang Liu
New Laboratory of Pattern Recognition (NLPR), State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences (CASIA)
YSSNLP 2024
Promising LVLMs

Background
• Large language models (LLMs) like GPT-4, Llama-3, and Claude have showcased impressive abilities.
• Empowered by LLMs, large vision-language models (LVLMs) can perform strong multimodal understanding and reasoning, e.g., MiniGPT-4, InstructBLIP, LLaVA, and Qwen-VL.

[Figures: InstructBLIP (NeurIPS 2023), LLaVA (NeurIPS 2023)]

Hallucinations in LVLMs
Related Work

Representative work
• High-quality pre-training and instruction tuning.
• Mitigating hallucinations during decoding (illustrated in the sketch below).
• Revising outputs with external feedback.

Drawbacks
• Depends on the quality of instruction data construction and requires substantial computational resources.
• Requires access to the internal states of LVLMs.
• Relies on external detection models and fails to exploit the intrinsic capabilities of LVLMs.

[Figure captions: Robust Instruction Dataset Construction; Attention Patterns Related to Hallucination; Post-rectifying Hallucinations with Perception Models]
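To make the second family concrete: decoding-time mitigation reweights next-token logits using the model's internal states, which is precisely why it needs white-box access. Below is a minimal sketch in the spirit of visual contrastive decoding, assuming two hypothetical logit vectors (one forward pass with the image, one without); it illustrates the general technique, not any specific system's implementation.

```python
# Sketch of decoding-time hallucination mitigation via contrastive
# logit adjustment (in the spirit of visual contrastive decoding).
# Assumes access to the LVLM's next-token logits from two passes:
# one conditioned on the image and one without it -- the white-box
# requirement noted as a drawback above.
import numpy as np

def contrastive_next_token(logits_with_image: np.ndarray,
                           logits_without_image: np.ndarray,
                           alpha: float = 1.0) -> int:
    """Boost tokens whose evidence comes from the image and penalize
    tokens the language prior alone would emit."""
    adjusted = (1 + alpha) * logits_with_image - alpha * logits_without_image
    return int(np.argmax(adjusted))

# Toy 3-token vocabulary: index 2 stays high even without the image,
# i.e. it is driven by the language prior -- a hallucination candidate.
with_img = np.array([2.0, 1.5, 2.1])
no_img = np.array([0.5, 0.2, 2.0])
print(contrastive_next_token(with_img, no_img))  # 0, not the prior-driven 2
```

The contrast term suppresses tokens that remain probable even when the image is removed, i.e., tokens driven purely by the language prior, a common source of object hallucinations.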
Torturing LVLMs

Motivation
• Existing LVLMs are prone to generating hallucinatory objects, significantly compromising the safety and credibility of their output content.
• The logical consistency of LVLM behaviors has the potential to elucidate the underlying hallucinations.
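As a toy illustration of that motivation, a consistency probe can ask a forward question about an object and then a backward question derived from the model's own answer; a hallucinated object tends to break the loop. The sketch below is purely illustrative: the `ask` wrapper and `lvlm.generate` interface are assumptions, not the paper's actual procedure.

```python
# Toy probe of the "logical closed loop" intuition: ask a forward
# question about an object, derive a backward question from the
# model's own answer, and check whether the answers stay consistent.
# `ask` and `lvlm.generate` are hypothetical placeholders, not an
# interface from the paper.

def ask(lvlm, image, question: str) -> str:
    """Hypothetical wrapper returning the LVLM's free-form answer."""
    return lvlm.generate(image=image, prompt=question)

def closed_loop_check(lvlm, image, obj: str) -> bool:
    """True if the model's answers about `obj` form a closed loop."""
    # Forward: ask for an attribute of the (possibly hallucinated) object.
    attribute = ask(lvlm, image, f"What color is the {obj} in the image?")
    # Backward: re-ask about the object given its claimed attribute.
    answer = ask(
        lvlm, image,
        f"Is there a {attribute} {obj} in the image? Answer yes or no.",
    )
    # Hallucinated objects tend to break the loop: the model contradicts
    # the attribute it just asserted.
    return answer.strip().lower().startswith("yes")
```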