Abstract: Zero-shot image captioning can harness the knowledge of pre-trained visual language models (VLMs) and language models (LMs) to generate captions for target domain images without paired ...
This paper aims to address universal segmentation for image and video perception with the strong reasoning ability empowered by Visual Large Language Models (VLLMs). Despite significant progress in ...
Abstract: Visual grounding tasks aim to localize image regions based on natural language references. In this work, we explore whether generative VLMs predominantly trained on image-text data could be ...
1 University of Science and Technology of China; 2 WeChat, Tencent Inc. — 1. A Novel Parameter Space Alignment Paradigm: Recent MLLMs follow an input space alignment paradigm that aligns visual features ...