The reasoning segmentation task, which demands a nuanced comprehension of intricate queries to accurately pinpoint object regions, is attracting increasing attention. However, Multi-modal Large Language Models (MLLMs) often struggle to accurately localize the objects described in complex reasoning contexts. We believe that reasoning segmentation should mirror the cognitive stages of human visual search, where each step progressively refines the thought toward the final object. We therefore introduce Chains of Reasoning and Segmenting (CoReS) and find that this top-down visual hierarchy indeed enhances the visual search process. Specifically, we propose a dual-chain structure that generates multi-modal, chain-like outputs to aid the segmentation process. Furthermore, to steer the MLLM's outputs into this intended hierarchy, we incorporate in-context inputs as guidance. Extensive experiments demonstrate the superior performance of our CoReS, which surpasses the state-of-the-art method by 6.5% on the ReasonSeg dataset.
Overall architecture of CoReS. The input to the MLLM consists of the user input (gray) and the extra in-context input (orange), which comprises question-answer examples unrelated to the user query. The MLLM generates output at the logical level as the chain of reasoning, where the token embeddings of [LOC] and [SEG] serve as prompt inputs at different positions of the segmentation chain, guiding it to produce segmentation results progressively. For conciseness, the diagram omits the extraction of image features by the vision backbone and their input to the mask decoder.
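To make the dual-chain flow above concrete, the sketch below shows one way the [LOC] and [SEG] token embeddings could be routed into a mask decoder as progressive prompts. The module names (`SegmentationDecoder`, `dual_chain_forward`), tensor shapes, and the simple additive refinement are illustrative assumptions for exposition, not the released implementation.

```python
# A minimal sketch of the dual-chain idea, assuming hypothetical module names and
# shapes. [LOC] prompts a coarse localization mask; [SEG] refines it progressively.
import torch
import torch.nn as nn


class SegmentationDecoder(nn.Module):
    """Stand-in mask decoder: maps a prompt embedding + image features to a mask."""

    def __init__(self, dim: int = 256, mask_size: int = 64):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.mask_head = nn.Linear(dim, mask_size * mask_size)
        self.mask_size = mask_size

    def forward(self, prompt_embedding: torch.Tensor, image_features: torch.Tensor) -> torch.Tensor:
        # Fuse the prompt token with pooled image features, then predict a coarse mask.
        fused = self.proj(prompt_embedding) + image_features.mean(dim=1)
        return self.mask_head(fused).view(-1, self.mask_size, self.mask_size)


def dual_chain_forward(mllm_hidden_states: torch.Tensor,
                       loc_index: int,
                       seg_index: int,
                       image_features: torch.Tensor,
                       decoder: SegmentationDecoder):
    """Chain of segmenting: the [LOC] embedding prompts the first (localization) stage,
    and the [SEG] embedding prompts the second stage that yields the final mask."""
    loc_embedding = mllm_hidden_states[:, loc_index]   # prompt for the first chain stage
    seg_embedding = mllm_hidden_states[:, seg_index]   # prompt for the second chain stage

    coarse_mask = decoder(loc_embedding, image_features)               # localization stage
    final_mask = decoder(seg_embedding, image_features) + coarse_mask  # progressive refinement
    return coarse_mask, final_mask


if __name__ == "__main__":
    # Dummy tensors standing in for the MLLM hidden states and vision-backbone features.
    hidden = torch.randn(1, 32, 256)          # (batch, sequence length, hidden dim)
    image_feats = torch.randn(1, 196, 256)    # (batch, image patches, hidden dim)
    decoder = SegmentationDecoder()
    coarse, final = dual_chain_forward(hidden, loc_index=10, seg_index=20,
                                       image_features=image_feats, decoder=decoder)
    print(coarse.shape, final.shape)          # torch.Size([1, 64, 64]) for both
```

The key design point the sketch tries to convey is that the segmentation chain is conditioned on token embeddings emitted at different positions of the reasoning chain, so localization and final segmentation are produced progressively rather than in a single step.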
If you use our work in your research, please cite:
@article{bao2024cores,
  title={CoReS: Orchestrating the Dance of Reasoning and Segmentation},
  author={Bao, Xiaoyi and Sun, Siyang and Ma, Shuailei and Zheng, Kecheng and Guo, Yuxin and Zhao, Guosheng and Zheng, Yun and Wang, Xingang},
  journal={arXiv preprint arXiv:2404.05673},
  year={2024}
}