CoReS: Orchestrating the Dance of Reasoning and Segmentation

CASIA · Alibaba Group · Northeastern University · Ant Group

News: Accepted to ECCV 2024!

Comparison between our CoReS and LISA. Top: the process of LISA; bottom: the diagram of CoReS. Given textual and visual inputs, LISA directly uses the [SEG] token output by the MLLM to generate a mask. In contrast, our CoReS breaks the task of "finding the part that gives dogs a keen sense of smell" into a logical chain: first find the front part of the dog's face, then, focusing on this specific area, search for the dog's nose. LISA incorrectly segments the dog's eyes, which are similarly round, dark, and important in sensory perception, whereas CoReS, through its in-context input and dual-chain structure, correctly segments the dog's nose.

Visual comparison of CoReS and LISA.
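
To make the in-context guidance concrete, here is a minimal sketch of how such a prompt could be assembled. The demonstration text, token names, and the build_prompt helper below are illustrative assumptions made for exposition, not the paper's released code; only the [LOC]/[SEG] tokens and the idea of prepending an unrelated question-answer example come from the figures on this page.

# Hypothetical sketch of assembling a CoReS-style in-context input.
# The demonstration text and the build_prompt helper are illustrative
# assumptions; only the idea of prepending an unrelated question-answer
# example comes from the figure above.

IN_CONTEXT_EXAMPLE = (
    "Question: Find the part of the bicycle that touches the ground.\n"
    "Answer: First locate the lower half of the bicycle [LOC]. "
    "Then, within that region, segment the wheels [SEG]."
)

def build_prompt(user_query: str) -> str:
    """Prepend the fixed demonstration so the MLLM imitates the
    chain-like 'locate first, then segment' output format."""
    return f"{IN_CONTEXT_EXAMPLE}\n\nQuestion: {user_query}\nAnswer:"

print(build_prompt("Find the part that gives dogs a keen sense of smell."))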

Abstract

The reasoning segmentation task, which demands a nuanced comprehension of intricate queries to accurately pinpoint object regions, is attracting increasing attention. However, Multi-modal Large Language Models (MLLMs) often struggle to accurately localize the objects described in complex reasoning contexts. We believe that reasoning segmentation should mirror the cognitive stages of human visual search, where each step is a progressive refinement of thought toward the final object. Thus we introduce the Chains of Reasoning and Segmenting (CoReS) and find that this top-down visual hierarchy indeed enhances the visual search process. Specifically, we propose a dual-chain structure that generates multi-modal, chain-like outputs to aid the segmentation process. Furthermore, to steer the MLLM's outputs into this intended hierarchy, we incorporate in-context inputs as guidance. Extensive experiments demonstrate the superior performance of our CoReS, which surpasses the state-of-the-art method by 6.5% on the ReasonSeg dataset.

Method

Overall architecture of CoReS. The input to the MLLM consists of the user input (gray) and the extra in-context input (orange), which is composed of question-answer examples unrelated to the user query. The MLLM generates output at the logical level of the chain of reasoning, where the token embeddings of [LOC] and [SEG] serve as prompt inputs at different positions of the segmentation chain, guiding it to generate segmentation results progressively. For conciseness, the diagram omits the extraction of image features by the vision backbone and their input to the mask decoder.
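
The segmentation chain can be pictured with a small, self-contained sketch. The toy decoder below is a stand-in for a SAM-style promptable mask decoder, and all module and parameter names are assumptions made for illustration. What it shows is the dual-chain wiring: the [LOC] embedding first produces a coarse localization mask, and that mask then conditions the [SEG] step that yields the final mask.

import torch
import torch.nn as nn

class ToyPromptableDecoder(nn.Module):
    """Stand-in for a SAM-style mask decoder: mixes image features with
    a sparse prompt embedding and an optional dense mask prior."""

    def __init__(self, feat_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Linear(embed_dim, feat_dim)
        self.head = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, image_feats, prompt, prior=None):
        # Modulate image features with the prompt-token embedding.
        gate = self.proj(prompt)[:, :, None, None]   # (B, C, 1, 1)
        feats = image_feats * torch.sigmoid(gate)
        if prior is not None:
            # Focus on the region highlighted by the earlier chain step.
            feats = feats * torch.sigmoid(prior)
        return self.head(feats)                      # (B, 1, H, W) mask logits

class DualChainSegmenter(nn.Module):
    """Hypothetical wiring of the segmentation chain: [LOC] prompts a
    coarse localization mask, which conditions the final [SEG] step."""

    def __init__(self, decoder: nn.Module):
        super().__init__()
        self.decoder = decoder

    def forward(self, image_feats, loc_embed, seg_embed):
        loc_mask = self.decoder(image_feats, prompt=loc_embed)               # step 1: locate
        return self.decoder(image_feats, prompt=seg_embed, prior=loc_mask)   # step 2: segment

# Smoke test with random tensors standing in for real features/embeddings.
dec = ToyPromptableDecoder(feat_dim=256, embed_dim=4096)
model = DualChainSegmenter(dec)
mask = model(torch.randn(1, 256, 64, 64), torch.randn(1, 4096), torch.randn(1, 4096))
print(mask.shape)  # torch.Size([1, 1, 64, 64])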

Results


Qualitative illustration of the advantages of the multi-modal chain-of-thought over LISA. From left to right: the input image, the LISA result, the CoReS first-logic-layer segmentation result, the CoReS final result, and the ground-truth mask.


More visual comparisons of CoReS and LISA.


BibTeX

If you use our work in your research, please cite:

@article{bao2024cores,
  title={CoReS: Orchestrating the Dance of Reasoning and Segmentation},
  author={Bao, Xiaoyi and Sun, Siyang and Ma, Shuailei and Zheng, Kecheng and Guo, Yuxin and Zhao, Guosheng and Zheng, Yun and Wang, Xingang},
  journal={arXiv preprint arXiv:2404.05673},
  year={2024}
}