HARIS: Human-Like Attention for Referring Image Segmentation
A referring image segmentation method combined with a parameter-efficient fine-tuning framework to improve accuracy and zero-shot segmentation
Abstract
Referring image segmentation (RIS) aims to locate the particular region of an image that corresponds to a language expression. Existing methods fuse features from the two modalities in a bottom-up manner, a design that can attend to irrelevant image-text pairs and thus produce inaccurate segmentation masks. In this paper, we propose a referring image segmentation method called HARIS, which introduces a Human-Like Attention mechanism and adopts a parameter-efficient fine-tuning (PEFT) framework. Specifically, the Human-Like Attention receives a feedback signal from the multi-modal features, which lets the network focus on the referred objects and discard irrelevant image-text pairs. In addition, we use the PEFT framework to preserve the zero-shot ability of the pre-trained encoders. Extensive experiments on three widely used RIS benchmarks and the PhraseCut dataset demonstrate that our method achieves state-of-the-art performance and strong zero-shot ability.
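The abstract does not specify the exact form of the feedback mechanism. As a rough illustration only, the following PyTorch sketch shows one way a feedback-gated cross-attention could let visual tokens attend to language tokens while a gate computed from the fused multi-modal features suppresses irrelevant image-text pairs. The class and parameter names (FeedbackCrossAttention, gate) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class FeedbackCrossAttention(nn.Module):
    """Hypothetical sketch of a feedback-gated cross-attention block.

    Visual tokens attend to language tokens; a gate computed from the
    fused multi-modal features acts as a feedback signal that
    down-weights image-text pairings irrelevant to the referred object.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Feedback gate: maps fused features to a per-token relevance score.
        self.gate = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(),
            nn.Linear(dim, 1), nn.Sigmoid(),
        )

    def forward(self, vis: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # vis: (B, N_v, dim) visual tokens; txt: (B, N_t, dim) language tokens.
        fused, _ = self.attn(query=vis, key=txt, value=txt)
        # Near-zero relevance scores suppress visual tokens whose
        # text pairing appears irrelevant.
        relevance = self.gate(fused)        # (B, N_v, 1)
        return vis + relevance * fused      # gated residual update


if __name__ == "__main__":
    block = FeedbackCrossAttention(dim=256)
    vis = torch.randn(2, 196, 256)   # e.g. a 14x14 visual feature map
    txt = torch.randn(2, 20, 256)    # e.g. 20 word embeddings
    print(block(vis, txt).shape)     # torch.Size([2, 196, 256])
```

In a PEFT setting such as the one the abstract describes, the pre-trained vision and language encoders would typically be frozen and only lightweight modules like this gate would be trained, which is what preserves the encoders' zero-shot ability.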