PromptNu

Prompting Vision-Language Model for Nuclei Instance Segmentation and Classification

Zhaohui Zheng*
Xijing Hospital
Qiang Xie
USTC
Longfei Han
BTBU, WiSOMNi
Dingwen Zhang#
NWPU, Xijing Hospital
Junwei Han#
NWPU, CQUPT

* Equal contribution; # Corresponding author


Abstract


Comparison between the traditional nuclei instance segmentation training paradigm and PromptNu. PromptNu transfers knowledge from prompts containing rich nuclear characteristics, together with pretrained vision-language models (VLMs), to nuclei instance segmentation and classification.


Framework


First, attribute-aware and class-specific text prompts are generated by incorporating multifaceted prior knowledge drawn from GPT-4V pathology visual analysis, statistical considerations, and pathologists' clinical expertise. Next, PromptNu extracts both image and text embeddings, and PNuRL integrates global information from the attribute-aware text embeddings into the image embeddings. To better exploit class-specific knowledge, PNuDP recasts the original image-text matching paradigm into pixel-text matching. The resulting pixel-text score maps are then fed into the decoder and supervised with ground-truth labels.
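
The pixel-text matching step can be illustrated with a minimal sketch. This is not the paper's exact PNuDP implementation; the function name, tensor shapes, and the `temperature` parameter below are illustrative assumptions for a CLIP-style embedding space.

```python
# Minimal sketch of pixel-text score-map computation (illustrative only;
# the actual PNuDP module may differ in projection, normalization, and fusion).
import torch
import torch.nn.functional as F

def pixel_text_score_maps(image_feats, text_embeds, temperature=0.07):
    """
    image_feats: (B, C, H, W) dense image embeddings from the VLM image encoder.
    text_embeds: (K, C) class-specific text embeddings (one per nucleus class).
    Returns:     (B, K, H, W) pixel-text score maps fed to the decoder.
    """
    # L2-normalize so the dot product becomes cosine similarity.
    image_feats = F.normalize(image_feats, dim=1)
    text_embeds = F.normalize(text_embeds, dim=1)

    # Cosine similarity between every pixel embedding and every class text embedding.
    score_maps = torch.einsum("bchw,kc->bkhw", image_feats, text_embeds)
    return score_maps / temperature

# Toy usage: 2 images, 256-d embeddings, 64x64 feature maps, 5 nucleus classes.
scores = pixel_text_score_maps(torch.randn(2, 256, 64, 64), torch.randn(5, 256))
print(scores.shape)  # torch.Size([2, 5, 64, 64])
```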


Dataset


Our experiments are conducted on six datasets, i.e., MoNuSeg, CPM-17, PanNuke, and NuInsSeg for the semantic and instance segmentation tasks, and Lizard and CoNSeP for the semantic segmentation, instance segmentation, and classification tasks.


Visualization


Visualization results on the MoNuSeg, CPM-17, PanNuke, NuInsSeg, Lizard, and CoNSeP datasets. For instance segmentation (top three rows), different colors of nuclear boundaries represent distinct instances, while for classification results (bottom two rows), different colors indicate different nucleus types. Regions with evident improvements are enlarged in yellow boxes to show finer detail.


Citation

@article{yao2025promptnu,
  title = {Prompting Vision-Language Model for Nuclei Instance Segmentation and Classification},
  author = {Jieru Yao and Guangyu Guo and Dingwen Zhang and Qiang Xie and Longfei Han and Zhaohui Zheng and Junwei Han},
  year = {2025},
}


Acknowledgements

The website style was inspired by DreamFusion.