🩺Enhancing Abnormality Grounding for Vision Language Models with Knowledge Descriptions🚀

Jun Li1,2, Che Liu4, Wenjia Bai4, Rossella Arcucci4,
Cosmin I. Bercea*1,3, Julia A. Schnabel*1,2,3,5

*Shared senior authors.
1 Technical University of Munich, Germany
2 Munich Center for Machine Learning, Germany
3 Helmholtz AI & Helmholtz Center Munich, Germany
4 Imperial College London, London, UK
5 King’s College London, London, UK

Overview of our approach. We train a 0.23B model on just 16,087 samples (1.5% of the data) and achieve similar or better abnormality grounding results than the 7B RadVLM, which was pre-trained on 1 million samples, by using text descriptions that highlight the key visual features of abnormalities.

Abstract

Visual Language Models (VLMs) have demonstrated impressive capabilities in visual grounding tasks. However, their effectiveness in the medical domain, particularly for abnormality detection and localization within medical images, remains underexplored. A major challenge is the complex and abstract nature of medical terminology, which makes it difficult to directly associate pathological anomaly terms with their corresponding visual features. In this work, we introduce a novel approach to enhance VLM performance in medical abnormality detection and localization by leveraging decomposed medical knowledge. Instead of directly prompting models to recognize specific abnormalities, we break down medical concepts into fundamental attributes and common visual patterns. This strategy promotes a stronger alignment between textual descriptions and visual features, improving both the recognition and localization of abnormalities in medical images. We evaluate our method on the 0.23B Florence-2 base model and demonstrate that it achieves comparable abnormality grounding performance to significantly larger 7B LLaVA-based medical VLMs, despite being trained on only 1.5% of the data used for such models. Experimental results also demonstrate the effectiveness of our approach on both known and previously unseen abnormalities, suggesting strong generalization capabilities.
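The core idea of decomposed knowledge prompting can be illustrated with a minimal sketch. Note that the attribute descriptions, the `KNOWLEDGE` table, and the prompt template below are illustrative assumptions for exposition, not the exact descriptions or prompts used in the paper.

```python
# Hypothetical sketch: instead of prompting a VLM with a bare pathology term,
# replace the term with decomposed visual attributes (shape, texture, location).
# All descriptions and the template here are assumptions, not the paper's wording.

KNOWLEDGE = {
    "pneumothorax": (
        "a dark, air-filled region without lung markings, "
        "often along the lateral chest wall"
    ),
    "cardiomegaly": (
        "an enlarged heart silhouette occupying more than half "
        "of the thoracic width"
    ),
}

def build_grounding_prompt(term: str) -> str:
    """Build a grounding prompt, swapping the raw abnormality term for its
    decomposed visual description when one is available."""
    description = KNOWLEDGE.get(term, term)  # fall back to the raw term
    return f"Locate the region showing {description}."

print(build_grounding_prompt("pneumothorax"))
```

The model thus grounds a description of visible evidence rather than an abstract diagnostic label, which is what drives the improved text-visual alignment described above.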

Results


BibTeX


@misc{li2025enhancingabnormalitygroundingvision,
  title={Enhancing Abnormality Grounding for Vision Language Models with Knowledge Descriptions},
  author={Jun Li and Che Liu and Wenjia Bai and Rossella Arcucci and Cosmin I. Bercea and Julia A. Schnabel},
  year={2025},
  eprint={2503.03278},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.03278},
}