Project information
Decompose and Explain: How to Look Inside Computer Vision Models
- Project Identification
- GA26-23981S
- Project Period
- 1/2026 - 12/2028
- Investor / Programme / Project type
- Czech Science Foundation / Standard Projects
- MU Faculty or unit
- Faculty of Informatics
Modern vision models are increasingly complex, foundational, and multimodal, which challenges their understandability and interpretability, especially in critical domains such as healthcare, finance, law, or autonomous systems. Post-hoc explainability methods often fail to provide meaningful insight into the functionality of internal components and scale poorly with model complexity. We propose a compositional approach that interprets models as combinations of interpretable components, enabling modular insights and scalable explanations. We aim to develop a framework and algorithms for designing concept-based interfaces between components, implement prototypes, and validate their usefulness in medical vision tasks. This will empower designers to understand each component's added value, optimize model architecture and complexity, help ensure safe and effective use, and build trust in high-stakes applications.