Publication details

Why rankings of biomedical image analysis competitions should be interpreted with care

Authors

MAIER-HEIN Lena, EISENMANN Matthias, REINKE Annika, ONOGUR Sinan, STANKOVIC Marko, SCHOLZ Patrick, ARBEL Tal, BOGUNOVIC Hrvoje, BRADLEY Andrew, CARASS Aaron, FELDMANN Carolin, FRANGI Alejandro, FULL Peter, VAN GINNEKEN Bram, HANBURY Allan, HONAUER Katrin, KOZUBEK Michal, LANDMAN Bennett, MÄRZ Keno, MAIER Oskar, MAIER-HEIN Klaus, MENZE Bjoern, MÜLLER Henning, NEHER Peter, NIESSEN Wiro, RAJPOOT Nasir, SHARP Gregory, SIRINUKUNWATTANA Korsuk, SPEIDEL Stefanie, STOCK Christian, STOYANOV Danail, TAHA Abdel Aziz, VAN DER SOMMEN Fons, WANG Ching-Wei, WEBER Marc-André, ZHENG Guoyan, JANNIN Pierre, KOPP-SCHNEIDER Annette

Type Article in Periodical
Magazine / Source Nature Communications
MU Faculty or unit

Faculty of Informatics

Citation
WWW http://doi.org/10.1038/s41467-018-07619-7
DOI http://dx.doi.org/10.1038/s41467-018-07619-7
Keywords biomedical image analysis; benchmarking; challenge
Description International challenges have become the standard for validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted up to now. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered, as only a fraction of the relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables, such as the test data used for validation, the ranking scheme applied, and the observers who make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.