
Magnification-Aware Distillation (MAD): A Self-Supervised Framework for Unified Representation Learning in Gigapixel Whole-Slide Images

Mahmut S. Gokmen, Mitchell A. Klusty, Peter T. Nelson, Allison M. Neltner, Sen-Ching Samson Cheung, Thomas M. Pearce, David A. Gutman, Brittany N. Dugger, Devavrat S. Bisht, Margaret E. Flanagan, V. K. Cody Bumgardner

Details

Journal: arXiv preprint
Year: 2025
Categories: eess.IV, cs.AI, cs.CV, cs.LG
Note: 10 pages, 4 figures, 5 tables, submitted to AMIA 2026 Informatics Summit

Abstract

Whole-slide images (WSIs) contain tissue information distributed across multiple magnification levels, yet most self-supervised methods treat these scales as independent views. This separation prevents models from learning representations that remain stable when resolution changes, a key requirement for practical neuropathology workflows. This study introduces Magnification-Aware Distillation (MAD), a framework that jointly processes multiple magnification levels to produce unified representations.
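
To make the idea of magnification-aware distillation concrete, the sketch below shows a minimal cross-magnification student-teacher distillation loop in PyTorch. It is an illustrative assumption, not the authors' MAD implementation: the `TinyEncoder` backbone, the cosine-distance loss, the EMA teacher update, and all hyperparameters are placeholders chosen only to demonstrate how a student view at one magnification can be aligned with a teacher view at another so that representations stay stable across resolution changes.

```python
# Hedged sketch of cross-magnification distillation (NOT the paper's code).
# Assumptions: a toy CNN encoder, cosine-distance loss, and EMA teacher update.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in patch encoder; a ViT or CNN backbone would be used in practice."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

student = TinyEncoder()
teacher = TinyEncoder()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated only via EMA, never by gradients

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
ema_decay = 0.996  # placeholder value

def distill_step(patch_low_mag, patch_high_mag):
    """Align the student's embedding of a high-magnification patch with the
    teacher's embedding of the co-registered low-magnification patch."""
    with torch.no_grad():
        target = F.normalize(teacher(patch_low_mag), dim=-1)
    pred = F.normalize(student(patch_high_mag), dim=-1)
    loss = (2 - 2 * (pred * target).sum(dim=-1)).mean()  # cosine distance

    opt.zero_grad()
    loss.backward()
    opt.step()

    # EMA update of the teacher from the student
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(ema_decay).add_(ps, alpha=1 - ema_decay)
    return loss.item()

# Toy batch: co-registered patches of the same tissue region at two magnifications.
low_mag = torch.randn(8, 3, 64, 64)
high_mag = torch.randn(8, 3, 64, 64)
print(distill_step(low_mag, high_mag))
```

In this sketch the "unified representation" emerges because the same embedding space is asked to describe a tissue region regardless of the magnification it was sampled at; the published framework may differ in backbone, loss, and how magnification levels are paired.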
