Oral Abstract
Thomas C. R Hadler, PhD
Postgraduate
Charité - Universitätsmedizin Berlin, Germany
Clemens Ammann
Physician
Charité – Universitätsmedizin Berlin, Germany
Philine Reisdorf, MSc
PhD Student
Charité – Universitätsmedizin Berlin, Germany
Steffen Lange
Professor for Theoretical Computer Science
Hochschule Darmstadt, Germany
Jeanette Schulz-Menger, MD
Head Working Group Cardiac MRI
Charité/ University Medicine Berlin and Helios, Germany
Background:
For the training of robust and reliable convolutional neural networks (CNNs) in CMR, the learning environment (LE), including data pre-processing and the choice of CNN parameters, plays a crucial role (1,2). In CMR, domain shifts such as different technical setups, sites, and patient characteristics may interfere with CNN performance. Typically, LEs are designed manually by data scientists who fine-tune them with heuristics and experiments (3,4). Data augmentations increase the performance and generalizability of CNNs across domains by increasing image variability (Fig.1). Optimizing LEs requires determining high-dimensional combinations of many augmentations, their probabilities, and CNN parameters for the training process (5). Typically, LEs are fixed for the entire training process. However, the optimal LE may in principle vary over the course of training, with different combinations being optimal at different time points; we call these shifting LEs.
The aim was to design an evolutionary algorithm that computes shifting LEs for CNNs predicting myocardial segmentations and reference points for parametric mapping.
Methods:
An evolutionary algorithm was designed to train 24 CNNs in parallel to learn contours and reference points. Each CNN is initialized with a random LE. After each training step, the CNNs are evaluated on a validation set. The 5 worst performers are replaced with copies of the 5 top performers, and the LE parameters of all CNNs are then slightly mutated to explore new combinations (Fig.2). The overall dataset consists of 721 images from 283 patients, annotated by a CMR expert with reference points and myocardial contours. The first half of the data was used for training, and the last two quarters for validation and testing, respectively. Averages of the LE parameters were calculated as they evolved. Segmentation performance was evaluated using the Dice similarity coefficient (Dice), reference point estimation via the distance between CNN and expert points, and overall performance via mean T1 deviations.
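The selection-and-mutation step described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the function name `evolve_step`, the dict-of-probabilities representation of an LE, and the Gaussian mutation scale are assumptions for demonstration purposes.

```python
import random

def evolve_step(population, fitnesses, n_replace=5, mutation_scale=0.05):
    """One evolutionary step: replace the worst-performing LEs with copies of
    the best ones, then slightly mutate all LE parameters.
    Each individual is a dict of augmentation probabilities in [0, 1]."""
    # Rank individuals by validation fitness (higher is better).
    order = sorted(range(len(population)),
                   key=lambda i: fitnesses[i], reverse=True)
    top, worst = order[:n_replace], order[-n_replace:]
    # Replace the n_replace worst LEs with copies of the n_replace best.
    for w, t in zip(worst, top):
        population[w] = dict(population[t])
    # Mutate every LE parameter by a small Gaussian perturbation,
    # clipped to the valid probability range [0, 1].
    for le in population:
        for key in le:
            le[key] = min(1.0, max(0.0, le[key] + random.gauss(0.0, mutation_scale)))
    return population

# Hypothetical usage: 24 LEs with random augmentation probabilities,
# and stand-in validation scores in place of real Dice values.
random.seed(0)
pop = [{"rotation": random.random(), "contrast": random.random()}
       for _ in range(24)]
fit = [random.random() for _ in range(24)]
pop = evolve_step(pop, fit)
```

In this sketch, one call corresponds to one training step of the loop described above; iterating it lets the population's augmentation probabilities drift over training, which is what produces the shifting LEs.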
Results:
All LE parameters evolved over time, with average data augmentation probabilities increasing from 0% to approximately 45% (Fig.3c). The augmentation parameters of some individual operations shifted over time (e.g. contrast increased to 45% by training step 10, then decreased to 18% by training step 40), while others remained stationary after an initial increase (e.g. rotation plateaued after 20 training steps) (Fig.3a). Average CNN performance improved during evolution: Dice increased to 84% and the reference point distance decreased to 5 mm (Fig.3d). The best-performing CNN was evaluated on an unseen test dataset. Its performance was Dice: 84%, reference point distance: 5 mm (Fig.3e), global T1 difference: -1 ms, and segmental differences between -2 and 0 ms (Fig.3f).
Conclusion:
The evolutionary algorithm effectively optimizes shifting learning environments, producing high-performing CNNs for T1 image segmentation and reference point estimation. The approach is generic and should transfer well to other optimization problems.