Unsupervised 3D Semantic Segmentation Domain Adaptation
Domain Adaptation (DA) has recently attracted strong interest in the medical imaging community. By encouraging algorithms to be robust to unseen situations or different input data domains, Domain Adaptation improves the applicability of machine learning approaches to various clinical settings. While a large variety of DA techniques has been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly address single-class problems. To tackle these limitations, the crossMoDA challenge introduces the first large and multi-class dataset for unsupervised cross-modality Domain Adaptation.
The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the tumour and the cochlea. While contrast-enhanced T1 (ceT1) Magnetic Resonance Imaging (MRI) scans are commonly used for VS segmentation, recent work has demonstrated that high-resolution T2 (hrT2) imaging could be a reliable, safer, and lower-cost alternative to ceT1. For these reasons, we propose an unsupervised cross-modality challenge (from ceT1 to hrT2) that aims to automatically perform VS and cochlea segmentation on hrT2 scans. The training data are unpaired: an annotated ceT1 source set and a non-annotated hrT2 target set.
Source (contrast-enhanced T1)
Target (high resolution T2)
All images were obtained on a 32-channel Siemens Avanto 1.5T scanner using a Siemens single-channel head coil:
All data will be made available online under a permissive copyright license (CC BY-SA 4.0), allowing the data to be shared, distributed, and improved upon. All structures were manually segmented in consensus by the treating neurosurgeon and physicist using both the ceT1 and hrT2 images. To cite this data, please refer to https://doi.org/10.7937/TCIA.9YTJ-5Q73.
No additional data is allowed, including the data released on TCIA and pre-trained models.
The use of a generic brain atlas is tolerated as long as its use is made clear and justified.
Examples of tolerated use cases:
Examples of cases that are not allowed:
No additional annotations are allowed.
Models can be adapted (trained) on the target domain (using the provided target training set) in an unsupervised way, i.e. without labels.
The participating teams will be required to release their training and testing code and explain how they tuned their hyper-parameters. Note that the code may be shared with the organisers only, as a way to verify validity; if needed, NDAs can be signed.
The top 3 ranked teams will be required to submit their training and testing code in a Docker container for verification after the challenge submission deadline, in order to ensure that the challenge rules have been respected.
Classical semantic segmentation metrics, namely the Dice Score (DSC) and the Average Symmetric Surface Distance (ASSD), will be used to assess different aspects of segmentation performance for each region of interest. These metrics are implemented here. The metrics (DSC, ASSD) were chosen because of their simplicity, their popularity, their rank stability, and their ability to assess the accuracy of the predictions.
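For reference, the two metrics can be sketched as follows. This is a minimal illustration, not the challenge's official implementation: the function names, the convention of returning a Dice of 1.0 when both masks are empty, and the use of SciPy distance transforms for ASSD are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def dice_score(pred, gt):
    """Dice Score: 2 * |pred ∩ gt| / (|pred| + |gt|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # assumed convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Average Symmetric Surface Distance between two binary 3D masks,
    in the physical units given by the voxel spacing."""
    pred, gt = pred.astype(bool), gt.astype(bool)

    def surface(mask):
        # surface voxels = foreground voxels touching the background
        return mask & ~ndimage.binary_erosion(mask)

    sp, sg = surface(pred), surface(gt)
    # distance from every voxel to the nearest surface voxel of each mask
    dt_gt = ndimage.distance_transform_edt(~sg, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~sp, sampling=spacing)
    d_pred_to_gt = dt_gt[sp]    # distances from pred surface to gt surface
    d_gt_to_pred = dt_pred[sg]  # distances from gt surface to pred surface
    return (d_pred_to_gt.sum() + d_gt_to_pred.sum()) / (
        len(d_pred_to_gt) + len(d_gt_to_pred))
```

Note that ASSD should be computed with the true voxel spacing of the scan, since the two distances are averaged in physical units (millimetres), not voxels.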
Participating teams are ranked for each target testing subject, for each evaluated region (i.e., VS and cochlea), and for each measure (i.e., DSC and ASSD). The final ranking score for each team is then calculated by first averaging these individual rankings within each patient (the cumulative rank), and then averaging the cumulative ranks across all patients.
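The rank-then-aggregate scheme above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the official ranking code: the input layout (one teams-by-cases score matrix per structure and metric), the `final_ranking` function name, and the use of average ranks for ties are choices made for this sketch.

```python
import numpy as np
from scipy.stats import rankdata

def final_ranking(dsc, assd):
    """Compute the final ranking score per team (lower is better).

    dsc, assd: dicts mapping structure name (e.g. "VS", "cochlea") to a
    (n_teams, n_cases) array of per-case scores for that metric.
    """
    per_case_ranks = []
    for structure in dsc:
        # higher DSC is better: negate so the best team receives rank 1
        per_case_ranks.append(rankdata(-dsc[structure], axis=0))
        # lower ASSD is better: rank the raw values directly
        per_case_ranks.append(rankdata(assd[structure], axis=0))
    ranks = np.stack(per_case_ranks)   # (structures * metrics, teams, cases)
    cumulative = ranks.mean(axis=0)    # cumulative rank per team, per case
    return cumulative.mean(axis=1)     # average over cases -> final score
```

A team that ranks first on every case, structure, and metric would obtain a final score of 1.0; the scheme rewards consistent per-case performance rather than a high average score on a few easy cases.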