Cross-Modality Domain Adaptation for Medical Image Segmentation

Unsupervised 3D Semantic Segmentation Domain Adaptation

👋 Results are out!

Announcements

NVIDIA sponsors one NVIDIA RTX 3090 (24 GB, retail price: $1,500) for the challenge winner.

Challenge participants will have the opportunity to submit their methods as part of the post-conference MICCAI BrainLes 2021 proceedings.

Aim

Domain Adaptation (DA) has recently attracted strong interest in the medical imaging community. By encouraging algorithms to be robust to unseen situations or to different input data domains, Domain Adaptation improves the applicability of machine learning approaches to various clinical settings. While a large variety of DA techniques has been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly address single-class problems. To tackle these limitations, the crossMoDA challenge introduces the first large, multi-class dataset for unsupervised cross-modality Domain Adaptation.

Task

The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the tumour and the cochlea. While contrast-enhanced T1 (ceT1) Magnetic Resonance Imaging (MRI) scans are commonly used for VS segmentation, recent work has demonstrated that high-resolution T2 (hrT2) imaging could be a reliable, safer, and lower-cost alternative to ceT1. For these reasons, we propose an unsupervised cross-modality challenge (from ceT1 to hrT2) that aims to automatically perform VS and cochlea segmentation on hrT2 scans. The training data comprise two unpaired sets: annotated ceT1 scans (source) and unannotated hrT2 scans (target), as sketched below.
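
To make this setup concrete, here is a minimal sketch of how the two training sets could be organised in code. The file names are hypothetical placeholders, not the challenge's actual naming scheme.

```python
# Illustration of the unsupervised DA protocol: the source set is annotated,
# the target set is not, and the two sets are unpaired.
# All file names below are hypothetical placeholders.
source_train = [
    {"image": "source_001_ceT1.nii.gz", "label": "source_001_seg.nii.gz"},
    {"image": "source_002_ceT1.nii.gz", "label": "source_002_seg.nii.gz"},
    # ... one entry per annotated ceT1 scan
]
target_train = [
    {"image": "target_001_hrT2.nii.gz"},  # no label: adaptation is unsupervised
    {"image": "target_002_hrT2.nii.gz"},
    # ... one entry per unannotated hrT2 scan
]
```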

[Example scans: source (contrast-enhanced T1) and target (high-resolution T2)]

Data

All images were obtained on a 32-channel Siemens Avanto 1.5T scanner using a Siemens single-channel head coil:

  • Contrast-enhanced T1-weighted imaging was performed with an MPRAGE sequence with an in-plane resolution of 0.4×0.4 mm, an in-plane matrix of 512×512, and a slice thickness of 1.0 to 1.5 mm (TR=1900 ms, TE=2.97 ms, TI=1100 ms).
  • High-resolution T2-weighted imaging was performed with a 3D CISS or FIESTA sequence with an in-plane resolution of 0.5×0.5 mm, an in-plane matrix of 384×384 or 448×448, and a slice thickness of 1.0 to 1.5 mm (TR=9.4 ms, TE=4.23 ms).
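
Since the two sequences differ in voxel spacing and the slice thickness varies between 1.0 and 1.5 mm, resampling volumes to a common spacing is a typical first preprocessing step. The sketch below uses SimpleITK; the target spacing and file name are illustrative assumptions, not challenge requirements.

```python
# Minimal preprocessing sketch (not part of the official pipeline):
# resample a scan to a fixed voxel spacing with SimpleITK.
import SimpleITK as sitk

def resample_to_spacing(image, out_spacing=(0.5, 0.5, 1.0), is_label=False):
    """Resample an image to a target voxel spacing (mm)."""
    in_spacing = image.GetSpacing()
    in_size = image.GetSize()
    # Keep the physical extent constant when changing the spacing.
    out_size = [
        int(round(sz * isp / osp))
        for sz, isp, osp in zip(in_size, in_spacing, out_spacing)
    ]
    return sitk.Resample(
        image,
        out_size,
        sitk.Transform(),  # identity transform
        sitk.sitkNearestNeighbor if is_label else sitk.sitkBSpline,
        image.GetOrigin(),
        out_spacing,
        image.GetDirection(),
        0.0,
        image.GetPixelID(),
    )

ceT1 = sitk.ReadImage("source_001_ceT1.nii.gz")  # hypothetical file name
ceT1_iso = resample_to_spacing(ceT1)
print(ceT1.GetSpacing(), "->", ceT1_iso.GetSpacing())
```

Nearest-neighbour interpolation is used for label maps to avoid creating spurious intermediate class values.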

All data will be made available online under a permissive copyright license (CC BY-SA 4.0), allowing the data to be shared, distributed, and improved upon. All structures were manually segmented in consensus by the treating neurosurgeon and physicist using both the ceT1 and hrT2 images. To cite this dataset, please refer to https://doi.org/10.7937/TCIA.9YTJ-5Q73.

Rules

  1. No additional data is allowed, including the data released on TCIA and pre-trained models. The use of a generic brain atlas is tolerated as long as its use is made clear and justified.
    Examples of tolerated use cases:

    • Spatial normalisation to MNI space
    • Use of classical single-atlas based tools (e.g., SPM)

    Examples of cases that are not allowed:

    • Multi-atlas registration-based approaches in the target domain
  2. No additional annotations are allowed.

  3. Models can be adapted (trained) on the target domain (using the provided target training set) in an unsupervised way, i.e., without labels.

  4. The participating teams will be required to release their training and testing code and to explain how they tuned their hyper-parameters. Note that the code may be shared with the organisers only (rather than publicly) as a way to verify its validity; if needed, NDAs can be signed.

  5. The top 3 ranked teams will be required to submit their training and testing code in a Docker container for verification after the challenge submission deadline, in order to ensure that the challenge rules have been respected.

Evaluation

Classical semantic segmentation metrics, namely the Dice Score (DSC) and the Average Symmetric Surface Distance (ASSD), will be used to assess different aspects of segmentation performance for each region of interest. These metrics are implemented in the evaluation code provided by the organisers. The metrics (DSC, ASSD) were chosen for their simplicity, their popularity, their rank stability, and their ability to assess the accuracy of the predictions.
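
For illustration, here is a minimal NumPy/SciPy sketch of both metrics on binary 3D masks; the official evaluation code may differ in details such as the handling of empty masks.

```python
# Hedged sketch of the two metrics on binary masks (NumPy/SciPy);
# empty-mask edge cases are not handled here.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance (mm) between two binary masks."""
    def surface(mask):
        # Boundary voxels: in the mask but not in its erosion.
        return mask & ~binary_erosion(mask)
    pred_s, gt_s = surface(pred.astype(bool)), surface(gt.astype(bool))
    # Distance of every voxel to the nearest surface voxel of the other
    # mask, in physical units given by the voxel spacing.
    d_to_gt = distance_transform_edt(~gt_s, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred_s, sampling=spacing)
    dists = np.concatenate([d_to_gt[pred_s], d_to_pred[gt_s]])
    return dists.mean()
```

For two identical non-empty masks, `dice_score` returns 1.0 and `assd` returns 0.0.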

Participating teams are ranked for each target testing subject, for each evaluated region (i.e., VS and cochlea), and for each measure (i.e., DSC and ASSD). The final ranking score for each team is then calculated by first averaging these individual rankings across regions and measures for each patient (the cumulative rank), and then averaging the cumulative ranks across all patients.
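
This rank-then-aggregate scheme can be summarised in a few lines of pandas; the table below uses toy scores for two hypothetical teams on a single test case.

```python
# Sketch of the rank-then-aggregate scheme described above, assuming a
# long-format table with one row per (team, case, region, metric) value.
import pandas as pd

df = pd.DataFrame({
    "team":   ["A", "B"] * 4,
    "case":   [1] * 8,
    "region": ["VS", "VS", "cochlea", "cochlea"] * 2,
    "metric": ["DSC"] * 4 + ["ASSD"] * 4,
    "value":  [0.85, 0.80, 0.70, 0.75, 1.2, 1.5, 0.7, 0.8],
})

# Rank within each (case, region, metric) group, rank 1 = best.
# DSC is ranked descending (higher is better), ASSD ascending (lower is better).
key = df["value"].where(df["metric"].eq("ASSD"), -df["value"])
df["rank"] = key.groupby([df["case"], df["region"], df["metric"]]).rank(method="min")

# Cumulative rank: mean of a team's ranks on one case; the final score is
# the mean cumulative rank across all cases (here, a single toy case).
cumulative = df.groupby(["team", "case"])["rank"].mean()
final = cumulative.groupby("team").mean().sort_values()
print(final)  # lower is better
```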

Timeline

26th March 2021: Registration is open!
5th April 2021: Release of the training and validation data (see the data page)
5th May 2021: Start of the validation period; participants are invited to submit their predictions on the validation dataset (see the submission page)
25th July 2021 (postponed from 15th July 2021): Start of the evaluation period; participants are invited to submit their predictions on the testing dataset (see the submission page)
13th August 2021 (postponed from 3rd August 2021): End of the evaluation period
27th September 2021: Challenge results are announced at MICCAI 2021
30th November 2021: Participants are invited to submit their methods to the MICCAI 2021 BrainLes Workshop
December 2021: Submission of a joint manuscript summarising the results of the challenge to a high-impact journal in the field (e.g., TMI, MedIA)

Winners of the crossMoDA 2021 challenge

The full leaderboard and a description of the different proposed approaches are available on the participants page (/participants):

| # | Team Name | Affiliation | DSC (%) | Technical report |
|---|-----------|-------------|---------|------------------|
| 1 | Samoyed | Yonsei University | 83.9 | Self-Training Based Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation |
| 2 | PKU_BIALAB | Peking University | 83.4 | Unsupervised Domain Adaptation in Semantic Segmentation Based on Pixel Alignment and Self-Training (PAST) |
| 3 | jwc-rad | Seoul National University | 82.5 | Using Out-of-the-Box Frameworks for Unpaired Image Translation and Image Segmentation for the crossMoDA Challenge |
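
All three winning entries built on unpaired image translation and/or self-training. As a rough illustration of the self-training component, the PyTorch sketch below performs one pseudo-labelling update on unlabelled target images; the model, loss, and confidence threshold are illustrative assumptions, not any team's exact recipe.

```python
# Minimal self-training sketch. Assumptions: `model` is a segmentation
# network already trained on translated source images and returns logits of
# shape (N, C, D, H, W); `images` is a batch of unlabelled hrT2 patches.
import torch
import torch.nn.functional as F

@torch.no_grad()
def make_pseudo_labels(model, images, threshold=0.9):
    """Predict on unlabelled target images, keeping only confident voxels."""
    probs = torch.softmax(model(images), dim=1)  # (N, C, D, H, W)
    conf, labels = probs.max(dim=1)              # per-voxel confidence & class
    labels[conf < threshold] = -1                # mark uncertain voxels to ignore
    return labels

def self_training_step(model, optimizer, images):
    """One update on target images supervised by the model's own predictions."""
    model.eval()
    pseudo = make_pseudo_labels(model, images)
    model.train()
    loss = F.cross_entropy(model(images), pseudo, ignore_index=-1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The `ignore_index=-1` trick lets low-confidence voxels contribute nothing to the loss, which is one common way to limit the effect of noisy pseudo-labels.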

Organising Team

Reuben Dorent

Leadership, Conceptual Design, Data Pre-Processing, Stats and Metrics Committee

King’s College London, United Kingdom

Tom Vercauteren

Leadership, Conceptual Design, Stats and Metrics Committee

King’s College London, United Kingdom

Jonathan Shapey

Clinical Advisor, Data Curation

King’s College London, United Kingdom

King’s College Hospital NHS Foundation Trust, United Kingdom

Samuel Joutard

Conceptual Design, Challenge Day-to-day Support

King’s College London, United Kingdom

Aaron Kujawa

Conceptual Design, Data Pre-Processing, Data Curation

King’s College London, United Kingdom

Ben Glocker

Conceptual Design, Stats and Metrics Committee

Imperial College London, United Kingdom

Jorge Cardoso

Conceptual Design, Stats and Metrics Committee

King’s College London, United Kingdom

Marc Modat

Conceptual Design, Stats and Metrics Committee

King’s College London, United Kingdom

Nicola Rieke

Conceptual Design, Stats and Metrics Committee

NVIDIA

Spyridon Bakas

Conceptual Design, Stats and Metrics Committee

University of Pennsylvania, USA

Sponsors

NVIDIA sponsors one NVIDIA RTX 3090 (24 GB, retail price: $1,500) for the challenge winner.

BANA (British Acoustic Neuroma Association) sponsors a cash prize of Β£100.

Contact