Results

Leaderboard

Dice (higher is better) and average symmetric surface distance (ASSD, in mm; lower is better) are reported for the vestibular schwannoma (VS) and cochlea structures.

Team          Ranking  VS_Dice  VS_ASSD   Cochlea_Dice  Cochlea_ASSD
Samoyed       1        0.8297   0.5232    0.8488        0.3424
PKU_BIALAB    2        0.8707   0.3660    0.7978        0.2955
jwc-rad       3        0.8288   1.0436    0.8217        0.2858
MIP           4        0.7995   1.2902    0.8248        0.1822
PremiLab      5        0.7727   2.7762    0.7967        0.2936
epione-liryc  6        0.7860   2.0568    0.7658        0.3858
MedICL        7        0.7756   3.0634    0.7445        0.5333
DBMI_pitt     8        0.4734   10.9950   0.7969        0.5086
Hi-Lib        9        0.6686   4.3944    0.6649        1.2663
smriti161096  10       0.7230   2.9876    0.5131        0.9523
IMI           11       0.6004   4.4732    0.4281        9.8191
GapMIND       12       0.6081   3.8377    0.5176        1.6570
gabybaldeon   13       0.6232   7.5786    0.3987        3.9180
SEU_Chen      14       0.1142   38.0744   0.4945        14.0109
skjp          15       0.2104   24.4830   0.2139        15.6275
IRA           16       0.1193   30.8389   0.2142        19.5226
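Both metrics can be reproduced from a pair of binary segmentation masks using their standard definitions. The sketch below is illustrative only, not the challenge's official evaluation code; the function names `dice` and `assd` and the use of SciPy distance transforms are our own choices, and the sketch assumes non-empty masks.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient of two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance (mm) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels: the mask minus its binary erosion.
    sa = a ^ ndimage.binary_erosion(a)
    sb = b ^ ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of each mask,
    # taking anisotropic voxel spacing into account.
    da = ndimage.distance_transform_edt(~sa, sampling=spacing)
    db = ndimage.distance_transform_edt(~sb, sampling=spacing)
    # Average the surface-to-surface distances in both directions.
    return (db[sa].sum() + da[sb].sum()) / (sa.sum() + sb.sum())
```

For identical masks this yields a Dice of 1.0 and an ASSD of 0.0; the leaderboard values above were computed per case and then aggregated by the organizers.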

Event recording

Proposed approaches

#1 - Samoyed

Self-Training Based Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation

Hyungseob Shin; Hyeon Gyu Kim; Sewon Kim; Yohan Jun; Taejoon Eo; Dosik Hwang (Yonsei University)

Paper

#2 - PKU_BIALAB

Unsupervised Domain Adaptation in Semantic Segmentation Based on Pixel Alignment and Self-Training (PAST)

Hexin Dong; Fei Yu; Jie Zhao; Bin Dong; Li Zhang (Peking University)

Paper

#3 - jwc-rad

Using Out-of-the-Box Frameworks for Unpaired Image Translation and Image Segmentation for the crossMoDA Challenge

Jae Won Choi (College of Medicine, Seoul National University)

Paper

#4 - MIP

Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation

Han Liu; Yubo Fan; Can Cui; Dingjie Su; Andrew Mcneil; Benoit Dawant (Vanderbilt University)

Paper

#5 - PremiLab

DAR-UNet: Dual Attention ResU-Net for the CrossMoDA Challenge

Kai Yao; Zixian Su; Xi Yang; Kaizhu Huang; Jie Sun (Xi’an Jiaotong-Liverpool University)

Paper

#6 - epione-liryc

Cross-Modality Domain Adaptation for Vestibular Schwannoma and Cochlea Segmentation from High-Resolution T2 MRI (Epione-Liryc team)

Buntheng Ly; Victoriya Kashtanova; Yingyu Yang; Aurelien Maillot; Marta Nunez-Garcia; Maxime Sermesant (INRIA)

Paper

#7 - MedICL

Unsupervised Cross-Modality Domain Adaptation for Segmenting Vestibular Schwannoma and Cochlea with Data Augmentation and Model Ensemble

Hao Li; Dewei Hu; Qibang Zhu; Kathleen E Larson; Huahong Zhang; Ipek Oguz (Vanderbilt University)

Paper

#8 - DBMI_pitt

Fast Single Direction Translation for Brain Image Domain Adaptation

Yanwu Xu; Mingming Gong; Kayhan Batmanghelich (University of Pittsburgh, University of Melbourne)

Paper

#9 - GapMIND

Learning on MIND features and noisy labels from image registration

Christian N Kruse; Mattias Heinrich (University of Luebeck)

Paper

#10 - Hi-Lib

A GANs-based Modality Fusion and Data Augmentation for the CrossMoDA Challenge

Jianghao Wu; Ran Gu; Shuwei Zhai; Wenhui Lei; Guotai Wang (University of Electronic Science and Technology of China)

Paper

#11 - smriti161096

nnU-Net Training on CycleGAN-Translated Images for Cross-Modal Domain Adaptation in Biomedical Imaging

Smriti Joshi; Richard Osuala; Carlos Martín-Isla; Victor M. Campello; Carla Sendra-Balcells; Karim Lekadir; Sergio Escalera (University of Barcelona)

Paper

#12 - IMI

MIND the Domain Gap: Unsupervised Modality-Independent Deformable Domain Adaptation

Lasse Hansen; Mattias Heinrich (University of Luebeck)

Paper

#13 - gabybaldeon

C-MADA: Unsupervised Cross-Modality Adversarial Domain Adaptation Framework for Medical Image Segmentation

Maria Baldeon Calisto; Susana K. Lai-Yuen (Universidad San Francisco de Quito, University of South Florida)

Paper

#14 - skjp

MIND the Domain Gap: Unsupervised Modality-Independent Deformable Domain Adaptation

Satoshi Kondo (Muroran Institute of Technology)

Paper

#15 - SEU_Chen

A Cascade nnUNet By Mini-Entropy Domain Adaptation On Segmentation of Tumor and Cochlea

Chen Xiaofei (Southeast University)

#16 - IRA

Comparing Unsupervised Domain Adaptation and Style-Transfer Methods in CrossMoDA Challenge

Arseniy Belkov; Boris Shirokikh; Mikhail Belyaev (Moscow Institute of Physics and Technology, Skolkovo Institute of Science and Technology)

Paper