
Few-shot-UDA

The official code for our BVM 2022 paper "Few-shot Unsupervised Domain Adaptation for Multi-modal Cardiac Image Segmentation"

Abstract

Unsupervised domain adaptation (UDA) methods aim to reduce the gap between source and target domains by using unlabeled target-domain data together with labeled source-domain data. In the medical domain, however, target-domain data are not always readily available, and acquiring new samples is generally time-consuming; this restricts the development of UDA methods for new domains. In this paper, we explore the potential of UDA in a more challenging yet realistic scenario where only one unlabeled target patient sample is available. We call this Few-shot Unsupervised Domain Adaptation (FUDA). We first generate target-style images from source images, exploring diverse target styles from a single target patient with Random Adaptive Instance Normalization (RAIN). A segmentation network is then trained in a supervised manner on the generated target images. Our experiments demonstrate that FUDA improves segmentation performance on the target domain by 0.33 Dice score over the baseline, and by 0.28 Dice score in the more rigorous one-shot setting.
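For context, RAIN is built around adaptive instance normalization (AdaIN), which restyles content features by matching their channel-wise statistics to those of a style image; RAIN additionally samples styles from a learned latent space. A minimal PyTorch sketch of the AdaIN core (an illustration, not this repository's exact implementation):

import torch

def adain(content_feat, style_feat, eps=1e-5):
    # Both inputs are (N, C, H, W) feature maps, e.g. VGG activations.
    # Normalize the content features per channel, then rescale and
    # shift them with the style features' per-channel statistics.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    return s_std * (content_feat - c_mean) / c_std + s_mean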

Dataset

  • Download the Multi-sequence Cardiac MR Segmentation Challenge (MS-CMRSeg 2019) dataset: https://zmiclab.github.io/zxh/0/mscmrseg19/
  • Data structure:
    • trainA/trainAmask: bSSFP/T2 samples 6-45
    • testA/testAmask: bSSFP/T2 samples 1-5
    • trainB/trainBmask: LGE samples 6-45
    • testB/testBmask: LGE samples 1-5
  • To preprocess the data, check preprocess_data.py. You may need to modify the file paths to run the code.
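Before preprocessing, a quick sanity check of the layout above can save a failed run. A minimal sketch, assuming a hypothetical dataset root of data/mscmrseg (adjust to your own paths):

import os

# Folders expected under the dataset root, per the structure listed above.
EXPECTED = ["trainA", "trainAmask", "testA", "testAmask",
            "trainB", "trainBmask", "testB", "testBmask"]

def check_layout(root):
    missing = [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
    if missing:
        raise FileNotFoundError("missing dataset folders: " + ", ".join(missing))
    print("dataset layout OK")

check_layout("data/mscmrseg")  # hypothetical root; adjust to your setup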

Download pretrained RAIN

Installation

git clone https://github.com/MingxuanGu/Few-shot-UDA/
cd Few-shot-UDA

Training

To train RAIN module:

Example: python3 train_RAIN.py --style_weight 5 --content_weight 5 --latent_weight 1 --recons_weight 5 --vgg ../ASM_SV/pretrained/vgg_normalised.pth --augmentation
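Judging by the flag names, the training objective presumably combines style, content, latent, and reconstruction losses as a weighted sum; a schematic sketch (the default weights mirror the example flags above, and the loss terms themselves are placeholders, not the repository's code):

def rain_objective(loss_style, loss_content, loss_latent, loss_recons,
                   style_weight=5.0, content_weight=5.0,
                   latent_weight=1.0, recons_weight=5.0):
    # Weighted sum matching --style_weight / --content_weight /
    # --latent_weight / --recons_weight; an assumption based on the
    # flag names, not the repository's exact code.
    return (style_weight * loss_style + content_weight * loss_content
            + latent_weight * loss_latent + recons_weight * loss_recons)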

To train the DR-UNet:

Example:

python3 train_FUDA.py --backbone dr_unet --mode fewshot --jac \
  --learning-rate 3.5e-4 --power 0.5 --eps_iters 3 --learning-rate-s 120 \
  --num-steps 100 --num-steps-stop 100 --warmup-steps 0 \
  --vgg_decoder pretrained/best_decoder.bssfp2t2.lr0.0001.sw5.0.cw5.0.lw1.0.rw5.0.aug.e200.Scr7.691.pt \
  --style_encoder pretrained/best_fc_encoder.bssfp2t2.lr0.0001.sw5.0.cw5.0.lw1.0.rw5.0.aug.e200.Scr7.691.pt \
  --style_decoder pretrained/best_fc_decoder.bssfp2t2.lr0.0001.sw5.0.cw5.0.lw1.0.rw5.0.aug.e200.Scr7.691.pt \
  --restore_from pretrained/best_DR_UNet.fewshot.eps2.lrs20.0.pat_10_lge.e70.Scr0.58.pt

Download Our Pretrained Weights

We also provide pretrained DR-UNet weights for direct evaluation.

Evaluation

Example: python3 evaluator.py --restore_from "weights/best_DR_UNet.fewshot.eps2.lrs40.0.pat_10_lge.e77.Scr0.625.pt"
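The paper reports segmentation quality as Dice scores (the Scr value embedded in the checkpoint names appears to be such a score). For reference, a minimal Dice coefficient for binary masks (not necessarily the evaluator's exact implementation):

import numpy as np

def dice_score(pred, target, eps=1e-7):
    # Dice coefficient between two binary masks of the same shape.
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)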

Results

Full quantitative and qualitative results are reported in the paper; in summary, FUDA gains 0.33 Dice score over the baseline on the target domain, and 0.28 Dice score in the one-shot setting.

Acknowledgments

This project builds on code from Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation (ASM).

Citation

Please consider citing the following paper in your publications if it helps your research.

@inproceedings{gu2022bvm,
  title={Few-shot Unsupervised Domain Adaptation for Multi-modal Cardiac Image Segmentation},
  author={Gu, M. and Vesal, S. and Kosti, R. and Maier, A.},
  booktitle={Bildverarbeitung für die Medizin},
  year={2022}
}
