Merge remote-tracking branch 'Yulv-git/master'
FabianIsensee committed Apr 12, 2022
2 parents 3f5979d + c2add89 commit 5e154f1
Showing 14 changed files with 31 additions and 31 deletions.
24 changes: 12 additions & 12 deletions documentation/common_questions.md
@@ -1,17 +1,17 @@
# FAQ

- [FAQ](#faq)
- * [Where can I find the segmentation metrics of my experiments?](#where-can-i-find-the-segmentation-metrics-of-my-experiments-)
- * [What postprocessing is selected?](#what-postprocessing-is-selected-)
- * [Evaluating test set results](#evaluating-test-set-results)
- * [Creating and managing data splits](#creating-and-managing-data-splits)
- * [How can I swap component XXX (for example the loss) of nnU-Net?](#how-can-i-swap-component-xxx--for-example-the-loss--of-nnu-net-)
- * [How does nnU-Net handle multi-modal images?](#how-does-nnu-net-handle-multi-modal-images-)
- * [Why does nnU-Net not use all my GPU memory?](#why-does-nnu-net-not-use-all-my-gpu-memory-)
- * [Do I need to always run all U-Net configurations?](#do-i-need-to-always-run-all-u-net-configurations-)
- * [Sharing Models](#sharing-models)
- * [Can I run nnU-Net on smaller GPUs?](#can-i-run-nnu-net-on-smaller-gpus-)
- * [Why is no 3d_lowres model created?](#why-is-no-3d-lowres-model-created-)
+ - [Where can I find the segmentation metrics of my experiments?](#where-can-i-find-the-segmentation-metrics-of-my-experiments)
+ - [What postprocessing is selected?](#what-postprocessing-is-selected)
+ - [Evaluating test set results](#evaluating-test-set-results)
+ - [Creating and managing data splits](#creating-and-managing-data-splits)
+ - [How can I swap component XXX (for example the loss) of nnU-Net?](#how-can-i-swap-component-xxx-for-example-the-loss-of-nnu-net)
+ - [How does nnU-Net handle multi-modal images?](#how-does-nnu-net-handle-multi-modal-images)
+ - [Why does nnU-Net not use all my GPU memory?](#why-does-nnu-net-not-use-all-my-gpu-memory)
+ - [Do I need to always run all U-Net configurations?](#do-i-need-to-always-run-all-u-net-configurations)
+ - [Sharing Models](#sharing-models)
+ - [Can I run nnU-Net on smaller GPUs?](#can-i-run-nnu-net-on-smaller-gpus)
+ - [Why is no 3d_lowres model created?](#why-is-no-3d_lowres-model-created)

## Where can I find the segmentation metrics of my experiments?
**Results for the validation sets of each fold** are stored in the respective output folder after the training is completed. For example, this could be.
@@ -149,7 +149,7 @@ iterations each (250000 iterations). The training time thus scales approximately
know what you are doing! Again, training times will be increased if you do this! 3) is a better way of increasing the
patch size.

- 3) Run `nnUNet_plan_and_preprocess` with a larger GPU memory budget. This will make nnU-Net plan for larger patch sizes
+ 3) Run `nnUNet_plan_and_preprocess` with a larger GPU memory budget. This will make nnU-Net plan for larger batch sizes
during experiment planning. Doing this can change the patch size, network topology, the batch size as well as the
presence of the U-Net cascade. To run with a different memory budget, you need to specify a different experiment planner, for example
`nnUNet_plan_and_preprocess -t TASK_ID -pl2d None -pl3d ExperimentPlanner3D_v21_32GB` (note that `-pl2d None` will
2 changes: 1 addition & 1 deletion documentation/extending_nnunet.md
@@ -55,7 +55,7 @@ classes are ExperimentPlanner2D_v21 for the 2D U-Net and ExperimentPlanner3D_v21
U-Net cascade. Just like nnUNetTrainers, the ExperimentPlanners inherit from each other, resulting in minimal programming
effort to incorporate changes. Just like with the trainers, simply give your custom ExperimentPlanners a unique name and
save them in some subfolder of nnunet.experiment_planning. You can then specify your class names when running
- `nnUNet_plan_and_preprocess` and nnU-Net will find them automatically. When inheriting form ExperimentPlanners, you **MUST**
+ `nnUNet_plan_and_preprocess` and nnU-Net will find them automatically. When inheriting from ExperimentPlanners, you **MUST**
overwrite the class variables `self.data_identifier` and `self.plans_fname` (just like for example
[here](../nnunet/experiment_planning/alternative_experiment_planning/normalization/experiment_planner_3DUNet_CT2.py)).
If you omit this step the planner will overwrite the plans file and the preprocessed data of the planner it inherits from.
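As a rough sketch of the pattern this passage describes (the class name, data identifier, plans file name and the module path of the baseline planner below are assumptions for illustration; only the two overridden attributes follow the linked CT2 example):

```python
# Hypothetical custom planner; a minimal sketch, assuming the v2.1 baseline planner's
# constructor signature. The two overridden attributes are what keep it from
# overwriting the parent planner's plans file and preprocessed data.
from os.path import join

from nnunet.experiment_planning.experiment_planner_baseline_3DUNet_v21 import ExperimentPlanner3D_v21


class ExperimentPlanner3D_v21_myVariant(ExperimentPlanner3D_v21):
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        super().__init__(folder_with_cropped_data, preprocessed_output_folder)
        self.data_identifier = "nnUNetData_plans_v2.1_myVariant"
        self.plans_fname = join(self.preprocessed_output_folder,
                                "nnUNetPlansv2.1_myVariant_plans_3D.pkl")
```

Saved in a subfolder of nnunet.experiment_planning as described above, it could then be selected with something like `nnUNet_plan_and_preprocess -t TASK_ID -pl3d ExperimentPlanner3D_v21_myVariant`.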
@@ -89,7 +89,7 @@ def get_properties_for_stage(self, current_spacing, original_spacing, original_s

def plan_experiment(self):
use_nonzero_mask_for_normalization = self.determine_whether_to_use_mask_for_norm()
print("Are we using the nonzero maks for normalizaion?", use_nonzero_mask_for_normalization)
print("Are we using the nonzero mask for normalizaion?", use_nonzero_mask_for_normalization)

spacings = self.dataset_properties['all_spacings']
sizes = self.dataset_properties['all_sizes']
2 changes: 1 addition & 1 deletion nnunet/inference/predict_simple.py
@@ -120,7 +120,7 @@ def main():
parser.add_argument('--disable_mixed_precision', default=False, action='store_true', required=False,
help='Predictions are done with mixed precision by default. This improves speed and reduces '
'the required vram. If you want to disable mixed precision you can set this flag. Note '
- 'that yhis is not recommended (mixed precision is ~2x faster!)')
+ 'that this is not recommended (mixed precision is ~2x faster!)')

args = parser.parse_args()
input_folder = args.input_folder
2 changes: 1 addition & 1 deletion nnunet/run/load_pretrained_weights.py
@@ -23,7 +23,7 @@ def load_pretrained_weights(network, fname, verbose=False):

new_state_dict = {}

- # if state dict comes form nn.DataParallel but we use non-parallel model here then the state dict keys do not
+ # if state dict comes from nn.DataParallel but we use non-parallel model here then the state dict keys do not
# match. Use heuristic to make it match
for k, value in pretrained_dict.items():
key = k
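The loop is truncated here by the diff view. As a sketch of the heuristic the comment refers to (an assumed helper, not the verbatim continuation of this file), the usual fix is to strip the `module.` prefix that `nn.DataParallel` prepends to parameter names:

```python
def adapt_dataparallel_keys(pretrained_dict, curr_state_dict_keys):
    """Sketch: remap checkpoint keys saved from an nn.DataParallel model
    ('module.conv...') so they match a non-parallel model ('conv...')."""
    new_state_dict = {}
    for k, value in pretrained_dict.items():
        key = k
        if key not in curr_state_dict_keys and key.startswith("module."):
            key = key[len("module."):]  # drop the DataParallel wrapper prefix
        new_state_dict[key] = value
    return new_state_dict
```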
6 changes: 3 additions & 3 deletions nnunet/training/dataloading/dataset_loading.py
@@ -280,7 +280,7 @@ def generate_train_batch(self):
need_to_pad[d] = self.patch_size[d] - case_all_data.shape[d + 1]

# we can now choose the bbox from -need_to_pad // 2 to shape - patch_size + need_to_pad // 2. Here we
- # define what the upper and lower bound can be to then sample form them with np.random.randint
+ # define what the upper and lower bound can be to then sample from them with np.random.randint
shape = case_all_data.shape[1:]
lb_x = - need_to_pad[0] // 2
ub_x = shape[0] + need_to_pad[0] // 2 + need_to_pad[0] % 2 - self.patch_size[0]
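To make the bound computation concrete, a small worked example with assumed numbers (1D case, ignoring the extra padding the real loader reserves for data augmentation):

```python
import numpy as np

shape_x, patch_x = 100, 128                  # image smaller than the patch along x
need_to_pad_x = max(patch_x - shape_x, 0)    # 28 voxels of padding needed
lb_x = -need_to_pad_x // 2                                           # -14
ub_x = shape_x + need_to_pad_x // 2 + need_to_pad_x % 2 - patch_x    # -14
bbox_x_lb = np.random.randint(lb_x, ub_x + 1)  # only -14 is possible: a centered, padded crop
# with shape_x = 200 instead, need_to_pad_x = 0, lb_x = 0, ub_x = 72,
# and the crop start is sampled uniformly from [0, 72]
```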
@@ -390,7 +390,7 @@ def __init__(self, data, patch_size, final_patch_size, batch_size, oversample_fo
and increase CPU usage. Therefore, I advise you to call unpack_dataset(folder) first, which will unpack all npz
to npy. Don't forget to call delete_npy(folder) after you are done with training?
Why all the hassle? Well the decathlon dataset is huge. Using npy for everything will consume >1 TB and that is uncool
- given that I (Fabian) will have to store that permanently on /datasets and my local computer. With htis strategy all
+ given that I (Fabian) will have to store that permanently on /datasets and my local computer. With this strategy all
data is stored in a compressed format (factor 10 smaller) and only unpacked when needed.
:param data: get this with load_dataset(folder, stage=0). Plug the return value in here and you are g2g (good to go)
:param patch_size: what patch size will this data loader return? it is common practice to first load larger
@@ -528,7 +528,7 @@ def generate_train_batch(self):
assert len(case_all_data.shape) == 3

# we can now choose the bbox from -need_to_pad // 2 to shape - patch_size + need_to_pad // 2. Here we
- # define what the upper and lower bound can be to then sample form them with np.random.randint
+ # define what the upper and lower bound can be to then sample from them with np.random.randint

need_to_pad = self.need_to_pad.copy()
for d in range(2):
2 changes: 1 addition & 1 deletion nnunet/training/model_restore.py
@@ -110,7 +110,7 @@ def load_model_and_checkpoint_files(folder, folds=None, mixed_precision=None, ch
"""
used for if you need to ensemble the five models of a cross-validation. This will restore the model from the
checkpoint in fold 0, load all parameters of the five folds in ram and return both. This will allow for fast
- switching between parameters (as opposed to loading them form disk each time).
+ switching between parameters (as opposed to loading them from disk each time).
This is best used for inference and test prediction
:param folder:
2 changes: 1 addition & 1 deletion nnunet/training/network_training/network_trainer.py
@@ -348,7 +348,7 @@ def load_checkpoint_ram(self, checkpoint, train=True):

new_state_dict = OrderedDict()
curr_state_dict_keys = list(self.network.state_dict().keys())
- # if state dict comes form nn.DataParallel but we use non-parallel model here then the state dict keys do not
+ # if state dict comes from nn.DataParallel but we use non-parallel model here then the state dict keys do not
# match. Use heuristic to make it match
for k, value in checkpoint['state_dict'].items():
key = k
@@ -224,9 +224,9 @@ def validate(self, do_mirroring: bool = True, use_sliding_window: bool = True,
else:
softmax_fname = None

"""There is a problem with python process communication that prevents us from communicating obejcts
"""There is a problem with python process communication that prevents us from communicating objects
larger than 2 GB between processes (basically when the length of the pickle string that will be sent is
- communicated by the multiprocessing.Pipe object then the placeholder (\%i I think) does not allow for long
+ communicated by the multiprocessing.Pipe object then the placeholder (I think) does not allow for long
enough strings (lol). This could be fixed by changing i to l (for long) but that would require manually
patching system python code. We circumvent that problem here by saving softmax_pred to a npy file that will
then be read (and finally deleted) by the Process. save_segmentation_nifti_from_softmax can take either
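A minimal sketch of the workaround this docstring describes, with assumed names and threshold (the real validate() code is more involved):

```python
import numpy as np

def maybe_offload_softmax(softmax_pred, npy_fname):
    """If the softmax array is too large to pickle through a multiprocessing.Pipe
    (~2 GB limit), save it as .npy and hand the worker the file name instead."""
    if np.prod(softmax_pred.shape, dtype=np.int64) > (2e9 / 4 * 0.85):  # float32 bytes, with headroom
        np.save(npy_fname, softmax_pred)
        return npy_fname   # the export worker loads (and later deletes) this file
    return softmax_pred    # small enough to send directly
```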
2 changes: 1 addition & 1 deletion nnunet/training/network_training/nnUNetTrainerV2.py
@@ -170,7 +170,7 @@ def initialize_optimizer_and_scheduler(self):
def run_online_evaluation(self, output, target):
"""
due to deep supervision the return value and the reference are now lists of tensors. We only need the full
- resolution output because this is what we are interested in in the end. The others are ignored
+ resolution output because this is what we are interested in the end. The others are ignored
:param output:
:param target:
:return:
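In code terms the docstring boils down to something like this sketch (assuming, as in nnU-Net's deep supervision setup, that the lists are ordered from full resolution downwards):

```python
def pick_full_resolution(output, target):
    """Keep only the full-resolution prediction/reference pair for online evaluation."""
    return output[0], target[0]
```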
@@ -279,9 +279,9 @@ def validate(self, do_mirroring: bool = True, use_sliding_window: bool = True, s
else:
softmax_fname = None

"""There is a problem with python process communication that prevents us from communicating obejcts
"""There is a problem with python process communication that prevents us from communicating objects
larger than 2 GB between processes (basically when the length of the pickle string that will be sent is
- communicated by the multiprocessing.Pipe object then the placeholder (\%i I think) does not allow for long
+ communicated by the multiprocessing.Pipe object then the placeholder (I think) does not allow for long
enough strings (lol). This could be fixed by changing i to l (for long) but that would require manually
patching system python code. We circumvent that problem here by saving softmax_pred to a npy file that will
then be read (and finally deleted) by the Process. save_segmentation_nifti_from_softmax can take either
6 changes: 3 additions & 3 deletions nnunet/training/network_training/nnUNetTrainerV2_DDP.py
@@ -521,9 +521,9 @@ def validate(self, do_mirroring: bool = True, use_sliding_window: bool = True,
else:
softmax_fname = None

"""There is a problem with python process communication that prevents us from communicating obejcts
"""There is a problem with python process communication that prevents us from communicating objects
larger than 2 GB between processes (basically when the length of the pickle string that will be sent is
- communicated by the multiprocessing.Pipe object then the placeholder (\%i I think) does not allow for long
+ communicated by the multiprocessing.Pipe object then the placeholder (I think) does not allow for long
enough strings (lol). This could be fixed by changing i to l (for long) but that would require manually
patching system python code. We circumvent that problem here by saving softmax_pred to a npy file that will
then be read (and finally deleted) by the Process. save_segmentation_nifti_from_softmax can take either
@@ -641,7 +641,7 @@ def load_checkpoint_ram(self, checkpoint, train=True):

new_state_dict = OrderedDict()
curr_state_dict_keys = list(self.network.state_dict().keys())
- # if state dict comes form nn.DataParallel but we use non-parallel model here then the state dict keys do not
+ # if state dict comes from nn.DataParallel but we use non-parallel model here then the state dict keys do not
# match. Use heuristic to make it match
for k, value in checkpoint['state_dict'].items():
key = k
@@ -116,7 +116,7 @@ def initialize(self, training=True, force_load_plans=False):
def run_online_evaluation(self, output, target):
"""
due to deep supervision the return value and the reference are now lists of tensors. We only need the full
- resolution output because this is what we are interested in in the end. The others are ignored
+ resolution output because this is what we are interested in the end. The others are ignored
:param output:
:param target:
:return:
2 changes: 1 addition & 1 deletion tests/test_steps_for_sliding_window_prediction.py
@@ -28,7 +28,7 @@ def _verify_steps(self, steps, patch_size, image_size, step_size):
str(patch_size), step_size)
target_step_sizes_in_voxels = [i * step_size for i in patch_size]

- # this code is copied form the current implementation. Not ideal, but I don't know hoe else to the the
+ # this code is copied from the current implementation. Not ideal, but I don't know hoe else to the the
# expected num_steps
num_steps = [int(np.ceil((i - k) / j)) + 1 for i, j, k in zip(image_size, target_step_sizes_in_voxels,
patch_size)]
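As a quick sanity check of the formula, with assumed numbers:

```python
import numpy as np

image_size, patch_size, step_size = [256], [128], 0.5   # illustrative values
target_step_sizes_in_voxels = [i * step_size for i in patch_size]      # [64.0]
num_steps = [int(np.ceil((i - k) / j)) + 1
             for i, j, k in zip(image_size, target_step_sizes_in_voxels, patch_size)]
print(num_steps)  # [3] -> window starts at 0, 64 and 128
```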
