batch_dice for 3d_fullres depends on whether a 3d_cascade_fullres was planned as well #1301
-
Hi, is it correct that batch_dice is set differently for 3d_fullres depending on whether a 3d_cascade_fullres configuration was planned as well?
I was wondering why batch_dice should be set differently if only 3d_fullres is trained (and not 3d_cascade_fullres) in the context of large images, which trigger the cascade during planning. Would this cause different training results for 3d_fullres compared to a training where planning of the cascade is suppressed?
-
Yes, it's intentional.
If no 3d_lowres configuration exists, we can assume that the patch size is similar to the size of the individual images, so computing the dice loss per sample in the batch makes sense: each sample is essentially one whole image. This assumes, of course, that the target metric is the dice score computed for each image independently and then averaged over all images.
Conversely, if a 3d_lowres stage exists, then patch_size << image_size. Thus, we know each patch only shows a fraction of an entire image, and the class distributions in the patches most likely do not reflect the true class distribution in the images. This can cause stability problems, for example when a patch contains no foreground voxels at all for some class, making the per-sample dice for that class ill-defined. Pretending all samples/patches in the batch belong to one large image (batch dice) mitigates this and makes the loss more stable.
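
To illustrate the difference, here is a minimal sketch (not the actual nnU-Net loss implementation) of a soft dice loss with a `batch_dice` switch. It assumes `pred` holds softmax probabilities and `target` is one-hot encoded, both of shape `(B, C, X, Y, Z)`; the function name and signature are made up for this example.

```python
import torch

def soft_dice_loss(pred, target, batch_dice=False, eps=1e-5):
    # pred:   softmax probabilities, shape (B, C, X, Y, Z)
    # target: one-hot ground truth,  shape (B, C, X, Y, Z)
    if batch_dice:
        # Batch dice: treat the whole batch as one large image by summing
        # over the batch axis AND the spatial axes before forming the ratio.
        axes = (0,) + tuple(range(2, pred.ndim))
    else:
        # Per-sample dice: sum only over the spatial axes, compute one dice
        # value per sample and class, then average over the batch.
        axes = tuple(range(2, pred.ndim))

    intersection = (pred * target).sum(dim=axes)
    denominator = pred.sum(dim=axes) + target.sum(dim=axes)
    dice = (2 * intersection + eps) / (denominator + eps)
    return 1 - dice.mean()
```

With `batch_dice=False`, a patch that happens to contain no voxels of some class contributes its own (potentially degenerate) dice term; with `batch_dice=True`, foreground voxels from other patches in the batch keep the denominator well-populated, which is the stabilizing effect described above.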