I have been playing around with the 102a_coco.ipynb notebook for some time. When I tried to use the same notebook to create evaluation results, I found that the unpad function called inside compute_class_AP fails with an empty-tensor error, which I am not able to resolve. Can anyone help me load my data in fastai and use the mAP function as provided in notebook 102a?
This is how my dataset is loaded into a DataBunch:
#@title Sampling functions: { display-mode: "both" }
#@markdown The sampling function defines from which position on the high power field (HPF) a patch should be extracted for training and validation. This function is needed because HPFs are too big to fit into memory (RAM and VRAM), so we randomly sample patches instead.
import numpy as np
from random import randint

def sample_function(y, classes, size, level_dimensions, level):
    width, height = level_dimensions[level]
    if len(y[0]) == 0:  # no annotations on this slide: sample anywhere
        return randint(0, width - size[0]), randint(0, height - size[1])
    else:
        class_id = np.random.choice(classes, 1)[0]  # select a random class
        ids = np.array(y[1]) == class_id  # filter the annotations according to the selected class
        # randomly select one of the filtered annotations as seed for the training patch
        xmin, ymin, _, _ = np.array(y[0])[ids][randint(0, np.count_nonzero(ids) - 1)]
        # add a random offset so the selected annotation is not always centered in the patch
        # (integer division keeps randint's arguments ints)
        xmin += randint(-size[0] // 2, size[0] // 2)
        ymin += randint(-size[1] // 2, size[1] // 2)
        # clamp the patch to the slide boundaries
        xmin, ymin = max(0, int(xmin - size[0] / 2)), max(0, int(ymin - size[1] / 2))
        xmin, ymin = min(xmin, width - size[0]), min(ymin, height - size[1])
        return xmin, ymin
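For context, the sampling function can be exercised in isolation with dummy values; everything in this snippet (box coordinates, labels, slide dimensions) is made up for illustration only:

dummy_boxes = [[100, 100, 150, 150], [800, 900, 850, 950]]  # [xmin, ymin, xmax, ymax]
dummy_labels = [1, 2]
# level_dimensions mimics OpenSlide: one (width, height) tuple per pyramid level
xmin, ymin = sample_function(y=[dummy_boxes, dummy_labels], classes=[1, 2],
                             size=(256, 256), level_dimensions=[(2048, 2048)], level=0)
print(xmin, ymin)  # top-left corner of the sampled 256x256 patch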
#@title Select a training and validation scanner { run: "auto", display-mode: "form" }
def create_wsi_container(annotations_df: pd.DataFrame):
    """Create one SlideContainer per whole-slide image, carrying its boxes and labels."""
    container = []
    for image_name in tqdm(annotations_df["file_name"].unique()):
        image_annos = annotations_df[annotations_df["file_name"] == image_name]
        bboxes = [box for box in image_annos["box"]]
        labels = [label for label in image_annos["cat"]]
        container.append(SlideContainer(image_folder / image_name,
                                        y=[bboxes, labels],
                                        level=res_level,
                                        width=patch_size,
                                        height=patch_size,
                                        sample_func=sample_function))
    return container
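For reference, the code above assumes an annotations DataFrame with file_name, box, cat, and scanner columns; a minimal dummy frame in that shape would look like this (the contents are made up):

import pandas as pd

df = pd.DataFrame({
    "file_name": ["slide_01.tiff", "slide_01.tiff", "slide_02.tiff"],
    "box":       [[10, 20, 60, 70], [100, 120, 150, 170], [5, 5, 55, 55]],  # [xmin, ymin, xmax, ymax]
    "cat":       [1, 2, 1],
    "scanner":   ["Hamamatsu XR", "Hamamatsu XR", "Hamamatsu S360"],
})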
#@markdown Options can also be combined like: Hamamatsu XR, Hamamatsu S360
train_scanner = "Hamamatsu XR" #@param ["Hamamatsu XR", "Hamamatsu S360", "Aperio CS"] {allow-input: true}
val_scanner = "Hamamatsu S360" #@param ["Hamamatsu XR", "Hamamatsu S360", "Aperio CS"] {allow-input: true}
patch_size = 256 #@param [256, 512, 1024]
res_level = 0
train_annos = df[df["scanner"].isin([s.strip() for s in train_scanner.split(",")])]  # strip spaces so combined options like "Hamamatsu XR, Hamamatsu S360" match
train_container = create_wsi_container(train_annos)
val_annos = df[df["scanner"].isin([s.strip() for s in val_scanner.split(",")])]
valid_container = create_wsi_container(val_annos)
#@title Just 50 images are too few for big batch sizes, so let's increase the number by random sampling { run: "auto" }
import numpy as np
train_samples_per_scanner = 1500 #@param {type:"integer"}
val_samples_per_scanner = 500 #@param {type:"integer"}
train_images = list(np.random.choice(train_container, train_samples_per_scanner))
valid_images = list(np.random.choice(valid_container, val_samples_per_scanner))
#@title We build on the [FastAI](https://www.fast.ai/) pipeline. Please take a look at the [documentation](https://docs.fast.ai/) for more information. { run: "auto", display-mode: "form" }
batch_size = 12 #@param {type:"integer"}
#@markdown Let's add some basic data [augmentation](https://docs.fast.ai/vision.augment.html)
do_flip = True #@param {type:"boolean"}
flip_vert = True #@param {type:"boolean"}
max_rotate = 90 #@param {type:"number"}
max_zoom = 1.1 #@param {type:"number"}
max_lighting = 0.2 #@param {type:"number"}
max_warp = 0.2 #@param {type:"number"}
p_affine = 0.75 #@param {type:"number"}
p_lighting = 0.75 #@param {type:"number"}
tfms = get_transforms(do_flip=do_flip,
                      flip_vert=flip_vert,
                      max_rotate=max_rotate,
                      max_zoom=max_zoom,
                      max_lighting=max_lighting,
                      max_warp=max_warp,
                      p_affine=p_affine,
                      p_lighting=p_lighting)
train, valid = ObjectItemListSlide(train_images), ObjectItemListSlide(valid_images)
item_list = ItemLists(".", train, valid)
lls = item_list.label_from_func(lambda x: x.y, label_cls=SlideObjectCategoryList)
lls = lls.transform(tfms, tfm_y=True, size=patch_size)
data = lls.databunch(bs=batch_size, collate_fn=bb_pad_collate, num_workers=0).normalize()
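Before training, it can help to sanity-check one batch from the DataBunch; this uses only the standard fastai v1 API, and the shapes in the comments are what I would expect for patch_size=256 and batch_size=12:

x, y = data.one_batch()
print(x.shape)      # e.g. torch.Size([12, 3, 256, 256])
print(y[0].shape)   # padded bounding boxes per image (from bb_pad_collate)
print(y[1].shape)   # padded class ids per image (0 marks the padding)
data.show_batch(rows=2)  # visual check of patches and boxes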
When I use this data together with a Learner I have trained on it, running the mAP code below raises an error.
def compute_class_AP(model, dl, n_classes, iou_thresh=0.5, detect_thresh=0.05, num_keep=100):
    tps, clas, p_scores = [], [], []
    classes, n_gts = LongTensor(range(n_classes)), torch.zeros(n_classes).long()
    with torch.no_grad():
        for input, target in progress_bar(dl):
            output = model(input)
            for i in range(target[0].size(0)):
                bbox_pred, preds, scores = get_predictions(output, i, detect_thresh)
                tgt_bbox, tgt_clas = unpad(target[0][i], target[1][i])
                ious = IoU_values(bbox_pred, tgt_bbox)
                max_iou, matches = ious.max(1)
                detected = []
                # use j for the prediction index so the image index i is not shadowed
                for j in range_of(preds):
                    if max_iou[j] >= iou_thresh and matches[j] not in detected and tgt_clas[matches[j]] == preds[j]:
                        detected.append(matches[j])
                        tps.append(1)
                    else:
                        tps.append(0)
                clas.append(preds.cpu())
                p_scores.append(scores.cpu())
                n_gts += (tgt_clas.cpu()[:, None] == classes[None, :]).sum(0)
    tps, p_scores, clas = torch.tensor(tps), torch.cat(p_scores, 0), torch.cat(clas, 0)
    fps = 1 - tps
    idx = p_scores.argsort(descending=True)
    tps, fps, clas = tps[idx], fps[idx], clas[idx]
    aps = []
    for cls in range(n_classes):
        tps_cls, fps_cls = tps[clas == cls].float().cumsum(0), fps[clas == cls].float().cumsum(0)
        if tps_cls[-1] != 0:
            precision = tps_cls / (tps_cls + fps_cls + 1e-8)
            recall = tps_cls / (n_gts[cls] + 1e-8)
            aps.append(compute_ap(precision, recall))
        else:
            aps.append(0.)
    return aps
I get this error, which is caused by empty tensors being passed on from the unpad function into the IoU computation.
ERROR:
RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.
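One guard I am considering (a sketch only, not a verified fix) is to skip the IoU matching inside the per-image loop whenever either side is empty, while still counting the remaining predictions as false positives and accumulating the ground-truth counts:

# inside the per-image loop of compute_class_AP, replacing the matching step (a sketch):
bbox_pred, preds, scores = get_predictions(output, i, detect_thresh)
tgt_bbox, tgt_clas = unpad(target[0][i], target[1][i])
if bbox_pred.numel() == 0 or tgt_bbox.numel() == 0:
    # no detections or no ground truth: every prediction is a false positive,
    # but ground-truth counts must still be accumulated for the recall
    tps.extend([0] * len(preds))
    clas.append(preds.cpu())
    p_scores.append(scores.cpu())
    n_gts += (tgt_clas.cpu()[:, None] == classes[None, :]).sum(0)
    continue
ious = IoU_values(bbox_pred, tgt_bbox)
max_iou, matches = ious.max(1)

I have not verified this end to end, so any pointer to the intended way of handling empty targets in the notebook would be appreciated.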