This repository was archived by the owner on Nov 21, 2023. It is now read-only.

issue with segmentation/detection results #56

Closed

kvananth opened this issue Jan 27, 2018 · 3 comments

Comments

@kvananth

I'm not able to reproduce the segmentation/detection results for the R-101-FPN-2x model:
python2 tools/test_net.py \
    --cfg configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml \
    TEST.WEIGHTS https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl \
    NUM_GPUS 1

The expected segm mAP should be around ~0.35; that's what I get when I score the masks from the MODEL_ZOO for this particular model with the COCO API. However, running the command above gives this instead:
Running per image evaluation...
Evaluate annotation type segm
DONE (t=21.18s).
Accumulating evaluation results...
DONE (t=2.23s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.001
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.001
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.001
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.001
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.002
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.003
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.003
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.005
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.005
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.002

I believe the dataset is correct: it reads coco_2014_minival (5000 images).

I can't see where the bug is. Any help would be appreciated.
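
For reference, this is roughly how I scored the MODEL_ZOO masks with the COCO API (a minimal sketch; the annotation and result file paths are placeholders):

# Sketch: score pre-computed segmentation results with the COCO API.
# File paths below are placeholders, not the exact ones used above.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_minival2014.json')  # ground-truth annotations
coco_dt = coco_gt.loadRes('segm_results.json')            # detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, 'segm')  # 'segm' evaluates masks; use 'bbox' for boxes
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR table shown above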

@rbgirshick
Contributor

When running this command, I get the expected results; for masks:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.364
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.585
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.387
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.166
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.392
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.540
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.303
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.459
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.478
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.266
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.520
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.642

I suggest inspecting the intermediate results produced by the testing code to debug what's going wrong. As a reference, here are the expected detections from the detections.pkl file that is saved at the end of inference: https://www.dropbox.com/s/fua3er045xn1nla/test-coco_2014_minival-generalized_rcnn-detections.pkl?dl=1.
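
Something along these lines can load the saved detections.pkl and compare it against the reference file (a rough sketch; the path is a placeholder and the exact contents depend on the Detectron version):

# Sketch: inspect the detections.pkl written at the end of inference.
# The path is a placeholder; adjust to your OUTPUT_DIR layout.
import pickle

with open('test/coco_2014_minival/generalized_rcnn/detections.pkl', 'rb') as f:
    dets = pickle.load(f)  # on Python 3, encoding='latin1' may be needed for py2 pickles

print(type(dets))
if isinstance(dets, dict):
    for key, value in dets.items():
        size = len(value) if hasattr(value, '__len__') else ''
        print(key, type(value), size)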

@kvananth
Author

I'll look into it. Thanks!

@kvananth
Author

Hey, I was able to fix it. I think the issue was with the annotation files I had been using.

It worked after I downloaded a fresh copy of the COCO dataset.

Cheers!
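
For anyone hitting the same symptom, a quick sanity check on the annotation file before evaluating (a minimal sketch with pycocotools; the path is a placeholder):

# Sketch: sanity-check the minival annotation file before running evaluation.
# The annotation path is a placeholder; adjust to your local layout.
from pycocotools.coco import COCO

coco = COCO('annotations/instances_minival2014.json')
print('images:', len(coco.getImgIds()))        # expect 5000 for coco_2014_minival
print('annotations:', len(coco.getAnnIds()))
print('categories:', len(coco.getCatIds()))    # expect 80 for COCO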
