nvlabs / freesolo
FreeSOLO for unsupervised instance segmentation, CVPR 2022
License: Other
Hello, how can I train the model with a custom dataset? And can I run the model in Google Colab? Thank you!
Could you upload the trained model elsewhere?
The current link is no longer available.
I have an instance segmentation dataset, but its label masks are much more accurate than the free masks. I just want to know how you generate the free masks. Also, if I use my own pretrained SOLO model to generate the free masks, is that a bad idea? Please let me know if it's not too inconvenient. Thanks a lot.
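For context, the Free Mask step in the paper extracts mask proposals from self-supervised dense features by treating each spatial location as a query and scoring all other locations by feature similarity. The sketch below is my own rough rendering of that idea, not the repo's actual code; the shapes and cosine normalization are assumptions:

```python
import numpy as np

def free_mask_scores(features):
    """Given dense embeddings of shape (C, H, W), return one soft mask per
    query location: the cosine similarity between that location's feature
    and every other location's feature, reshaped to (H*W, H, W)."""
    C, H, W = features.shape
    flat = features.reshape(C, H * W)
    # L2-normalize each spatial feature so the dot product is cosine similarity
    flat = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-6)
    sim = flat.T @ flat              # (HW, HW) pairwise similarity
    return sim.reshape(H * W, H, W)  # one soft mask per query
```

Thresholding and ranking these soft masks (as the paper does with maskness-style scores) would then yield the coarse free masks.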
Hello, I have read your paper and code, and I would like to ask how to train on my own dataset. My dataset is for defect detection, and I can hardly find similar images in COCO. Following your instructions, I downloaded the pre-trained DenseCL model and used inference_freemask.sh to generate coarse masks, but the generated masks hardly mark the defective areas. Do I need to train my own DenseCL model to generate the coarse masks?
Hi, thanks for your great work.
When I run bash inference_freemask.sh, I get an error: no such file. Could you tell me where I can get this JSON file?
To check that the free masks are roughly accurate, I added the following code to inference_freemask.py after masks = masks.cpu().numpy(). However, the results saved to disk are strange: the foreground is always at the edge of the image.

```python
masks = masks.cpu().numpy()
for ii, mask in enumerate(masks):
    mask_save = mask * 255
    cv2.imwrite('./results/' + img_name + '_' + str(ii) + '.jpg', mask_save)
```
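One thing worth checking before digging into geometry is the dtype handed to `cv2.imwrite`: float or boolean masks can be written in surprising ways. A small helper (mine, not from the repo) that makes the saved values explicit:

```python
import numpy as np

def mask_to_u8(mask):
    """Binarize a float/bool mask and scale it to {0, 255} so it is stored
    as a plain 8-bit grayscale image by cv2.imwrite."""
    return (np.asarray(mask) > 0).astype(np.uint8) * 255
```

Usage would then be `cv2.imwrite(path, mask_to_u8(mask))`. If the foreground still lands at the image edge, the masks may be at the padded network input resolution rather than the original image size, which is a separate issue.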
Hi,
Thank you for your work. I encountered several issues when using the code.
I tried to reproduce the Free Mask results by running the first step of the algorithm on the COCO dataset.
However, I'm unable to get the same results as in the provided json: for example, the embedding vector is not included in the annotations of the provided json. Am I missing something here?
I ran the code with the provided json on the train2017+unlabeled2017 split, but I only get 0.1% mask AP after the first step of FreeSOLO. In particular, I ran the tools/eval_cocoapi.py script for the class-agnostic evaluation. I noticed that only the pairwise loss goes down.
Evaluating the provided final model (in this repo) does not produce the 12.2% AP50 for detection claimed in the paper, but only 9.6%. Do I need to post-process the results before evaluating?
Has anyone been able to reproduce this? Thanks.
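For anyone else attempting the class-agnostic evaluation: the usual recipe is to collapse every category id to a single shared id before running the COCO API, so AP measures localization only. A minimal sketch (the dict layout follows the COCO detection-results format; the helper name is mine):

```python
def to_class_agnostic(results, agnostic_id=1):
    """Map every detection's category_id to one shared id so COCOeval
    scores mask/box quality without penalizing classification."""
    return [{**r, 'category_id': agnostic_id} for r in results]
```

The same remapping has to be applied to the ground-truth annotations, otherwise predictions and GT live in different label spaces and AP collapses toward zero.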
When I run bash test.sh FreeSOLO_R101_30k_pl.pth, I get the following result:
category | AP | category | AP | category | AP |
---|---|---|---|---|---|
person | 0.665 | bicycle | 0.000 | car | 0.000 |
motorcycle | 0.000 | airplane | nan | bus | 0.000 |
train | 0.000 | truck | 0.000 | boat | 0.000 |
traffic light | 0.000 | fire hydrant | 0.000 | stop sign | 0.000 |
parking meter | nan | bench | 0.000 | bird | 0.000 |
cat | 0.000 | dog | 0.000 | horse | 0.000 |
sheep | nan | cow | 0.000 | elephant | 0.000 |
bear | 0.000 | zebra | 0.000 | giraffe | 0.000 |
backpack | 0.000 | umbrella | 0.000 | handbag | 0.000 |
tie | 0.000 | suitcase | nan | frisbee | 0.000 |
skis | 0.000 | snowboard | 0.000 | sports ball | 0.000 |
kite | 0.000 | baseball bat | 0.000 | baseball glove | 0.000 |
skateboard | 0.000 | surfboard | 0.000 | tennis racket | 0.000 |
bottle | 0.000 | wine glass | 0.000 | cup | 0.000 |
fork | 0.000 | knife | nan | spoon | 0.000 |
bowl | 0.000 | banana | 0.000 | apple | 0.000 |
sandwich | 0.000 | orange | nan | broccoli | 0.000 |
carrot | 0.000 | hot dog | nan | pizza | 0.000 |
donut | 0.000 | cake | nan | chair | 0.000 |
couch | 0.000 | potted plant | 0.000 | bed | 0.000 |
dining table | 0.000 | toilet | 0.000 | tv | 0.000 |
laptop | 0.000 | mouse | 0.000 | remote | 0.000 |
keyboard | 0.000 | cell phone | 0.000 | microwave | 0.000 |
oven | 0.000 | toaster | nan | sink | 0.000 |
refrigerator | 0.000 | book | 0.000 | clock | 0.000 |
vase | 0.000 | scissors | nan | teddy bear | 0.000 |
hair drier | nan | toothbrush | nan |
It comes from python tools/eval_cocoapi.py.
Did I miss something? By the way, how can I get visualization images?
Do you think FreeSOLO could be used for segmenting cells and nuclei, which are very small objects, in digital pathology (H&E) images?
I visualized the Free Mask results using some of the images in the paper, but my results differ from those in the paper (Figure 2 and Figure 4): I get more labels that cover the background or only part of an object.
Did you experiment by selecting only the good labels from the results, or by using all of them? Did you get the same result? What should I do?
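If it helps others with the same question: one simple way to experiment with "good labels only" is to threshold the per-mask confidence before using the labels for training. A toy sketch (the threshold value and where the scores come from are my assumptions):

```python
def select_masks(masks, scores, thr=0.5):
    """Keep only the masks whose score passes thr.
    Returns the kept masks and their scores, order preserved."""
    kept = [(m, s) for m, s in zip(masks, scores) if s >= thr]
    return [m for m, _ in kept], [s for _, s in kept]
```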
Hello @WXinlong, why is the number of categories for training on COCO 2 instead of 81?
Hello, thank you very much for releasing the source code.
When I run bash test.sh FreeSOLO_R101_30k_pl.pth, I get the following results. The AP on person is 0.903, but all other categories are 0. Am I missing any key experimental settings?
[04/22 13:53:59] d2.evaluation.evaluator INFO: Inference done 4991/5000. Dataloading: 0.0046 s/iter. Inference: 0.0928 s/iter. Eval: 0.0549 s/iter. Total: 0.1524 s/iter. ETA=0:00:01
[04/22 13:54:00] d2.evaluation.evaluator INFO: Total inference time: 0:12:40.989735 (0.152350 s / iter per device, on 1 devices)
[04/22 13:54:00] d2.evaluation.evaluator INFO: Total inference pure compute time: 0:07:43 (0.092807 s / iter per device, on 1 devices)
[04/22 13:54:05] d2.evaluation.coco_evaluation INFO: Preparing results for COCO format ...
[04/22 13:54:05] d2.evaluation.coco_evaluation INFO: Saving results to training_dir/FreeSOLO_pl/inference/coco_instances_results.json
[04/22 13:54:09] d2.evaluation.coco_evaluation INFO: Evaluating predictions with official COCO API...
[04/22 13:54:55] d2.evaluation.coco_evaluation INFO: Evaluation results for bbox:
| AP | AP50 | AP75 | APs | APm | APl |
|:-----:|:------:|:------:|:-----:|:-----:|:-----:|
| 0.011 | 0.028 | 0.009 | 0.004 | 0.010 | 0.031 |
[04/22 13:54:55] d2.evaluation.coco_evaluation INFO: Per-category bbox AP:
| category | AP | category | AP | category | AP |
|:--------------|:------|:-------------|:------|:---------------|:------|
| person | 0.903 | bicycle | 0.000 | car | 0.000 |
| motorcycle | 0.000 | airplane | 0.000 | bus | 0.000 |
| train | 0.000 | truck | 0.000 | boat | 0.000 |
| traffic light | 0.000 | fire hydrant | 0.000 | stop sign | 0.000 |
| parking meter | 0.000 | bench | 0.000 | bird | 0.000 |
| cat | 0.000 | dog | 0.000 | horse | 0.000 |
| sheep | 0.000 | cow | 0.000 | elephant | 0.000 |
| bear | 0.000 | zebra | 0.000 | giraffe | 0.000 |
| backpack | 0.000 | umbrella | 0.000 | handbag | 0.000 |
| tie | 0.000 | suitcase | 0.000 | frisbee | 0.000 |
| skis | 0.000 | snowboard | 0.000 | sports ball | 0.000 |
| kite | 0.000 | baseball bat | 0.000 | baseball glove | 0.000 |
| skateboard | 0.000 | surfboard | 0.000 | tennis racket | 0.000 |
| bottle | 0.000 | wine glass | 0.000 | cup | 0.000 |
| fork | 0.000 | knife | 0.000 | spoon | 0.000 |
| bowl | 0.000 | banana | 0.000 | apple | 0.000 |
| sandwich | 0.000 | orange | 0.000 | broccoli | 0.000 |
| carrot | 0.000 | hot dog | 0.000 | pizza | 0.000 |
| donut | 0.000 | cake | 0.000 | chair | 0.000 |
| couch | 0.000 | potted plant | 0.000 | bed | 0.000 |
| dining table | 0.000 | toilet | 0.000 | tv | 0.000 |
| laptop | 0.000 | mouse | 0.000 | remote | 0.000 |
| keyboard | 0.000 | cell phone | 0.000 | microwave | 0.000 |
| oven | 0.000 | toaster | 0.000 | sink | 0.000 |
| refrigerator | 0.000 | book | 0.000 | clock | 0.000 |
| vase | 0.000 | scissors | 0.000 | teddy bear | 0.000 |
| hair drier | 0.000 | toothbrush | 0.000 | | |
[04/22 13:56:08] d2.evaluation.coco_evaluation INFO: Evaluation results for segm:
| AP | AP50 | AP75 | APs | APm | APl |
|:-----:|:------:|:------:|:-----:|:-----:|:-----:|
| 0.012 | 0.030 | 0.008 | 0.001 | 0.005 | 0.036 |
[04/22 13:56:08] d2.evaluation.coco_evaluation INFO: Per-category segm AP:
| category | AP | category | AP | category | AP |
|:--------------|:------|:-------------|:------|:---------------|:------|
| person | 0.926 | bicycle | 0.000 | car | 0.000 |
| motorcycle | 0.000 | airplane | 0.000 | bus | 0.000 |
| train | 0.000 | truck | 0.000 | boat | 0.000 |
| traffic light | 0.000 | fire hydrant | 0.000 | stop sign | 0.000 |
| parking meter | 0.000 | bench | 0.000 | bird | 0.000 |
| cat | 0.000 | dog | 0.000 | horse | 0.000 |
| sheep | 0.000 | cow | 0.000 | elephant | 0.000 |
| bear | 0.000 | zebra | 0.000 | giraffe | 0.000 |
| backpack | 0.000 | umbrella | 0.000 | handbag | 0.000 |
| tie | 0.000 | suitcase | 0.000 | frisbee | 0.000 |
| skis | 0.000 | snowboard | 0.000 | sports ball | 0.000 |
| kite | 0.000 | baseball bat | 0.000 | baseball glove | 0.000 |
| skateboard | 0.000 | surfboard | 0.000 | tennis racket | 0.000 |
| bottle | 0.000 | wine glass | 0.000 | cup | 0.000 |
| fork | 0.000 | knife | 0.000 | spoon | 0.000 |
| bowl | 0.000 | banana | 0.000 | apple | 0.000 |
| sandwich | 0.000 | orange | 0.000 | broccoli | 0.000 |
| carrot | 0.000 | hot dog | 0.000 | pizza | 0.000 |
| donut | 0.000 | cake | 0.000 | chair | 0.000 |
| couch | 0.000 | potted plant | 0.000 | bed | 0.000 |
| dining table | 0.000 | toilet | 0.000 | tv | 0.000 |
| laptop | 0.000 | mouse | 0.000 | remote | 0.000 |
| keyboard | 0.000 | cell phone | 0.000 | microwave | 0.000 |
| oven | 0.000 | toaster | 0.000 | sink | 0.000 |
| refrigerator | 0.000 | book | 0.000 | clock | 0.000 |
| vase | 0.000 | scissors | 0.000 | teddy bear | 0.000 |
| hair drier | 0.000 | toothbrush | 0.000 | | |
[04/22 13:56:09] d2.engine.defaults INFO: Evaluation results for coco_2017_val in csv format:
[04/22 13:56:09] d2.evaluation.testing INFO: copypaste: Task: bbox
[04/22 13:56:09] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75,APs,APm,APl
[04/22 13:56:09] d2.evaluation.testing INFO: copypaste: 0.0113,0.0283,0.0085,0.0045,0.0101,0.0311
[04/22 13:56:09] d2.evaluation.testing INFO: copypaste: Task: segm
[04/22 13:56:09] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75,APs,APm,APl
[04/22 13:56:09] d2.evaluation.testing INFO: copypaste: 0.0116,0.0296,0.0079,0.0005,0.0055,0.0362
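As an aside for anyone scripting over these logs: detectron2's "copypaste:" lines are meant to be machine-readable. A small parser sketch (the function name is mine):

```python
def parse_copypaste(lines):
    """Parse detectron2 'copypaste:' log lines into {task: {metric: value}}.
    Expects, per task: a 'Task: <name>' line, then a comma-separated metric
    header line, then a comma-separated values line."""
    rows = [l.split('copypaste:', 1)[1].strip()
            for l in lines if 'copypaste:' in l]
    out, i = {}, 0
    while i < len(rows):
        if rows[i].startswith('Task:'):
            task = rows[i].split(':', 1)[1].strip()
            header = rows[i + 1].split(',')
            values = [float(v) for v in rows[i + 2].split(',')]
            out[task] = dict(zip(header, values))
            i += 3
        else:
            i += 1
    return out
```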
Hello, I get an error when training FreeSOLO:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation...
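This RuntimeError is a generic autograd failure, so without the full stack trace I can only sketch the usual cause: a tensor that autograd saved for the backward pass was later modified in place. A minimal reproduction and fix (assumes PyTorch; not specific to the FreeSOLO code):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.exp(x)   # exp saves its output for the backward pass
# y += 1           # in-place edit of that saved tensor would raise the
#                  # "modified by an inplace operation" RuntimeError on backward
y = y + 1          # out-of-place version allocates a new tensor instead
y.sum().backward()
```

In model code the same problem often hides in `tensor += ...`, `masked_fill_`, or `nn.ReLU(inplace=True)`; replacing the in-place variant with its out-of-place counterpart is the usual fix.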
Hi,
When I run bash test.sh {MODEL_PATH}, passing the weights of the downloaded trained model, I get the following error:
Loading and preparing results...
Traceback (most recent call last):
File "tools/eval_cocoapi.py", line 21, in
cocoDt=cocoGt.loadRes(resFile)
File "python3.7/site-packages/pycocotools/coco.py", line 319, in loadRes
with open(resFile) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'training_dir/FreeSOLO_pl/inference/coco_instances_results.json'
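The missing file usually means inference never finished writing its results (for example the evaluation step crashed earlier, or the output directory differs from what test.sh assumes). A small debugging aid (mine, not part of the repo) to locate any results file that did get written:

```python
import os

def find_results(root='training_dir', name='coco_instances_results.json'):
    """Walk root and return every path whose basename matches name,
    sorted for stable output."""
    hits = []
    for dirpath, _, files in os.walk(root):
        if name in files:
            hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

If this returns nothing, the fix is to rerun inference (and check its log for an earlier error) rather than to rerun eval_cocoapi.py.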