
mlpc-ucsd / boundaryformer


Code for the CVPR 2022 paper: Instance Segmentation with Mask-supervised Polygonal Boundary Transformers

License: Apache License 2.0

Python 93.12% Shell 0.42% C++ 2.45% Cuda 3.90% Dockerfile 0.10% CMake 0.02%

boundaryformer's People

Contributors

alexander-kirillov, apivovarov, bowenc0221, bryant1410, bxiong1202, chenbohua3, chengyangfu, jlazarow, jonmorton, jss367, kondela, lyttonhao, marcszafraniec, maxfrei750, mrparosk, obendidi, patricklabatut, ppwwyyxx, puhuk, rajprateek, raymondcm, rbgirshick, sampepose, superirabbit, theschnitz, tkhe, vkhalidov, wangg12, wat3rbro, wenliangzhao2018


boundaryformer's Issues

Question about GT rasterization agreement percentage

Excellent work! Thanks for the release.
When I ran the test file after compiling the diff ras module, I found that the GT rasterization agreement percentage is only around 60%, while the masks produced by the PyTorch and CUDA versions are almost identical, and the whole training process works fine.

GT Rasterization agreement: 0.594480990116706
Rasterized agreement (tau = 1.0): 0.9998779296875
Gradient agreement (tau = 1.0): 1.0

segmentations = [ann["segmentation"] for ann in annotations]
# ground-truth masks via detectron2's polygon rasterizer
ground_truth_masks = torch.stack([
    rasterize_polygons_within_box([np.array(segmentation[0])], xyxy, RESOLUTION)
    for xyxy, segmentation in zip(xyxys, segmentations)
]).float().to(DEVICE)
# the same polygons through the hard CUDA rasterizer
ground_truth_rasterized = rasterize_instances(HARD_CUDA_RASTERIZER, segmentations, RESOLUTION)
# per-pixel agreement between the two mask tensors
agreement_percentage = torch.count_nonzero(ground_truth_masks == ground_truth_rasterized) / float(ground_truth_masks.numel())
agreement_percentages.append(agreement_percentage.item())
print("GT Rasterization agreement: {0}".format(np.mean(agreement_percentages)))

Does this imply that there is a gap between the masks produced by HARD_CUDA_RASTERIZER and the GT masks? If so, what causes this gap?
Thanks in advance.
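For intuition, the agreement metric above is just a per-pixel comparison of two binary masks. A minimal NumPy sketch (standalone, not the repo's code; `agreement` is a hypothetical helper mirroring the torch expression):

```python
import numpy as np

def agreement(mask_a, mask_b):
    """Fraction of pixels on which two binary masks agree
    (mirrors torch.count_nonzero(a == b) / a.numel())."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    assert mask_a.shape == mask_b.shape
    return np.count_nonzero(mask_a == mask_b) / mask_a.size

# identical masks agree perfectly
a = np.zeros((8, 8), dtype=bool)
a[2:6, 2:6] = True
print(agreement(a, a))  # 1.0

# shifting the square by one pixel only disturbs a boundary band
b = np.zeros((8, 8), dtype=bool)
b[3:7, 2:6] = True
print(agreement(a, b))  # 0.875
```

Even a one-pixel shift keeps agreement high, so an agreement near 0.59 likely points to a systematic difference between the two rasterization paths (e.g. a coordinate or box-normalization convention) rather than boundary aliasing alone.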

Question about getting polygon output

First of all, congrats on the great work! Thanks for the release.

I ran your model using the demo.py file, but I got the segmentation results in pixel (mask) format.
I made the following changes to the demo.py file to get it to run successfully (inside the setup_cfg(args) function):

from boundary_former.config import add_boundaryformer_config
add_boundaryformer_config(cfg)

Could you explain how I should update the demo file, or the code, to get segmentation predictions in polygon format?

Note: I used the following config file and model, respectively:
-> config file link, model.

Best
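In the meantime, one workaround is to trace a polygon from the predicted binary mask after the fact. A minimal sketch (`mask_to_polygon` is a hypothetical helper; it uses a convex hull, so it is only faithful for convex shapes, and cv2.findContours is the general-purpose tool):

```python
import numpy as np

def mask_to_polygon(mask):
    """Approximate a binary mask by a polygon over its foreground pixels.
    Uses Andrew's monotone-chain convex hull, so it only suits convex
    shapes; use cv2.findContours for arbitrary masks."""
    ys, xs = np.nonzero(mask)
    pts = sorted(set(zip(xs.tolist(), ys.tolist())))  # (x, y) pixel coords
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# a filled rectangle reduces to its four corners
mask = np.zeros((10, 12), dtype=bool)
mask[2:6, 3:9] = True
print(mask_to_polygon(mask))  # [(3, 2), (8, 2), (8, 5), (3, 5)]
```

For non-convex instances the hull overestimates the region, so this is only a stopgap until the model's native polygon predictions are exposed from the demo.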

Evaluation results for COCO val

Hi, thanks for your great work! I loaded your pretrained COCO model and ran inference. The results are as follows:

[06/29 21:55:25 d2.engine.defaults]: Evaluation results for coco_2017_val in csv format:
[06/29 21:55:25 d2.evaluation.testing]: copypaste: Task: bbox
[06/29 21:55:25 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[06/29 21:55:25 d2.evaluation.testing]: copypaste: 38.4919,59.4363,41.7748,22.3543,41.2834,50.4875
[06/29 21:55:25 d2.evaluation.testing]: copypaste: Task: segm
[06/29 21:55:25 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[06/29 21:55:25 d2.evaluation.testing]: copypaste: 7.8351,29.3128,1.0689,4.7795,8.2153,10.8173

You can see that the segm AP is much lower than the number in your paper. I also visualized the results using tools/visualize_json_results.py, and the visualizations are also poor. I followed your default hyperparameters, except that I set num-gpus to 1 and BATCH_SIZE to 8. Could you please give some hints on what may be wrong? Thanks in advance!
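As an aside, the "copypaste:" lines in the detectron2 log can be collected into per-task metric dicts for easier comparison with the paper's numbers. A small sketch (`parse_copypaste` is a hypothetical helper, not part of detectron2):

```python
import re

def parse_copypaste(log_lines):
    """Collect detectron2 'copypaste:' evaluation lines into
    {task: {metric: value}}; each task emits a header line then a values line."""
    results, task, header = {}, None, None
    for line in log_lines:
        m = re.search(r"copypaste:\s*(.*)", line)
        if not m:
            continue  # skip non-copypaste log lines
        payload = m.group(1).strip()
        if payload.startswith("Task:"):
            task = payload.split(":", 1)[1].strip()
            header = None
        elif header is None:
            header = [h.strip() for h in payload.split(",")]
        else:
            values = [float(v) for v in payload.split(",")]
            results[task] = dict(zip(header, values))
            header = None
    return results

log = [
    "copypaste: Task: segm",
    "copypaste: AP,AP50,AP75,APs,APm,APl",
    "copypaste: 7.8351,29.3128,1.0689,4.7795,8.2153,10.8173",
]
print(parse_copypaste(log)["segm"]["AP"])  # 7.8351
```

This is only a log-parsing convenience; it does not change the evaluation itself.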
