
zimmerrol / mask-rcnn-edge-agreement-loss

Stars: 43 | Watchers: 5 | Forks: 8 | Size: 16.06 MB

Reference implementation of "Faster Training of Mask R-CNN by Focusing on Instance Boundaries"

Home Page: https://arxiv.org/abs/1809.07069

License: MIT License

Language: Python 100.00%
Topics: mask-rcnn, instance-segmentation, research, auxiliary-tasks

mask-rcnn-edge-agreement-loss's People

Contributors

flashtek, juliensiems, zimmerrol


mask-rcnn-edge-agreement-loss's Issues

Asking for code.

First of all, your work is definitely of great significance! I want to implement your method in my project, a cascaded Mask R-CNN, but I'm wondering how to apply the edge loss. Is it better to use the loss three times, once per stage, or only in the last stage of the cascaded Mask R-CNN? Do you have suggestions for that? I would really appreciate your reply!
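To make the two options in this question concrete, here is a hypothetical sketch of combining a per-stage edge term in a cascade; the weighting scheme and function name are illustrative assumptions, not a recommendation from the authors:

```python
def total_mask_loss(stage_mask_losses, stage_edge_losses,
                    edge_in_all_stages=True, edge_weight=1.0):
    """stage_*_losses: per-stage scalar losses from the three cascade heads."""
    total = sum(stage_mask_losses)
    if edge_in_all_stages:
        # Option 1: apply the edge agreement term in every cascade stage.
        total += edge_weight * sum(stage_edge_losses)
    else:
        # Option 2: apply it only in the last stage.
        total += edge_weight * stage_edge_losses[-1]
    return total
```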

Performance in correct Mask R-CNN implementation

Hi, thanks for your nice work; I appreciate it a lot.
I have some questions about your implementation. It is based on Mask_RCNN, but I found that this implementation may have many problems which lead to much lower performance than the official implementations (Detectron, maskrcnn-benchmark).
I'm interested in your work and tried to implement your proposed method in maskrcnn-benchmark. The only difference between my implementation and yours is that I use abs() instead of sqrt() to aggregate the edges detected in the X and Y directions, because sqrt() can cause numerical problems; sqrt() and abs() are nearly the same in theory.
I obtained the results below, and my implementation is consistent with the official one.

model                                        loss      box AP   mask AP
Mask R-CNN ResNet-50 (baseline)              -         36.5     33.2
Mask R-CNN ResNet-50 (Edge Agreement Loss)   L1 loss   36.8     33.6
Mask R-CNN ResNet-50 (Edge Agreement Loss)   L2 loss   35.6     33.6

I wonder why I can't obtain the performance gain reported in your paper. Can you provide results obtained from a more accurate implementation?
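To make the sqrt-vs-abs point above concrete, here is a minimal sketch of an edge agreement term that compares Sobel edge maps of the predicted and ground-truth masks (TensorFlow is used purely for illustration; this is a sketch of the idea, not the repository's or the paper's exact implementation, and the function name and norm argument are assumptions):

```python
import tensorflow as tf

def edge_agreement_loss(pred_masks, gt_masks, norm="l1"):
    """pred_masks, gt_masks: [batch, height, width, 1] float tensors in [0, 1]."""
    # tf.image.sobel_edges returns [batch, h, w, channels, 2] (dy, dx responses).
    pred_edges = tf.image.sobel_edges(pred_masks)
    gt_edges = tf.image.sobel_edges(gt_masks)
    # Aggregate the two directional responses with abs() instead of the
    # gradient magnitude sqrt(dy^2 + dx^2), as suggested in this issue.
    pred_mag = tf.reduce_sum(tf.abs(pred_edges), axis=-1)
    gt_mag = tf.reduce_sum(tf.abs(gt_edges), axis=-1)
    diff = pred_mag - gt_mag
    if norm == "l1":
        return tf.reduce_mean(tf.abs(diff))
    return tf.reduce_mean(tf.square(diff))  # L2 variant
```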

Load last weight file with model.find_last()

Thank you very much for the useful implementation.

When I try to load my last weight file with

!python3 top.py train --dataset=/content/drive/'My Drive'/Mask_RCNN/samples/top/dataset --weights=last

It generated another log folder and gave me configuration errors.

I have noticed that you added
if self.config.RUN_NAME: key += "_" + self.config.RUN_NAME
to model.find_last(), which differs from the matterport version. May I ask what the purpose of it is?

To fix my error, I had to remove the newly added code and add dir_name = os.path.join(self.model_dir, dir_names[-2]) as here
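For context, the effect of that extra line can be pictured with a minimal sketch of how the log-directory prefix is built before searching for the latest run (the directory naming and function name here are assumptions, not the repository's exact code). If training was started without RUN_NAME set, the existing log folders will not carry the suffix and the lookup finds nothing, which matches the error described above.

```python
import os

def find_last_log_dir(model_dir, config_name, run_name=None):
    # Build the prefix that log directories are expected to start with.
    key = config_name.lower()
    if run_name:                      # the extra line added in this fork
        key += "_" + run_name
    dir_names = sorted(d for d in next(os.walk(model_dir))[1]
                       if d.startswith(key))
    if not dir_names:
        raise FileNotFoundError(
            "No log directory starting with '{}' in {}".format(key, model_dir))
    return os.path.join(model_dir, dir_names[-1])
```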

Detailed answers would be very helpful.
Thank you in advance

training from scratch

Hi @JulienSiems ,

Thank you very much for a great contribution. I am working to familiarize myself with Mask R-CNN.
Could you guide me on how to run the code? Do I need to start training from scratch?
Is everything the same as the original Mask R-CNN? Did you only add some functions to model.py?
Thank you very much.

Mask R-CNN edge agreement giving poor results compared to original Mask R-CNN

Hi,

I was trying to run Mask R-CNN with edge agreement on my own dataset, but the segmentation results are not as expected. I also ran the original Mask R-CNN on these images and the results are OK except at edges and tricky corners. I trained the network with edge agreement as python3 liver_CT.py train --dataset=/Mask_RCNN/mask-rcnn-edge-agreement-loss-master/dataset/Liver_CT_YIQ/ --weights=imagenet, the same as for the original Mask R-CNN except using the new code.

Can you help me understand whether I have trained correctly, or whether I need to make more changes to train on my own data?

Thanks,
Supriti

How to not waste my annotated file?

Hi, I am working on a project that detects cracks using Mask R-CNN. I want to use your code, but I have already finished annotating my 3,500 images with the VGG annotator and already have my JSON file. Looking into your code, I see you have a .py file for an annotation tool. To train my model with your code, do I need to annotate again? How can I use my .json file with your code? Hoping for a reply. Thank you
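In case it helps, VIA (VGG Image Annotator) JSON exports can usually be parsed directly and rasterized into instance masks, similar to the balloon sample in matterport/Mask_RCNN. The sketch below assumes the common polygon export layout and an illustrative function name; it is not part of this repository:

```python
import json
import numpy as np
import skimage.draw

def load_via_masks(json_path, image_height, image_width):
    """Rasterize VIA polygon annotations into per-image boolean instance masks."""
    with open(json_path) as f:
        annotations = json.load(f)
    masks = {}
    for entry in annotations.values():
        regions = entry["regions"]
        # Older VIA exports store regions as a dict, newer ones as a list.
        if isinstance(regions, dict):
            regions = list(regions.values())
        instance_masks = np.zeros((image_height, image_width, len(regions)), dtype=bool)
        for i, region in enumerate(regions):
            attrs = region["shape_attributes"]
            rr, cc = skimage.draw.polygon(attrs["all_points_y"], attrs["all_points_x"],
                                          (image_height, image_width))
            instance_masks[rr, cc, i] = True
        masks[entry["filename"]] = instance_masks
    return masks
```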
