austingg / mobilenet-v2-caffe

MobileNet-v2 experimental network description for caffe

License: MIT License

caffe cnn inverted-residual-linear-bottleneck mobile mobilenetv2

mobilenet-v2-caffe's Issues

Questions about tuning details

  1. Could you release your solver.prototxt?
  2. The conv before the first BatchNorm uses bias_term: true, while every other conv before a BatchNorm uses bias_term: false. What is the rationale for this?
  3. For the Scale layers you kept the default mult parameters, i.e. 1, 1, 1, 1, whereas the MobileNet v1 Scale layers use 1, 0, 1, 0 or 1, 0, 2, 0, i.e. decay_mult is disabled. Why did you choose this? (See the prototxt sketch after this list.)
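
For readers landing on questions 2 and 3, here is a minimal prototxt sketch (hypothetical layer names, not copied from this repo) of the conv + BatchNorm + Scale stack being discussed, with the MobileNet-v1-style multipliers:

    layer {
      name: "conv_x"            # hypothetical name
      type: "Convolution"
      bottom: "data"
      top: "conv_x"
      convolution_param {
        num_output: 32
        kernel_size: 3
        bias_term: false        # the bias is redundant before BatchNorm + Scale
      }
    }
    layer {
      name: "conv_x/bn"
      type: "BatchNorm"
      bottom: "conv_x"
      top: "conv_x"
    }
    layer {
      name: "conv_x/scale"
      type: "Scale"
      bottom: "conv_x"
      top: "conv_x"
      # The "1, 0, 1, 0" setting from question 3: lr_mult stays 1 but
      # decay_mult is 0, so the per-channel scale and bias get no weight decay.
      param { lr_mult: 1 decay_mult: 0 }
      param { lr_mult: 1 decay_mult: 0 }   # bias; the "1, 0, 2, 0" variant uses lr_mult: 2
      scale_param { bias_term: true }
    }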

Issues about the dataset

Hi, austingg:

Thanks for sharing the MobileNet-v2 network. I want to repeat the experiment, hoping to reproduce the accuracy reported in the paper. I have three questions about the dataset.

  1. Did you train on the ImageNet 2012 dataset only, or did you also use the ImageNet 2016 data?
  2. When resizing the original images, did you resize them to a fixed 256x256 or to 256xN? I hear that the latter may give better accuracy. (A sketch of the two policies follows this list.)
  3. How much does data augmentation help accuracy? Is there any comparison?
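
Regarding question 2, the two resize policies look like this; a minimal Python sketch using PIL (my own illustration, not part of this repo):

    from PIL import Image

    def resize_fixed(img, size=256):
        # Fixed 256x256: simple, but distorts non-square images.
        return img.resize((size, size), Image.BILINEAR)

    def resize_shorter_side(img, target=256):
        # 256xN: scale the shorter side to 256 and keep the aspect ratio,
        # so later 224x224 training crops see undistorted content.
        w, h = img.size
        scale = target / min(w, h)
        return img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)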

The model complexity (MAdds) is larger than in the paper

Hi, I computed the MAdds of your .prototxt and got 313M, which is larger than the 300M stated in the paper. I cannot figure out where the discrepancy comes from. I believe my computation is correct (I have used the same script on other models), and I think your implementation is also correct. Do you have any idea what causes this? Have you computed the MAdds of your implementation yourself? Thanks in advance. (A sketch of the usual accounting follows.)
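
For reference, the usual MAdds accounting for a convolution layer is sketched below in Python (my own version, not the commenter's script). Gaps like 313M vs. 300M often come down to which layers are counted (stem, classifier) or how depthwise grouping is handled:

    def conv_madds(c_in, c_out, k, h_out, w_out, groups=1):
        # Each output element costs k*k*(c_in/groups) multiply-adds.
        return (k * k * c_in // groups) * c_out * h_out * w_out

    # MobileNet-v2 stem: 3x3 conv, stride 2, 224x224x3 -> 112x112x32.
    stem = conv_madds(c_in=3, c_out=32, k=3, h_out=112, w_out=112)
    # A depthwise 3x3 over 112x112x32 (groups == channels) is far cheaper.
    dw = conv_madds(c_in=32, c_out=32, k=3, h_out=112, w_out=112, groups=32)
    print(stem, dw)  # 10838016 3612672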

Inception data augmentation helps.

Is this the data augmentation method mentioned in the "Going Deeper with Convolutions" paper? And is it applied during training or during testing? I currently use only a single crop at the image center, so I get lower accuracy when testing other people's models. I often fail to reach the accuracy reported in papers, so I suspect it is because I use only mirroring to augment the data during training. (A sketch of the Inception-style crop follows.)
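
For context, "Going Deeper with Convolutions" describes sampling patches that cover 8-100% of the image area with aspect ratio in [3/4, 4/3], applied at training time only; here is a Python sketch of that crop (my own paraphrase of the paper, PIL assumed):

    import math
    import random
    from PIL import Image

    def inception_crop(img, out_size=224):
        w, h = img.size
        for _ in range(10):  # retry a few times, then fall back to a center crop
            area = random.uniform(0.08, 1.0) * w * h
            ratio = math.exp(random.uniform(math.log(3 / 4), math.log(4 / 3)))
            cw = round(math.sqrt(area * ratio))
            ch = round(math.sqrt(area / ratio))
            if cw <= w and ch <= h:
                x = random.randint(0, w - cw)
                y = random.randint(0, h - ch)
                patch = img.crop((x, y, x + cw, y + ch))
                return patch.resize((out_size, out_size), Image.BILINEAR)
        s = min(w, h)  # fallback: center crop of the shorter side
        x, y = (w - s) // 2, (h - s) // 2
        return img.crop((x, y, x + s, y + s)).resize((out_size, out_size), Image.BILINEAR)

At test time a single 224x224 center crop (as the commenter uses) is the common comparison point.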
