Comments (15)
It's possible. Please give me some time to find these bugs.
from awesome-semantic-segmentation-pytorch.
Please check the folder /home/yourpcname/.torch/models. Is there a trained model there, for example fcn32s_vgg16_pascal_voc.pth?
Maybe there are some bugs in my code that are hard to find. Please keep this issue open; I will also check the reason.
Thank you for your careful and valuable thinking :+1:.
Thank you for the meaningful code.
In Caffe, the AlexNet weights are obtained by fine-tuning, including the final conv layer; the last layer's convolution is changed to 1x1 x class_number.
I find it strange that almost the same performance is obtained even when using the initial VGG16 weights without any training.
I suspect the weights of the layers are not being updated correctly.
I do not know whether each layer's weights are frozen, or whether the hyperparameters accidentally drove the optimization into a local minimum.
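The "1x1 x class_number" conversion described above can be sketched in PyTorch. This is an illustrative stand-in for an FCN head, not this repo's exact code; channel widths are scaled down (the original FCN head uses 4096):

```python
import torch
import torch.nn as nn

# Minimal sketch of FCN "convolutionalization": the classifier ends in a
# 1x1 convolution with num_classes output channels (21 for PASCAL VOC).
# Channel widths are reduced here for illustration; FCN uses 4096.
num_classes = 21
head = nn.Sequential(
    nn.Conv2d(512, 256, kernel_size=7),          # was a fully connected layer
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, kernel_size=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, num_classes, kernel_size=1),  # 1x1 x class_number
)

x = torch.randn(1, 512, 16, 16)  # a VGG16 conv5-style feature map
out = head(x)
print(tuple(out.shape))          # (1, 21, 10, 10): per-class score maps
```

The output is a coarse per-class score map that FCN then upsamples back to input resolution.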
Hi, my opinions on the above are as follows:

1. Although the change in loss is small across epoch = 1, 10, 100, the difference in prediction results may be large. You can change the parameter `no_val` in `train.py` so that `validation()` runs every epoch. I think `epochs=1000` is large enough, and diminishing improvement after that is normal.
2. There is no problem with Python 2.
3. The error may be caused by parallel training. You can comment out `self.model = DataParallelModel(self.model).cuda()` and `self.criterion = DataParallelCriterion(self.criterion).cuda()`, and remember to move the model and criterion to `cuda()` when defining them. The change `preds, target = tuple(inputs)` requires that `preds` is a list of tensors, such as `[(batch, classes, H, W), (batch, classes, H, W)]`; you can print its type and length. I will test it later and reply to you.
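The type/length check suggested above can be sketched with a small hypothetical helper (not part of the repo): with `DataParallelCriterion`, `preds` should be a list of `(batch, classes, H, W)` tensors, one per GPU, while on a single device it may be a plain tensor.

```python
import torch

# Hypothetical diagnostic for the preds/target unpacking issue: report
# whether `preds` is a single tensor or a per-GPU list of tensors.
def describe_preds(preds):
    if torch.is_tensor(preds):
        return "tensor " + str(tuple(preds.shape))
    return "list of %d tensors %s" % (
        len(preds), [tuple(p.shape) for p in preds])

print(describe_preds(torch.randn(2, 21, 32, 32)))
print(describe_preds([torch.randn(2, 21, 32, 32),
                      torch.randn(2, 21, 32, 32)]))
```

Calling this right before `preds, target = tuple(inputs)` makes the unpacking failure easy to diagnose.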
Hi, thank you for the reply.
Regarding 1: if you evaluate on the training data (resubstitution, i.e. making the training and test data the same), the performance should be high. In practice, however, it is not high, and I feel the training is not going well. The result is much lower than when I implemented this with other frameworks (Caffe and TensorFlow).
This is the output when evaluating 2007_000032.jpg with a network trained on 2007_000032.jpg: validation pixAcc: 96.488%, mIoU: 7.564%.
Hi, thank you for your detailed comparison!
The mIoU value in `score.py` refers to the average mIoU over the 21 categories (for VOC). In this test image only the plane and the background appear, so the mIoU is low due to the calculation method.
If you want to evaluate a single image, you can use the functions `hist_info` and `compute_score` in `score.py`.
Is the visual result worse than the result from Caffe and TF?
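The averaging effect described above can be reproduced with a small confusion-matrix sketch (illustrative, not the repo's `score.py`): if absent classes count as IoU = 0, as the reported 7.5% suggests, then even a perfect prediction on an image containing only background and one object class yields a low 21-class mean IoU.

```python
import numpy as np

# A perfect prediction on an image with only background (0) and one
# object class (1) still gives a low mean IoU over all 21 VOC classes,
# because the 19 absent classes each contribute IoU = 0 to the mean.
num_classes = 21
gt = np.zeros((4, 4), dtype=np.int64)  # background everywhere...
gt[:2, :2] = 1                         # ...except a small object patch
pred = gt.copy()                       # a perfect prediction

hist = np.bincount(num_classes * gt.ravel() + pred.ravel(),
                   minlength=num_classes ** 2).reshape(num_classes, num_classes)
iou = np.diag(hist) / (hist.sum(axis=0) + hist.sum(axis=1)
                       - np.diag(hist) + 1e-10)
print(round(float(iou[0]), 3), round(float(iou[1]), 3))  # both 1.0
print(round(float(iou.mean()), 4))                       # 0.0952 (= 2/21)
```

Averaging only over the classes present in the image (or over the whole validation set at once) avoids this collapse.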
According to the resubstitution result (making the training and test data the same), the accuracy is high even at epoch 10 for fcn32s in Caffe.
It seems that the Caffe test result is better than PyTorch's. I guess this may be caused by:

1. The upsampling method (interpolate in PyTorch vs. deconvolution in Caffe?).
2. The convolution kernel initialization method and base_lr.
3. The parameters of the pretrained base model (VGG) are not fixed during training; how about in Caffe?

I also found that the results are not as good as the paper's. If you find the bug in this code that leads to the worse results, please tell me.
Thank you again for your detailed comparative experiment. I hope to improve this project together with you.
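The upsampling difference mentioned first can be sketched as follows. Caffe-style FCN uses a `Deconvolution` layer initialised with a fixed bilinear kernel, which (unlike `F.interpolate`) can then be fine-tuned during training. This is a sketch of the general technique, not this repo's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Build the fixed bilinear kernel that Caffe-FCN uses to initialise its
# Deconvolution layers: each channel upsamples itself independently.
def bilinear_kernel(channels, k):
    factor = (k + 1) // 2
    center = factor - 1 if k % 2 == 1 else factor - 0.5
    og = torch.arange(k, dtype=torch.float32)
    filt = 1 - (og - center).abs() / factor
    kernel = filt[:, None] * filt[None, :]
    weight = torch.zeros(channels, channels, k, k)
    for c in range(channels):
        weight[c, c] = kernel
    return weight

channels, stride = 21, 2
deconv = nn.ConvTranspose2d(channels, channels, kernel_size=2 * stride,
                            stride=stride, padding=stride // 2, bias=False)
with torch.no_grad():
    deconv.weight.copy_(bilinear_kernel(channels, 2 * stride))

x = torch.rand(1, channels, 8, 8)
up1 = deconv(x)                                   # learnable after init
up2 = F.interpolate(x, scale_factor=2, mode="bilinear")  # fixed operator
print(tuple(up1.shape), tuple(up2.shape))         # both (1, 21, 16, 16)
```

Both produce a 2x upsampled map, but only the transposed convolution's kernel receives gradient updates, which may account for part of the gap.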
Thank you for your reply.

1. I consider it likely to affect performance. (Caffe uses a deconvolution.)
2. Even training from scratch, the performance is higher; I used the same lr.
3. I think 3 is the cause.

The results with the VGG16 weights and with the epoch-60 weights of this code are exactly the same. I feel this code stays fixed at the VGG16 weights. It may be caused by my own environment...
epoch 0 -> base_model (vgg16)
epoch 60
I think there are few people who have taken FCN on Pascal VOC this far with PyTorch. I think it's amazing.
Hi, does the result at `epoch=0` mean no training? Does it use only the pretrained `vgg16` model and the initialization parameters in `_FCNHead`?
`epoch=0` does no training; it uses only the pretrained vgg16 model and the initialization parameters in `_FCNHead`. Some performance is achieved without any training.
I am training with augmented data (I increased the Pascal data 36-fold). In Caffe the augmentation has a good effect, but in PyTorch it has no significant effect. It is hard to believe that augmentation has so little effect, so I suspect there is a problem in the weight updates during training.
Does `eval.py` correctly load the learned model weights? Where is that code? Could the results differ for the same image because different weights are being loaded?
1. I think that just using the pretrained model and the initialization parameters in `_FCNHead` cannot achieve that performance. Has a trained model (such as `epochs=60`) been used by mistake?
2. `class SegmentationDataset()` in `data_loader/segbase.py` includes data augmentation; please see the functions `_sync_transform` and `_val_sync_transform`.
3. Trained-model loading in `eval.py` is implemented by the function `get_segmentation_model()` in `models/model_zoo.py`, which runs `get_fcn32s()` -> https://github.com/Tramac/Awesome-semantic-segmentation-pytorch/blob/ec4882a9e2025fb5c000cb21be8ebac07c09c923/models/fcn.py#L155
   Trained-model loading in `demo.py` is implemented by the function `get_model()` in `models/model_zoo.py`, which runs `get_fcn32s_vgg16_voc()` -> `get_fcn32s()` -> https://github.com/Tramac/Awesome-semantic-segmentation-pytorch/blob/ec4882a9e2025fb5c000cb21be8ebac07c09c923/models/fcn.py#L155

So they should give the same result.
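One way to test the concern above (whether the loaded checkpoint really differs from the VGG16 initialisation) is to diff the two state dicts directly. This is a generic sketch with a stand-in module, not the repo's code:

```python
import torch
import torch.nn as nn

# Compare two checkpoints parameter-by-parameter to see whether
# training actually changed the weights.
def state_dicts_equal(sd_a, sd_b):
    return (sd_a.keys() == sd_b.keys() and
            all(torch.equal(sd_a[k], sd_b[k]) for k in sd_a))

m = nn.Conv2d(3, 8, 3)                                  # stand-in model
before = {k: v.clone() for k, v in m.state_dict().items()}
with torch.no_grad():
    m.weight += 0.01                                    # stand-in for training
print(state_dicts_equal(before, m.state_dict()))        # False: weights moved
```

Running the same comparison on the epoch-0 and epoch-60 checkpoints would show immediately whether the trained weights were ever saved or loaded.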
Understood. My steps were:

1. `git clone https://github.com/Tramac/Awesome-semantic-segmentation-pytorch.git`
2. Prepare the data.
3. `python eval.py`

Why is an evaluation possible at this point? fcn32s can be loaded without any training, and the performance then is, for example: Sample 1450, validation pixAcc: 85.044%, mIoU: 46.411%. I do not understand this.
I understand the reason now. I noticed the run was conflicting with models I had previously collected for PyTorch.