Comments (7)

giker17 commented on July 20, 2024

After quite a lot of debugging, the problem is finally solved.
The instability comes from the parallel data loading, which changes the order of the image files.
I changed a parameter in the main method:

dataset_kwargs = dict(
        name = name_data,
        resize_h_w = (256, 128),
        scale = True,
        im_mean = [0.486, 0.459, 0.408],
        im_std = [0.229, 0.224, 0.225],
        batch_dims = 'NCHW',
        num_prefetch_threads = 2)

Simply changing the parameter num_prefetch_threads to 1 solves the problem.
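
For what it's worth, the likely mechanism is that with more than one prefetch thread the images can come back in a different order than they were requested, so the extracted features no longer line up with the file list. A tiny stand-alone sketch of the effect (not the repo's actual prefetcher; load_image is a made-up stand-in):

from concurrent.futures import ThreadPoolExecutor
import random
import time

def load_image(path, out):
    # Simulate image I/O whose duration varies from file to file.
    time.sleep(random.uniform(0.0, 0.01))
    out.append(path)  # arrival order is not guaranteed with several workers

paths = ['img_%03d.jpg' % i for i in range(8)]
arrived = []

with ThreadPoolExecutor(max_workers=2) as pool:
    for p in paths:
        pool.submit(load_image, p, arrived)

print(arrived == paths)  # often False with 2 workers; always True with max_workers=1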

huanghoujing commented on July 20, 2024

Hi, can you run it under pytorch 0.3 and python 2.7 and report the results? I haven't tested it on pytorch 0.4 and python 3.

giker17 commented on July 20, 2024

Do the model and algorithms depend on the version of torch?
I ported some code from py2.7 to py3.6, and the core algorithm was not modified.
I think there may be a problem with some setting or somewhere else.
Do you get different results when testing the same data repeatedly in py2.7 and torch 0.3?

I'm not familiar with the code; can you give some suggestions other than changing the environment?
Thx : )

huanghoujing commented on July 20, 2024

Hi, I think image order has nothing to do with test accuracy. The trained model weights are fixed and should give the same result for the same dataset. BTW, did you set final_batch=False during testing? Or do you use cropping or mirroring during testing?
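
To make that concrete: as long as the distance matrix and the identity labels are reordered together, a consistent permutation of the gallery cannot change the score. A toy sketch (not the repo's evaluation code; rank1 here is a made-up helper):

import numpy as np

def rank1(q_g_dist, q_ids, g_ids):
    # Rank-1 accuracy: does the nearest gallery image share the query's identity?
    nearest = np.argmin(q_g_dist, axis=1)
    return float(np.mean(g_ids[nearest] == q_ids))

rng = np.random.RandomState(0)
q_g_dist = rng.rand(5, 10)            # toy query-gallery distance matrix
q_ids = rng.randint(0, 3, size=5)     # toy query identities
g_ids = rng.randint(0, 3, size=10)    # toy gallery identities

perm = rng.permutation(10)            # reorder the gallery consistently
print(rank1(q_g_dist, q_ids, g_ids) == rank1(q_g_dist[:, perm], q_ids, g_ids[perm]))  # True

So if scores do change with the number of prefetch threads, that points to features coming back in a different order than the labels, i.e. the pairing breaking, rather than the ordering itself mattering.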

giker17 commented on July 20, 2024

Hi, I only changed the one parameter mentioned above, num_prefetch_threads; everything else was left unchanged.
final_batch is True, and mirror_type is ['random', 'always', None][2], i.e. None.
I also didn't find any data-augmentation code, which isn't needed during testing anyway.

Changing only num_prefetch_threads gives stable output every time the same data is tested.

And I changed the re-ranking procedure:

  • first re-rank the global distance
  • then re-rank that output together with the local distances

    re_r_global_q_g_dist = re_ranking(global_q_g_dist, global_q_q_dist, global_g_g_dist)
    re_r_global_local_q_g_dist = re_ranking(re_r_global_q_g_dist, local_q_q_dist, local_g_g_dist)

Is there any problem with that?
Also, using only global_q_g_dist without re-ranking gives a similar precision result.
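
Just for comparison, the variant often seen in AlignedReID-style code sums the global and local distance matrices first and then re-ranks once. A sketch only, assuming re_ranking has the signature used above and that a local query-gallery distance (called local_q_g_dist here, not shown in the snippet) is also computed:

# Sketch, reusing the variables from the snippet above; not the repo's exact code.
q_g_dist = global_q_g_dist + local_q_g_dist   # local_q_g_dist assumed available
q_q_dist = global_q_q_dist + local_q_q_dist
g_g_dist = global_g_g_dist + local_g_g_dist
re_r_q_g_dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)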

huanghoujing commented on July 20, 2024

I see no problem in your case now. I ran it with python 2.7 and pytorch 0.3, and repeated testing gives the same results. I ran testing for both triplet loss and triplet loss + mutual learning, and the logs are as follows.

Triplet loss testing run 1

python script/experiment/train.py \
-d '(0,)' \
--dataset market1501 \
--normalize_feature false \
-glw 1 \
-llw 0 \
-idlw 0 \
--only_test true \
--exp_dir exp/test-baseline-run1 \
--model_weight_file exp/AlignedReID-Experiment/not_nf_ohs_gm_0.3_lm_0.3_glw_1_llw_0_idlw_0_lr_0.0002_exp_decay_at_151_total_300/run1/model_weight.pth
Loaded model weights from exp/AlignedReID-Experiment/not_nf_ohs_gm_0.3_lm_0.3_glw_1_llw_0_idlw_0_lr_0.0002_exp_decay_at_151_total_300/run1/model_weight.pth

=========> Test on dataset: market1501 <=========

Extracting feature...
1000/1000 batches done, +3.34s, total 173.39s
Done, 173.78s
Computing global distance...
Done, 2.12s
Computing scores for Global Distance...
[mAP: 71.38%], [cmc1: 87.05%], [cmc5: 94.89%], [cmc10: 96.94%]
Done, 20.22s
Re-ranking...
Done, 147.73s
Computing scores for re-ranked Global Distance...
[mAP: 85.49%], [cmc1: 89.85%], [cmc5: 94.54%], [cmc10: 95.72%]
Done, 22.71s

Triplet loss testing run 2

python script/experiment/train.py \
-d '(1,)' \
--dataset market1501 \
--normalize_feature false \
-glw 1 \
-llw 0 \
-idlw 0 \
--only_test true \
--exp_dir exp/test-baseline-run2 \
--model_weight_file exp/AlignedReID-Experiment/not_nf_ohs_gm_0.3_lm_0.3_glw_1_llw_0_idlw_0_lr_0.0002_exp_decay_at_151_total_300/run1/model_weight.pth
Loaded model weights from exp/AlignedReID-Experiment/not_nf_ohs_gm_0.3_lm_0.3_glw_1_llw_0_idlw_0_lr_0.0002_exp_decay_at_151_total_300/run1/model_weight.pth

=========> Test on dataset: market1501 <=========

Extracting feature...
1000/1000 batches done, +3.12s, total 170.08s
Done, 170.55s
Computing global distance...
Done, 2.25s
Computing scores for Global Distance...
[mAP: 71.38%], [cmc1: 87.05%], [cmc5: 94.89%], [cmc10: 96.94%]
Done, 21.48s
Re-ranking...
Done, 164.47s
Computing scores for re-ranked Global Distance...
[mAP: 85.49%], [cmc1: 89.85%], [cmc5: 94.54%], [cmc10: 95.72%]
Done, 23.64s

Triplet loss + mutual learning testing run 1

python script/experiment/train.py \
-d '(2,)' \
--dataset market1501 \
--normalize_feature false \
-glw 1 \
-llw 0 \
-idlw 0 \
--only_test true \
--exp_dir exp/test-ml-run1 \
--model_weight_file exp/AlignedReID-Experiment/not_nf_ohs_gm_0.3_lm_0.3_glw_1_llw_0_idlw_0_pmlw_0_gdmlw_1_ldmlw_0_lr_0.0002_exp_decay_at_151_total_300/run1/model_weight.pth
Loaded model weights from exp/AlignedReID-Experiment/not_nf_ohs_gm_0.3_lm_0.3_glw_1_llw_0_idlw_0_pmlw_0_gdmlw_1_ldmlw_0_lr_0.0002_exp_decay_at_151_total_300/run1/model_weight.pth

=========> Test on dataset: market1501 <=========

Extracting feature...
1000/1000 batches done, +3.21s, total 167.98s
Done, 168.38s
Computing global distance...
Done, 1.93s
Computing scores for Global Distance...
[mAP: 75.76%], [cmc1: 88.78%], [cmc5: 96.02%], [cmc10: 97.48%]
Done, 23.40s
Re-ranking...
Done, 153.56s
Computing scores for re-ranked Global Distance...
[mAP: 88.28%], [cmc1: 91.92%], [cmc5: 95.52%], [cmc10: 96.38%]
Done, 21.36s

Triplet loss + mutual learning testing run 2

python script/experiment/train.py \
-d '(3,)' \
--dataset market1501 \
--normalize_feature false \
-glw 1 \
-llw 0 \
-idlw 0 \
--only_test true \
--exp_dir exp/test-ml-run2 \
--model_weight_file exp/AlignedReID-Experiment/not_nf_ohs_gm_0.3_lm_0.3_glw_1_llw_0_idlw_0_pmlw_0_gdmlw_1_ldmlw_0_lr_0.0002_exp_decay_at_151_total_300/run1/model_weight.pth
Loaded model weights from exp/AlignedReID-Experiment/not_nf_ohs_gm_0.3_lm_0.3_glw_1_llw_0_idlw_0_pmlw_0_gdmlw_1_ldmlw_0_lr_0.0002_exp_decay_at_151_total_300/run1/model_weight.pth

=========> Test on dataset: market1501 <=========

Extracting feature...
1000/1000 batches done, +2.73s, total 144.90s
Done, 145.43s
Computing global distance...
Done, 1.97s
Computing scores for Global Distance...
[mAP: 75.76%], [cmc1: 88.78%], [cmc5: 96.02%], [cmc10: 97.48%]
Done, 20.28s
Re-ranking...
Done, 139.84s
Computing scores for re-ranked Global Distance...
[mAP: 88.28%], [cmc1: 91.92%], [cmc5: 95.52%], [cmc10: 96.38%]
Done, 21.59s

do-not-stop commented on July 20, 2024

After quite a lot of debugging, the problem is finally solved.
The instability comes from the parallel data loading, which changes the order of the image files.
I changed a parameter in the main method:

dataset_kwargs = dict(
        name = name_data,
        resize_h_w = (256, 128),
        scale = True,
        im_mean = [0.486, 0.459, 0.408],
        im_std = [0.229, 0.224, 0.225],
        batch_dims = 'NCHW',
        num_prefetch_threads = 2)

Simply changing the parameter num_prefetch_threads to 1 solves the problem.

Could you tell me what 'num_prefetch_threads' means?
