
Low accuracy in Sysu about mmt HOT 17 CLOSED

absagargupta commented on August 19, 2024
Low accuracy in Sysu

from mmt.

Comments (17)

yxgeee commented on August 19, 2024

Could you provide more details? For example, how did you split the Market and Sysu datasets? Market as source and Sysu (both RGB & IR) as target?


absagargupta commented on August 19, 2024

Yes, Market as source and Sysu (both RGB and IR) as target.

I also tried using Sysu as the source and a modified Sysu as the target, and even then the accuracy was quite low.


yxgeee commented on August 19, 2024

I think the major reason might be that the RGB and IR images in the Sysu dataset share the same identities, but it is quite difficult to assign overlapping IDs to RGB and IR images with a clustering algorithm. Intuitively, an RGB image and an IR image of the same person are generally far apart in the latent space.


yxgeee commented on August 19, 2024

During inference on the Sysu test set, the trained models are required to identify the same person's IR images given his/her RGB images, and vice versa. So it is important to assign overlapping pseudo labels to RGB and IR images during training, which is hard with the current algorithm.
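To illustrate why clustering tends to split each identity across modalities, here is a minimal sketch using scikit-learn's DBSCAN on toy 2-D features. The real MMT pipeline clusters deep features under k-reciprocal Jaccard distances, so everything below (the feature layout, eps, min_samples) is purely illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Toy stand-ins for encoder features: two identities, each seen in two
# "modalities" whose features are shifted apart (the RGB/IR gap).
id1_rgb = rng.normal([0.0, 0.0], 0.05, size=(20, 2))
id1_ir  = rng.normal([3.0, 0.0], 0.05, size=(20, 2))  # same person, far away
id2_rgb = rng.normal([0.0, 3.0], 0.05, size=(20, 2))
id2_ir  = rng.normal([3.0, 3.0], 0.05, size=(20, 2))
feats = np.concatenate([id1_rgb, id1_ir, id2_rgb, id2_ir])

# Pseudo-label assignment by density clustering (the real pipeline clusters
# re-ranked Jaccard distances; plain Euclidean is enough to show the issue).
labels = DBSCAN(eps=0.5, min_samples=4).fit_predict(feats)

# Each identity is split into two clusters, one per modality, so RGB and IR
# images of the same person never share a pseudo label.
print(len(set(labels) - {-1}))  # 4 clusters instead of 2 identities
```

Because the modality gap is larger than the within-identity spread, no eps can merge a person's RGB and IR samples without also merging different people.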


yxgeee commented on August 19, 2024

I have one idea. You can try loading the RGB images in Sysu as grayscale, so that they might be closer to the IR images in the latent space.


yxgeee commented on August 19, 2024

Remember that if you use grayscale RGB images for training, you should also load them in grayscale when testing. It may also be better to load the source-domain images in grayscale.
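A minimal sketch of the conversion in NumPy (in a PyTorch data loader you would typically achieve the same thing with PIL's `Image.convert('L')` or `torchvision.transforms.Grayscale(num_output_channels=3)` before the usual transforms; the helper name here is made up):

```python
import numpy as np

def to_grayscale_3ch(rgb):
    """Collapse an HxWx3 RGB array to luminance, then replicate it to
    3 channels so a pretrained 3-channel backbone still accepts it."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 weights
    return np.repeat(gray[..., None], 3, axis=-1)

img = np.random.default_rng(0).uniform(0, 255, size=(8, 8, 3))
g = to_grayscale_3ch(img)
assert g.shape == (8, 8, 3)
assert np.allclose(g[..., 0], g[..., 1])  # all three channels identical
```

Keeping three identical channels means no architecture change is needed; the same transform must be applied at both training and test time, as noted above.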


absagargupta commented on August 19, 2024

Yeah, that makes sense. I will use the grayscale+IR images of Sysu as the source and the grayscale+IR images of the modified Sysu as the target; maybe it will work. If it does not, I will try loading grayscale only in both source and target and get back to you with the results.


absagargupta commented on August 19, 2024

I tried training with Sysu (grayscale+IR) as source and (grayscale+IR) as target, and it still gives quite bad results: mAP 0.4%, Rank-1 0.4%, Rank-5 1.9%, and Rank-10 3.4%.
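For reference, numbers like these can be computed as below. This is a simplified single-gallery sketch; the official SYSU-MM01 protocol averages over repeated gallery samplings and camera settings, so the function here is illustrative, not the benchmark code:

```python
import numpy as np

def rank_k_and_map(ranked_matches, ks=(1, 5, 10)):
    """ranked_matches: one boolean array per query, over the gallery
    sorted by descending similarity; True marks a correct match."""
    # CMC Rank-k: fraction of queries with a correct match in the top k.
    cmc = {k: float(np.mean([m[:k].any() for m in ranked_matches])) for k in ks}
    # mAP: mean over queries of average precision at each correct match's rank.
    aps = []
    for m in ranked_matches:
        hits = np.flatnonzero(m)
        aps.append(np.mean((np.arange(len(hits)) + 1) / (hits + 1))
                   if len(hits) else 0.0)
    return cmc, float(np.mean(aps))

# Toy example: 2 queries over a 4-item gallery.
matches = [np.array([False, True, False, True]),
           np.array([True, False, False, False])]
cmc, mAP = rank_k_and_map(matches, ks=(1, 2))
print(cmc, mAP)  # {1: 0.5, 2: 1.0} 0.75
```

An mAP of 0.4% is near chance level, which is consistent with the clustering never linking RGB and IR samples of the same identity.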


Jennifer0329 commented on August 19, 2024

Dear author, I ran the code on Market2Duke and Duke2Market, and the results are similar to those reported in the paper. However, Duke2MSMT17 and Market2MSMT17 show an obvious drop in performance; the best mAP is about 15%. I made no changes to the training code. Could you please give me some advice on what the reasons might be, or show me the details of training on the MSMT17 dataset? Thanks.


yxgeee commented on August 19, 2024

Maybe you can try iters=800 for the MSMT dataset, but I actually did not meet this issue in my training; I guess it may be caused by randomness. Someone else has also discussed this issue with me: #25


Jennifer0329 commented on August 19, 2024

OK, I see. I ran the code on the MSMT17_V2 dataset. You mentioned that DBSCAN-based MMT achieved better performance on MSMT17; have you tried V2+DBSCAN+iters400 on Market2MSMT or Duke2MSMT? What were the results? Thanks.


yxgeee commented on August 19, 2024

I used the MSMT17_V1 dataset; I did not try MSMT17_V2.
Yes, DBSCAN-based MMT achieves better performance than k-means-based MMT in most cases.
See Table 2 in https://arxiv.org/pdf/2006.02713v1.pdf for the results of DBSCAN-based MMT.


Jennifer0329 commented on August 19, 2024

Maybe the drop is due to the version of MSMT17? It confuses me a lot. And when you train on the MSMT17 dataset, you change rho from 1.6e-3 to 0.7e-3, right?

yxgeee commented on August 19, 2024

You could try MSMT17_V1 as a check.
In the SpCL paper, MMT-DBSCAN adopts a constant eps=0.6 instead of rho for training.
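For context on rho versus a constant eps: in DBSCAN-based pipelines of this kind, rho typically picks eps adaptively as the mean of the smallest rho fraction of pairwise (re-ranked) distances, while a constant eps=0.6 skips that step. A hedged sketch of the adaptive rule; the exact formula in the released code may differ:

```python
import numpy as np

def eps_from_rho(dist, rho):
    """Choose DBSCAN's eps as the mean of the smallest `rho` fraction of
    pairwise distances (an adaptive alternative to a constant eps)."""
    tri = np.triu(dist, 1)               # upper triangle: count each pair once
    vals = np.sort(tri[np.nonzero(tri)]) # all pairwise distances, ascending
    top = int(np.round(rho * vals.size))
    return vals[:top].mean()

# Toy symmetric distance matrix for 4 points (6 pairwise distances).
d = np.array([[0., 1., 2., 3.],
              [1., 0., 4., 5.],
              [2., 4., 0., 6.],
              [3., 5., 6., 0.]])
eps = eps_from_rho(d, rho=0.5)
print(eps)  # mean of the 3 smallest of 6 distances: (1+2+3)/3 = 2.0
```

Because the adaptive eps depends on the distance distribution, it can shift with the dataset version, which is one reason a fixed eps is easier to reproduce.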


Jennifer0329 commented on August 19, 2024

Would you please share the MSMT17_V1 download link with me, for research only? I have only got the V2 version. Thanks.


yxgeee commented on August 19, 2024

You could send an email to Prof. Shiliang Zhang for the link.


gyh420 commented on August 19, 2024

Hi, where can I find MSMT17_V2?

