
Comments (24)

huanghoujing avatar huanghoujing commented on June 21, 2024 3

Yeah, there have recently been several interesting papers on arXiv tackling transfer in ReID using GANs. Here is the list I'm aware of.

from alignedreid-re-production-pytorch.

huanghoujing avatar huanghoujing commented on June 21, 2024

Did you train on Duke and then test, or did you test directly with the model trained on Market-1501?

JpHu2017 avatar JpHu2017 commented on June 21, 2024

I applied the model trained on Market-1501 directly to the Duke test set. Can the results really be this much worse?

huanghoujing avatar huanghoujing commented on June 21, 2024

Yes, the differences between datasets are large. That's why some recent papers study how to transfer across datasets more effectively.

JpHu2017 avatar JpHu2017 commented on June 21, 2024

OK, thanks. Have you trained on the Duke dataset? If so, could you send me the model? Much appreciated.

huanghoujing avatar huanghoujing commented on June 21, 2024

You're welcome. You can also train on Duke yourself: when running script/tri_loss/train.py, just select Duke as the dataset with --dataset duke. If you need a trained model, I'll upload one later.

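For reference, the training run described above would look roughly like this. This is a sketch based only on the script path and flag mentioned in this thread; the actual script may require further arguments (GPU selection, log directory, etc.):

```shell
# Train the triplet-loss model on DukeMTMC-reID instead of Market-1501.
# Only the --dataset flag is taken from this thread; check the repo's
# README for the full argument list.
python script/tri_loss/train.py --dataset duke
```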
JpHu2017 avatar JpHu2017 commented on June 21, 2024

ok 😁

huanghoujing avatar huanghoujing commented on June 21, 2024

Hi, the triplet-loss models trained on the three datasets are now available for download, with further performance gains; see this project.

 avatar commented on June 21, 2024

Is yours really that high? Training on Market and testing on Duke, I get rank-1: 33%, mAP: 16.5%. I'll run this code in a bit, thanks!

JpHu2017 avatar JpHu2017 commented on June 21, 2024

A model you train yourself will perform better; you can give it a try.

liangbh6 avatar liangbh6 commented on June 21, 2024

@Simon4john Did you test with the AlignedReID model or with IDE?

 avatar commented on June 21, 2024

@liangbh6 I used IDE. With AlignedReID, testing directly on the Duke dataset gives a rank-1 of about 30%, so I'm puzzled how the OP's numbers are so high.

liangbh6 avatar liangbh6 commented on June 21, 2024

@Simon4john You said you can reach 33%, which is high enough that I suspect I've made a mistake somewhere. I tuned my IDE baseline to 83% on Market-1501, but testing it on Duke only gives around rank-1 27%. Are there any tricks on data augmentation or learning rate that I should be using?

 avatar commented on June 21, 2024

@liangbh6 Are you using the Caffe framework? With Caffe I get 76% on Market, and testing on Duke gives 33%.

 avatar commented on June 21, 2024

@liangbh6 @JpHu2017 The OP's mAP reaches 30.76%, while mine is only 16.5%; I'm also wondering whether my code is wrong.

JpHu2017 avatar JpHu2017 commented on June 21, 2024

30.76% is the result after re-ranking.

liangbh6 avatar liangbh6 commented on June 21, 2024

@Simon4john I'm using PyTorch: training on Market-1501, I get rank-1 83% / mAP 62% on Market, and rank-1 27% / mAP 14% on Duke. Is the gap between frameworks really that large?
@JpHu2017 What were your numbers before re-ranking?
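Since the thread keeps comparing rank-1 and mAP, here is a minimal, self-contained sketch of how those two numbers come out of a query-gallery distance matrix. The `evaluate` function and the toy data below are illustrative only, not the repo's actual evaluation code (which additionally handles camera IDs and junk images):

```python
import numpy as np

def evaluate(dist, q_ids, g_ids):
    """Compute rank-1 accuracy and mAP from a query-gallery distance matrix.

    dist:  (num_query, num_gallery) pairwise distances
    q_ids: identity label of each query image
    g_ids: identity label of each gallery image
    """
    num_q = dist.shape[0]
    rank1_hits = 0
    aps = []
    for i in range(num_q):
        order = np.argsort(dist[i])            # gallery sorted by distance
        matches = g_ids[order] == q_ids[i]     # True where identity matches
        if matches[0]:                         # nearest gallery image correct?
            rank1_hits += 1
        rel = np.where(matches)[0]             # ranks of all true matches
        if len(rel) == 0:
            continue
        # precision at each true-match rank, averaged -> AP for this query
        precisions = [(k + 1) / (pos + 1) for k, pos in enumerate(rel)]
        aps.append(np.mean(precisions))
    return rank1_hits / num_q, float(np.mean(aps))

# toy example: 2 queries, 3 gallery images
dist = np.array([[0.1, 0.9, 0.5],
                 [0.2, 0.8, 0.3]])
q_ids = np.array([1, 2])
g_ids = np.array([1, 2, 2])
r1, mAP = evaluate(dist, q_ids, g_ids)
```

On this toy data, query 0's true match is ranked first while query 1's matches land at ranks 2 and 3, giving rank-1 = 0.5 and mAP = (1.0 + 7/12) / 2 ≈ 0.79.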

 avatar commented on June 21, 2024

@liangbh6 I'm using the Caffe framework, from https://github.com/zhunzhong07/IDE-baseline-Market-1501. I don't know how it behaves in PyTorch.

liangbh6 avatar liangbh6 commented on June 21, 2024

@Simon4john OK, thanks.

 avatar commented on June 21, 2024

Using the triplet loss (PyTorch) provided here, training on Market gives rank-1: 86.4%, mAP: 72.2%, while testing on Duke gives rank-1: 30.1%, mAP: 14.0%.
Likewise, training on Duke gives rank-1: 79.2%, mAP: 61.7%, while testing on Market gives rank-1: 39.4%, mAP: 16.9%.

On the other hand, training IDE in Caffe with the IDE code, I got 76% rank-1 on Market and 33% rank-1 when testing on Duke.

Putting these two cases together, I'm curious: is the difference between frameworks really that large, or is something else going on?

huanghoujing avatar huanghoujing commented on June 21, 2024

I've found that a model scoring higher on the source domain does not necessarily score higher on the target domain; it can even have the opposite effect. This is the unavoidable problem of domain shift. That's not to say a model that scores very high on the source domain is useless; sometimes the training and test distributions really are consistent. But cross-domain transfer needs dedicated study; merely building a strong baseline on the source domain is not enough.

 avatar commented on June 21, 2024

@huanghoujing I completely agree. Learning very well on one dataset does not necessarily improve the generalization of the learned features or their transferability to other datasets. This paper seems to address exactly that problem: Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification (https://arxiv.org/pdf/1711.07027.pdf)

coldgemini avatar coldgemini commented on June 21, 2024

Do the same persons appear across the training sets? Otherwise, why is the cross-domain performance so different?

huanghoujing avatar huanghoujing commented on June 21, 2024

Generally, different training sets do not contain the same persons. E.g., the training sets of Market-1501, DukeMTMC-reID, CUHK03, etc. are independent.
