huanghoujing / alignedreid-re-production-pytorch

Reproduce AlignedReID: Surpassing Human-Level Performance in Person Re-Identification, using PyTorch.

Python 100.00%


alignedreid-re-production-pytorch's Issues

About classification mutual learning

Hello, I followed Deep Mutual Learning to reproduce mutual learning for the classification loss, but it never had any effect. I happened to see this experiment in your paper, so I'd like to ask: is there anything to watch out for when doing mutual learning on the classification loss alone?
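
For context, here is a minimal sketch of the Deep Mutual Learning classification term under discussion, i.e. a KL divergence pulling each network's softmax output toward its peer's. The function name and reduction choice are illustrative, not this repo's code:

import torch.nn.functional as F

def mutual_classification_loss(logits1, logits2):
    # KL(p2 || p1): trains network 1 to match network 2's predictions.
    log_p1 = F.log_softmax(logits1, dim=1)
    p2 = F.softmax(logits2, dim=1).detach()  # peer treated as a fixed target
    return F.kl_div(log_p1, p2, reduction='batchmean')

# Each network minimizes its own cross-entropy plus the KL term toward
# the other network:
#   loss1 = F.cross_entropy(logits1, labels) + mutual_classification_loss(logits1, logits2)
#   loss2 = F.cross_entropy(logits2, labels) + mutual_classification_loss(logits2, logits1)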

Image Pre-processing

How do you set the input image size?
Do you directly resize to 224x224? I find the aspect ratio becomes strange and the image looks distorted.

Or do you crop the image and then resize to 224x224? When I do that, the head and feet of the person disappear.

How do you do data augmentation?
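
For reference, a hedged sketch of a common re-ID preprocessing pipeline; this is an illustration, not this repo's exact transform (the 256x128 size and the normalization values are taken from other issues on this page):

import torchvision.transforms as T

# Person crops are tall, so resizing to a tall rectangle such as 256x128
# preserves the aspect ratio better than a square 224x224 resize; random
# horizontal flip is a common augmentation.
train_transform = T.Compose([
    T.Resize((256, 128)),  # (height, width)
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(mean=[0.486, 0.459, 0.408], std=[0.229, 0.224, 0.225]),
])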

cuhk03's protocol

Hi @huanghoujing, I really appreciate your work and want to use it to reproduce the paper's result on CUHK03. Could you describe how to modify your code to get this result?

Num of gpus for training?

Hi, I am trying to reproduce your results and want to know how many GPUs are used for training.

How to set TWLD and TWGALD?

I see that your test results include
GL + LL + TWGD
GL + LL + TWLD
GL + LL + TWGALD
i.e., at test time one can use the global distance, the local distance, or a combination of the two. Where is this set? Which part of the code needs to be changed?

It seems that setting local_dist_own_hard_sample and llw enables TWGALD? Then how should I set things to test with the local distance only?

Thanks!
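
For orientation, a hedged sketch of how the three test-time distances relate; the variable names are illustrative and the unweighted sum for TWGALD is an assumption, not necessarily this repo's exact combination:

import numpy as np

# Toy query-gallery distance matrices of shape [num_query, num_gallery].
num_query, num_gallery = 4, 10
global_q_g_dist = np.random.rand(num_query, num_gallery)
local_q_g_dist = np.random.rand(num_query, num_gallery)

twgd_dist = global_q_g_dist                     # TWGD: rank by global distance
twld_dist = local_q_g_dist                      # TWLD: rank by local distance
twgald_dist = global_q_g_dist + local_q_g_dist  # TWGALD: rank by the combined distance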

MARS Dataset

Dear @huanghoujing,
Do you have any plan to train your models (i.e., ResNet50) on the MARS data set?
Since the MARS data set has ~1M training images, we expect the performance of the models to increase.

Generate the training/test set of CUHK03 according to AlignedReid paper

Hi Houjing,

I used this code to train on the CUHK03 dataset with the new protocol as you described in the readme, and it performed well! But now I want to split the CUHK03 dataset into training/test sets as the paper describes. I read the code and found that the split information is saved in the file "re_ranking_train_test_split.pkl". Could you please give me some advice on how to generate this split file? Thanks a lot!

About the performance improvement

Hello, a quick question: after adding the local branch to training, is the performance only about one point higher than training the original ResNet50 with the triplet loss alone?

Issue with the number of training-set IDs and ids_per_batch

I found an issue between the number of IDs in the training set and the ids_per_batch setting used during training:
when train_set_ids % ids_per_batch == 1, the last batch of the epoch contains only one ID, so the triplet loss has no negative samples and an error is raised.
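
A minimal sketch of a guard for this failure mode, assuming the hypothetical names num_train_ids and ids_per_batch; the check itself is a suggestion, not this repo's code:

# Hypothetical values: 751 IDs as in Market-1501's training set.
num_train_ids = 751
ids_per_batch = 32

if num_train_ids % ids_per_batch == 1:
    # The last batch of an epoch would contain a single ID, so triplet
    # mining would find no negatives; drop that batch or pick another
    # ids_per_batch so every batch holds at least two IDs.
    raise ValueError(
        'num_train_ids % ids_per_batch == 1: the last batch would contain '
        'a single ID and the triplet loss would have no negatives.')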

source code

Can you send me the source code? My QQ is 1304020120. Thank you.

How to find the trained model?

Following the README, I ran mutual learning training on the combined dataset and ended up with a stdout***.txt, a ckpt.pth, and a tensorboard folder, but I can't find a .pth.tar model file. Does the program not save a model file? Or is the model inside ckpt.pth and needs some steps to extract it?
Thanks. I'm a PyTorch beginner.

how to use the re-ranking?

Python version Re-ranking (Originally from re_ranking)

  1. I have no idea how to use it: what data should be fed to re_ranking?
    I can't understand why the inputs are sized by num_query rather than being features.
    code API:
    q_g_dist: query-gallery distance matrix, numpy array, shape [num_query, num_gallery]
    q_q_dist: query-query distance matrix, numpy array, shape [num_query, num_query]
    g_g_dist: gallery-gallery distance matrix, numpy array, shape [num_gallery, num_gallery]

  2. Is it applied during training, or only at test time?

Thanks! Answers in English or Chinese are both welcome.
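
For illustration, a hedged usage sketch matching the API quoted above; the feature arrays are random stand-ins, and re_ranking is assumed to be importable from the repo's Python re-ranking module:

import numpy as np
from scipy.spatial.distance import cdist

# Random stand-ins for extracted features: [num_images, feat_dim].
query_feat = np.random.rand(100, 2048)
gallery_feat = np.random.rand(500, 2048)

# The three pairwise Euclidean distance matrices the API expects.
q_g_dist = cdist(query_feat, gallery_feat)    # [num_query, num_gallery]
q_q_dist = cdist(query_feat, query_feat)      # [num_query, num_query]
g_g_dist = cdist(gallery_feat, gallery_feat)  # [num_gallery, num_gallery]

# re_r_q_g_dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)
# Re-ranking is a test-time step: rank gallery images by the re-ranked
# query-gallery distances during evaluation; it is not part of training.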

Runtime error:

When I run the code following the readme, I get: Runtime error: cuda runtime error (30): unknown error at /pytorch/aten/src/THC/THCTensorRandom.cu:25. Has anyone else encountered this?

train_ml.py -d '((0,),(0,))' Error

When running train_ml.py on a single GPU (-d '((0,),(0,))'), the following error occurs. Where could the problem be? Thanks!

Ep 1, 37.78s, gp 23.85%, gm 17.38%, gd_ap 13.3094, gd_an 12.2454, gL 1.4902, gdmL 1.1416, loss 2.6318
Exception in thread Thread-6:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "script/experiment/train_ml.py", line 475, in thread_target
normalize_feature=cfg.normalize_feature)
File "./aligned_reid/model/loss.py", line 215, in global_loss
dist_mat, labels, return_inds=True)
File "./aligned_reid/model/loss.py", line 163, in hard_example_mining
dist_mat[is_neg].contiguous().view(N, -1), 1, keepdim=True) #1
RuntimeError: invalid argument 2: size '[32 x -1]' is invalid for input with 1095010584 elements at /pytorch/torch/lib/TH/THStorage.c:37

Why a different result on every test run?

The first time I run the code for testing, I get the correct result. But after that, repeated runs give incorrect results, and the result differs on every test run. Why?
I am confused and want to know how to get the correct result.

Environment:
Python: 3.6
torch: 0.4.0

The code was modified a little around the re_ranking method:

re_r_global_q_g_dist = re_ranking(global_q_g_dist, global_q_q_dist, global_g_g_dist)
re_r_global_local_q_g_dist = re_ranking(re_r_global_q_g_dist, local_q_q_dist, local_g_g_dist)
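
One hedged possibility (an assumption, not a confirmed diagnosis): run-to-run differences often come from unseeded RNGs or non-deterministic cuDNN kernels. A minimal sketch of pinning them down before feature extraction:

import random
import numpy as np
import torch

seed = 0
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False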

Loss issue after porting

Hi!
I ported the code to Python 3.6 + PyTorch 0.4. During training, loss.py raises an error.
When images with the same ID appear in a batch, the computed is_pos is no longer a diagonal matrix:
is_pos = labels.expand(N, N).eq(labels.expand(N, N).t())
is_neg = labels.expand(N, N).ne(labels.expand(N, N).t())
which then makes the following code fail:
dist_ap, relative_p_inds = torch.max(
    dist_mat[is_pos_test].contiguous().view(N, -1), 1, keepdim=True)  # error here
Error message:
RuntimeError: invalid argument 2: size '[16 x -1]' is invalid for input with 18 elements at ..\src\TH\THStorage.c:37

May I ask whether I can simply give is_pos a batch-sized diagonal matrix, or whether a non-diagonal is_pos is acceptable in this code?
When I force is_pos to always be a diagonal matrix, with only the global loss, the loss value is very low right from the start. What could be the problem?
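
To illustrate the reshape constraint behind this error, a toy example with invented labels (not this repo's code): the masked-select-then-view pattern only works when every row of is_pos contains the same number of positives, which a K-images-per-ID sampler normally guarantees:

import torch

# With 2 IDs x 2 images each, every row of is_pos has exactly 2 positives,
# so the 8 masked elements reshape cleanly into [N, 2].
labels = torch.tensor([0, 0, 1, 1])
N = labels.size(0)
is_pos = labels.expand(N, N).eq(labels.expand(N, N).t())
dist_mat = torch.rand(N, N)
dist_ap, relative_p_inds = torch.max(
    dist_mat[is_pos].contiguous().view(N, -1), 1, keepdim=True)

# With labels = [0, 0, 0, 1] the rows hold 3, 3, 3 and 1 positives
# (10 elements in total), which cannot be viewed as [4, -1]; this is
# the same shape mismatch as the RuntimeError above.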

mutual loss

Have you tried pml, gdml, and ldml separately? It looks like you only used gdml for your current results on Market1501.
Recently I've been having trouble reproducing the results for the mutual classification loss (with my own code) :(
Thanks.

About how the image mean and standard deviation are computed

Hello! A somewhat silly question: line 132 of your train_ml.py sets the image normalization parameters as follows:

    self.scale_im = True
    self.im_mean = [0.486, 0.459, 0.408]
    self.im_std = [0.229, 0.224, 0.225]

How are im_mean and im_std computed? Are they the weighted mean and standard deviation of the pixel values over all images of the three datasets?
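
A hedged note plus a sketch: these values look essentially like the standard ImageNet statistics commonly used with pretrained ResNets, so they may simply be inherited rather than recomputed from the re-ID datasets. If one did want to recompute them, a straightforward per-channel pass looks like this (the function name and paths are illustrative):

import numpy as np
from PIL import Image

def channel_stats(im_paths):
    # Accumulate per-channel sums over pixels scaled to [0, 1].
    total, total_sq, n_pix = np.zeros(3), np.zeros(3), 0
    for p in im_paths:
        im = np.asarray(Image.open(p).convert('RGB'), dtype=np.float64) / 255.
        total += im.reshape(-1, 3).sum(axis=0)
        total_sq += (im.reshape(-1, 3) ** 2).sum(axis=0)
        n_pix += im.shape[0] * im.shape[1]
    mean = total / n_pix
    std = np.sqrt(total_sq / n_pix - mean ** 2)  # Var[x] = E[x^2] - E[x]^2
    return mean, std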

Keys not found in source state_dict and destination state_dict

When I test with the ckpt file saved after training with train.py, I get the following message:

Keys not found in source state_dict: 
	 base.layer3.0.bn1.weight
	 base.layer1.0.bn1.bias
	 base.layer2.1.bn2.running_mean
	 base.layer2.2.conv1.weight
	 base.layer3.3.bn3.running_mean
	 base.layer2.1.bn3.weight
	 base.layer2.0.bn3.running_var
	 base.layer2.3.bn3.running_mean
	 base.layer1.2.bn1.bias
	 base.layer4.2.bn3.running_mean
	 base.layer3.1.conv2.weight
	 base.layer2.0.downsample.1.running_var
	 base.layer3.4.bn3.running_var
	 base.layer3.5.bn2.running_mean
	 base.layer3.2.bn2.running_mean
	 base.layer3.5.bn1.running_var
	 base.layer3.5.bn1.running_mean
	 base.layer1.0.bn2.bias
	 base.layer1.2.bn3.running_var
	 base.layer3.5.bn3.running_var
	 base.layer2.3.bn3.weight
	 base.layer1.0.downsample.1.weight
	 base.layer1.0.conv1.weight
	 base.layer1.1.bn2.bias
	 base.layer3.5.bn2.running_var
	 base.layer1.2.bn2.bias
	 base.layer4.0.bn1.running_var
	 base.layer2.0.bn1.running_mean
	 base.layer2.2.bn3.running_var
	 base.layer1.0.bn3.weight
	 base.layer2.2.bn2.bias
	 base.layer3.3.bn1.bias
	 base.layer3.4.bn3.bias
	 base.layer4.1.conv1.weight
	 base.layer3.1.bn2.bias
	 base.layer4.0.downsample.0.weight
	 base.layer4.0.downsample.1.weight
	 base.layer3.2.bn1.bias
	 base.layer2.1.bn1.weight
	 base.layer1.2.bn1.running_mean
	 base.layer3.0.bn3.running_mean
	 base.layer3.4.conv3.weight
	 base.layer2.1.bn3.bias
	 base.layer3.2.bn1.running_mean
	 base.layer2.2.bn1.bias
	 base.layer2.2.bn3.running_mean
	 base.layer3.1.bn1.running_mean
	 base.layer4.1.bn1.bias
	 base.layer3.2.bn3.running_mean
	 base.layer2.2.bn1.weight
	 base.layer1.1.bn3.bias
	 base.layer2.1.conv3.weight
	 base.layer4.0.downsample.1.running_mean
	 base.layer4.1.conv2.weight
	 base.layer3.2.bn3.running_var
	 base.layer3.1.bn1.bias
	 base.layer3.4.bn1.running_var
	 base.layer4.0.bn1.weight
	 base.layer2.0.bn2.running_mean
	 base.layer3.1.bn2.running_mean
	 base.layer3.3.bn3.running_var
	 base.layer4.1.bn3.running_var
	 base.layer2.2.bn2.running_mean
	 base.layer3.0.bn3.running_var
	 base.layer4.0.bn2.running_mean
	 base.layer4.1.bn2.running_var
	 base.layer2.3.conv2.weight
	 base.layer3.0.bn1.running_mean
	 base.layer1.1.bn1.bias
	 base.layer4.0.bn3.weight
	 base.layer4.2.bn2.weight
	 base.layer2.0.conv2.weight
	 base.layer1.0.bn3.running_mean
	 base.layer1.0.downsample.1.running_mean
	 base.layer2.1.bn1.running_var
	 base.layer3.4.bn2.running_mean
	 base.layer3.1.bn3.weight
	 base.layer3.0.downsample.1.weight
	 base.layer2.3.bn2.running_var
	 base.layer3.1.bn1.running_var
	 base.layer3.0.bn2.weight
	 base.layer4.2.bn3.weight
	 base.layer3.3.conv3.weight
	 base.layer2.2.bn3.weight
	 base.layer4.2.bn1.bias
	 base.layer3.5.bn3.weight
	 base.layer1.2.conv3.weight
	 base.layer3.2.conv3.weight
	 base.layer2.3.bn2.running_mean
	 base.bn1.running_mean
	 base.layer3.3.bn2.weight
	 base.layer3.2.bn2.running_var
	 base.layer2.1.bn2.running_var
	 base.layer1.2.bn2.running_var
	 base.layer1.0.downsample.0.weight
	 base.layer4.2.bn2.running_var
	 base.layer2.1.bn2.weight
	 base.layer2.1.conv2.weight
	 base.layer3.2.bn2.weight
	 base.layer3.0.bn1.running_var
	 base.layer3.3.bn1.running_mean
	 base.layer3.4.bn1.weight
	 base.layer1.1.bn3.running_mean
	 base.layer3.0.bn3.weight
	 base.layer2.2.bn1.running_var
	 base.layer3.5.bn3.running_mean
	 base.layer1.1.bn1.running_mean
	 base.layer2.1.conv1.weight
	 base.conv1.weight
	 base.layer4.0.bn1.bias
	 base.layer3.2.bn2.bias
	 base.layer3.5.conv2.weight
	 base.layer2.0.bn2.running_var
	 base.layer2.3.bn2.bias
	 base.layer2.3.bn3.running_var
	 base.layer2.0.bn1.running_var
	 base.layer2.0.downsample.1.running_mean
	 base.layer4.0.bn3.running_mean
	 base.layer1.0.bn1.running_mean
	 base.layer1.2.bn2.running_mean
	 base.layer4.0.bn2.bias
	 base.layer1.2.bn2.weight
	 base.layer3.1.conv1.weight
	 base.layer3.3.bn3.bias
	 base.layer2.2.conv3.weight
	 base.bn1.bias
	 base.layer3.3.bn2.bias
	 base.layer2.3.bn1.weight
	 base.layer3.0.downsample.1.running_var
	 base.layer3.3.bn1.weight
	 base.layer1.0.conv2.weight
	 base.layer4.1.bn1.weight
	 base.layer3.3.bn3.weight
	 base.layer1.2.conv2.weight
	 base.layer1.0.bn2.weight
	 base.layer2.2.bn1.running_mean
	 base.layer1.0.bn3.bias
	 base.layer3.2.conv1.weight
	 base.layer4.2.bn3.bias
	 base.layer1.1.bn3.running_var
	 base.layer3.4.conv2.weight
	 base.layer1.0.bn3.running_var
	 base.layer4.0.conv3.weight
	 base.layer3.5.bn1.weight
	 base.layer3.1.bn1.weight
	 base.layer3.1.bn2.running_var
	 base.layer4.2.bn2.bias
	 base.layer4.0.downsample.1.bias
	 base.layer4.0.conv1.weight
	 base.layer3.4.bn2.running_var
	 local_conv.weight
	 base.layer3.0.bn2.bias
	 base.layer3.0.conv1.weight
	 base.layer2.2.bn2.running_var
	 base.layer4.2.conv1.weight
	 base.layer3.5.bn2.weight
	 base.layer1.1.bn1.weight
	 base.layer3.0.conv2.weight
	 base.layer4.2.conv2.weight
	 base.layer3.0.downsample.0.weight
	 base.layer3.5.bn2.bias
	 base.layer2.3.bn1.running_var
	 base.layer3.5.conv3.weight
	 base.layer2.0.bn3.running_mean
	 base.layer2.1.bn1.running_mean
	 base.layer1.0.conv3.weight
	 base.layer3.0.downsample.1.bias
	 base.layer4.1.bn3.running_mean
	 base.layer3.4.bn1.bias
	 local_bn.weight
	 base.layer3.0.bn2.running_mean
	 base.layer2.1.bn2.bias
	 base.layer2.3.bn1.running_mean
	 base.layer4.2.bn3.running_var
	 base.layer1.1.bn2.weight
	 base.layer1.1.bn2.running_mean
	 base.layer3.1.conv3.weight
	 base.layer3.1.bn3.running_mean
	 base.layer1.2.bn3.bias
	 base.layer1.1.bn1.running_var
	 base.layer3.4.bn2.weight
	 base.layer1.0.downsample.1.bias
	 base.layer4.2.conv3.weight
	 base.layer3.0.bn3.bias
	 base.layer3.0.conv3.weight
	 base.layer3.4.bn1.running_mean
	 base.layer2.0.bn3.bias
	 base.layer3.3.conv1.weight
	 base.layer2.0.bn3.weight
	 base.layer3.1.bn3.bias
	 base.layer1.0.bn2.running_mean
	 base.layer1.1.conv2.weight
	 base.layer2.0.conv3.weight
	 base.layer1.1.bn3.weight
	 base.layer4.0.conv2.weight
	 base.layer2.1.bn3.running_var
	 base.layer3.3.bn2.running_mean
	 local_bn.bias
	 base.layer2.2.bn2.weight
	 base.layer3.1.bn2.weight
	 base.layer4.1.bn2.weight
	 base.layer4.2.bn1.running_mean
	 base.layer3.5.conv1.weight
	 base.layer4.0.bn2.weight
	 base.layer4.0.bn3.bias
	 base.layer2.3.conv3.weight
	 base.layer3.1.bn3.running_var
	 base.layer2.2.bn3.bias
	 base.layer1.0.bn1.running_var
	 base.layer4.1.bn1.running_var
	 base.layer2.0.downsample.0.weight
	 base.layer4.1.bn3.bias
	 base.layer4.2.bn2.running_mean
	 base.layer2.0.bn2.weight
	 fc.bias
	 base.layer1.2.bn3.weight
	 base.layer1.2.bn1.weight
	 base.bn1.running_var
	 base.layer3.2.bn1.weight
	 base.layer3.0.downsample.1.running_mean
	 base.layer4.1.bn2.running_mean
	 base.layer4.1.bn2.bias
	 base.layer3.5.bn3.bias
	 base.bn1.weight
	 base.layer3.0.bn1.bias
	 base.layer4.1.conv3.weight
	 base.layer3.2.bn3.bias
	 base.layer2.0.bn1.weight
	 base.layer1.0.bn1.weight
	 base.layer2.3.conv1.weight
	 base.layer2.0.conv1.weight
	 base.layer3.4.bn2.bias
	 base.layer3.3.bn2.running_var
	 base.layer2.0.bn1.bias
	 local_conv.bias
	 base.layer3.3.conv2.weight
	 base.layer3.5.bn1.bias
	 base.layer2.0.downsample.1.bias
	 base.layer2.1.bn1.bias
	 base.layer4.1.bn1.running_mean
	 base.layer4.2.bn1.running_var
	 base.layer2.0.downsample.1.weight
	 base.layer3.4.bn3.weight
	 base.layer1.1.bn2.running_var
	 base.layer1.1.conv1.weight
	 local_bn.running_mean
	 base.layer3.4.conv1.weight
	 base.layer4.1.bn3.weight
	 base.layer2.3.bn1.bias
	 base.layer3.4.bn3.running_mean
	 base.layer3.2.conv2.weight
	 base.layer1.2.bn3.running_mean
	 base.layer4.0.downsample.1.running_var
	 base.layer4.0.bn3.running_var
	 base.layer2.3.bn2.weight
	 base.layer4.2.bn1.weight
	 base.layer1.2.bn1.running_var
	 fc.weight
	 base.layer3.2.bn1.running_var
	 base.layer2.0.bn2.bias
	 base.layer3.3.bn1.running_var
	 base.layer2.3.bn3.bias
	 base.layer2.2.conv2.weight
	 base.layer4.0.bn2.running_var
	 base.layer3.0.bn2.running_var
	 base.layer3.2.bn3.weight
	 local_bn.running_var
	 base.layer1.0.bn2.running_var
	 base.layer1.0.downsample.1.running_var
	 base.layer1.2.conv1.weight
	 base.layer2.1.bn3.running_mean
	 base.layer4.0.bn1.running_mean
	 base.layer1.1.conv3.weight
Keys not found in destination state_dict: 
	 scores
	 state_dicts
	 ep

The ckpt file you provided does not trigger this message. Where might the problem be? Any advice would be appreciated, thanks!
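
A hedged reading of the message (an educated guess, not a confirmed fix): the destination keys scores, state_dicts and ep suggest the saved ckpt.pth is a full training checkpoint dict rather than a bare model state_dict, so it would need unpacking before load_state_dict; the index into state_dicts below is an assumption:

import torch

ckpt = torch.load('ckpt.pth', map_location='cpu')
print(ckpt.keys())  # expected: 'state_dicts', 'scores', 'ep', per the message above

# model.load_state_dict(ckpt['state_dicts'][0])  # index 0 is an assumption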

My environment:
Python 3.6.5 :: Anaconda, Inc.
altgraph==0.15
backports.weakref==1.0rc1
bleach==1.5.0
certifi==2018.4.16
future==0.16.0
h5py==2.6.0
html5lib==0.9999999
macholib==1.9
Markdown==2.2.0
numpy==1.11.3
opencv-python==3.2.0.7
pefile==2017.11.5
Pillow==5.1.0
protobuf==3.5.2.post1
PyInstaller==3.3.1
PyYAML==3.12
pyzmq==17.0.0
scikit-learn==0.18.1
scipy==0.18.1
six==1.11.0
tensorboardX==0.8
tensorflow==1.2.0
torch==0.3.1
torchvision==0.2.0
Werkzeug==0.14.1

model_weight.pth

May I ask where model_weight.pth is after training is over? I can't find the final model_weight.pth, only a ckpt.pth, and when I test with ckpt.pth the result is very low.

Can not Understand CMC Code

Hello! I am a student using your code to calculate the CMC, but recently I got stuck on this line in aligned_reid/utils/metrics.py:
matches = (gallery_ids[indices] == query_ids[:, np.newaxis])
I wonder what this line returns, and what gallery_ids[indices] and query_ids[:, np.newaxis] look like. I would appreciate it if you could give me a simple example to explain this line! Thank you very much!
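
A toy example of that line, with invented IDs and rankings (illustrative only): indices holds, for each query, the gallery positions sorted by ascending distance; fancy indexing produces the ranked gallery IDs per query, and broadcasting against the column vector of query IDs marks the rank positions where the IDs match:

import numpy as np

gallery_ids = np.array([3, 1, 2, 1])
query_ids = np.array([1, 2])
indices = np.array([[1, 3, 0, 2],   # gallery ranking for query 0
                    [2, 0, 1, 3]])  # gallery ranking for query 1

print(gallery_ids[indices])
# [[1 1 3 2]
#  [2 3 1 1]]
matches = (gallery_ids[indices] == query_ids[:, np.newaxis])
# [[ True  True False False]
#  [ True False False False]]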

Performance dropped with model trained on the combined dataset (market1501+duke+cuhk03)?

Thanks for the reproduction code!

I trained a model with the triplet loss (using train.py) on the combined training data (Market-1501, Duke, and CUHK03), keeping the other parameters the same as the settings for training only on Market-1501. I found that both mAP and rank-1 are ~1% lower than the results of training only on Market-1501. Should I tune the parameters more carefully to get better performance?

BTW, I have reproduced the result of training only on Market-1501 (87.14% rank-1 and 71.36% mAP w/o R.R.).

Thank you!

A question about re-ranking

Hello, I read the code of re-rank.py and one thing confuses me: when searching for the k-nearest neighbors of a probe image, why search among the probe images and the gallery images together? Shouldn't the search be restricted to the gallery images? Once the probe and gallery images are merged, some of the retrieved k-nearest neighbors may come from the probe images. Could you please explain? Thanks!

What does this error mean?

root@omnisky:/raid/wtz/AlignedReID-Re-Production-Pytorch# python script/dataset/transform_cuhk03.py \
--zip_file ~/Dataset/cuhk03/cuhk03_release.zip \
--train_test_partition_file ~/Dataset/cuhk03/re_ranking_train_test_split.pkl \
--save_dir ~/Dataset/cuhk03
Extracting zip file
Saving images 1400/1467
Traceback (most recent call last):
File "script/dataset/transform_cuhk03.py", line 152, in
transform(zip_file, train_test_partition_file, save_dir)
File "script/dataset/transform_cuhk03.py", line 84, in transform
raise RuntimeError('Train/test partition file should be provided.')
RuntimeError: Train/test partition file should be provided.

Assertion Error: During testing with Pre-trained model

Traceback (most recent call last):

File "script/experiment/train.py", line 632, in
main()

File "script/experiment/train.py", line 316, in main
train_set = create_dataset(**cfg.train_set_kwargs)

File "./aligned_reid/dataset/init.py", line 54, in create_dataset
partitions = load_pickle(partition_file)

File "./aligned_reid/utils/utils.py", line 24, in load_pickle
assert osp.exists(path)

AssertionError
Hi, I am getting this error when I try to run the code to test on the Market-1501 dataset. I'm new to PyTorch, so please help. I'm using PyTorch 0.3 and Python 2.7 as directed in the code. Thanks.

The problem of the intersection of person IDs between trainval and gallery

Thank you for sharing the code!

When I run the following code,

if osp.exists(train_test_partition_file):
    train_test_partition = load_pickle(train_test_partition_file)
else:
    raise RuntimeError("Train/test partition file should be provided.")

trainval_pids = set()
query_pids = set()
gallery_pids = set()
for im_type in ['detected', 'labeled']:
    trainval_im_names = train_test_partition[im_type]['train_im_names']
    query_im_names = train_test_partition[im_type]['query_im_names']
    gallery_im_names = train_test_partition[im_type]['gallery_im_names']

    trainval_pids.update(set([parse_im_name(n, 'id') for n in trainval_im_names]))
    query_pids.update(set([parse_im_name(n, 'id') for n in query_im_names]))
    gallery_pids.update(set([parse_im_name(n, 'id') for n in gallery_im_names]))

print(trainval_pids & gallery_pids)
assert query_pids <= gallery_pids
assert trainval_pids.isdisjoint(gallery_pids)

it raises an "AssertionError".
I find that there is an intersection between trainval and gallery: set([1201, 1389]).
The train/test partition file is "re_ranking_train_test_split.pkl", downloaded from the given link.
Looking forward to your reply, thank you.

triplet loss

Hello!
I noticed that the published rank-1 of the triplet loss on Market-1501 in open-reid is around 85%. I ran the source code on one GPU with batch_size 128 and the initial learning rate unchanged; after 150 epochs rank-1 was only a little over 79%. I then halved the initial learning rate, keeping batch_size at 128, and the result was still 79%-something, never passing 80%, which is quite a bit below the published result. With the softmax loss I get 80.5% rank-1 accuracy, only a tiny bit below the published number, yet the triplet loss ends up even lower than the softmax loss. I saw that you have studied the triplet loss in open-reid and posted many issues there, so I'd like to ask for your advice. I'm using Python 2.7.3, pytorch 0.2, and open-reid 0.2.0. This happens not only on Market-1501 but also on CUHK03, where my result is 8 points below the published one, but compared to.
Combining the classification loss with the triplet loss on Market-1501, I also saw a performance drop. Have you solved this?
Thanks!

out of memory in the stage of training

Hi, I always get this error: "RuntimeError: cuda runtime error (2): out of memory at /opt/conda/conda-bld/pytorch_1503961620703/work/torch/lib/THC/generic/THCStorage.cu:66". Is the batch size too big? How can I change it?

Triplet Local Loss

Hi @huanghoujing,

It looks like you have implemented the Triplet Local Loss. If so, you forgot to check it off in the Triplet Local Loss item of the TODO list.

ModuleNotFoundError

When I run the program, I get an error: ModuleNotFoundError: No module named 'aligned_reid'. What is the reason for this? I've only just started with Python and hope this question can be answered. Thank you.
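
A hedged sketch of the usual cause and workaround (an assumption, not an official answer): the aligned_reid package resolves only if the repository root is on Python's import path, so the scripts are typically run from the repository root; alternatively the root can be added manually (the path below is illustrative):

import sys

# Hypothetical repository location; adjust to where the repo was cloned.
sys.path.insert(0, '/path/to/AlignedReID-Re-Production-Pytorch')

import aligned_reid  # should now resolve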

Could you tell me how to translate two words?

I found two words in your code, and no translation I try seems to match your intent:
query
gallery
query-gallery
query-query
gallery-gallery
Could you tell me what these two words mean in your code?
Many thanks!

how to test?

Hello @huanghoujing, I use train_ml.py for training. When training is over, please tell me how to test and how to save the model as model.pth.tar. Thank you!

image input size problem

I don't quite understand why your implementation uses an input size of 256x128 when the original paper uses 224x224. What is the advantage of your choice, and do different image sizes affect the final performance? I hope you can clear this up for me, thanks!
