seathiefwang / mgn-pytorch
Reproduction of paper: Learning Discriminative Features with Multiple Granularities for Person Re-Identification
Hello,
I used a single GTX 1080 (8 GB) and ran demo.sh, and got this error:
usage: main.py [-h] [--nThread NTHREAD] [--cpu] [--nGPU NGPU]
[--datadir DATADIR] [--data_train DATA_TRAIN]
[--data_test DATA_TEST] [--reset] [--epochs EPOCHS]
[--test_every TEST_EVERY] [--batchid BATCHID]
[--batchimage BATCHIMAGE] [--batchtest BATCHTEST] [--test_only]
[--model MODEL] [--loss LOSS] [--act ACT] [--pool POOL]
[--feats FEATS] [--height HEIGHT] [--width WIDTH]
[--num_classes NUM_CLASSES] [--lr LR]
[--optimizer {SGD,ADAM,ADAMAX,RMSprop}] [--momentum MOMENTUM]
[--dampening DAMPENING] [--nesterov] [--beta1 BETA1]
[--beta2 BETA2] [--amsgrad] [--epsilon EPSILON] [--gamma GAMMA]
[--weight_decay WEIGHT_DECAY] [--decay_type DECAY_TYPE]
[--lr_decay LR_DECAY] [--margin MARGIN] [--re_rank]
[--random_erasing] [--probability PROBABILITY]
[--savedir SAVEDIR] [--outdir OUTDIR] [--resume RESUME]
[--save SAVE] [--load LOAD] [--save_models]
[--pre_train PRE_TRAIN]
main.py: error: argument --nThread: expected one argument
How can I solve this?
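The usage message above shows that --nThread takes exactly one value, so this argparse error usually means the flag appeared with nothing after it (for example, a broken line continuation in demo.sh). A minimal reproduction of the argparse behavior, with an illustrative default:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--nThread", type=int, default=4)

# Passing a value parses fine:
args = parser.parse_args(["--nThread", "4"])
print(args.nThread)  # 4

# Passing the flag with no value raises the same
# "expected one argument" error (argparse exits):
try:
    parser.parse_args(["--nThread"])
except SystemExit:
    print("argparse exited: --nThread expected one argument")
```

So the fix is likely to check that every flag in demo.sh is followed by its value on the same logical line.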
Line 34 in 384e820
Hi, thank you for the nice repo!
In lines 33 and 34 of trainer.py there are two step functions, self.scheduler.step() and self.loss.step(). Could you explain the roles of these two functions?
Thanks for your great work.
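Not speaking for the author, but in typical PyTorch training loops scheduler.step() advances the learning-rate schedule once per epoch, while a custom loss module may expose its own step() for per-epoch bookkeeping (e.g. starting a new logging row). A minimal sketch of the scheduler part, with illustrative values:

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
# Halve the learning rate every 60 epochs (illustrative values).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=60, gamma=0.5)

for epoch in range(120):
    # ... training batches with loss.backward() would run here ...
    optimizer.step()       # update weights
    scheduler.step()       # advance the lr schedule once per epoch

# After 120 epochs the lr has been halved twice: 2e-4 -> 1e-4 -> 5e-5
print(optimizer.param_groups[0]["lr"])
```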
When I test your code on the Market dataset using re-ranking, this error happens. Has anyone met the same problem?
Maybe it is for one of the following reasons, but I am not sure:
1) Not enough memory, but that seems unlikely, since I have plenty of it.
2) An unsuitable package version, but according to conda, the version is the latest.
The line highlighted in red should be the one raising the error; I really cannot figure out why I get it. Thanks.
Hello, the extract_feature method of the Trainer class in trainer.py contains this code:

for (inputs, labels) in loader:
    ff = torch.FloatTensor(inputs.size(0), 2048).zero_()
    for i in range(2):
        if i == 1:
            inputs = self.fliphor(inputs)
        input_img = inputs.to(self.device)
        outputs = self.model(input_img)
        f = outputs[0].data.cpu()
        ff = ff + f

Why is feature extraction run twice? What is the purpose of the second pass through outputs = self.model(input_img)?
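A likely reading (an assumption, not confirmed by the author): the second pass feeds the horizontally flipped image, so summing the two outputs pools features over the original and mirrored views, a common test-time augmentation in person re-ID. A minimal sketch with a stand-in backbone (DummyNet is hypothetical, not the repo's model):

```python
import torch
import torch.nn as nn

class DummyNet(nn.Module):
    """Stand-in for the re-ID backbone: maps images to a feature vector."""
    def __init__(self, dim=2048):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(3, dim)

    def forward(self, x):
        v = self.pool(x).flatten(1)   # (B, 3) channel means
        return self.fc(v)             # (B, 2048) feature vector

def fliphor(x):
    # Horizontal flip: reverse the width dimension (N, C, H, W).
    return x.flip(dims=[3])

model = DummyNet().eval()
inputs = torch.randn(4, 3, 384, 128)

with torch.no_grad():
    ff = torch.zeros(inputs.size(0), 2048)
    for i in range(2):
        if i == 1:
            inputs = fliphor(inputs)  # second pass: mirrored image
        ff = ff + model(inputs)       # accumulate features of both views

print(ff.shape)  # torch.Size([4, 2048])
```

Averaging over the flip makes the feature invariant to left/right orientation of the pedestrian, which usually improves retrieval slightly.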
Hi, seathiefwang.
Your work is good, but I have a problem: if I just want to test this model on the CPU, how should I modify demo.sh?
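The argparse usage printed earlier on this page lists --cpu and --test_only flags, so a hedged guess (the exact contents of demo.sh are not shown here, and the dataset path is a placeholder) is to append them to the python invocation inside demo.sh:

```
# hypothetical: add --cpu (and --test_only for evaluation only)
# to the python command inside demo.sh
python main.py --cpu --test_only --datadir /path/to/Market-1501
```

If the script also sets --nGPU, it may need to be removed or set consistently with CPU-only execution.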
How do I set it up to test on the Duke dataset?
I ran your code with a new dataset (MSMT17), but this occurred:
[INFO] Making model...
[INFO] Making loss...
1.000 * CrossEntropy
1.000 * Triplet
[INFO] Epoch: 1 Learning rate: 2.00e-04
[INFO] [1/160] 65/66 [CrossEntropy: 9.6161][Triplet: 3.0369][Total: 12.6529]
Traceback (most recent call last):
File "main.py", line 20, in <module>
trainer.train()
File "G:\code2\MGN-pytorch-master\trainer.py", line 50, in train
loss.backward()
File "D:\Anaconda3\lib\site-packages\torch\tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "D:\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 89, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: The expanded size of the tensor (1) must match the existing size (0) at non-singleton
dimension 0
How can I solve it?
I have 200k class labels, each class with at least 2 images. After training for 20 epochs I found that the CrossEntropyLoss does not decrease.
In your code, you use the 256-d global feature for classification, while the paper says they use the global feature before reduction, which is 2048-d. Is that a mistake, or did you do it for a reason? Also, in your code you use Adam with lr 0.0002, which likewise differs from the paper. Can you tell me why? In my attempts, your settings seem to work better than what the paper describes.
I checked the output of extract_feature and found the dimension is 2048. Why? In the paper it is 256.
Hello, I got the following error when running your network (my environment is Python 3.5, PyTorch 0.4.1):
File "main.py", line 20, in <module>
trainer.train()
File "/opt/share0/luchen/MGN/MGN-pytorch-master/trainer.py", line 49, in train
loss = self.loss(outputs, labels)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/opt/share0/luchen/MGN/MGN-pytorch-master/loss/__init__.py", line 70, in forward
self.log[-1, i] += effective_loss.item()
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCStorage.c:36
Could you kindly help me with this? Thanks!
When I run sh demo.sh:
[INFO] Making loss...
1.000 * CrossEntropy
1.000 * Triplet
[INFO] Epoch: 1 Learning rate: 2.00e-04
[INFO] Test:
Traceback (most recent call last):
File "main.py", line 22, in <module>
trainer.test()
File "/media/data2/chenghj/RE-ID/MGN-pytorch-master/trainer.py", line 74, in test
dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)
File "/media/data2/chenghj/RE-ID/MGN-pytorch-master/utils/re_ranking.py", line 44, in re_ranking
[np.concatenate([q_q_dist, q_g_dist], axis=1),
ValueError: zero-dimensional arrays cannot be concatenated
Please tell me how to correct it. Thanks!
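A guess at the cause (not confirmed): one of the distance matrices passed to re_ranking came out zero-dimensional, e.g. because the query or gallery set was empty or a scalar was produced instead of a matrix. NumPy's concatenate accepts 1-D and higher arrays but refuses 0-d ones, which reproduces the reported error:

```python
import numpy as np

q_g = np.zeros((3, 5))          # normal 2-D distance matrix: OK
scalar = np.array(1.0)          # zero-dimensional array

# 2-D inputs concatenate fine:
out = np.concatenate([q_g, np.zeros((3, 2))], axis=1)
print(out.shape)                # (3, 7)

# A 0-d input reproduces the reported error:
try:
    np.concatenate([scalar, scalar])
except ValueError as e:
    print(e)                    # zero-dimensional arrays cannot be concatenated
```

So it is worth printing q_g_dist.shape, q_q_dist.shape and g_g_dist.shape before the re_ranking call to see which one degenerated.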
Hi, can you reproduce the results with the hyper-parameters proposed in the paper?
For example: I trained 80 epochs with re-ranking, but only got 86.7% mAP and 91.8% rank-1.
During evaluation, we extract the features of both the original images and the horizontally flipped versions, then use the average of these as the final features. What are the benefits of doing this? @seathiefwang
In the MGN-pytorch/utils/functions.py file, while calculating the average precision score, I noticed you take the negation of the distance (y_score = -distmat[i][indices[i]][valid], line 107).
In normal practice, we do not negate the score. Could you please help me understand it?
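A likely explanation (an assumption, not the author's confirmed reasoning): scikit-learn's average_precision_score treats a higher y_score as a more confident positive, whereas a distance is smaller for better matches, so negating the distance turns it into a compatible similarity score. A small illustration with made-up values:

```python
import numpy as np
from sklearn.metrics import average_precision_score

y_true = np.array([1, 0, 1])          # 1 = same identity as the query
dist   = np.array([0.1, 0.2, 0.9])    # smaller distance = better match

# average_precision_score ranks by descending score, so negate
# the distance to make the closest match rank first.
ap = average_precision_score(y_true, -dist)
print(round(ap, 4))                   # 0.8333
```

Without the negation, the farthest gallery image would be ranked first and the AP would be wrong.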
Is this the result after re-ranking?
Thank you for sharing this. I wonder whether the paper has already been released; I did not find any related information on arXiv. Thank you again, and I look forward to your reply.
If I only want to extract the features of a single image, how should I modify the program?