
ryanaleksander / softened-similarity-learning


Unofficial PyTorch implementation of Unsupervised Person Re-identification via Softened Similarity Learning

Python 100.00%

softened-similarity-learning's People

Contributors

ryanaleksander


softened-similarity-learning's Issues

Train.py

Sorry to bother you with a basic question. Line 25 of train.py contains the following code:
train_loader, memory_table = load_data(config['Train Data'])
I think you intended configparser to read the parameters here, but when I run the program it raises a traceback without any useful message. I know it's a basic question, but I really can't solve it myself. Thank you very much.
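For reference, a minimal sketch of how config.ini is usually read with configparser; the 'Train Data' section name is taken from the issue above, and everything else is an assumption about the repo's layout rather than its actual code:

    import configparser

    config = configparser.ConfigParser()
    found = config.read('config.ini')        # returns the list of files it actually read
    if not found:
        raise FileNotFoundError("config.ini not found; check the path passed to --config")

    train_data = config['Train Data']        # a KeyError here means the section name does not match
    print(dict(train_data))                  # inspect which keys were parsed before calling load_data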

Memory vectors, parts and cameras

Hi! I have a question and hope you can help me. In the Train Data section of your config.ini there are three memory paths, but I don't have memory_vectors. I assumed it is saved after training, yet your code needs to load memory_vectors_path (and memory_parts_path, memory_cameras_path) before training. How can I do this?
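One common way to create an initial memory_vectors file before any training is to extract features with an ImageNet-pretrained backbone. The sketch below is an assumption about the expected format, not the author's exact pipeline: it presumes a ResNet-50 and an existing DataLoader over the training images (e.g. the one returned by load_data), and the file name and tensor layout expected by memory_table.py may differ:

    import torch
    import torchvision

    backbone = torchvision.models.resnet50(pretrained=True)
    backbone.fc = torch.nn.Identity()        # keep the 2048-d pooled feature
    backbone.eval()

    features = []
    with torch.no_grad():
        for images, _ in train_loader:       # train_loader: assumed DataLoader over the training images
            features.append(backbone(images))
    memory_vectors = torch.nn.functional.normalize(torch.cat(features), dim=1)
    torch.save(memory_vectors, 'memory_vectors.pth')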

How to obtain the memory table?

Hi! If I use another dataset, how do I obtain the memory table? I can download your file, but I don't know how to construct the memory table myself. In particular, since we have no ground-truth labels, how do we get the initial pseudo-labels for training?

memory_table

Hi, thanks for your code. Could you please tell me how you obtained the weight files for the vectors and parts?

train error

I just ran python train.py --config 'config.ini'
and got this traceback:
Traceback (most recent call last):
  File "train.py", line 216, in <module>
    main()
  File "train.py", line 26, in main
    train(config['Train'], train_loader, memory_table)
  File "train.py", line 100, in train
    't'), ld=params.getfloat('lambda_d'))
  File "train.py", line 30, in loss_fn
    prob = -ld * torch.log(memory_tb.probability(outputs, targets, t))
  File "/home/customer/rw/softened-similarity-learning-master/memory_table.py", line 55, in probability
    sum_prob = torch.exp(outputs.matmul(self.vectors.T) / t).sum(dim=1)
AttributeError: 'Tensor' object has no attribute 'T'
Thanks!
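For what it's worth, this error usually means an older PyTorch build: the Tensor .T attribute only appeared around version 1.2. Two possible fixes, assuming self.vectors is a 2-D matrix: upgrade PyTorch, or switch line 55 of memory_table.py to .t(), which exists in older releases and is equivalent for 2-D tensors:

    # memory_table.py, line 55: use .t() instead of .T for older PyTorch (assumes self.vectors is 2-D)
    sum_prob = torch.exp(outputs.matmul(self.vectors.t()) / t).sum(dim=1)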

Loss cannot converge

Thanks for your repo. I did not use the memory table file you provided; instead I built the memory table directly from an ImageNet-pretrained ResNet-50. With batch size 32, 30 epochs, and a learning rate of 0.1, the loss stops decreasing once it reaches 60. Did you use any particular optimization techniques to get the loss to converge properly?

Wrong test.py?

Is test.py wrong? The code extracts features from both the bounding_box_test folder and the query folder, and for some people the two folders contain the same pictures. When computing the similarity between query_vector and test_vector, an identical picture of course yields an identical feature. That is why, using a ResNet-50 pretrained only on ImageNet, I already get 90% top-1 accuracy. If my reasoning is right, I hope you can fix this as soon as possible. Thanks!
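For context, the standard Market-1501 protocol avoids exactly this inflation: for each query, gallery images with the same person ID taken by the same camera are excluded before ranking, so an identical picture can never count as a correct match. A minimal sketch of that rule (the function name, arguments, and shapes are assumptions, not the repo's actual test.py):

    import torch

    def rank1(query_feats, query_pids, query_cams, gallery_feats, gallery_pids, gallery_cams):
        # cosine similarity, assuming both feature matrices are L2-normalized
        sims = query_feats @ gallery_feats.t()
        correct = 0
        for i in range(query_feats.size(0)):
            # drop gallery entries that share both person ID and camera with the query
            valid = ~((gallery_pids == query_pids[i]) & (gallery_cams == query_cams[i]))
            best = sims[i][valid].argmax()
            correct += int(gallery_pids[valid][best] == query_pids[i])
        return correct / query_feats.size(0)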

memory_table for another dataset

Hi, sorry to bother you. I have a question and hope you can help me: how can I generate the camera.pth, vectors.pth, and parts.pth files used by the memory table for another dataset (for example, DukeMTMC-reID)?
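A partial sketch for the cameras file: in both Market-1501 and DukeMTMC-reID the camera index is encoded in the image file name (the cX token, e.g. 0001_c2_f0046182.jpg), so a camera tensor can be rebuilt from the image directory; vectors can be initialized from a pretrained backbone as in the sketch further up. The directory layout and the tensor format expected by memory_table.py are assumptions here:

    import os
    import re
    import torch

    def camera_ids(image_dir):
        cams = []
        for name in sorted(os.listdir(image_dir)):
            match = re.search(r'c(\d+)', name)    # e.g. 0001_c2_f0046182.jpg -> camera 2
            if match:
                cams.append(int(match.group(1)))
        return torch.tensor(cams)

    torch.save(camera_ids('DukeMTMC-reID/bounding_box_train'), 'camera.pth')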

the rank1_accuracy

Sorry to bother you! I followed everything but set the batch size to 8 because of my GPU's limited memory, and I got a low rank-1 accuracy of about 42%. Is that accuracy normal for this batch size? Also, the pretrained model you provide seems to be the vector.pth rather than the model's weights.

test.py

I cannot find "test.pth" and "query.pth". Could you help me with this?

test issue

I ran into this issue when running test.py. Could you help me?
File "test.py", line 80, in evaluate
accuracy = (all_labels == test_labels).sum().item() / len(all_labels)
RuntimeError: The size of tensor a (3368) must match the size of tensor b (19732) at non-singleton dimension 0
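Judging from the sizes (3368 is the Market-1501 query count, 19732 the gallery count), one tensor holds a label per query and the other a label per gallery image, so an element-wise comparison cannot work. A hedged sketch of the intended top-1 check (variable names are assumptions, not the repo's actual code):

    # similarity: an assumed (num_query x num_gallery) matrix of query-gallery scores
    predictions = gallery_labels[similarity.argmax(dim=1)]    # best-matching gallery label per query
    accuracy = (predictions == query_labels).float().mean().item()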

training label changed in dataset

Hi, your work helps me a lot, thank you! But I don't understand the following, from line 55 of dataset.py:

"
if self.data == 'train':
label = index
"
line 55 in dataset.py

This seems to replace the actual identity label of the training data.
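If I read the method right, this is intentional: in the unsupervised setting the repo follows, each training image is treated as its own class (instance-level supervision), so the dataset returns the image index as the label and that index addresses the image's slot in the memory table; the real identity labels are only needed at evaluation time. Roughly:

    # dataset.py, line 55 (as quoted above): during training the "label" is just the image index,
    # i.e. image i is its own class, so no ground-truth identity annotation is required
    if self.data == 'train':
        label = index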

rank-1

Hello, did you consider the camera ID when calculating rank-1?
