group_loss's Introduction

The Group Loss for Deep Metric Learning

Official PyTorch implementation of The Group Loss for Deep Metric Learning, published at the European Conference on Computer Vision (ECCV) 2020.

Installation

To use our code, first download the repository:

git clone https://github.com/dvl-tum/The_Group_Loss_for_Deep_Metric_Learning.git

To install the dependencies:

pip install -r requirements.txt

Datasets

The code assumes that the CUB-200-2011 dataset is given in the format:

CUB_200_2011/images/001
CUB_200_2011/images/002
CUB_200_2011/images/003
...
CUB_200_2011/images/200

The code assumes that the CARS-196 dataset is given in the format:

CARS/images/001
CARS/images/002
CARS/images/003
...
CARS/images/198

The code assumes that the Stanford Online Products dataset is given in the format:

Stanford/images/00001
Stanford/images/00002
Stanford/images/00003
...
Stanford/images/22634

where 001, 002, ..., N are the IDs of the folders for each class in the dataset.
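
If you need to verify the layout, a minimal sanity check such as the following can help (a hypothetical helper, not part of the repository; adjust the roots and class counts to your setup):

import os

def check_layout(root, n_classes, pad):
    # Every class folder under <root>/images should be a zero-padded class ID.
    image_dir = os.path.join(root, "images")
    for class_id in range(1, n_classes + 1):
        class_dir = os.path.join(image_dir, str(class_id).zfill(pad))
        if not os.path.isdir(class_dir):
            print("missing class folder:", class_dir)

check_layout("CUB_200_2011", n_classes=200, pad=3)  # CUB-200-2011
check_layout("CARS", n_classes=198, pad=3)          # CARS-196, IDs as listed above
check_layout("Stanford", n_classes=22634, pad=5)    # Stanford Online Products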

Training

To train, evaluate, and save a model, run the following command:

python train.py
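
Judging from the issues below, a specific dataset can also be selected with the --dataset_name flag, for example:

python train.py --dataset_name cub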

For convenience, we provide models pretrained on the classification task for the three datasets (CUB-200-2011, CARS-196, and Stanford Online Products), along with ImageNet weights. They can be found at:

net/bn_inception_weights_pt04.pt
net/finetuned_cub_bn_inception.pth
net/finetuned_cars_bn_inception.pth
net/finetuned_Stanford_bn_inception.pth

Please see the file:

train_finetune.py

for details on how to pretrain the networks for the classification task (if you want to use some other type of network). For DenseNets, please email us and we will send you the pretrained networks (bear in mind, though, that the difference in performance is minimal, so you can skip the pretraining).

For convenience (in case you only want to use the networks for feature extraction), we provide networks trained with the Group Loss that reach results similar to those in the paper. They can be found at:

net/trained_cub_bn_inception.pth
net/trained_cars_bn_inception.pth
net/trained_stanford_bn_inception.pth
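
As a hedged sketch of how such a checkpoint could be used for feature extraction (the net.bn_inception module path, constructor name, and class count are assumptions, not verified against the repository):

import torch
from net.bn_inception import bn_inception  # module path and constructor name assumed

model = bn_inception(num_classes=100)  # class count assumed; match the checkpoint
state_dict = torch.load("net/trained_cub_bn_inception.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 227, 227)  # 227x227 input size taken from an issue below
    embeddings = model(images)            # embeddings/logits, depending on the model head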

Citation

If you find this code useful, please consider citing the following paper:

@InProceedings{Elezi_2020_ECCV,
author = {Elezi, Ismail and Vascon, Sebastiano and Torcinovich, Alessandro and Pelillo, Marcello and Leal-Taixe, Laura},
title = {The Group Loss for Deep Metric Learning},
booktitle = {European Conference on Computer Vision (ECCV)},
month = {August},
year = {2020}
}

group_loss's Issues

Shape mismatch in train.py

Hi!
Running python train.py on the CUB dataset, I am running into this problem:

RuntimeError: output with shape [1, 227, 227] doesn't match the broadcast shape [3, 227, 227]

It occurs in these lines:

for x, Y in dl_tr:
    Y = Y.to(device)
    opt.zero_grad()

Where's the problem and how can I solve it?
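
A likely cause (an assumption, not a confirmed answer from the maintainers) is a grayscale image in the dataset: Normalize with three-channel statistics cannot broadcast over a single-channel tensor. One hedged fix is to force an RGB conversion in the transform pipeline:

from torchvision import transforms

# Sketch: convert every image to RGB before the tensor transforms so that
# single-channel (grayscale) images no longer break the normalization step.
transform = transforms.Compose([
    transforms.Lambda(lambda img: img.convert("RGB")),  # force 3 channels
    transforms.Resize((227, 227)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],    # ImageNet statistics; the
                         std=[0.229, 0.224, 0.225]),    # repository's values may differ
])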

Where can I get supplementary material?

While reading the 'Robustness analysis' section, I cannot understand Fig. 4. I also cannot find the corresponding code in the project, so could you please provide the supplementary material or related code? Thank you!

Non-zero diagonal in the similarity matrix

Hi, as you said in the dynamic file and also in the original paper, the similarity matrix (W) must have a zero diagonal, but this implementation never zeroes the diagonal elements of W.
As a proposed correction, you could add the following line to the set_negative_to_zero method in the gtg module:
W = W * (1 - torch.eye(W.shape[0], W.shape[0])).to(self.device)
Is this right?
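
An equivalent in-place alternative (a sketch, not the repository's code) is PyTorch's fill_diagonal_, which avoids allocating an identity matrix:

import torch

def zero_diagonal(W):
    # Zero the self-similarities in place; equivalent to multiplying by (1 - I).
    return W.fill_diagonal_(0)

W = torch.rand(5, 5)
zero_diagonal(W)
assert torch.all(W.diagonal() == 0)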

Reproduce results on CARS dataset.

Hi,

Thanks for the great repo, which is pretty easy to read and run.

While trying to reproduce the results for the CARS dataset, I found that they are quite far from the reported ones. Specifically, I got something (for the training data) around:

INFO:root:NMI: 70.262
INFO:root:R@1 : 72.315
INFO:root:R@2 : 82.956
INFO:root:R@4 : 90.764
INFO:root:R@8 : 95.123

For simplicity, I only ran python train.py, since the performance should be similar between pretrained and non-pretrained models, as suggested in the README.md file. The dataset used is the training data from "https://ai.stanford.edu/~jkrause/cars/car_dataset.html", and all other parameters are the defaults from train.py.

I am wondering if there is anything I missed?

Thank you in advance for your kind help.

Best,
Jian

Error when loading the pretrained cub model

Hi,

Thanks a lot for sharing the code!!!

I think there is an error in finetuned_cub_bn_inception.pth.
I ran python train.py --dataset_name cub but got the following errors:

size mismatch for last_linear.weight: copying a param with shape torch.Size([100, 1024]) from checkpoint, the shape in current model is torch.Size([98, 1024]).
size mismatch for last_linear.bias: copying a param with shape torch.Size([100]) from checkpoint, the shape in current model is torch.Size([98]).

I guess the model named finetuned_cub_bn_inception.pth is actually the CARS model.
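
Until the checkpoint is corrected, one hedged workaround is to skip the mismatched classifier weights and load the rest of the network; last_linear is the layer name reported in the error above, and model stands for the BN-Inception instance built in train.py:

import torch

def load_backbone(model, checkpoint_path):
    # Load a checkpoint while skipping the mismatched classifier weights
    # (a workaround sketch, not the repository's code).
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    filtered = {k: v for k, v in state_dict.items()
                if not k.startswith("last_linear")}
    missing, unexpected = model.load_state_dict(filtered, strict=False)
    print("skipped keys:", missing + unexpected)

# usage: load_backbone(model, "net/finetuned_cub_bn_inception.pth")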

Loss computation in train.py

Hi, thanks for sharing your good code. I have a question about the loss in train.py, shown here:

# compute the losses
loss1 = criterion(probs_for_gtg, Y)
loss2 = criterion2(probs, Y)
loss = args.scaling_loss * loss1 + loss2

You can also find the definitions of criterion (NLLLoss) and criterion2 (CrossEntropyLoss) in this file.
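
For reference, a minimal sketch of those definitions as described above (the exact arguments in the repository may differ):

import torch.nn as nn

criterion = nn.NLLLoss()            # expects log-probabilities of the GTG output
criterion2 = nn.CrossEntropyLoss()  # classic classification loss on the logits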

1. As you mention in the original paper, the Group Loss does not depend on any other loss and can be computed independently, so why do you add the classic classification cross-entropy loss on top of the Group Loss? Or can you point me to where in the original paper the loss is defined the way it is implemented in train.py?
2. Does the Group Loss depend on another loss?
3. Is there any problem with updating the parameters using only the Group Loss in the backward pass?
Thanks.
