vehicle-ReID-baseline

Introduction

Vehicle-ReID-baseline is a PyTorch-based baseline for training and evaluating deep vehicle re-identification models on re-ID benchmarks.

Updates

2019.4.1: updated some test results

2019.3.11: updated the basic baseline code

Installation

  1. cd to your preferred directory and run 'git clone https://github.com/Jakel21/vehicle-ReID'.
  2. Install dependencies by pip install -r requirements.txt (if necessary).

Datasets

The keys used to refer to these datasets are enclosed in parentheses:

  • VeRi-776 (veri)
  • VehicleID (vehicleID)

See vehiclereid/datasets/__init__.py for details. Access to both datasets must be requested from their providers.

Models

  • resnet50

Losses

  • cross entropy loss
  • triplet loss
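
During training the two losses are summed; the lambda_xent, lambda_htri and margin values visible in the issue log further below weight and shape the terms. Below is a minimal sketch of how such a combination typically looks, assuming a batch-hard triplet formulation; it is illustrative only and not the repository's exact implementation.

import torch
import torch.nn as nn

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    # Hardest-positive / hardest-negative triplet loss within a batch
    # (assumed formulation; the repository's code may differ in details).
    dist = torch.cdist(embeddings, embeddings)                     # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)              # same-identity mask
    hardest_pos = (dist * same.float()).max(dim=1).values          # farthest positive per anchor
    inf = torch.full_like(dist, float('inf'))
    hardest_neg = torch.where(same, inf, dist).min(dim=1).values   # closest negative per anchor
    return torch.clamp(hardest_pos - hardest_neg + margin, min=0).mean()

xent = nn.CrossEntropyLoss()

def total_loss(logits, embeddings, labels, lambda_xent=1.0, lambda_htri=1.0):
    # Weighted sum mirroring the lambda_xent / lambda_htri arguments seen in the log.
    return (lambda_xent * xent(logits, labels)
            + lambda_htri * batch_hard_triplet_loss(embeddings, labels))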

Tutorial

train

Input arguments for the training scripts are unified in args.py. To train an image re-ID model with cross-entropy and triplet loss, run the following (the inline comments only annotate each argument; strip them before actually running the command):

python train-xent-tri.py \
-s veri \                      # source dataset for training
-t veri \                      # target dataset for testing
--height 128 \                 # image height
--width 256 \                  # image width
--optim amsgrad \              # optimizer
--lr 0.0003 \                  # learning rate
--max-epoch 60 \               # maximum number of epochs
--stepsize 20 40 \             # step size for learning rate decay
--train-batch-size 64 \
--test-batch-size 100 \
-a resnet50 \                  # network architecture
--save-dir log/resnet50-veri \ # where to save logs and models
--gpu-devices 0                # GPU device index

test

Use --evaluate to switch to evaluation mode; no model training is performed. For example, to load pretrained model weights from path_to_model.pth.tar (trained on the VeRi dataset) and evaluate on VehicleID, run:

python train_imgreid_xent.py \
-s veri \                      # this does not matter any more
-t vehicleID \                 # you can add more datasets here for the test list
--height 128 \
--width 256 \
--test-size 800 \
--test-batch-size 100 \
--evaluate \
-a resnet50 \
--load-weights path_to_model.pth.tar \
--save-dir log/eval-veri-to-vehicleID \
--gpu-devices 0
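
For reference, the CMC curve and mAP reported in the evaluation log and in the Results section are computed from the query-by-gallery distance matrix. The sketch below illustrates the idea; it omits the same-camera/same-id filtering performed in the repository's eval_metrics.py, so it is a simplified approximation rather than the exact evaluation code.

import numpy as np

def simple_cmc_map(distmat, q_pids, g_pids, max_rank=20):
    # Simplified CMC / mAP from a (num_query x num_gallery) distance matrix.
    # q_pids / g_pids are numpy arrays of identity labels; the gallery is
    # assumed to contain at least max_rank images.
    indices = np.argsort(distmat, axis=1)            # gallery sorted by distance per query
    matches = g_pids[indices] == q_pids[:, None]     # True where gallery id == query id

    all_cmc, all_ap = [], []
    for good in matches:
        if not good.any():
            continue                                 # query id absent from gallery
        cmc = good.cumsum()
        cmc[cmc > 1] = 1                             # 1 from the first correct hit onwards
        all_cmc.append(cmc[:max_rank])
        hits = np.where(good)[0]
        precisions = (np.arange(len(hits)) + 1) / (hits + 1)
        all_ap.append(precisions.mean())             # average precision for this query

    return np.mean(all_cmc, axis=0), float(np.mean(all_ap))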

Results

Some test results on veri776 and vehicleID:

veri776

model: resnet50

loss: xent + htri

mAP  | rank-1 | rank-5 | rank-20
59.0 | 87.6   | 94.3   | 98.2

vehicleID

model: resnet50

loss: xent + htri

testset size | mAP  | rank-1 | rank-5 | rank-20
800          | 76.4 | 69.1   | 85.8   | 94.5
1600         | 74.1 | 67.4   | 80.5   | 90.5
2400         | 71.4 | 65.2   | 78.3   | 89.2

Contributors

  • jakel21


Issues

When I train the model on VeRi dataset, the acc reaches 100%

Hi, I would like to reproduce the results in this repository.
But when I train the model on the VeRi dataset, the accuracy reaches 100%, which is really surprising.
Could you help me figure out what's wrong, if you have time?
Thanks.

My log_train.txt is as follows:

==========
Args:Namespace(adam_beta1=0.9, adam_beta2=0.999, arch='resnet50', color_aug=False, color_jitter=False, eval_freq=-1, evaluate=False, gamma=0.1, gpu_devices='0', height=128, label_smooth=False, lambda_htri=1, lambda_xent=1, load_weights='', lr=0.0003, lr_scheduler='multi_step', margin=0.3, max_epoch=60, momentum=0.9, no_pretrained=False, num_instances=4, optim='amsgrad', print_freq=10, query_remove=True, random_erase=False, resume='', rmsprop_alpha=0.99, root='./datasets', save_dir='log/resnet50-veri', seed=1, sgd_dampening=0, sgd_nesterov=False, source_names=['veri'], split_id=0, start_epoch=0, start_eval=0, stepsize=[20, 40], target_names=['veri'], test_batch_size=100, test_size=800, train_batch_size=64, train_sampler='RandomSampler', use_avai_gpus=False, use_cpu=False, visualize_ranks=False, weight_decay=0.0005, width=256, workers=0)

Currently using GPU 0
Initializing image data manager
=> Initializing TRAIN (source) datasets
=> VeRi loaded
Image Dataset statistics:

subset | # ids | # images | # cameras

train | 576 | 37778 | 20
query | 200 | 1678 | 19
gallery | 200 | 11579 | 19
----------------------------------------
mean and std: tensor([-0.2806, -0.1601, 0.0939]) tensor([ 0.8216, 0.8342, 0.8285])
=> Initializing TEST (target) datasets
=> VeRi loaded
Image Dataset statistics:

subset | # ids | # images | # cameras

train | 576 | 37778 | 20
query | 200 | 1678 | 19
gallery | 200 | 11579 | 19

**************** Summary ****************
train names : ['veri']

train datasets : 1

train ids : 576

train images : 37778

train cameras : 20

test names : ['veri']


Initializing model: resnet50
Initialized model with pretrained weights from https://download.pytorch.org/models/resnet50-19c8e357.pth
Model size: 23.508 M
=> Start training

Epoch: [21][430/590] Time 0.339 (0.354) Data 0.1288 (0.1432) Xent 0.1333 (0.0832) Htri 0.0000 (0.0026) Acc 96.88 (98.67)
Epoch: [21][440/590] Time 0.341 (0.354) Data 0.1383 (0.1430) Xent 0.0618 (0.0833) Htri 0.0000 (0.0028) Acc 100.00 (98.68)
Epoch: [21][450/590] Time 0.407 (0.354) Data 0.1694 (0.1430) Xent 0.0463 (0.0835) Htri 0.0000 (0.0028) Acc 100.00 (98.67)
Epoch: [21][460/590] Time 0.379 (0.354) Data 0.1594 (0.1431) Xent 0.0698 (0.0835) Htri 0.0000 (0.0028) Acc 100.00 (98.67)
Epoch: [21][470/590] Time 0.364 (0.354) Data 0.1490 (0.1428) Xent 0.0423 (0.0835) Htri 0.0000 (0.0027) Acc 100.00 (98.67)
Epoch: [21][480/590] Time 0.388 (0.354) Data 0.1769 (0.1429) Xent 0.0764 (0.0835) Htri 0.0000 (0.0027) Acc 98.44 (98.66)
Epoch: [21][490/590] Time 0.348 (0.354) Data 0.1455 (0.1430) Xent 0.0498 (0.0837) Htri 0.0000 (0.0026) Acc 100.00 (98.65)
Epoch: [21][500/590] Time 0.351 (0.354) Data 0.1435 (0.1431) Xent 0.0381 (0.0835) Htri 0.0000 (0.0026) Acc 100.00 (98.65)
Epoch: [21][510/590] Time 0.337 (0.354) Data 0.1292 (0.1429) Xent 0.1196 (0.0837) Htri 0.0000 (0.0028) Acc 96.88 (98.64)
Epoch: [21][520/590] Time 0.326 (0.354) Data 0.1218 (0.1428) Xent 0.2127 (0.0841) Htri 0.0000 (0.0028) Acc 93.75 (98.63)
Epoch: [21][530/590] Time 0.327 (0.354) Data 0.1211 (0.1428) Xent 0.0778 (0.0841) Htri 0.0000 (0.0028) Acc 98.44 (98.64)
Epoch: [21][540/590] Time 0.319 (0.354) Data 0.1130 (0.1428) Xent 0.0669 (0.0842) Htri 0.0000 (0.0030) Acc 98.44 (98.63)
Epoch: [21][550/590] Time 0.334 (0.354) Data 0.1275 (0.1427) Xent 0.0948 (0.0849) Htri 0.0000 (0.0029) Acc 98.44 (98.61)
Epoch: [21][560/590] Time 0.355 (0.354) Data 0.1449 (0.1426) Xent 0.0964 (0.0851) Htri 0.0000 (0.0031) Acc 98.44 (98.61)
Epoch: [21][570/590] Time 0.354 (0.354) Data 0.1488 (0.1426) Xent 0.1672 (0.0860) Htri 0.0000 (0.0031) Acc 96.88 (98.59)
Epoch: [21][580/590] Time 0.389 (0.354) Data 0.1392 (0.1425) Xent 0.1112 (0.0864) Htri 0.0000 (0.0031) Acc 98.44 (98.59)
Epoch: [21][590/590] Time 0.390 (0.354) Data 0.1683 (0.1425) Xent 0.1350 (0.0869) Htri 0.0000 (0.0032) Acc 98.44 (98.58)
Epoch: [22][10/590] Time 0.331 (0.353) Data 0.1234 (0.1414) Xent 0.0928 (0.1049) Htri 0.0000 (0.0000) Acc 98.44 (97.81)
Epoch: [22][20/590] Time 0.357 (0.354) Data 0.1520 (0.1429) Xent 0.1362 (0.0996) Htri 0.0000 (0.0003) Acc 98.44 (98.12)
Epoch: [22][30/590] Time 0.359 (0.357) Data 0.1526 (0.1448) Xent 0.0787 (0.0937) Htri 0.0466 (0.0042) Acc 98.44 (98.28)
Epoch: [22][40/590] Time 0.380 (0.356) Data 0.1755 (0.1444) Xent 0.1180 (0.0908) Htri 0.0000 (0.0031) Acc 98.44 (98.40)
Epoch: [22][50/590] Time 0.369 (0.357) Data 0.1658 (0.1454) Xent 0.0419 (0.0861) Htri 0.0000 (0.0033) Acc 100.00 (98.59)
Epoch: [22][60/590] Time 0.341 (0.357) Data 0.1380 (0.1452) Xent 0.1054 (0.0830) Htri 0.0059 (0.0032) Acc 98.44 (98.67)
Epoch: [22][70/590] Time 0.359 (0.356) Data 0.1531 (0.1444) Xent 0.0415 (0.0781) Htri 0.0000 (0.0039) Acc 100.00 (98.84)
Epoch: [22][80/590] Time 0.346 (0.356) Data 0.1387 (0.1438) Xent 0.0452 (0.0756) Htri 0.0000 (0.0034) Acc 100.00 (98.96)
Epoch: [22][90/590] Time 0.341 (0.356) Data 0.1385 (0.1440) Xent 0.0238 (0.0720) Htri 0.0000 (0.0030) Acc 100.00 (99.03)
Epoch: [22][100/590] Time 0.343 (0.355) Data 0.1222 (0.1438) Xent 0.0389 (0.0697) Htri 0.0000 (0.0027) Acc 100.00 (99.08)
Epoch: [22][110/590] Time 0.382 (0.356) Data 0.1423 (0.1442) Xent 0.0473 (0.0674) Htri 0.0000 (0.0025) Acc 98.44 (99.13)
Epoch: [22][120/590] Time 0.379 (0.356) Data 0.1574 (0.1445) Xent 0.0330 (0.0649) Htri 0.0000 (0.0023) Acc 100.00 (99.19)
Epoch: [22][130/590] Time 0.337 (0.355) Data 0.1246 (0.1435) Xent 0.0708 (0.0635) Htri 0.0000 (0.0025) Acc 100.00 (99.23)

=> Test
Evaluating veri ...
Extracted features for query set, obtained 1678-by-2048 matrix
Extracted features for gallery set, obtained 11579-by-2048 matrix
=> BatchTime(s)/BatchSize(img): 0.019/100
Computing CMC and mAP
Results ----------
mAP: 66.7%
CMC curve
Rank-1 : 100.0%
Rank-5 : 100.0%
Rank-10 : 100.0%
Rank-20 : 100.0%

Checkpoint saved to "log/resnet50-veri/model.pth.tar-60"
Elapsed 3:43:43
=> Show performance summary
veri (source)

  • epoch 60 rank1 100.0%

eval_metrics.py

eval_metrics.py line 74
if num_g < max_rank:

TypeError: '<' not supported between instances of 'int' and 'list'
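
For context, this TypeError appears in Python 3 whenever an int is compared with a list, so the message suggests max_rank reached the comparison as a list (for instance an argparse value collected into a list) rather than a plain int. A hypothetical reproduction, with made-up values:

num_g = 50
max_rank = [20]              # hypothetical: should be the int 20
# num_g < max_rank           # raises TypeError: '<' not supported between instances of 'int' and 'list'
print(num_g < max_rank[0])   # the comparison works once the int is extracted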

Where is requirements.txt?

I notice you said
"Install dependencies by pip install -r requirements.txt (if necessary)."
But I cannot find the file; where is it?

AssertionError: Error: all query identities do not appear in gallery

Traceback (most recent call last):
File "train_imgreid_xent.py", line 265, in
main()
File "train_imgreid_xent.py", line 84, in main
distmat = test(model, queryloader, galleryloader, use_gpu, return_distmat=True)
File "train_imgreid_xent.py", line 250, in test
cmc, mAP = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)
File "C:\Users\Hasan\Desktop\vehicle-ReID-baseline\vehiclereid\eval_metrics.py", line 127, in evaluate
return eval_veri(distmat, q_pids, g_pids, q_camids, g_camids, max_rank)
File "C:\Users\Hasan\Desktop\vehicle-ReID-baseline\vehiclereid\eval_metrics.py", line 117, in eval_veri
assert num_valid_q > 0, 'Error: all query identities do not appear in gallery'
AssertionError: Error: all query identities do not appear in gallery

paper or technical documents

Could you provide the paper or technical documentation for this code? I can't understand the whole model framework. Thank you!

Test separately

Can I train and test the two datasets separately? That is, instead of using weights trained on VeRi to test VehicleID, can I use a model trained on VehicleID to test VehicleID?
