jdai-cv / fast-reid

SOTA Re-identification Methods and Toolbox

License: Apache License 2.0

Python 86.73% Makefile 0.01% CMake 0.64% C++ 11.00% Dockerfile 0.39% Cython 1.24%
person-reid open-reid re-identification person-reidentification image-retrieval re-ranking random-erasing image-search apex toolbox

fast-reid's People

Contributors

cathesilta, cclauss, grimoire, hsfzxjy, itsnamgyu, jinkaizheng, kleinyuan, l1aoxingyu, layumi, lingxiao-he, lxc86739795, tcheish, tycallen, u7ko4, viokingtung, wangguanan, xbq1994, xiaomingzhid, zkcys001

fast-reid's Issues

result

Why did I only get Rank-1: 92.5%, Rank-5: 97.7%, Rank-10: 98.8% when I used input size SIZE_TEST: [256, 128], SIZE_TRAIN: [256, 128]? I didn't change any other parameters.

predict two images

Hello.
I want to know the similarity of two images.
Your code only produces the evaluation output.
I want to know how to run prediction on my own images.
E.g., given two input images,
can you tell whether they are the same person or not,
or extract the features of each?
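
A minimal sketch of the comparison step, assuming you can already extract a 1-D feature vector per image (the extract_feat name below is hypothetical, not part of this repo):

import numpy as np

def cosine_similarity(feat_a, feat_b):
    # Normalise both feature vectors, then take the dot product; a score
    # close to 1 suggests the two crops show the same identity.
    feat_a = feat_a / np.linalg.norm(feat_a)
    feat_b = feat_b / np.linalg.norm(feat_b)
    return float(feat_a @ feat_b)

# score = cosine_similarity(extract_feat("a.jpg"), extract_feat("b.jpg"))

The threshold separating "same person" from "different person" has to be chosen on a validation set; it is not fixed by the model.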

basic information gcnet

Thanks for sharing your code. I want to ask what the main reason for using GCNet is. I see an increase in mAP but a drop in rank, though that may also just be fluctuation. Do you think adding it or not affects the network itself? Thanks.

VehicleID results may have an error

Hello, your work is great, but there may be a problem with the VehicleID results.
I think you may have swapped the probe set and the gallery set,
because the probe set is much larger than the gallery set, and you need to randomly select 10 probes for testing from the probe set.

KeyError: 'classifier.weight'

WEIGHT: /home/cendelian/Study_CV/车辆检测/reid_baseline/save/trained_model/path/resnet101_ibn_a.pth.tar
Traceback (most recent call last):
File "tools/test.py", line 61, in <module>
main()
File "tools/test.py", line 53, in main
model.load_params_wo_fc(torch.load(cfg.TEST.WEIGHT))
File "./modeling/baseline.py", line 63, in load_params_wo_fc
state_dict.pop('classifier.weight')
KeyError: 'classifier.weight'

What does this error mean?
I get the above error when I run
sh test_model.sh
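
One possible cause (an assumption, not a confirmed diagnosis): TEST.WEIGHT points at the ImageNet-pretrained resnet101_ibn_a.pth.tar, which contains no 'classifier.weight' entry, so the unconditional pop in load_params_wo_fc raises KeyError. A tolerant variant would pop with a default, roughly:

import torch

def load_params_wo_fc(model, weight_path):
    # Hedged sketch, not the repo's exact code: drop the classifier head if it
    # exists, then load the remaining weights non-strictly.
    state_dict = torch.load(weight_path, map_location='cpu')
    state_dict.pop('classifier.weight', None)  # no KeyError if the key is absent
    state_dict.pop('classifier.bias', None)
    model.load_state_dict(state_dict, strict=False)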

import error

Hello, I could not find where the `fire` module imported in your main function comes from; could you please point me to it?
if __name__ == '__main__':
    import fire

    fire.Fire()
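
fire here is Google's python-fire package, installable with pip install fire. A self-contained usage sketch (the train function below is just a placeholder to make the example runnable):

import fire

def train(epochs=10):
    # Placeholder command so the example runs on its own.
    print('training for {} epochs'.format(epochs))

if __name__ == '__main__':
    # fire.Fire() exposes the module's top-level functions as CLI commands,
    # e.g.: python this_script.py train --epochs=120
    fire.Fire()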

Error when testing a trained model

Hi,there.
I have finished training on market1501 and want to run this command line:
python tools/test.py --config_file='configs/softmax.yml' TEST.WEIGHT '/save/trained_model/path'
and I got the error:
Traceback (most recent call last):
File "tools/test.py", line 67, in <module>
main()
File "tools/test.py", line 61, in main
model.load_state_dict(torch.load(cfg.TEST.WEIGHT))
File "/home/wyl/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Baseline:
Missing key(s) in state_dict: "base.conv1.weight", "base.bn1.weight", "base.bn1.bias", "base.bn1.running_mean", "base.bn1.running_var", "base.layer1.0.conv1.weight", "base.layer1.0.bn1.weight", "base.layer1.0.bn1.bias", "base.layer1.0.bn1.running_mean", "base.layer1.0.bn1.running_var", "base.layer1.0.conv2.weight", "base.layer1.0.bn2.weight", "base.layer1.0.bn2.bias", "base.layer1.0.bn2.running_mean", "base.layer1.0.bn2.running_var", "base.layer1.0.conv3.weight", "base.layer1.0.bn3.weight", "base.layer1.0.bn3.bias", "base.layer1.0.bn3.running_mean", ...."bottleneck.running_var", "classifier.weight".
Unexpected key(s) in state_dict: "state", "param_groups".
I don't know how to figure it out; I would appreciate your help.
Thanks.
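
The unexpected keys "state" and "param_groups" are exactly what torch.optim.Optimizer.state_dict() produces, so the file passed as TEST.WEIGHT looks like an optimizer (or wrapped) checkpoint rather than bare model weights. A quick inspection sketch (the 'model' / 'state_dict' key names are assumptions; they depend on how the checkpoint was saved):

import torch

ckpt = torch.load('/save/trained_model/path', map_location='cpu')
print(ckpt.keys())  # dict_keys(['state', 'param_groups']) would mean an optimizer checkpoint

# If the model weights are nested under some key, pull them out first:
state_dict = ckpt.get('model', ckpt.get('state_dict', ckpt))
# model.load_state_dict(state_dict)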

RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity

Hi L1aoXingyu

when I run command:
bash scripts/train_openset.sh
I get the error below:

Traceback (most recent call last):
File "tools/train.py", line 86, in <module>
main()
File "tools/train.py", line 70, in main
reid_system.train()
File "./engine/trainer.py", line 175, in train
self.training_step(batch)
File "./engine/trainer.py", line 85, in training_step
loss_dict = self.loss_fns(outputs, labels)
File "./engine/trainer.py", line 53, in loss_fns
triplet_loss = self.triplet(outputs[1], labels)[0]
File "/home/cendelian/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "./modeling/losses/triplet_loss.py", line 121, in forward
dist_ap, dist_an = hard_example_mining(dist_mat, labels)
File "./modeling/losses/triplet_loss.py", line 75, in hard_example_mining
dist_mat[is_neg].contiguous().view(N, -1), 1, keepdim=True)
RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity

Why does torch.min() raise this error? Could you help me?
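
A minimal sketch of PK-style hard example mining (not the repo's exact implementation) that shows where the error comes from: if a batch ends up containing only one identity, the is_neg mask selects nothing and the min/max reduction runs over an empty tensor, which is exactly this error. Batches therefore need to mix several identities (and several images per identity) for the triplet loss to be defined.

import torch

def hard_example_mining(dist_mat, labels):
    # dist_mat: (N, N) pairwise distances inside the batch, labels: (N,)
    n = dist_mat.size(0)
    is_pos = labels.view(n, 1).eq(labels.view(1, n))
    is_neg = labels.view(n, 1).ne(labels.view(1, n))
    # Hardest positive per anchor (largest distance) and hardest negative
    # (smallest distance). With a single-identity batch, is_neg is all False,
    # dist_mat[is_neg] is empty and min() cannot reduce it.
    dist_ap, _ = dist_mat[is_pos].view(n, -1).max(dim=1, keepdim=True)
    dist_an, _ = dist_mat[is_neg].view(n, -1).min(dim=1, keepdim=True)
    return dist_ap.squeeze(1), dist_an.squeeze(1)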

Test time augmentation

Hi,
thanks for your work.

From looking at your code, it seems like the augmentations during training and testing are the same. Is that correct?

Regards

stop train

It shows:
Epoch 1 Iter 119/186 ce_loss: 6.609 triplet: 2.262 Total loss: 8.870

It stays at Iter 119 all the time.

Error when testing the demo with my own trained model

# encoding: utf-8

"""
@author: liaoxingyu
@contact: [email protected]
"""

import argparse
import os

import cv2
import numpy as np
import torch
from torch import nn
from torch.backends import cudnn
import sys
sys.path.append('..')

from fastreid.config import get_cfg
from fastreid.data.transforms import ToTensor
from fastreid.modeling import build_model
from fastreid.utils.checkpoint import Checkpointer
from fastreid.data.datasets import DATASET_REGISTRY

cudnn.benchmark = True


def setup_cfg(args):
    # load config from file and command-line arguments
    cfg = get_cfg()
    cfg.merge_from_file(args.config_file)
    cfg.merge_from_list(args.opts)
    cfg.freeze()
    return cfg


def get_parser():
    parser = argparse.ArgumentParser(description="FastReID demo for builtin models")
    parser.add_argument(
        "--config-file",
        default="/home/dongchaoqun/fast-reid/projects/StrongBaseline/home/dongchaoqun/config.yaml",
        metavar="FILE",
        help="path to config file",
    )
    parser.add_argument(
        "--input",
        default=["/home/dongchaoqun/fast-reid/fastreid/data/datasets/Market-1501-v15.09.15/bounding_box_test/1203_c5s3_076337_01.jpg",
                 "/home/dongchaoqun/fast-reid/fastreid/data/datasets/Market-1501-v15.09.15/bounding_box_test/1182_c6s3_038217_01.jpg",
                 "/home/dongchaoqun/fast-reid/fastreid/data/datasets/Market-1501-v15.09.15/bounding_box_test/1183_c5s3_006943_05.jpg"],
        nargs="+",
        help="A list of space separated input images; "
             "or a single glob pattern such as 'directory/*.jpg'",
    )
    parser.add_argument(
        "--output",
        default="traced_module/",
        help="A file or directory to save export jit module.",
    )
    parser.add_argument(
        "--export-jitmodule",
        action='store_true',
        help="If export reid model to traced jit module"
    )
    parser.add_argument(
        "--opts",
        help="Modify config options using the command-line 'KEY VALUE' pairs",
        default=[],
        nargs=argparse.REMAINDER,
    )
    return parser


class ReidDemo(object):
    """
    ReID demo example
    """

    def __init__(self, cfg):
        self.cfg = cfg.clone()
        if cfg.MODEL.WEIGHTS.endswith('.pt'):
            self.model = torch.jit.load(cfg.MODEL.WEIGHTS)
        else:
            self.model = build_model(cfg)
            # load pre-trained model
            Checkpointer(self.model).load(cfg.MODEL.WEIGHTS)

            self.model.eval()
            # self.model = nn.DataParallel(self.model)
            self.model.cuda()

        num_channels = len(cfg.MODEL.PIXEL_MEAN)
        self.mean = torch.tensor(cfg.MODEL.PIXEL_MEAN).view(1, num_channels, 1, 1)
        self.std = torch.tensor(cfg.MODEL.PIXEL_STD).view(1, num_channels, 1, 1)

    def preprocess(self, img):
        img = cv2.resize(img, tuple(self.cfg.INPUT.SIZE_TEST[::-1]))
        img = ToTensor()(img)[None, :, :, :]
        return img.sub_(self.mean).div_(self.std)

    @torch.no_grad()
    def predict(self, img_path):
        img = cv2.imread(img_path)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        data = self.preprocess(img)
        output = self.model.inference(data.cuda())
        feat = output.cpu().data.numpy()
        return feat

    @classmethod
    @torch.no_grad()
    def export_jit_model(cls, cfg, model, output_dir):
        example = torch.rand(1, len(cfg.MODEL.PIXEL_MEAN), *cfg.INPUT.SIZE_TEST)
        example = example.cuda()
        traced_script_module = torch.jit.trace_module(model, {"inference": example})
        traced_script_module.save(os.path.join(output_dir, "traced_reid_module.pt"))


if __name__ == '__main__':
    args = get_parser().parse_args()
    cfg = setup_cfg(args)
    reidSystem = ReidDemo(cfg)

    if args.export_jitmodule and not isinstance(reidSystem.model, torch.jit.ScriptModule):
        reidSystem.export_jit_model(cfg, reidSystem.model, args.output)

    feats = [reidSystem.predict(data) for data in args.input]

    cos_12 = np.dot(feats[0], feats[1].T).item()
    cos_13 = np.dot(feats[0], feats[2].T).item()
    cos_23 = np.dot(feats[1], feats[2].T).item()

    print('cosine similarity is {:.4f}, {:.4f}, {:.4f}'.format(cos_12, cos_13, cos_23))

Traceback (most recent call last):
File "/home/dongchaoqun/fast-reid/demo/demo.py", line 133, in <module>
reidSystem = ReidDemo(cfg)
File "/home/dongchaoqun/fast-reid/demo/demo.py", line 92, in __init__
Checkpointer(self.model).load(cfg.MODEL.WEIGHTS)
TypeError: __init__() missing 1 required positional argument: 'dataset'
How can I solve this problem? After commenting out the line Checkpointer(self.model).load(cfg.MODEL.WEIGHTS) it does run, but then how do I use my own model for testing? Thanks.
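
A hedged workaround sketch in plain PyTorch (not the fastreid Checkpointer API): if the Checkpointer in the installed version requires extra constructor arguments, the weights can be loaded directly in place of the Checkpointer(self.model).load(cfg.MODEL.WEIGHTS) line inside __init__ above, assuming the file stores a state dict, possibly wrapped under a 'model' key:

import torch

ckpt = torch.load(cfg.MODEL.WEIGHTS, map_location='cpu')
state_dict = ckpt.get('model', ckpt) if isinstance(ckpt, dict) else ckpt
self.model.load_state_dict(state_dict, strict=False)  # drop-in for the Checkpointer line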

Can you help me use demo.py?

When I run python3 demo.py, reid_model.pt does not exist.
Please help me use /logs/market1501/combineall_bs256_mgn_plus/ckpts/model_best.pth instead.
Thank you

epochs

Hello, I have a question I'd like to ask.
My GPU is a GTX 1080 with only 11177 MB of memory, so I halved p_size from its original value to 12. Do I need to double the number of training epochs, i.e. change it to 800 epochs?
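
A small arithmetic sketch of what halving P changes (assuming NUM_INSTANCE, i.e. K, is 4 as in the configs shown elsewhere on this page): the batch holds P * K images, so halving P halves the batch and one epoch already performs twice as many iterations over the same 12936 training images.

# iterations per epoch before and after halving P (K assumed to be 4)
num_train_images = 12936  # Market-1501 training split
K = 4
for P in (24, 12):
    batch = P * K
    print('P={}: batch={}, iterations per epoch ~ {}'.format(P, batch, num_train_images // batch))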

can't reproduce the result of BoT

Hi Liao,
Thanks for your awesome implementation.
I ran into some problems when trying to reproduce the results shown in your model zoo.
e.g.
For market1501, strictly following your steps with your script and no modifications, I only get 92.5 R@1 with resnet and 93.3 R@1 with resnet-ibn-a. It seems like there may be a problem with the hyperparameters. Could you check it (resnet + market1501 + BoT) for me, please?

Thanks in advance.
Here I only changed the max iteration to 28000, as I found there is only a small improvement in final performance after 18000 iterations.

MODEL:
  HEADS:
    NUM_CLASSES: 751

SOLVER:
  MAX_ITER: 28000
  STEPS: [8000, 14000]
  WARMUP_ITERS: 2000

DATASETS:
  NAMES: ("Market1501",)
  TESTS: ("Market1501",)

OUTPUT_DIR: "logs/market1501/bagtricks"

[04/30 01:33:40 fastreid]: Running with full config:
CUDNN_BENCHMARK: True
DATALOADER:
  NUM_INSTANCE: 4
  NUM_WORKERS: 16
  PK_SAMPLER: True
DATASETS:
  NAMES: ('Market1501',)
  TESTS: ('Market1501',)
INPUT:
  DO_AUGMIX: False
  DO_AUTOAUG: False
  DO_CJ: False
  DO_FLIP: True
  DO_PAD: True
  FLIP_PROB: 0.5
  PADDING: 10
  PADDING_MODE: constant
  REA:
    ENABLED: True
    MEAN: [123.675, 116.28, 103.53]
    PROB: 0.5
  RPT:
    ENABLED: False
    PROB: 0.5
  SIZE_TEST: [256, 128]
  SIZE_TRAIN: [256, 128]
MODEL:
  BACKBONE:
    DEPTH: 50
    LAST_STRIDE: 1
    NAME: build_resnet_backbone
    PRETRAIN: True
    WITH_IBN: False
    WITH_NL: False
    WITH_SE: False
  HEADS:
    CLS_LAYER: linear
    IN_FEAT: 2048
    MARGIN: 0.15
    NAME: BNneckHead
    NUM_CLASSES: 751
    POOL_LAYER: avgpool
    REDUCTION_DIM: 512
    SCALE: 128
  LOSSES:
    CE:
      ALPHA: 0.3
      EPSILON: 0.1
      SCALE: 1.0
    FL:
      ALPHA: 0.25
      GAMMA: 2
      SCALE: 1.0
    NAME: ('CrossEntropyLoss', 'TripletLoss')
    TRI:
      HARD_MINING: True
      MARGIN: 0.3
      NORM_FEAT: False
      SCALE: 1.0
      USE_COSINE_DIST: False
  META_ARCHITECTURE: Baseline
  OPEN_LAYERS:
  PIXEL_MEAN: [123.675, 116.28, 103.53]
  PIXEL_STD: [58.395, 57.120000000000005, 57.375]
  WEIGHTS:
OUTPUT_DIR: logs/market1501/bagtricks
SOLVER:
  BASE_LR: 0.00035
  BIAS_LR_FACTOR: 2.0
  CHECKPOINT_PERIOD: 2000
  DELAY_ITERS: 100
  ETA_MIN_LR: 3e-07
  FREEZE_ITERS: 0
  GAMMA: 0.1
  HEADS_LR_FACTOR: 1.0
  IMS_PER_BATCH: 64
  LOG_PERIOD: 200
  MAX_ITER: 28000
  MOMENTUM: 0.9
  OPT: Adam
  SCHED: WarmupMultiStepLR
  STEPS: (8000, 14000)
  WARMUP_FACTOR: 0.01
  WARMUP_ITERS: 2000
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0005
  WEIGHT_DECAY_BIAS: 0.0
TEST:
  EVAL_PERIOD: 2000
  IMS_PER_BATCH: 512
  PRECISE_BN:
    DATASET: Market1501
    ENABLED: False
    NUM_ITER: 300

What is len(train_dataloader) when using triplet?

When I use softmax, the batch size is 128 and len(train_dataloader) == train_num / 128. But when I use softmax_triplet, the batch size is still 128, yet len(train_dataloader) is neither train_num / 128 nor train_num / 128 / 4. Is there something wrong?
Thank you very much.

train error

Error: module 'tensorflow._api.v1.io' has no attribute 'gfile'
Hoping for your reply.
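
This error is usually a TensorFlow/TensorBoard version mismatch around the gfile API. Upgrading both packages is the cleanest fix; a monkeypatch that is often reported for this symptom (an assumption that it applies to this exact setup, and it requires a TensorBoard version that ships tensorflow_stub) looks like:

import tensorflow as tf
import tensorboard as tb

# Point tf.io.gfile at TensorBoard's bundled stub before the failing call runs.
tf.io.gfile = tb.compat.tensorflow_stub.io.gfile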

mAP computation

From looking at your code, I'm not fully grasping your mAP computation, and I am wondering if you ever compared it to the original Market-1501 evaluation script. In my experience, results can vary a lot, and to make a fair comparison it would be good to ensure that your mAP computation matches the "official" evaluation method and does not skew the results.
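
For reference, a sketch of the standard single-query average precision that Market-1501-style evaluation uses (not code copied from this repo); mAP is its mean over all queries that have at least one true match, after junk and same-camera/same-id gallery entries are removed:

import numpy as np

def average_precision(matches):
    # matches: 0/1 vector over the gallery, ordered by increasing distance.
    matches = np.asarray(matches, dtype=np.float64)
    if matches.sum() == 0:
        return 0.0
    precision_at_k = matches.cumsum() / (np.arange(matches.size) + 1.0)
    return float((precision_at_k * matches).sum() / matches.sum())

# True matches at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2 ~= 0.833
print(average_precision([1, 0, 1, 0]))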

KeyError: "No object named 'MGN' found in 'META_ARCH' registry!"

When I run the program I get this error:
KeyError: "No object named 'MGN' found in 'META_ARCH' registry!"
The command I ran is: visualize_result.py --config-file "../configs/Market1501/mgn_R50-ibn.yml" --parallel --vis-label --dataset-name "Market1501" --output "logs/mgn_market_vis" --opts MODEL.WEIGHTS "../../pretrained/resnet50_ibn_a.pth.tar"

CUHK03 dataset

When running your code on the CUHK03 dataset, I get an error saying the file at this path does not exist: '/home/payne/Dataset/L1aoXingyu/cuhk03/cuhk03_new_protocol_config_detected.mat' is not available. After downloading and extracting CUHK03, there is only a single cuhk-03.mat inside.

Hello

It seems your code doesn't use tensorboardX for visualization?

about distmat

I see that the author commented all of these out:
# distmat = torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n) +
# torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
# distmat.addmm_(1, -2, qf, gf.t())
# distmat = distmat.cpu().numpy()
so the distmat passed downstream is just distmat = torch.mm(qf, gf.t()).cpu().numpy(),
and I don't understand how a plain matrix multiplication can replace the distance matrix.
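
A likely explanation (an inference, since it depends on the features being L2-normalised before matching, which the TEST NORM: True option elsewhere on this page suggests): for unit-length vectors, squared Euclidean distance is a monotone function of the inner product, d^2 = 2 - 2 * <q, g>, so ranking by the negated similarity matrix gives exactly the same order as ranking by the full distance matrix. A small check:

import numpy as np

rng = np.random.default_rng(0)
qf = rng.normal(size=(4, 8)); qf /= np.linalg.norm(qf, axis=1, keepdims=True)
gf = rng.normal(size=(6, 8)); gf /= np.linalg.norm(gf, axis=1, keepdims=True)

sim = qf @ gf.T                # what torch.mm(qf, gf.t()) computes
dist_sq = 2.0 - 2.0 * sim      # full (squared) Euclidean distance
# Same per-query ranking either way:
assert (np.argsort(-sim, axis=1) == np.argsort(dist_sq, axis=1)).all()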

No module named 'torch.utils.tensorboard'

error: No module named 'torch.utils.tensorboard'

my env:
pytorch 1.0.0
tensorboard: 1.12.2-py36he6710b0_0
tensorflow: 1.12.0-gpu_py36he68c306_0
tensorflow-base: 1.12.0-gpu_py36h8e0ae2d_0
tensorflow-gpu: 1.12.0-h0d30ee6_0

track back:
Traceback (most recent call last):
File "tools/train_net.py", line 16, in <module>
from fastreid.engine import DefaultTrainer, default_argument_parser, default_setup
File "./fastreid/engine/__init__.py", line 13, in <module>
from .hooks import *
File "./fastreid/engine/hooks.py", line 17, in <module>
from fastreid.solver import optim
File "./fastreid/solver/__init__.py", line 8, in <module>
from .build import build_lr_scheduler, build_optimizer
File "./fastreid/solver/build.py", line 8, in <module>
from . import optim
File "./fastreid/solver/optim/__init__.py", line 1, in <module>
from .lamb import Lamb
File "./fastreid/solver/optim/lamb.py", line 9, in <module>
from torch.utils.tensorboard import SummaryWriter
ModuleNotFoundError: No module named 'torch.utils.tensorboard'
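
torch.utils.tensorboard was added in PyTorch 1.1, so it does not exist in 1.0.0. Upgrading PyTorch is the direct fix; a stop-gap sketch (assuming tensorboardX is installed, which provides a compatible SummaryWriter) is:

try:
    from torch.utils.tensorboard import SummaryWriter
except ImportError:  # PyTorch < 1.1
    from tensorboardX import SummaryWriter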

Error: all query identities do not appear in gallery

Hi
When I use market1501 to train the model, the test after epoch 30 raises the error below.
Could you help me?

Epoch 29 Iter 78/83 ce_loss: 0.041 triplet: 0.011 Total loss: 0.053
2019-11-21 14:52:42,903 reid_baseline.train INFO: Epoch 29 Total loss: 0.116 lr: 3.50e-04 During 1min:30s
Epoch 30 Iter 78/83 ce_loss: 0.030 triplet: 0.015 Total loss: 0.045
2019-11-21 14:54:13,363 reid_baseline.train INFO: Epoch 30 Total loss: 0.120 lr: 3.50e-04 During 1min:30s
Traceback (most recent call last):
File "tools/train.py", line 81, in <module>
main()
File "tools/train.py", line 65, in main
reid_system.train()
File "./engine/trainer.py", line 178, in train
metric_dict = self.test()
File "./engine/trainer.py", line 155, in test
cmc, mAP = evaluate(-distmat, q_pids, g_pids, q_camids, g_camids)
File "./data/datasets/eval_reid.py", line 163, in evaluate
return evaluate_py(distmat, q_pids, g_pids, q_camids, g_camids, max_rank, use_metric_cuhk03)
File "./data/datasets/eval_reid.py", line 156, in evaluate_py
return eval_market1501(distmat, q_pids, g_pids, q_camids, g_camids, max_rank)
File "./data/datasets/eval_reid.py", line 143, in eval_market1501
assert num_valid_q > 0, "Error: all query identities do not appear in gallery"
AssertionError: Error: all query identities do not appear in gallery
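
A small sanity-check sketch for this assertion (the q_pids / g_pids arrays are whatever your evaluation code already builds): the error fires when no query identity is found in the gallery after camera-id filtering, which typically means the query and gallery splits were swapped or point at the wrong folders.

import numpy as np

def check_query_in_gallery(q_pids, g_pids):
    # Report how many query identities also occur in the gallery.
    q_ids, g_ids = np.unique(q_pids), np.unique(g_pids)
    overlap = np.intersect1d(q_ids, g_ids)
    print('{} of {} query identities appear in the gallery'.format(overlap.size, q_ids.size))
    return overlap.size > 0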

pre-model

Hi, thank you for your code and models. Do you have se_resnet50_ibn or sknet_resnet_ibn pretrained models?

batch size

Why does a smaller batch size give even better results?

CUHK03 training

Hello, how can I train on the CUHK03 dataset?

Train & Test on different dataset

Hi y'all,
How can I train on another dataset like Duke or CUHK03?
How can I train and test on different datasets?
Awaiting your reply.
Thanks.

Surprising baseline results!!

Your baseline almost achieves the SOTA results!!
I want to know which trick is the most important for making the baseline this strong. Thanks~

About the final test results

Thanks to L1aoXingyu for collecting these tricks. I ran the training code (dataset: market1501) and then the test code (120 epochs). With softmax.yml the test results are:
=> Market1501 loaded
Dataset statistics:

subset  | # ids | # images | # cameras
train   | 751   | 12936    | 6
query   | 750   | 3368     | 6
gallery | 751   | 15913    | 6

2019-07-13 07:26:09,288 reid_baseline.inference INFO: Start inferencing
2019-07-13 07:28:46,533 reid_baseline.inference INFO: Validation Results
2019-07-13 07:28:46,538 reid_baseline.inference INFO: mAP: 60.7%
2019-07-13 07:28:46,539 reid_baseline.inference INFO: CMC curve, Rank-1 :81.8%
2019-07-13 07:28:46,540 reid_baseline.inference INFO: CMC curve, Rank-5 :92.2%
2019-07-13 07:28:46,541 reid_baseline.inference INFO: CMC curve, Rank-10 :94.7%
With softmax_triplet.yml the test results are:
Dataset statistics:

subset  | # ids | # images | # cameras
train   | 751   | 12936    | 6
query   | 750   | 3368     | 6
gallery | 751   | 15913    | 6

2019-07-13 07:13:10,197 reid_baseline.inference INFO: Start inferencing
2019-07-13 07:15:51,814 reid_baseline.inference INFO: Validation Results
2019-07-13 07:15:51,818 reid_baseline.inference INFO: mAP: 66.0%
2019-07-13 07:15:51,819 reid_baseline.inference INFO: CMC curve, Rank-1 :84.5%
2019-07-13 07:15:51,819 reid_baseline.inference INFO: CMC curve, Rank-5 :93.5%
2019-07-13 07:15:51,820 reid_baseline.inference INFO: CMC curve, Rank-10 :95.6%
These are quite far from the results on the project page. I didn't change any parameters in the config files; am I overlooking something?

ablation study on stronger baseline results

Hi,
This is an amazing codebase for reid. I've been following the strong baseline results for a long time, and I saw the stronger baseline results here. Do you have an ablation study on this stronger baseline? I am curious which component has the major impact.

Thanks ^ ^

Hello, why is the gallery 15913?

Hello, thanks for your work. I would like to know: the gallery contains 19732 pictures in the Market-1501 dataset, so why is it 15913 in your code?

unable to reproduce the result of SBS(R50)

Hi, @L1aoXingyu ,
First, I really appreciate your excellent reid SOTA work ^_^. I am very interested in this project.
I have tried the two SBS(R50) models on the market1501 dataset, but I do not get the result reported in the model zoo, which lists the following accuracy. I don't know why. Could you please help me? Thanks in advance!


Method | Pretrained | Rank@1 | mAP | mINP
SBS(R50) | ImageNet | 95.4% | 88.2% | 64.8%


I will post my final config.yaml in the following.

My environment:
Python 3.6,
2 titan XP GPUs,
cuda 9.0,
pytorch 1.5,
torchvision 0.6.0

missing adabound.py in "/solver/"

Hi Xingyu, thanks for your nice update! It seems a solver file is missing. I have seen the code in /solver/__init__.py, which uses "from .adabound import *", but I can't find adabound.py.

Looking forward to your reply. Thank you.

Support of Video ReID datasets

Hi y'all,

I am benchmarking different ReID algorithms for my thesis on my university's machine. I have already trained most of them on CUHK and market1501, and I wonder whether I can use the proposed training algorithm on MARS (the dataset, not the planet :p) as well. I am a novice AI developer, so I'm lacking the experience to make my own assessment.

I'd be delighted if someone finds the time to share their insights on feasibility and the necessary steps.

Kind Regards

Multi dataset

Hi, in file data/build.py, lines 20-24, is there any difference between the "if" and "else" branches?
Thanks.

Questions about the result of the version committed on Sep 20th, 2019

Description

Here is my configuration:

2019-09-24 23:04:53,360 reid_baseline.train INFO: Using 1 GPUs.                                                                                                                                                     
2019-09-24 23:04:53,360 reid_baseline.train INFO: Namespace(config_file='configs/softmax_triplet.yml', opts=['DATASETS.NAMES', '("market1501",)', 'DATASETS.TEST_NAMES', 'market1501', 'SOLVER.IMS_PER_BATCH', '64',
 'MODEL.WITH_IBN', 'True', 'MODEL.BACKBONE', 'resnet50', 'MODEL.PRETRAIN_PATH', '/home/lyh/.cache/torch/checkpoints/resnet50_ibn_a.pth.tar', 'MODEL.VERSION', 'resnet50_ibn_a'])                                    
2019-09-24 23:04:53,360 reid_baseline.train INFO: Loaded configuration file configs/softmax_triplet.yml                                                                                                             
2019-09-24 23:04:53,360 reid_baseline.train INFO: Running with config:                                                                                                                                              
DATALOADER:                                                                                                                                                                                                         
  NUM_INSTANCE: 4                                                                                                                                                                                                   
  NUM_WORKERS: 8                                                                                                                                                                                                    
  SAMPLER: triplet                                                                                                                                                                                                  
DATASETS:                                                                                                                                                                                                           
  NAMES: ('market1501',)                                                                                                                                                                                            
  TEST_NAMES: market1501                                                                                                                                                                                            
INPUT:                                                                                                                                                                                                              
  DO_FLIP: True                                                                                                                                                                                                     
  DO_LIGHTING: False                                                                                                                                                                                                
  DO_PAD: True                                                                                                                                                                                                      
  DO_RE: True                                                                                                                                                                                                       
  FLIP_PROB: 0.5                                                                                                                                                                                                    
  MAX_LIGHTING: 0.2                                                                                                                                                                                                 
  PADDING: 10                                                                                                                                                                                                       
  PADDING_MODE: constant                                                                                                                                                                                            
  PIXEL_MEAN: [0.485, 0.456, 0.406]                                                                                                                                                                                 
  PIXEL_STD: [0.229, 0.224, 0.225]                                                                                                                                                                                  
  P_LIGHTING: 0.75                                                                                                                                                                                                  
  RE_PROB: 0.5                                                                                                                                                                                                      
  SIZE_TEST: [256, 128]                                                                                                                                                                                             
  SIZE_TRAIN: [256, 128]                                                                                                                                                                                            
MODEL:                                                                                                                                                                                                              
  BACKBONE: resnet50                                                                                                                                                                                                
  CHECKPOINT:                                                                                                                                                                                                       
  DIST_BACKEND: dp                                                                                                                                                                                                  
  GCB:                                                                                                                                                                                                              
    ratio: 0.0625                                                                                                                                                                                                   
  LAST_STRIDE: 1
  NAME: baseline                                                                                                                                                                                                    
  PRETRAIN: True                                                                                                                                                                                                    
  PRETRAIN_PATH: /home/lyh/.cache/torch/checkpoints/resnet50_ibn_a.pth.tar                                                                                                                                          
  STAGE_WITH_GCB: (False, False, False, False)                                                                                                                                                                      
  VERSION: resnet50_ibn_a                                                                                                                                                                                           
  WITH_IBN: True
OUTPUT_DIR: logs/
SOLVER:
  BASE_LR: 0.00035
  BIAS_LR_FACTOR: 1
  DIST: False
  EVAL_PERIOD: 30
  GAMMA: 0.1
  IMS_PER_BATCH: 64
  LOG_INTERVAL: 30
  LOSSTYPE: ('softmax', 'triplet')
  MARGIN: 0.3
  MAX_EPOCHS: 120
  MOMENTUM: 0.9
  OPT: adam
  STEPS: (40, 90)
  WARMUP_FACTOR: 0.01
  WARMUP_ITERS: 10
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0005
  WEIGHT_DECAY_BIAS: 0.0005
TEST:
  IMS_PER_BATCH: 512
  NORM: True
  WEIGHT: path

I ran it twice; however, both results are disappointing.

First one:
reid_baseline.train INFO: mAP: 79.3%
reid_baseline.train INFO: CMC curve, Rank-1  :92.2%
reid_baseline.train INFO: CMC curve, Rank-5  :97.2%
reid_baseline.train INFO: CMC curve, Rank-10 :98.2%

Second one:
reid_baseline.train INFO: mAP: 78.9%
reid_baseline.train INFO: CMC curve, Rank-1  :91.7%
reid_baseline.train INFO: CMC curve, Rank-5  :96.9%
reid_baseline.train INFO: CMC curve, Rank-10 :98.1%

The script I executed:

GPUS='0,1'

CUDA_VISIBLE_DEVICES=$GPUS python tools/train.py -cfg='configs/softmax_triplet.yml' \
DATASETS.NAMES '("market1501",)'  \
DATASETS.TEST_NAMES 'market1501' \
SOLVER.IMS_PER_BATCH '64' \
MODEL.WITH_IBN 'True' \
MODEL.BACKBONE 'resnet50' \
MODEL.PRETRAIN_PATH '/home/lyh/.cache/torch/checkpoints/resnet50_ibn_a.pth.tar' \
MODEL.VERSION 'resnet50_ibn_a' \

What's more, it seems that in this version I cannot directly enable label smoothing and GCNet just by editing the bash scripts?

Questions

  1. I changed no code, so was there anything wrong with my script?
  2. If there are no fatal errors in my script, could you give me some suggestions to make the result as good as the one you report? 92% (79%) Rank-1 (mAP) on Market1501 is much lower than the expected 94% (81%). It would be great if you could share your working script~
  3. As I mentioned above, there seems to be no direct way to use label_smooth and gcnet, which are needed to get the best result (95% on Market1501). Do you have any ideas on how to reach it with the least modification?
