[New 2024] We have extended this work into a journal version (submitted to PAMI) in the following aspects.
Unveiling the Power of Self-supervision for Multi-view Multi-human Association and Tracking
- First, we add a new spatial-temporal assignment matrix learning module, which shares the self-consistency rationale with the appearance feature learning module (from the previous conference paper); together they form a fully self-supervised end-to-end framework (see the sketch after this list).
- Second, we introduce a new pseudo-label generation strategy that uses dummy nodes to handle more general MvMHAT cases.
- Third, we include a new dataset MMP-MvMHAT and significantly extend the experimental comparisons and analyses.
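For intuition only, the self-consistency rationale can be sketched as a cycle-consistency constraint on soft assignment matrices computed from appearance embeddings. This is a minimal illustration, not the released implementation; all function and variable names below are assumptions.

```python
# A minimal sketch of a cycle-consistency constraint on soft assignment
# matrices, assuming L2-normalized appearance embeddings; the names and the
# temperature value are illustrative, not the repository's API.
import torch
import torch.nn.functional as F

def soft_assignment(feat_a, feat_b, tau=0.1):
    # feat_a: (Na, D), feat_b: (Nb, D) L2-normalized embeddings.
    sim = feat_a @ feat_b.t() / tau        # pairwise cosine similarities
    return F.softmax(sim, dim=1)           # row-wise soft assignment A -> B

def cycle_consistency_loss(feat_a, feat_b):
    # Self-supervision: matching A -> B and then B -> A should return
    # (approximately) the identity mapping over the people in view A.
    s_ab = soft_assignment(feat_a, feat_b)
    s_ba = soft_assignment(feat_b, feat_a)
    cycle = s_ab @ s_ba                    # (Na, Na)
    target = torch.eye(feat_a.size(0), device=feat_a.device)
    return F.mse_loss(cycle, target)
```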
Self-supervised Multi-view Multi-Human Association and Tracking (ACM MM 2021),
Yiyang Gan, Ruize Han†, Liqiang Yin, Wei Feng, Song Wang†
- A self-supervised learning framework for MvMHAT.
- A new benchmark for training and testing MvMHAT.
![example](https://github.com/RuizeHan/MvMHAT/raw/main/readme/mvmhat.png)
![example](https://github.com/RuizeHan/MvMHAT/raw/main/readme/2_00-min.png)
Part 1 (from Self-collected)
Link: https://pan.baidu.com/s/1gsYTHffmfRq84Hn-8XtzDQ
Password:2cfh
Part 2 (from Campus)
Link: https://pan.baidu.com/s/1Ts6xnESH-9UV8goiTrSuwQ
Password: 8sg9
Part 3 (from EPFL)
Link: https://pan.baidu.com/s/1G84npt61rYDUEPqnaHJUlg
Password: jjaw
Complete Dataset
Link: https://tjueducn-my.sharepoint.com/:f:/g/personal/han_ruize_tju_edu_cn/EuYKZsvYBvFBvewQPdjvRIoB20iQfMNr_c7_fMDXFRZ7uw?e=19rwJF
Password: MvMHAT
We provide the evaluation code and the raw results of the proposed method in `Eval_MvMHAT_public.zip`.
The code was tested on Ubuntu 16.04 with Anaconda Python 3.6 and PyTorch v1.7.1. NVIDIA GPUs are needed for both training and testing. After installing Anaconda:
- [Optional but recommended] create a new conda environment:
conda create -n MVMHAT python=3.6
And activate the environment:
conda activate MVMHAT
- Install PyTorch:
conda install pytorch=1.7.1 torchvision -c pytorch
- Clone the repository:
MVMHAT_ROOT=/path/to/clone/MVMHAT
git clone https://github.com/realgump/MvMHAT.git $MVMHAT_ROOT
- Install the requirements:
pip install -r requirements.txt
- Download the pretrained ResNet-50 model to promote convergence (an optional sanity check follows these steps):
cd $MVMHAT_ROOT/models
wget https://download.pytorch.org/models/resnet50-19c8e357.pth -O pretrained.pth
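Optionally, you can verify the environment and the downloaded weights before training. This is a minimal check of our own, not part of the repository; it assumes the checkpoint was saved as `models/pretrained.pth` as in the step above.

```python
# Optional environment check (not part of the repo): confirms the PyTorch
# version, CUDA availability, and that the downloaded checkpoint loads into
# a torchvision ResNet-50. Assumes models/pretrained.pth exists.
import torch
import torchvision

print(torch.__version__, torch.cuda.is_available())

model = torchvision.models.resnet50()
state_dict = torch.load('models/pretrained.pth', map_location='cpu')
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing keys:', len(missing), '| unexpected keys:', len(unexpected))
```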
[Note] The public code of the conference paper (ACM MM 21) can be found at https://github.com/realgump/MvMHAT.
If you find this project useful for your research, please use the following BibTeX entries.
@inproceedings{gan2021mvmhat,
title={Self-supervised Multi-view Multi-Human Association and Tracking},
author={Gan, Yiyang and Han, Ruize and Yin, Liqiang and Feng, Wei and Wang, Song},
booktitle={ACM MM},
year={2021}
}
@article{MvMHAT++,
title={Unveiling the Power of Self-supervision for Multi-view Multi-human Association and Tracking},
author={Feng, Wei and Wang, Feifan and Han, Ruize and Qian, Zekun and Wang, Song},
journal={arXiv preprint},
year={2023}
}
Portions of the code are borrowed from Deep SORT; thanks to its authors for their great work.
More information is coming soon ...
Contact: [email protected] (Ruize Han), [email protected] (Yiyang Gan). Questions and discussions are welcome!