Timestamp-Supervised Action Segmentation from the Perspective of Clustering (TSASPC)

This repository provides a PyTorch implementation of the paper Timestamp-Supervised Action Segmentation from the Perspective of Clustering (IJCAI 2023): https://www.ijcai.org/proceedings/2023/77

Environment

Our environment:

Ubuntu 16.04.7 LTS
CUDA Version: 11.1

Based on anaconda or miniconda, you can install the required packages as follows:

conda create --name timestamp python=3.9
conda activate timestamp
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install matplotlib
pip install tensorboard
pip install xlsxwriter
pip install scikit-learn
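
To verify that the CUDA build of PyTorch was installed correctly, you can run a quick sanity check (not part of the original instructions):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# on a machine with a working CUDA 11.1 driver this should print something like: 1.10.0+cu111 11.1 True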

Prepare Data

  • Download the three datasets (50Salads, GTEA, Breakfast), which contain the features and the ground-truth labels (~30 GB).
  • Extract the compressed file to the data/ folder.
  • The three .npy files in the data/ folder are the timestamp annotations provided by Li et al. (a quick way to inspect them is shown below).
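
If you want to peek at one of the timestamp-annotation files, a minimal inspection snippet looks like the following (the file name is only a placeholder; use whichever .npy files actually ship in data/):

import numpy as np

# Placeholder file name; replace with one of the three .npy files in data/.
ann = np.load("data/annotation.npy", allow_pickle=True)
print(type(ann), ann.dtype, ann.shape)

# If the array wraps a single Python object (e.g. a dict keyed by video name), unwrap it:
if ann.dtype == object and ann.shape == ():
    ann = ann.item()
    print(type(ann))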

Pseudo-Label Ensembling (PLE)

Before training, we regard PLE as a pre-processing step, since it relies only on the visual features of frames. You can run the following commands to generate pseudo-label sequences with the PLE algorithm for all videos in the 50salads dataset:

python generate_pseudo.py --dataset 50salads --metric euclidean --feature 1024   # RGB features
python generate_pseudo.py --dataset 50salads --metric euclidean --feature 2048   # optical flow features
python intersection_pseudo.py --dataset 50salads

Afterwards, you can find the generated pseudo-label sequences in the folder data/I3D_merge/50salads/. The console will also print two evaluation metrics for the pseudo-label sequences: the labeling rate and the accuracy of the pseudo-labels.

label rate: 0.5117809793880469
label acc: 0.9549147886799857
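
For reference, these two numbers are typically computed only over frames that actually received a pseudo-label. A minimal sketch, assuming a hypothetical marker value of -100 for still-unlabeled frames (not necessarily the repo's convention):

import numpy as np

def pseudo_label_stats(pseudo, gt, unlabeled=-100):
    # -100 is a hypothetical "no pseudo-label yet" marker, not necessarily the repo's value.
    pseudo, gt = np.asarray(pseudo), np.asarray(gt)
    labeled = pseudo != unlabeled
    label_rate = labeled.mean()                          # fraction of frames with a pseudo-label
    label_acc = (pseudo[labeled] == gt[labeled]).mean()  # accuracy over labeled frames only
    return label_rate, label_acc

print(pseudo_label_stats([0, 0, -100, 1, -100, 1], [0, 0, 0, 1, 1, 2]))
# -> (0.666..., 1.0)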

You can also use the above commands (changing the --dataset argument) to generate pseudo-label sequences for the other two datasets.

Iterative Clustering (IC)

After PLE, you can train the segmentation model with the IC algorithm.

python main.py --action=train --dataset=DS --split=SP

where DS is breakfast, 50salads, or gtea, and SP is the split number: 1-5 for 50salads and 1-4 for the other two datasets.

  • The evaluation output is saved in the result/ folder as an Excel file.
  • The models/ folder stores the trained models, and the results/ folder stores the predicted action labels for each video in the test set.

Here is an example:

python main.py --action=train --dataset=50salads --split=2
# F1@0.10: 77.8032
# F1@0.25: 75.0572
# F1@0.50: 64.0732
# Edit: 68.2274
# Acc: 79.3653
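
The F1@0.10/0.25/0.50 numbers above are the standard segmental F1 scores (F1 at segment-IoU thresholds of 0.10, 0.25, and 0.50). A minimal sketch of how they are typically computed, not necessarily this repo's exact evaluation code:

def get_segments(labels):
    # Turn a frame-wise label sequence into (label, start, end) segments; end is exclusive.
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start, i))
            start = i
    return segments

def f1_at_k(pred, gt, threshold=0.10):
    pred_segs, gt_segs = get_segments(pred), get_segments(gt)
    used = [False] * len(gt_segs)
    tp = 0
    for p_lab, p_s, p_e in pred_segs:
        best_iou, best_j = 0.0, -1
        for j, (g_lab, g_s, g_e) in enumerate(gt_segs):
            if g_lab != p_lab or used[j]:
                continue
            inter = max(0, min(p_e, g_e) - max(p_s, g_s))
            union = (p_e - p_s) + (g_e - g_s) - inter
            if inter / union > best_iou:
                best_iou, best_j = inter / union, j
        if best_iou >= threshold:          # matched an unused GT segment of the same class
            tp += 1
            used[best_j] = True
    fp = len(pred_segs) - tp
    fn = len(gt_segs) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f1_at_k([0, 0, 1, 1, 1, 2], [0, 0, 0, 1, 1, 2], threshold=0.5))  # -> 1.0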

Please note that we follow the evaluation protocol of MS-TCN++: we select the epoch that achieves the best average result over all splits and report that performance.
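
Since the reported number is an average over splits, you typically train every split of a dataset, e.g. with a plain shell loop around the command above:

for SP in 1 2 3 4 5; do
    python main.py --action=train --dataset=50salads --split=$SP
done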

If you get the error AttributeError: module 'distutils' has no attribute 'version', you can install a lower version of setuptools:

pip uninstall setuptools
pip install setuptools==59.5.0

Evaluation

Normally, prediction and evaluation are performed right after training, so you do not need to run this step separately. If you want to evaluate a saved model again, change the time_data in main.py and run:

python main.py --action=predict --dataset=DS --split=SP

Acknowledgment

The model used in this paper is a refined MS-TCN model; please refer to the paper MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation. We adapted the PyTorch implementation of Li et al. Thanks to the original authors for their work!
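
For reference, the core building block of MS-TCN is a stack of dilated residual 1-D convolutions over the frame axis. A minimal PyTorch sketch of one such layer, following the MS-TCN paper rather than this repository's exact code:

import torch.nn as nn
import torch.nn.functional as F

class DilatedResidualLayer(nn.Module):
    def __init__(self, dilation, in_channels, out_channels):
        super().__init__()
        # Dilated temporal convolution; padding == dilation keeps the number of frames unchanged.
        self.conv_dilated = nn.Conv1d(in_channels, out_channels, kernel_size=3,
                                      padding=dilation, dilation=dilation)
        self.conv_1x1 = nn.Conv1d(out_channels, out_channels, kernel_size=1)
        self.dropout = nn.Dropout()

    def forward(self, x):  # x: (batch, channels, frames); in_channels must equal out_channels
        out = F.relu(self.conv_dilated(x))
        out = self.conv_1x1(out)
        out = self.dropout(out)
        return x + out     # residual connection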

Citation

@inproceedings{du2023timestamp,
  title={Timestamp-Supervised Action Segmentation from the Perspective of Clustering},
  author={Du, Dazhao and Li, Enhan and Si, Lingyu and Xu, Fanjiang and Sun, Fuchun},
  booktitle={IJCAI},
  year={2023}
}

tsaspc's Issues

question about deepcopy in nn.ModuleList

Hello, I would like to know why deepcopy is used here. Thank you.

    self.single_stages = nn.ModuleList([copy.deepcopy(SingleStageModel(num_layers, num_f_maps, num_classes, num_classes, 3))
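
A likely explanation (general PyTorch behavior, not an official answer from the authors): without copy.deepcopy, every entry of the ModuleList would point to the same module instance and therefore share parameters, whereas deepcopy gives each stage its own independent weights. A quick check:

import copy
import torch.nn as nn

layer = nn.Linear(4, 4)
shared = nn.ModuleList([layer for _ in range(3)])                      # one instance, referenced 3 times
independent = nn.ModuleList([copy.deepcopy(layer) for _ in range(3)])  # three separate copies

print(shared[0].weight is shared[1].weight)            # True  -> stages would share weights
print(independent[0].weight is independent[1].weight)  # False -> each stage has its own weights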
