
online-action-detection's Introduction

Colar: Effective and Efficient Online Action Detection by Consulting Exemplars

This repository is the official implementation of Colar. In this work, we study online action detection and develop an effective and efficient exemplar-consultation mechanism. The paper is available on arXiv.

Figure: Illustration of the architecture of the proposed Colar.

Requirements

To install requirements:

conda env create -n env_name -f environment.yaml

Before running the code, please activate this conda environment (conda activate env_name).

Data Preparation

a. Download pre-extracted features from baiduyun (code:cola)

Please ensure the data structure is as below:

├── data
│   └── thumos14
│       ├── Exemplar_Kinetics
│       ├── thumos_all_feature_test_Kinetics.pickle
│       ├── thumos_all_feature_val_Kinetics.pickle
│       ├── thumos_test_anno.pickle
│       ├── thumos_val_anno.pickle
│       └── data_info.json
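
To sanity-check the download, here is a minimal Python sketch for inspecting one of the feature pickles. The exact per-video layout (video names mapping to per-stream arrays such as 'rgb' and 'flow' of shape T x 2048) is an assumption based on the issue discussion below, not a documented format.

import pickle

# Minimal sketch: inspect the pre-extracted features.
# The per-video layout ('rgb'/'flow' arrays of shape T x 2048) is an assumption.
with open('data/thumos14/thumos_all_feature_val_Kinetics.pickle', 'rb') as f:
    features = pickle.load(f)

print(f'{len(features)} videos')
video_name, video_feat = next(iter(features.items()))
for stream, arr in video_feat.items():  # e.g. 'rgb', 'flow'
    print(video_name, stream, getattr(arr, 'shape', type(arr)))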

Train

a. Config

Adjust the configurations in the following file according to your machine.

./misc/init.py

b. Train

python main.py

Inference

a. You can download the pre-trained models from baiduyun (code: cola) and put the weight file in the checkpoint folder.

  • The performance of our model is 66.9% mAP.

b. Test

python inference.py

Citation

@inproceedings{yang2022colar,
  title={Colar: Effective and Efficient Online Action Detection by Consulting Exemplars},
  author={Yang, Le and Han, Junwei and Zhang, Dingwen},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

Related Projects

  • BackTAL: Background-Click Supervision for Temporal Action Localization.

Contact

For any discussions, please contact [email protected].


online-action-detection's Issues

How to produce the pre-extracted features?

Dear Author:
I have a question about data preparation: how can I produce the pre-extracted features?
Are they produced by the backbone network? Your paper states, "The feature extractor uses the two-stream network, whose spatial stream adopts ResNet-200 and temporal stream adopts BN-Inception." I then looked into "thumos_all_feature_test_Kinetics.pickle" and found that both the RGB and flow features of every video have a shape of (temporal length) x 2048.
I think the spatial stream produces one feature map, which corresponds to the RGB feature, and the temporal stream produces another, which corresponds to the flow feature. Is this correspondence right?
Looking forward to your reply.

About exemplars of the static branch

Thank you for your excellent work! I would like to ask how we can obtain the exemplars of all categories; this does not seem to be implemented in the code. Thank you!

Baidu Cloud

Excuse me, what is the extraction code of Baidu Cloud?

The question about model precision

I trained your model on my device on the THUMOS dataset without any modification, but the mAP is only 65.39%. Since there is a gap between this result and the reported result (66.91%) of the model you posted, I wonder whether you used some tricks. If not, can the gap be attributed solely to hardware differences?

[Epoch-7] [IDU-kinetics] mAP: 0.6539
BaseballPitch: 0.4485
BasketballDunk: 0.8277
Billiards: 0.2734
CleanAndJerk: 0.7331
CliffDiving: 0.8972
CricketBowling: 0.4617
CricketShot: 0.3120
Diving: 0.8743
FrisbeeCatch: 0.4104
GolfSwing: 0.7853
HammerThrow: 0.8585
HighJump: 0.7666
JavelinThrow: 0.7920
LongJump: 0.8093
PoleVault: 0.9041
Shotput: 0.6848
SoccerPenalty: 0.4923
TennisSwing: 0.6288
ThrowDiscus: 0.6688
VolleyballSpiking: 0.4487
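
For reference, the reported mAP is simply the arithmetic mean of the 20 per-class APs listed above, which can be checked with a few lines of Python:

# Per-class APs reported above; their arithmetic mean reproduces the 0.6539 mAP.
per_class_ap = {
    'BaseballPitch': 0.4485, 'BasketballDunk': 0.8277, 'Billiards': 0.2734,
    'CleanAndJerk': 0.7331, 'CliffDiving': 0.8972, 'CricketBowling': 0.4617,
    'CricketShot': 0.3120, 'Diving': 0.8743, 'FrisbeeCatch': 0.4104,
    'GolfSwing': 0.7853, 'HammerThrow': 0.8585, 'HighJump': 0.7666,
    'JavelinThrow': 0.7920, 'LongJump': 0.8093, 'PoleVault': 0.9041,
    'Shotput': 0.6848, 'SoccerPenalty': 0.4923, 'TennisSwing': 0.6288,
    'ThrowDiscus': 0.6688, 'VolleyballSpiking': 0.4487,
}
mAP = sum(per_class_ap.values()) / len(per_class_ap)
print(f'mAP: {mAP:.4f}')  # 0.6539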

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab repositories and their 1.0/2.0 branches:

Repository          OpenMMLab 1.0 branch    OpenMMLab 2.0 branch
MMEngine            -                       0.x
MMCV                1.x                     2.x
MMDetection         0.x, 1.x, 2.x           3.x
MMAction2           0.x                     1.x
MMClassification    0.x                     1.x
MMSegmentation      0.x                     1.x
MMDetection3D       0.x                     1.x
MMEditing           0.x                     1.x
MMPose              0.x                     1.x
MMDeploy            0.x                     1.x
MMTracking          0.x                     1.x
MMOCR               0.x                     1.x
MMRazor             0.x                     1.x
MMSelfSup           0.x                     1.x
MMRotate            1.x                     1.x
MMYOLO              -                       0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

Some Details of The Paper

Hi,
Thank you for sharing the code. While reading your paper, I ran into a question.
In Table 4, you report that, given a one-minute video, extracting the RGB features takes 2.3 seconds. How is this time measured? Is it the time cost of extracting features with ResNet-200?

features of HDD and TV

Thanks for sharing the code. Would you please provide the features for the two other benchmarks, or give some guidance on how to extract them?

HDD dataset visual features

Hi @VividLe,

I have the Honda driving dataset (HDD), which has images at 3 fps, and I would like to use only the RGB (visual) features of this dataset. How would that be possible?
I couldn't find the optical-flow features and am not sure how to prepare the dataset.
My task is to reconstruct one modality from the other using a shared multimodal representation (sensors and videos/image frames).

Thanks
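
A generic sketch for getting per-frame RGB features from image frames like these, assuming an ImageNet-pretrained ResNet-50 from torchvision (not the ResNet-200/BN-Inception two-stream extractor used in the paper); the frame paths and the 2048-d output are illustrative assumptions:

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Generic sketch, not the paper's extractor: an ImageNet-pretrained ResNet-50
# producing a 2048-d pooled feature per frame.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head, keep pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_rgb_features(frame_paths):
    frames = torch.stack([preprocess(Image.open(p).convert('RGB')) for p in frame_paths])
    return backbone(frames)  # shape: (num_frames, 2048)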

the corresponding paper

Hi, I am looking for the code of the paper "Structured Attention Composition for Temporal Action Localization", and the URL provided on arXiv links to this project. However, there seems to be no relation between this code and that paper.
