
ruizehan / par


Panoramic Human Activity Recognition, in ECCV 2022.

Python 78.60% Shell 0.41% Jupyter Notebook 0.97% HTML 20.02%
action eccv2022 group-activity-recognition interaction video-analysis

par's Introduction

PAR

Panoramic Human Activity Recognition: a new problem that integrates the classical human activity recognition tasks, i.e., individual (instance) action recognition, social group activity recognition, and global activity recognition.
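
As a rough, hedged illustration only (these container names and fields are assumptions, not the repository's API), the three levels of output in PAR can be pictured as follows:

from dataclasses import dataclass
from typing import List

# Hypothetical containers illustrating the three PAR sub-tasks; the released
# code defines its own data structures, so treat this only as a sketch.

@dataclass
class PersonPrediction:
    person_id: int
    action_ids: List[int]           # individual (instance) action recognition

@dataclass
class GroupPrediction:
    member_ids: List[int]           # which persons form this social group
    activity_ids: List[int]         # social group activity recognition

@dataclass
class FramePrediction:
    persons: List[PersonPrediction]
    groups: List[GroupPrediction]
    global_activity_ids: List[int]  # global (scene-level) activity recognition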


We have released the dataset and code to the public.

Dataset Link: https://pan.baidu.com/s/1K8RDNteaphYJY8YEAg5fyA Password: PHAR

[2022.11] We updated the code in PAR-main.zip.

[2023.04] We have updated the source code. We have also provided the individual action, group activity, and global activity categories with the corresponding IDs.

[2023.04] About the group activity labels: the social group activity label vector has length 32. This is because, when handling social activity, a group containing only one person (i.e., a person who does not belong to any social group) is also considered; in that case, the group activity is assigned the individual action label (27 categories). We may update this annotation in a later study.
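
For illustration, here is a minimal sketch (not taken from the released code) of how such a 32-dimensional group activity label vector could be decoded, assuming the first 27 indices reuse the individual action categories (used for single-person groups) and the last 5 indices are the social-activity-specific categories listed in group.pbtxt; the exact index order should be checked against the released annotation files.

# Assumed layout of the 32-dim social group activity label vector:
#   indices 0..26  -> the 27 individual action categories
#                     (used when a "group" contains a single person)
#   indices 27..31 -> the 5 social-activity-specific categories
NUM_INDIVIDUAL_ACTIONS = 27
SOCIAL_ACTIVITY_NAMES = [  # names as listed in group.pbtxt; order assumed
    "chatting",
    "working together",
    "join/leave/expansion/narrow",
    "human object interaction",
    "complicated",
]

def describe_group_label(label_vec):
    """Return the names of the active categories in one 32-dim label vector."""
    assert len(label_vec) == NUM_INDIVIDUAL_ACTIONS + len(SOCIAL_ACTIVITY_NAMES)
    names = []
    for idx, value in enumerate(label_vec):
        if value <= 0:
            continue
        if idx < NUM_INDIVIDUAL_ACTIONS:
            # Single-person "group": the slot reuses the individual action ID.
            names.append(f"individual action #{idx}")
        else:
            names.append(SOCIAL_ACTIVITY_NAMES[idx - NUM_INDIVIDUAL_ACTIONS])
    return names

# Example: a group labelled as "chatting" (index 27 in this assumed layout).
example = [0] * 32
example[27] = 1
print(describe_group_label(example))  # ['chatting']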

[2023.09] We uploaded the evaluation code for group detection and updated the code.

[2023.10] We uploaded the base model of stage I to the cloud storage. Put this file into the path ./data.

Link: https://pan.baidu.com/s/1eW9uj7wO8vaFgWSoRD-UeA Password: PHAR
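
As a hedged sketch (the file name and checkpoint format below are assumptions, not part of the release), loading the Stage 1 base model after placing it in ./data could look like this:

import os
import torch

# Hypothetical file name for the downloaded checkpoint; rename to match the
# actual file obtained from the cloud link above.
STAGE1_WEIGHTS = os.path.join("./data", "stage1_base_model.pth")

if not os.path.isfile(STAGE1_WEIGHTS):
    raise FileNotFoundError(
        f"{STAGE1_WEIGHTS} not found; download the Stage 1 base model from the "
        "link above and place it in ./data."
    )

# Load on CPU first; the training script can move tensors to GPU as needed.
checkpoint = torch.load(STAGE1_WEIGHTS, map_location="cpu")
print(type(checkpoint))  # typically a dict of weights or a full checkpoint dict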

[2023.12] We have updated the source code in the folder /PanoAct_source-code-12_23.

[2024.04] Due to a GitHub repository transfer, this config file was previously missing. We have fixed the code.

@inproceedings{han2022panoramic,
  title={Panoramic Human Activity Recognition},
  author={Han, Ruize and Yan, Haomin and Li, Jiacheng and Wang, Songmiao and Feng, Wei and Wang, Song},
  booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part IV},
  pages={244--261},
  year={2022},
  organization={Springer}
}

Contact [email protected] and [email protected] for details about the code or data.

par's People

Contributors

ruizehan


par's Issues

no stage1 weight file yet

> Yes. We updated it.

Thank you for the reply :)
But I still can't find the weight files in the directory ./PanoAct_source-code/data.
I only see the Excel and txt files.


Originally posted by @suminlee94 in #8 (comment)

difference between training stage 1 and stage 2

Hi.
I'd like to ask about training stage 1 and stage 2.
It seems that you used different annotation files for stage 1 and stage 2,
and the numbers of action and activity classes are also different.
I'm pretty sure that you used the whole JRDB-PAR dataset for stage 2.
What about stage 1? Is the dataset used for stage 1 just a subset of JRDB-PAR?
Can you explain stage 1 in more detail?

Looking for config.yaml

I wonder if you could provide the config.yaml. I'm really interested in your work and I want to run it!

Label description

Hello author, congratulations on your paper being accepted to ECCV.
I would like to continue working on your dataset. Could you provide an explanation of the labels in your cloud storage?
Thank you.

Do you have the annotated_excel file?

Hello author, I find your article very insightful and would like to study it. However, when debugging the code, I could not find the required Excel annotation file. Has it not been released? Looking forward to your reply. Thank you.

stage1 weight file

In the guideline, it says "Stage1 model weights file and annotation files are already in ./PanoAct_source-code/data".
But I can't find the pretrained model of stage 1 in ./PanoAct_source-code/data.
Only Excel and txt files exist.
Can you check whether it was properly uploaded?

pre-trained model of stage 1

Hello author,

Thank you for providing the source code. I have a few questions:

Is the pre-trained model required in the first stage? Which pre-trained model did you use?
Where should the pre-trained model be placed for use during the first stage?

Do you have a 'test_net'? Could you please provide it?
Thank you.

Missing Social activity classes

Thank you for sharing your codes and annotations.
I found that the social activity annotation labels are quite different from the labels reported in the paper.
There are 11 social activity classes in Fig. 3 of the paper.
But in the 'group.pbtxt' file, the first 27 classes are the same as the individual action classes,
and there are only 5 new social activity classes: 'chatting', 'working together', 'join/leave/expansion/narrow', 'human object interaction', and 'complicated'.
Most of the classes reported in the paper are missing.

Considering single-person groups, the number of social activity classes should be 27 (individual action) + 11 (social activity) = 38.
But there are only 32.

Am I missing something?
