ruizehan / par
Panoramic Human Activity Recognition, in ECCV 2022.
Thank you for sharing your codes and annotations.
I found that the social activity annotation labels differ considerably from the labels reported in the paper.
Fig. 3 of the paper lists 11 social activity classes.
But in the 'group.pbtxt' files, the first 27 classes are the same as the individual action classes,
and there are only 5 new social activity classes: 'chatting', 'working together', 'join/leave/expansion/narrow', 'human object interaction', and 'complicated'.
Most of the classes reported in the paper are missing.
Considering single-person groups, the number of social activity classes should be 27 (individual action) + 11 (social activity) = 38.
But there are only 32.
Am I missing something?
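As a sanity check on the class counts above, the entries in a pbtxt label map can be counted directly. This is a minimal sketch, assuming 'group.pbtxt' follows the usual protobuf-text label-map layout with one `name:` field per class; the helper function and the sample labels below are hypothetical, not taken from the released annotations:

```python
import re

def count_classes(pbtxt_text: str) -> int:
    """Count label entries in a pbtxt label map by matching `name:` fields."""
    return len(re.findall(r"name\s*:", pbtxt_text))

# Tiny inline sample in the assumed label-map layout (hypothetical labels).
sample = """
item { id: 1  name: 'walking' }
item { id: 2  name: 'chatting' }
"""

print(count_classes(sample))  # prints 2
```

Running the same count over the actual 'group.pbtxt' would confirm whether the file really contains 32 classes rather than the expected total.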
Hello @RuizeHan,
Could you please provide the "config.json" file? I would really appreciate your help!
Thanks!
I wonder if you could provide the config.yaml. I'm really interested in your work and want to run it!
Hi.
I'd like to ask about the training stage 1 and stage 2.
It seems that you used different annotation files for stage 1 and stage 2.
And the numbers of action and activity classes also differ between the two stages.
I'm pretty sure that you used whole jrdb-par dataset for stage 2.
What about stage 1? Is the dataset used for stage 1 just a subset of JRDB-PAR?
Could you explain stage 1 in more detail?
Hi.
It seems that there is no evaluation code for group detection.
In the paper, you measured IoU@0.5, IoU@AUC, and Mat. IoU for group detection.
Could you share the code for those metrics?
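For reference while the official evaluation code is unavailable, here is a minimal sketch of what set-based group-detection metrics could look like. The metric names come from the paper, but without the released code the exact matching protocol is an assumption; the greedy matching, the function names, and the example groups below are all hypothetical:

```python
def group_iou(pred: set, gt: set) -> float:
    """IoU between two groups, each viewed as a set of person IDs."""
    if not pred and not gt:
        return 1.0
    return len(pred & gt) / len(pred | gt)

def precision_at_iou(preds, gts, thr=0.5):
    """Fraction of predicted groups greedily matched to a ground-truth
    group with IoU >= thr (each GT group is matched at most once)."""
    unmatched = list(gts)
    hits = 0
    for p in preds:
        best = max(unmatched, key=lambda g: group_iou(p, g), default=None)
        if best is not None and group_iou(p, best) >= thr:
            hits += 1
            unmatched.remove(best)
    return hits / len(preds) if preds else 0.0

# Hypothetical example: two predicted groups vs. two ground-truth groups.
preds = [{1, 2}, {3, 4, 5}]
gts = [{1, 2}, {3, 4}]
print(precision_at_iou(preds, gts))  # prints 1.0
```

An AUC-style variant could average this precision over a sweep of IoU thresholds, but whether that matches the paper's IoU@AUC definition would need confirmation from the authors.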
Hello author, I find your article very insightful and would like to study it. However, when debugging the code, I could not find the required Excel annotation file. Has it not been released? Looking forward to your reply. Thank you.
Hello author,
Thank you for providing the source code. I have a few questions:
Is a pre-trained model required in the first stage? Which pre-trained model did you use?
Where should the pre-trained model be placed for use during the first stage?
Do you have a 'test_net'? Could you please provide it?
Thank you.
Hello author, congratulations on your paper being accepted to ECCV.
I would like to continue working on your dataset. Could you provide a description of the labels in your cloud drive?
Thanks
In the guideline, it says "Stage1 model weights file and annotation files are already in ./PanoAct_source-code/data".
But I can't find the pretrained model for stage 1 in './PanoAct_source-code/data'.
Only Excel and txt files exist.
Can you check whether it was properly uploaded?
Could you release the weight files used for testing? Thanks!
> Yes. We updated it.
Thank you for reply :)
But I still can't find the weight files in the directory './PanoAct_source-code/data'.
I only see the Excel and txt files.
Originally posted by @suminlee94 in #8 (comment)