Comments (14)
I see, got your point.
I will try to implement this over the weekend.
from behave-dataset.
Hi,
I have added support for submitting joints and vertices, see this commit: be765d4
However, CodaLab can only accept files smaller than 300MB, which means you have to save the vertices as np.float16 data to reduce the file size. Fortunately, the numeric difference between float64 and float16 is smaller than 0.02mm, which is negligible.
Let me know if you have further questions.
Have fun!
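As an illustration of the float16 trick, here is a minimal NumPy sketch; the array names and file layout are hypothetical, not the challenge's required submission format:

```python
import io

import numpy as np

# Hypothetical per-frame predictions; real submissions stack many frames.
joints = np.random.rand(24, 3)       # SMPL joints
vertices = np.random.rand(6890, 3)   # SMPL vertices

# Cast float64 -> float16 before saving to cut the raw size by 4x,
# keeping the archive under CodaLab's 300MB limit.
buf = io.BytesIO()
np.savez_compressed(
    buf,
    joints=joints.astype(np.float16),
    vertices=vertices.astype(np.float16),
)
print(f"compressed submission size: {buf.getbuffer().nbytes} bytes")

# Error introduced by the cast: at most half a float16 spacing at the
# stored magnitude, i.e. sub-millimetre for metre-scale coordinates.
err = np.abs(vertices - vertices.astype(np.float16).astype(np.float64)).max()
print(f"max float16 rounding error: {err:.2e}")
```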
Good catch, I updated the leaderboard with correct ranking.
You can find the intrinsics here: https://github.com/xiexh20/behave-dataset/blob/main/challenges/lib/config.py#L7
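Intrinsics of that form plug into the standard pinhole projection. A sketch with placeholder numbers (NOT the actual BEHAVE calibration; the real values are in the linked config.py):

```python
import numpy as np

# Placeholder focal lengths and principal point, for illustration only.
fx, fy, cx, cy = 1000.0, 1000.0, 960.0, 540.0

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Project 3D points (camera coordinates, metres) to pixel coordinates.
points = np.array([[0.1, -0.2, 2.0],
                   [0.0,  0.0, 1.5]])
proj = (K @ points.T).T
uv = proj[:, :2] / proj[:, 2:3]   # perspective divide
print(uv)  # second point sits on the optical axis -> principal point
```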
Hi,
The SYNZ test images are synthetic renderings of some interactions from the BEHAVE test set. They are used only to understand the difference in model performance between synthetic and real data.
They will NOT be used to determine the final ranking. All these images are randomized, so it is not possible to identify the source from the released information.
Hi,
Thank you for your interest in our challenge.
I checked the BEDLAM leaderboard. They support submitting joints and vertices instead of pose parameters. Is that what you want?
Thanks for your reply.
Yes, I would like to submit joints and vertices. Is it possible for you to adapt the leaderboard website, please?
Great, looking forward to this implementation. Thanks.
Wow, nice, thanks @xiexh20 for your commit. One remark: the shapes for joints and vertices should be respectively (24, 3) and (6890, 3).
oh I see, good catch. I have updated the doc, thanks!
Hi @xiexh20 , I just made a first submission to your challenge for the human mesh recovery task.
I think that the v2v_behave metric is not ranked correctly: at the moment higher is better, while it should be lower is better, if I am not mistaken?
And for AUC_* we only see one digit after the decimal point; would it be possible to show at least two digits?
One more question: would it be possible to get the camera intrinsics for the test images?
Thanks for organizing this competition,
Hi,
Yes, for v2v lower is better; that is why you are ranked 2nd in v2v_behave.
I have changed the leaderboard to show AUC at 2 digits. Let me know if you have further questions.
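For clarity on what "lower is better" measures, here is a sketch of v2v under its common definition (mean per-vertex Euclidean distance); the challenge's own evaluation code may differ in details such as units:

```python
import numpy as np

def v2v(pred, gt):
    """Mean vertex-to-vertex Euclidean error, e.g. over (6890, 3) SMPL
    vertices; the same formula over (24, 3) joints gives MPJPE."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

gt = np.zeros((6890, 3))
pred = gt + np.array([0.03, 0.0, 0.04])  # constant 5 cm offset
print(v2v(pred, gt))  # ~0.05: a smaller value means a better fit
```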
Hi, thanks for your reply. Sorry, my bad.
However, there is an error for MPJPE_PA_icap: it should be lower is better, but 48.3 is ranked 2nd vs 54.9.
I go back to my "other question": do you plan to release the camera intrinsics (focal length, principal point) for the training and test sets?
I cannot find them for either BEHAVE or InterCap.
Thanks for your help,
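On the "PA" in MPJPE_PA: it denotes Procrustes alignment, i.e. removing global scale, rotation, and translation before measuring joint error. A sketch of the textbook closed-form solution (Umeyama/Kabsch); not necessarily the challenge's exact evaluation code:

```python
import numpy as np

def procrustes_align(pred, gt):
    """Similarity-align pred (N, 3) to gt (N, 3): the 'PA' step of
    MPJPE_PA. Classic closed-form solution via SVD."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    X, Y = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(X.T @ Y)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # optimal rotation, no reflection
    s = (S * np.diag(D)).sum() / (X ** 2).sum()  # optimal scale
    return s * X @ R.T + mu_g

# A prediction differing from ground truth only by a similarity
# transform has (near-)zero error after alignment.
rng = np.random.default_rng(0)
gt = rng.standard_normal((24, 3))
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
pred = 2.0 * gt @ Rot.T + np.array([0.1, -0.2, 0.3])
aligned = procrustes_align(pred, gt)
mpjpe_pa = np.linalg.norm(aligned - gt, axis=-1).mean()
print(f"MPJPE_PA after alignment: {mpjpe_pa:.2e}")
```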
Nice! Thanks a lot @xiexh20
Hi @xiexh20,
In the leaderboard you are reporting metrics from 3 different splits (behave, icap, and synz); however, on the CodaLab website only BEHAVE and INTERCAP are introduced. Where do the SYNZ dataset/samples come from? And how can we know which images of the test sets belong to SYNZ? Thanks for your answer.