
xr-egopose's Introduction

xR-EgoPose

The xR-EgoPose Dataset has been introduced in the paper "xR-EgoPose: Egocentric 3D Human Pose from an HMD Camera" (ICCV 2019, oral). It is a dataset of ~380 thousand photo-realistic egocentric camera images in a variety of indoor and outdoor spaces.


The code contained in this repository is a PyTorch implementation of the data loader with additional evaluation functions for comparison.

Citation

@inproceedings{tome2019xr,
  title={xR-EgoPose: Egocentric 3D Human Pose from an HMD Camera},
  author={Tome, Denis and Peluse, Patrick and Agapito, Lourdes and Badino, Hernan},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={7728--7738},
  year={2019}
}
@ARTICLE{tome2020self,
  author={D. {Tome} and T. {Alldieck} and P. {Peluse} and G. {Pons-Moll} and L. {Agapito} and H. {Badino} and F. {De la Torre}},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera},
  year={2020},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TPAMI.2020.3029700}
}

The license agreement for data usage requires citation of the paper. Please note that citing the dataset URL instead of the publication would not comply with this license agreement.

Download on Mac OS and Linux

Make sure pigz and wget are installed:

# on Mac OS
brew install wget pigz
# on Ubuntu
sudo apt-get install pigz wget

To download and decompress the dataset, use the download.sh script:

./download.sh

which will download and set up the dataset folders for training and testing the model. Make sure to have ~1 TB of free space for storing the data. After that, run demo.py, which shows how to load the data and evaluate the model.

xR-EgoPose Dataset

Character names in the dataset follow the convention gender_id_body-type_height (see the parsing sketch after this list):

  • gender: male/female
  • id: integer
  • body-type: a/f (average/full)
  • height: a/s (average/short)
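
As an illustration only, the fields of a character name can be recovered with a small helper like the one below (parse_character is a hypothetical function, not part of this repository):

# Hypothetical helper, not part of this repository: split a character name
# such as "female_002_f_s" into the fields of the naming convention.
def parse_character(name):
    gender, char_id, body_type, height = name.split('_')
    return {
        'gender': gender,                                    # male / female
        'id': int(char_id),                                  # integer id
        'body_type': {'a': 'average', 'f': 'full'}[body_type],
        'height': {'a': 'average', 's': 'short'}[height],
    }

print(parse_character('female_002_f_s'))
# {'gender': 'female', 'id': 2, 'body_type': 'full', 'height': 'short'}
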
| Train-set | Test-set | Val-set |
|---|---|---|
| female_001_a_a | female_004_a_a | male_008_a_a |
| female_002_a_a | female_008_a_a | |
| female_002_f_s | female_010_a_a | |
| female_003_a_a | female_012_a_a | |
| female_005_a_a | female_012_f_s | |
| female_006_a_a | male_001_a_a | |
| female_007_a_a | male_002_a_a | |
| female_009_a_a | male_004_f_s | |
| female_011_a_a | male_006_a_a | |
| female_014_a_a | male_007_f_s | |
| female_015_a_a | male_010_a_a | |
| male_003_f_s | male_014_f_s | |
| male_004_a_a | | |
| male_005_a_a | | |
| male_006_f_s | | |
| male_007_a_a | | |
| male_008_f_s | | |
| male_009_a_a | | |
| male_010_f_s | | |
| male_011_f_s | | |
| male_014_a_a | | |

Structure

For each set and for each character the directory structure is identical, as follows:

TrainSet
├── female_001_a_a
│   ├── env_001
│   │   └── cam_down
│   │       ├── depth
│   │       ├── json
│   │       ├── objectId
│   │       ├── rgba
│   │       ├── rot
│   │       └── worldp
│   ├── ...
│   └── env_003
└── ...

Frame information is organized in different folders, each containing one file per frame (see the loading sketch after this list):

  • depth: 8-bit png per frame
  • json: json file with camera and pose information
  • objectId: semantic segmentation
  • rgba: 8-bit png per frame
  • rot: json file with joint rotations
  • worldp: world position per pixel
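
As a rough loading sketch (the file names below are assumptions based on examples mentioned in the issues further down this page, and PIL is used only for illustration):

import json
from pathlib import Path
from PIL import Image

# Paths and file names below are assumptions; adjust them to your local copy.
frame_dir = Path('TrainSet/female_001_a_a/env_001/cam_down')

rgb = Image.open(frame_dir / 'rgba' / 'female_001_a_a.rgba.000001.png')
with open(frame_dir / 'json' / 'female_001_a_a_000001.json') as f:
    anno = json.load(f)   # camera and pose information for this frame

print(rgb.size)
print(sorted(anno.keys()))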

Actions

A set of nine broad action categories has been included in the dataset:

  • Gaming
  • Gesticulating
  • Greeting
  • Lower Stretching
  • Patting
  • Reacting
  • Talking
  • Upper Stretching
  • Walking

Each of these categories is a collection of many different specific actions. E.g., Gaming includes Boxing, Shooting Gun, Playing Golf, and Playing Baseball, to cite just a few.

Results

| Action | Martinez [1] | Ours - single branch | Ours - dual branch |
|---|---|---|---|
| Gaming | 109.6 | 138.3 | 56.0 |
| Gesticulating | 105.4 | 108.5 | 50.2 |
| Greeting | 119.3 | 100.3 | 44.6 |
| Lower Stretching | 125.8 | 133.3 | 51.1 |
| Patting | 93.0 | 117.8 | 59.4 |
| Reacting | 119.7 | 175.6 | 60.8 |
| Talking | 111.1 | 93.5 | 43.9 |
| Upper Stretching | 124.5 | 129.0 | 53.9 |
| Walking | 130.5 | 131.9 | 57.7 |
| All (mm) | 122.1 | 130.4 | 58.2 |

[1] Julieta Martinez, Rayat Hossain, Javier Romero, and James J. Little. A simple yet effective baseline for 3d human pose estimation. In Proceedings of the International Conference on Computer Vision (ICCV), 2017.

License

See the LICENSE file for details.

xr-egopose's People

Contributors

denistome, hbadino


xr-egopose's Issues

Question about how to recover 3D point clouds from RGB-D images

Thank you for sharing your great work.

I noticed that your synthetic dataset contains depth images.
I would like to use depth information for my research purpose.

However, the depth images are normalized to [0, 255].
So, it is difficult to reconstruct XYZ information from your RGB-D dataset.

Could you tell me how to recover 3D point clouds from the RGB-D images?
I know the fisheye lens parameters that you mentioned in a previous issue.

Is the fine-grained action label annotated in this dataset?

Thanks for sharing your great work!
I notice that "Each of those action categories is the collection of many different and specific actions.
E.g. Gaming includes Boxing, Shooting Gun, Playing Golf, Playing Baseball just to cite a few."

I ran the demo.py file, and it seems the action label is one of the nine action categories mentioned in the table. May I ask if there is a fine-grained action label for each frame, say, Boxing instead of Gaming?

Is real world test data available?

Hi,

I cannot find the ~10K xR-EgoPose^{R} data in the download script.
May I ask if it is available now? If not, do you have a plan to release it?

Thanks,
Zhe

2d keypoint visibility

Hi, is there any way to know whether, for example, a foot keypoint is occluded by a body part or not? I.e., to get a "visibility" label from the 2D/3D keypoint information? Thanks.

Question about ground truth heatmap & embedding size

Hi. First of all, thank you for sharing your great work.

I have two questions about your proposed method described in paper.

  1. In Eq. (1), L_2D is calculated as the mean squared error between heatmaps. But how can one get the ground-truth heatmaps? As far as I understand, heatmaps are probability-encoded data for joints, and they are usually a byproduct of pose estimation.

I think you may assume, for instance, a normal distribution around the projected ground-truth 3D joints to generate the ground-truth heatmaps. And I guess your approach is similar to what I described, since you mention in the paper "... the 3D lifting module can be trained independently using 3D mocap data and its projected heatmaps." (p. 7732, Sec. 5 Architecture, second paragraph)

I wonder if my guess is right and whether it is okay to use a fixed size (standard deviation) to generate the ground-truth heatmaps.
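
For reference, this is the kind of fixed-sigma Gaussian rendering I have in mind (the heatmap size, sigma, and joint locations below are made-up values, not taken from the paper):

import numpy as np

def gaussian_heatmaps(joints_2d, height=48, width=48, sigma=2.0):
    # One heatmap per joint: a Gaussian centred on the (x, y) location,
    # with coordinates already scaled to the heatmap resolution.
    # All sizes here are assumptions, not values from the paper.
    ys, xs = np.mgrid[0:height, 0:width]
    heatmaps = np.zeros((len(joints_2d), height, width), dtype=np.float32)
    for k, (x, y) in enumerate(joints_2d):
        heatmaps[k] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heatmaps

hm = gaussian_heatmaps(np.array([[24.0, 12.0], [30.5, 40.0]]))  # two dummy joints
print(hm.shape)  # (2, 48, 48)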

  2. Did you conduct a comparative study on the embedding dimension? I think it is quite small (20D).

I wonder if changing the embedding dimension has a serious impact on performance and/or inference speed.

Missing training data.

Hi. Thanks for the code and the dataset.

I was going through the training data, and it seems some files are missing among the rgba and json files. By missing files I mean files for which there is no rgba-json naming correspondence.
The missing files are:
female_003_a_a_002380.json
female_003_a_a.rgba.005000.png
male_003_f_s_000303.json
male_003_f_s.rgba.004981.png
male_006_f_s_000368.json
male_006_f_s.rgba.004981.png
male_008_f_s_000332.json
male_008_f_s.rgba.004981.png
male_010_f_s_000628.json
male_010_f_s.rgba.004981.png
male_011_f_s_000805.json
male_011_f_s.rgba.004981.png

Could you please let me know where I can find them?
Many thanks!

Question on the definitions of rotation matrix in dataset

Thanks for the great work!

After I downloaded the whole dataset, I found that there are two definitions of the rotation matrix for one frame: one in the "rot" directory and the other in the "json" directory. What is the difference between these two definitions?

Also, I found that the angle range of the rotations in the "json" directory goes from about -200 to about +200 degrees. What is the definition of those degrees?

Question on worldp

Thanks for contributing this great work. I'm a bit confused about how to use the worldp as 3D locations for each pixel. Could you explain how this map was computed and what the values at each pixel location represent? Thanks a lot.

Results on Human3.6M

Thanks for sharing your great work!
I notice that your proposed method is also evaluated on the Human3.6M dataset. Do you use all four camera views of each subject to train and test your model, or only the front view?

Question about downloading synthetic dataset

Hi. Thank you for sharing your great work.

I have a question about "download.sh".
I am currently downloading your synthetic dataset, but I got the errors below:

download.sh

cat $s.tar.gz.part?? | unpigz -p 32 | tar -xvC ./

Error

unpigz: skipping: <stdin>: corrupted -- incomplete deflate data
unpigz: abort: internal threads error
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: female_002_a_a/env_003/cam_down/rot: Cannot create symlink to ‘../../env_001/cam_down/rot’: Operation not supported
tar: female_002_a_a/env_002/cam_down/rot: Cannot create symlink to ‘../../env_001/cam_down/rot’: Operation not supported
tar: Error is not recoverable: exiting now

I think some "TrainSet/*/env_00?/cam_down/rot" folders might not have been compressed correctly.

However, only the 2 folders below were correctly extracted:
・TrainSet/female_001_a_a/env_001/cam_down/rot
・TrainSet/female_001_a_a/env_002/cam_down/rot

My "pigz" command version is 2.4.

Could you tell me how to unzip your synthetic dataset?

Understanding of camera parameters

Hi,
I find that you have provided camera parameters in the json file.
The parameters of the camera are fov, trans and rot.

I guess rot contains Euler angles, but what is its axis order?

Another question: how can we transform pts3d_fisheye into pts2d_fisheye? Could you please provide demo code?

Thanks~!
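
In case it helps to make the question concrete, the sketch below uses a generic equidistant fisheye model with made-up intrinsics (f, cx, cy); both the model and the values are only assumptions, not the dataset's actual calibration:

import numpy as np

def project_equidistant_fisheye(pts3d, f=350.0, cx=400.0, cy=400.0):
    # Generic equidistant fisheye model (r = f * theta); the model and the
    # intrinsics are assumptions, not the dataset's real calibration.
    x, y, z = pts3d                                # pts3d: (3, N) camera-frame points
    theta = np.arctan2(np.sqrt(x**2 + y**2), z)    # angle from the optical axis
    phi = np.arctan2(y, x)                         # azimuth around the axis
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)])

pts2d = project_equidistant_fisheye(np.array([[0.1, -0.2], [0.3, 0.0], [1.0, 1.5]]))
print(pts2d.shape)  # (2, 2)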

Question about pts3d_fisheye

I am trying to recover the 3D pose by using "pts3d_fisheye" in "female_001_a_a_000001.json", and this is my code:

import json
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

# load the per-frame annotation and read the fisheye 3D joints
with open('female_001_a_a_000001.json') as f:
    temp_json = json.load(f)

temp_3d_coords = temp_json['pts3d_fisheye']   # 3 x num_joints
x = temp_3d_coords[0]
y = temp_3d_coords[1]
z = temp_3d_coords[2]

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, marker='o', s=15)

# label each joint with its index (new loop variables so x, y, z are not shadowed)
for i, (xi, yi, zi) in enumerate(zip(x, y, z)):
    ax.text(xi, yi, zi, str(i))

ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()

I am wondering why my 3D figure does not look like the reference skeleton figures.

How to understand the joint rotations (Euler angles)

If I understand correctly, the rotation angles are Euler angles normally represented as
Rotation about the x-axis = roll angle = α
Rotation about the y-axis = pitch angle = β
Rotation about the z-axis = yaw angle = γ

"rot": [0.2110171152869712, 0.7265798018474156, 0.03091111571684459]
How do I get the order of alpha, beta, and gamma, i.e., of roll, pitch, and yaw?
Is it safe to assume that the first element of the above array is roll, the second is pitch, and the third is yaw?

Also is there documentation available for the dataset to understand it better?
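
One quick way to experiment with the ordering (assuming the angles are in radians and using scipy purely for illustration; the candidate orders below are guesses, not documented properties of the dataset):

import numpy as np
from scipy.spatial.transform import Rotation

rot = [0.2110171152869712, 0.7265798018474156, 0.03091111571684459]

# Compare rotation matrices under different axis-order assumptions.
# Both the candidate orders and the radians assumption are guesses.
R_xyz = Rotation.from_euler('xyz', rot).as_matrix()
R_zyx = Rotation.from_euler('zyx', rot).as_matrix()
print(np.round(R_xyz, 3))
print(np.round(R_zyx, 3))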
