
jta-dataset's People

Contributors

bstandaert, fabbrimatteo, fabiolanzi


jta-dataset's Issues

Can I get the bounding box?

I need the bounding box of each person, but there is no bounding box in the annotations.
Should I generate my own bounding boxes using other models, or derive them from the keypoints?

I also have one more question. The README says:

"WARNING: on 21 February 2019 we have changed the annotations, converting the 3D coordinates to the standard camera coordinate system. Here you can download the new annotations separately."

But the link doesn't work. Can I get the new 3D coordinates?

Thanks
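One way to approach the question above: a bounding box can be derived directly from the 2D joint annotations by taking the min/max over a person's keypoints. A minimal sketch, assuming `joints` is an (N, 2) array of one person's 2D pixel keypoints (the helper name and the margin parameter are illustrative, not part of the dataset tools):

```python
import numpy as np

def bbox_from_joints(joints, margin=0.1):
    """Axis-aligned bounding box around a set of 2D joints.

    joints: (N, 2) array of [x, y] pixel coordinates.
    margin: fractional padding added on each side, since joints
            lie inside the body silhouette, not on its outline.
    Returns (x_min, y_min, x_max, y_max) as plain floats.
    """
    x_min, y_min = joints.min(axis=0)
    x_max, y_max = joints.max(axis=0)
    pad_x = margin * (x_max - x_min)
    pad_y = margin * (y_max - y_min)
    return (float(x_min - pad_x), float(y_min - pad_y),
            float(x_max + pad_x), float(y_max + pad_y))

# toy example with three keypoints
joints = np.array([[100.0, 50.0], [140.0, 200.0], [120.0, 120.0]])
print(bbox_from_joints(joints, margin=0.0))  # (100.0, 50.0, 140.0, 200.0)
```

A nonzero margin is usually needed because the extreme joints (head top, feet) still sit inside the person's pixels.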

About the camera's extrinsic parameters

Hello, thanks for your great work! I am curious about the extrinsic parameters of the cameras: specifically, I would like the rotation and translation parameters for each sequence of the JTA dataset, so that I can transform the 3D joints from world coordinates to camera coordinates. Can you provide any help?
thanks :)
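For reference, once per-sequence extrinsics are available, the transform asked about above is standard. A sketch, assuming `R` (3x3 rotation) and `t` (3-vector translation) follow the convention p_cam = R · p_world + t; the actual JTA values would have to come from the authors:

```python
import numpy as np

def world_to_camera(points_world, R, t):
    """Transform (N, 3) world-coordinate points into camera coordinates.

    Convention assumed here: p_cam = R @ p_world + t.
    points_world @ R.T applies R to every row at once.
    """
    return points_world @ R.T + t

# toy example: identity rotation, camera origin shifted along z
R = np.eye(3)
t = np.array([0.0, 0.0, -5.0])
p_world = np.array([[1.0, 2.0, 10.0]])
p_cam = world_to_camera(p_world, R, t)
print(p_cam)  # the point ends up at (1, 2, 5) in the camera frame
```

If the dataset's convention were instead p_cam = R · (p_world - C) with a camera center C, the same function applies with t = -R · C.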

I want to get the dataset, please!

Dear author, I tried to request the dataset by sending you an email, but the email failed to send. What should I do? Could you give me your email address? I appreciate it.

Location of different spines

Hello,

Thank you very much for taking the time to create this dataset and making it available to the community.

I have a question regarding the different joints annotated in the dataset. As mentioned in your README, joint indices 11 through 15 denote annotations on the spine. I was not able to work out where exactly these joints are located, or the motivation behind having so many spine annotations.

My guess is that so many spine annotations might be needed to represent the spine as seen from the front of a person (stomach, etc.) or from the rear (back, kidneys, etc.), but I am not sure about this, and I feel this kind of information might be helpful for others too.

Converting 3D camera coordinates to 2D pixels coordinates

Hello, I converted the 3D camera coordinates to 2D pixel coordinates using the camera intrinsic matrix, based on the bounding boxes. I noticed that the pixel coordinates computed from the 3D camera coordinates do not exactly match the provided 2D pixel coordinates. I thought this might be due to rounded values. Is this normal, and is there a solution? Thank you.
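For context, the projection in question is the standard pinhole model sketched below; small mismatches with the stored 2D annotations are plausible if those were rounded to integer pixels when exported. The intrinsics `fx, fy, cx, cy` here are hypothetical placeholders, not the actual JTA values:

```python
import numpy as np

def project(points_cam, fx, fy, cx, cy):
    """Pinhole projection of (N, 3) camera-frame points to (N, 2) pixels."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# hypothetical intrinsics for a 1920x1080 frame
fx = fy = 1000.0
cx, cy = 960.0, 540.0

p_cam = np.array([[0.5, -0.2, 4.0]])
uv = project(p_cam, fx, fy, cx, cy)
print(uv)            # sub-pixel coordinates: u=1085.0, v=490.0 for this point
print(np.round(uv))  # what annotations rounded to integer pixels would store
```

Comparing your reprojection against `np.round(uv)` rather than `uv` would confirm whether rounding alone explains the discrepancy.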


It is slow to read data with json

Hi !
First, thank you for your useful work !

In your wiki, you suggest loading the annotation file with json like this:

import json
import numpy as np

json_file_path = 'annotations/training/seq_42.json'

with open(json_file_path, 'r') as json_file:
    matrix = json.load(json_file)
    matrix = np.array(matrix)

However, I find it very slow: ~7 s to load one matrix. I saved the array with np.save and loaded it with np.load, which took ~0.3 s. So, in my opinion, it would be better to store the annotation files as npy.
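The one-time conversion suggested above might look like the sketch below. It uses a temporary file with toy data so it is self-contained; in practice the JSON path would be the `annotations/training/seq_42.json` from the wiki snippet:

```python
import json
import tempfile
import numpy as np

# write a small JSON annotation matrix to a temporary file
# (stand-in for a real file like annotations/training/seq_42.json)
rows = [[0, 1.5, 2.5], [1, 3.5, 4.5]]
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump(rows, f)
    json_path = f.name

# one-time conversion: parse the JSON once, then cache it as .npy
with open(json_path, 'r') as json_file:
    matrix = np.array(json.load(json_file))
npy_path = json_path.replace('.json', '.npy')
np.save(npy_path, matrix)

# subsequent loads skip JSON parsing entirely, hence the speedup
cached = np.load(npy_path)
print(cached.shape)  # (2, 3)
```

The speedup comes from `.npy` storing the raw binary array, so `np.load` only has to read bytes instead of parsing and converting nested JSON lists.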

I also think it would be great if we could use the COCO API to load the annotations (like PoseTrack).
I want to use JTA and PoseTrack jointly, but the joint types are not the same (nose and head-bottom are missing).

[Solution] visualize.py errors: "** On entry to DGEBAL parameter number 3 had an illegal value"

FYI, I just stumbled over this. Feel free to close this issue when numpy fixes it, or if you find it trivial.

Problem

Running visualize.py on Windows 10 errors with:

 ** On entry to DGEBAL parameter number  3 had an illegal value
 ** On entry to DGEHRD  parameter number  2 had an illegal value
 ** On entry to DORGHR DORGQR parameter number  2 had an illegal value
 ** On entry to DHSEQR parameter number  4 had an illegal value
 . . .
RuntimeError: The current Numpy installation ('C:\\Users\\Simon\\.conda\\envs\\jta-dataset\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://tinyurl.com/y3dm3h86

This is caused by the (currently latest) numpy v1.19.4, more specifically by the underlying fmod() on Windows 2004 or later.

Solution

The solution suggested in numpy/issues/17746 is to pin numpy==1.19.3 (e.g. via requirements.txt#L4).

Inaccurate 2d poses

Sometimes the 2D pose annotations are off beyond tolerance. Here's an example of a far-away person:

Zoomed-in view of the far-away person on the left side:

I wonder if the projection is slightly off here, or what else could be the problem?

Extrinsics and 4 second action sequences not showing much movement

Can you please explain how the three-dimensional coordinates for your dataset were generated, and also share the extrinsic parameters of the camera used?

I have extracted 5-second action sequences for a specific person from different video files, and the first issue is that, to generate nice GIFs, the subjects need to be roto-translated. Have the camera coordinates been normalized in any way? If you could shed any light on how this was generated, I would be thankful.

Camera Intrinsic Matrix

Hi @fabbrimatteo,

Thanks for the great work! I need the camera intrinsic matrix for a project. Do you have it? Did you use the same camera to render all of the videos?
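If the authors confirm a single rendering camera and its field of view, the intrinsic matrix can be reconstructed from the image size. A sketch under the assumptions of a pinhole camera, square pixels, and principal point at the image center; the 90° FOV below is a hypothetical placeholder, not the JTA value:

```python
import numpy as np

def intrinsics_from_fov(width, height, h_fov_deg):
    """Build a pinhole intrinsic matrix K from image size and horizontal FOV.

    Assumes square pixels (fx == fy) and the principal point at the
    image center: fx = (W/2) / tan(h_fov / 2).
    """
    fx = (width / 2.0) / np.tan(np.radians(h_fov_deg) / 2.0)
    fy = fx  # square-pixel assumption
    cx, cy = width / 2.0, height / 2.0
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# hypothetical values: 1920x1080 frame, 90-degree horizontal FOV
K = intrinsics_from_fov(1920, 1080, 90.0)
print(K)
```

Alternatively, with enough paired 3D camera-frame joints and 2D pixel annotations from the dataset itself, fx, fy, cx, cy can be recovered by least squares, since u = fx·x/z + cx is linear in the unknowns.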
