fabbrimatteo / jta-dataset
Home Page: http://imagelab.ing.unimore.it/jta
License: Other
I need the bounding box of each person, but there is no bounding box annotation. Should I generate my own bounding boxes using another model, or derive them from the key points?
I also have one more question. The README says:
''WARNING: on 21 February 2019 we have changed the annotations, converting the 3D coordinates to the standard camera coordinate system. Here you can download the new annotations separately.''
But the link doesn't work. Can I still get the new 3D coordinates?
Thanks
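A common workaround for the missing boxes is to derive a per-person box from the 2D joint annotations. Below is a minimal sketch of that idea; the 10% margin is an arbitrary choice for illustration, not something the dataset specifies.

```python
import numpy as np

def bbox_from_joints(joints_2d, margin=0.1):
    """Derive a bounding box (x_min, y_min, x_max, y_max) from one
    person's 2D joints, padded by `margin` of the box size."""
    x_min, y_min = joints_2d.min(axis=0)
    x_max, y_max = joints_2d.max(axis=0)
    pad_x = margin * (x_max - x_min)
    pad_y = margin * (y_max - y_min)
    return (x_min - pad_x, y_min - pad_y, x_max + pad_x, y_max + pad_y)

# Toy example: three joints of one person, in pixel coordinates.
joints = np.array([[100.0, 50.0], [140.0, 50.0], [120.0, 150.0]])
box = bbox_from_joints(joints)
# box == (96.0, 40.0, 144.0, 160.0)
```

Note that occluded joints may fall outside the visible silhouette, so you may want to filter on the occlusion flags before taking the min/max.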
Hello, thanks for your great work. I am curious about the extrinsic parameters of the cameras: specifically, I would like the rotation and translation parameters for each sequence of the JTA dataset, so that I can transform the 3D joints from world coordinates to camera coordinates. Can you provide any help?
Thanks :)
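For reference, once the extrinsics are available, the transform the question asks about is just a rotation plus a translation. The values of R and t below are made up for illustration; they are not the actual JTA extrinsics (which is exactly what the issue is requesting).

```python
import numpy as np

# Hypothetical extrinsics: R rotates world axes into the camera frame,
# t translates the world origin into camera coordinates.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # 90 deg about z (example)
t = np.array([0.0, 0.0, 5.0])

p_world = np.array([1.0, 0.0, 0.0])   # a joint in world coordinates
p_cam = R @ p_world + t               # the same joint in camera coordinates
```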
Dear author, I tried to request the dataset by sending you an email, but the email failed to send. What should I do? Could you give me your email address? I appreciate it.
When running visualize.py on Windows 10 I get the error:

```
RuntimeError: imageio.ffmpeg.download() has been deprecated. Use 'pip install imageio-ffmpeg' instead.
```

But imageio-ffmpeg requires code adjustments due to its changed API. As a workaround you can pin imageio==2.4.1 (as suggested by superuser.com/a/1403025/417244).
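The workaround pin can be applied directly in the environment, for example:

```shell
# Pin imageio to the last release whose ffmpeg download helper still works
# with the unmodified visualize.py script.
pip install "imageio==2.4.1"
```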
Hello,
Thank you very much for taking the time to create this dataset and making it available to the community.
I have a question regarding the different joints annotated in the dataset. As mentioned in your README, joint indices 11 through 15 denote annotations on the spine. I could not figure out where exactly these joints are located, or the motivation behind having so many spine annotations.
My guess is that multiple spine annotations might be needed to represent the spine as seen from the front of a person (stomach, etc.) or from the rear (back, kidneys, etc.), but I am not too sure about this, and I feel this kind of information might be helpful for others too.
Thanks for your work! How can I get the JTA-Key?
Hello, I converted the 3D camera coordinates to 2D pixel coordinates using the camera intrinsic matrix, based on the bounding boxes. I noticed that the pixel coordinates computed from the 3D camera coordinates do not exactly match the provided 2D pixel coordinates. I suspect this is due to rounded values. Is this normal, and is there a solution? Thank you.
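For anyone comparing the two sets of coordinates, here is a minimal pinhole-projection sketch. The intrinsic values (fx, fy, cx, cy) are placeholders, not the actual JTA camera parameters; the point is that rounding the projected result to integer pixels shifts it by up to half a pixel, which alone can explain small mismatches against stored annotations.

```python
import numpy as np

# Placeholder intrinsics (NOT the JTA values): focal lengths and
# principal point for a hypothetical 1920x1080 camera.
fx, fy = 1158.0, 1158.0
cx, cy = 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(p_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

p = np.array([0.5, -0.2, 4.0])   # metres, camera frame
uv = project(p)                   # sub-pixel result
uv_rounded = np.round(uv)         # what an integer annotation would store
```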
Hi!
First, thank you for your useful work!
In your wiki, you suggest using json to load the file like this:

```python
import json
import numpy as np

json_file_path = 'annotations/training/seq_42.json'
with open(json_file_path, 'r') as json_file:
    matrix = json.load(json_file)
matrix = np.array(matrix)
```
However, I find this very slow: ~7s to load one matrix. I saved the array with np.save and loaded it with np.load, which took ~0.3s. So, in my opinion, it is better to store the annotation files as .npy.
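The caching trick suggested above can be sketched like this. It is demonstrated on a tiny stand-in annotation matrix written to a temporary directory; for the real dataset you would point the paths at e.g. annotations/training/seq_42.json.

```python
import json
import os
import tempfile
import numpy as np

# Stand-in for a real annotation file (the temp paths are just for the demo).
tmp_dir = tempfile.mkdtemp()
json_path = os.path.join(tmp_dir, 'seq_42.json')
npy_path = os.path.join(tmp_dir, 'seq_42.npy')
with open(json_path, 'w') as f:
    json.dump([[0, 1.0, 2.0], [1, 3.0, 4.0]], f)

# Slow path, done once: parse the JSON and cache the array as .npy.
with open(json_path, 'r') as f:
    matrix = np.array(json.load(f))
np.save(npy_path, matrix)

# Fast path, every subsequent run: skip JSON parsing entirely.
cached = np.load(npy_path)
```

The speedup comes from np.load reading the raw binary buffer directly instead of parsing millions of JSON tokens.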
I think it would be great if we could use the COCO API to load the annotations (like PoseTrack).
Also, I want to use JTA and PoseTrack jointly, but the joint types are not the same (nose and head bottom are missing).
Quick question: have the new 3D coordinates, the ones converted to the camera reference frame, had any normalization applied to them?
Hello,
first thank you for this amazing dataset.
However, I find it somewhat troublesome to train on the data sequence-wise. Is there a trick to store all images in one images folder, with appropriate IDs in a single annotation file?
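There is no official flattening script as far as I know, but the idea can be sketched as follows: prefix each frame with its sequence name so IDs stay unique in a single folder. This is shown on a tiny synthetic layout; the per-sequence frames/seq_N directory structure is an assumption about your local extraction.

```python
import os
import shutil
import tempfile

# Synthetic stand-in for an extracted 'frames/' directory with two sequences.
root = tempfile.mkdtemp()
for seq in ('seq_0', 'seq_1'):
    os.makedirs(os.path.join(root, seq))
    for i in range(2):
        open(os.path.join(root, seq, f'{i}.jpg'), 'w').close()

# Flatten: copy every frame into one 'images' folder, prefixed by sequence,
# so 'seq_0/0.jpg' becomes 'images/seq_0_0.jpg'.
merged = os.path.join(root, 'images')
os.makedirs(merged)
for seq in sorted(os.listdir(root)):
    seq_dir = os.path.join(root, seq)
    if seq == 'images' or not os.path.isdir(seq_dir):
        continue
    for name in sorted(os.listdir(seq_dir)):
        shutil.copy(os.path.join(seq_dir, name),
                    os.path.join(merged, f'{seq}_{name}'))

flat_names = sorted(os.listdir(merged))
```

The same seq-prefixed IDs could then key a single merged annotation dictionary.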
Is it possible that the mask groundtruth will be provided someday?
FYI, I just stumbled over this. Feel free to close this issue when numpy fixes it, or if you find the issue trivial.
Running visualize.py on Windows 10 errors with:

```
 ** On entry to DGEBAL parameter number 3 had an illegal value
 ** On entry to DGEHRD parameter number 2 had an illegal value
 ** On entry to DORGHR DORGQR parameter number 2 had an illegal value
 ** On entry to DHSEQR parameter number 4 had an illegal value
 . . .
RuntimeError: The current Numpy installation ('C:\\Users\\Simon\\.conda\\envs\\jta-dataset\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://tinyurl.com/y3dm3h86
```
This is caused by the (currently latest) numpy v1.19.4, more specifically by the underlying fmod() on Windows 2004 or later. The suggested solution in numpy/issues/17746 is to install numpy==1.19.3 (e.g. via requirements.txt#L4).
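Until the upstream fix lands, the pin can be applied directly:

```shell
# Pin NumPy to 1.19.3 to avoid the Windows 2004+ fmod() sanity-check failure.
pip install "numpy==1.19.3"
```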
Can you please explain how the three-dimensional coordinates for your dataset were generated, and also share the extrinsic parameters of the cameras used?
I have extracted 5-second action sequences for a specific person from different video files, and the first issue is that, to generate nice GIFs, the subjects need to be roto-translated. Have the camera coordinates been normalized in any way? If you could shed any light on how this was generated, I would be thankful.
Hi @fabbrimatteo,
Thanks for the great work! I need the camera intrinsic matrix for a project. Do you have it? Do you use the same camera to render all of the videos?