
xinghaochen / awesome-hand-pose-estimation


Awesome work on hand pose estimation/tracking

Home Page: https://xinghaochen.github.io/awesome-hand-pose-estimation/

Languages: Shell 19.86%, Python 80.14%
Topics: computer-vision, hand-pose-estimation, deep-learning, human-computer-interaction, 3d-hand, hand-pose, hand-tracking, keypoints, hand-pose-regression, hand-keypoints

awesome-hand-pose-estimation's People

Contributors

adwardlee, amundra15, ataboukhadra, davidpengiupui, dihuangdh, egemenertugrul, eldentse, elody-07, fastmetro, guohengkai, guptajakala, icaruswizard, janus-shiau, jyunlee, lixiny, lyuj1998, menghao666, mrezaei92, nik123, pengfeiren96, rohitdavas, samarth-robo, seanchenxy, thefloe1995, varchita-beena, walsvid, xinghaochen, ygwangthu, yuziwei91, zc-alexfan


awesome-hand-pose-estimation's Issues

Annotated data with visibility labeling

Is there a method or dataset with visibility labels for annotated joints? There are many cases where joints are annotated but not visible (occluded by other objects). The COCO data format supports this, with "not labeled", "labeled but invisible", and "labeled and visible" annotation types.
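For reference, COCO-style keypoint annotations encode exactly these three visibility states: each keypoint is stored as a triple (x, y, v) with v = 0 (not labeled), v = 1 (labeled but not visible), and v = 2 (labeled and visible). A minimal Python sketch of reading those flags (the annotation dict below is illustrative, not from a real dataset):

```python
# COCO keypoint visibility: v == 0 not labeled, v == 1 labeled but not
# visible, v == 2 labeled and visible. Keypoints are a flat [x, y, v, ...] list.

def split_by_visibility(keypoints):
    """Group (x, y) coordinates by their COCO visibility flag."""
    groups = {0: [], 1: [], 2: []}
    for i in range(0, len(keypoints), 3):
        x, y, v = keypoints[i], keypoints[i + 1], int(keypoints[i + 2])
        groups[v].append((x, y))
    return groups

# Illustrative annotation with one visible, one occluded, one unlabeled joint.
annotation = {"keypoints": [12.0, 34.0, 2, 56.0, 78.0, 1, 0.0, 0.0, 0]}
groups = split_by_visibility(annotation["keypoints"])
print(f"visible: {len(groups[2])}, occluded: {len(groups[1])}, unlabeled: {len(groups[0])}")
```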

Hi I think there is a bug in MSRA evaluation code

The ground-truth file and the REN result files have 76,375 lines, while there are 76,391 JPG files in the MSRA dataset.
Can you check?
Also, are there no additional results from other methods on the MSRA dataset? Only two results are provided.
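For anyone trying to reproduce the mismatch, here is a quick sketch that compares the ground-truth line count against the number of JPG files. The file and directory names are placeholders; point them at your local copies:

```python
# Count ground-truth lines vs. image files to confirm the 76,375 / 76,391 gap.
from pathlib import Path

gt_file = Path("msra_test_groundtruth.txt")      # placeholder filename
dataset_dir = Path("msra_hand_dataset")          # placeholder directory

n_gt_lines = sum(1 for _ in gt_file.open())
n_images = sum(1 for _ in dataset_dir.rglob("*.jpg"))
print(f"ground-truth lines: {n_gt_lines}, jpg files: {n_images}")
```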

RGB image datasets

@xinghaochen Hello! Thank you very much for sharing. Do you have an evaluation on RGB datasets and a summary of the related papers?

I have some questions about the MSRA dataset.

I find that "msra_test_list.txt" contains 76,375 files. Could you provide a training list? Which files should I use for training? In many papers I found that the authors use the leave-one-subject-out protocol, so they tend to train multiple models; which model should be used?
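For context, the leave-one-subject-out protocol referenced above trains one model per held-out subject and averages the results over all folds. A minimal sketch, assuming the MSRA dataset's usual nine subjects P0..P8:

```python
# Leave-one-subject-out cross-validation on MSRA's nine subjects:
# each fold holds out one subject for testing and trains on the other eight.
subjects = [f"P{i}" for i in range(9)]

for test_subject in subjects:
    train_subjects = [s for s in subjects if s != test_subject]
    print(f"fold {test_subject}: train on {train_subjects}, test on {test_subject}")
    # train_model(train_subjects); evaluate(test_subject)  # per-fold work
# Reported numbers are typically the average over all nine folds.
```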

How to get these values?

Hello! Thank you for sharing your great work! I am trying to run CrossInfoNet predictions online with a Kinect v2, but I don't know how to get these values.

BMVC 2018 oral

Hi Xinghao,

Thanks for adding my paper entitled "3D Hand Pose Estimation using Simulation and Partial-Supervision with a Shared Latent Space" to your list.

Just wondering if you could please update it to Oral?

Cheers.

Proposal: add licenses for the datasets

I think it would be helpful to mention each dataset's license, along with the license date and a reference to the license file.

What do you think?
I can start by adding licenses for the RGB datasets.

A dataset omitted

A dataset called the ObMan dataset is missing.
Its properties are as follows:

  1. Synthetic (S)
  2. RGB + Depth
  3. Obj: interaction with objects
  4. ...

codes for the arXiv paper

Hi, thank you for your collection on hand pose estimation. I see that your paper "Pose Guided Structured Region Ensemble Network for Cascaded Hand Pose Estimation" is in the list of arXiv papers. Could you share the related code on GitHub? Thank you again for your kindness!
Regards,
Weiguo

about one paper for citation

Hi, thanks for making such a repo.
I have one question: why do you mark "HOT-Net: Non-Autoregressive Transformer for 3D Hand-Object Pose Estimation" as an MM 2020 paper? I could not find the BibTeX citation on Google Scholar.
Could you explain? Thanks a lot.

I found an error

Hi, "Depth-Based Hand Pose Estimation: Methods, Data, and Challenges" is not from IJCV 2018.

THOR-Net Paper

Hi,

My name is Ahmed Aboukhadra, from the Augmented Vision group at DFKI. I recently published a WACV paper about hand-object reconstruction. Could you please add our WACV 2023 paper to your amazing repo on hand pose estimation? The paper is called THOR-Net, and it is published here:

https://openaccess.thecvf.com/content/WACV2023/html/Aboukhadra_THOR-Net_End-to-End_Graformer-Based_Realistic_Two_Hands_and_Object_Reconstruction_With_WACV_2023_paper.html

There is a GitHub repo for the code as well: https://github.com/ATAboukhadra/THOR-Net

Let me know if anything is missing.

Kind regards,

Ahmed

SHOWMe Paper (ICCV 2023 /ACVR Oral Paper)

Hello,

This paper will appear in the ICCV Workshops proceedings and will be presented as an oral at the ACVR workshop.
Could you please add it to the ICCV 2023 list? The required links are below.

SHOWMe: Benchmarking Object-agnostic Hand-Object 3D Reconstruction
Anilkumar Swamy, Vincent Leroy, Philippe Weinzaepfel, Fabien Baradel, Salma Galaaoui, Romain Bregier, Matthieu Armando, Jean-Sebastien Franco, Gregory Rogez

[Paper] [Project Page] [Code] [Dataset]

Thank you in advance!

Released code

Hello,
Could you release the code for the paper "SHPR-Net: Deep Semantic Hand Pose Regression From Point Clouds"? I am especially interested in the preprocessing step that converts hand depth maps to point cloud data. I am looking forward to your reply! Thank you very much!
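For readers with the same question: a standard way to back-project a depth map to a point cloud is the pinhole camera model, a common preprocessing step for point-cloud-based methods. The sketch below is not SHPR-Net's actual code, and the intrinsics are values often quoted for the NYU dataset's depth sensor; substitute your own:

```python
# Back-project a depth map to a 3D point cloud with the pinhole camera model.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map (in mm) to an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop zero-depth pixels

depth = np.random.uniform(200, 800, size=(480, 640))   # synthetic example
cloud = depth_to_point_cloud(depth, fx=588.03, fy=587.07, cx=320.0, cy=240.0)
print(cloud.shape)
```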

Correction for two papers

Hi Xinghao,

Thanks for maintaining this repo. It is very helpful for researchers. I want to correct two errors in your citation of my papers.

  1. ECCV 2018: I typed the wrong title on my homepage. The correct title is: "Point-to-Point Regression PointNet for 3D Hand Pose Estimation".

  2. CVPR 2018: The correct title and author list are as follows:
    Hand PointNet: 3D Hand Pose Estimation using Point Sets
    Liuhao Ge, Yujun Cai, Junwu Weng, Junsong Yuan

Thank you!

Datasets list

Hi, thank you for the collection of papers! This is more of a suggestion than an issue: could you also provide a comprehensive list of datasets and challenges available for hand pose estimation? Thanks again for this page; it is helpful!

Evaluation on rgb hands?

Great work, and thanks a lot!
I want to ask if there is any evaluation and comparison code for RGB hand datasets, similar to the code for the NYU/ICVL/MSRA datasets.
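For reference, depth-based evaluation scripts like those for NYU/ICVL/MSRA commonly report two metrics: mean per-joint 3D error and the fraction of frames whose maximum joint error stays under a threshold (the "success rate" curve). A minimal sketch with illustrative arrays, not loaded from real result files:

```python
# Mean per-joint 3D error and success rate at a 20 mm threshold.
import numpy as np

pred = np.random.randn(100, 21, 3) * 5     # (frames, joints, xyz) in mm
gt = pred + np.random.randn(100, 21, 3)    # fake ground truth for illustration

errors = np.linalg.norm(pred - gt, axis=2)         # (frames, joints) in mm
mean_error = errors.mean()                         # mean joint error
success_at_20 = (errors.max(axis=1) < 20).mean()   # frames with all joints < 20 mm
print(f"mean error: {mean_error:.2f} mm, success@20mm: {success_at_20:.1%}")
```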

Hands 2017 paper

Hi Xinghao, I have found that there are a lot of papers from the HANDS 2017 challenge. Would you be able to provide links to download those papers? Thanks!

two hands dataset

Hi,
thanks for your nice work. Do you know of any dataset for two interacting hands? Thanks!

SynHand5M Dataset Joint Information

Hello Xinghao,

Thanks for maintaining this useful repo. There seems to be a typo for the SynHand5M dataset.
In the list of datasets, the number of joints for the SynHand5M dataset is listed as 21. However, on closer inspection, I noticed that the dataset provides 1,193 3D hand mesh vertices, and I couldn't find a mapping to extract the hand joint locations from those mesh vertices. Are you aware of a mapping between the two?

Thanks!
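For what it's worth, when such a mapping exists it usually takes the form of a sparse linear joint regressor, J = R · V, as with MANO's joint regressor matrix. The sketch below uses a hypothetical random regressor purely to illustrate the shape of such a mapping; SynHand5M does not necessarily ship one:

```python
# Recover joints as a sparse linear combination of mesh vertices: J = R @ V.
import numpy as np

num_vertices, num_joints = 1193, 21
vertices = np.random.randn(num_vertices, 3)      # (V, 3) hand mesh, illustrative

# Hypothetical sparse regressor: each joint averages five nearby vertices.
regressor = np.zeros((num_joints, num_vertices))
for j in range(num_joints):
    idx = np.random.choice(num_vertices, size=5, replace=False)
    regressor[j, idx] = 1.0 / 5.0

joints = regressor @ vertices                    # (J, 3) joint locations
print(joints.shape)  # (21, 3)
```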

add paper for ICCV2021

PeCLR: Self-Supervised 3D Hand Pose Estimation from Monocular RGB via Contrastive Learning
