
xiexh20 / vistracker

Official implementation for the CVPR'23 paper: Visibility Aware Human-Object Interaction Tracking from Single RGB Camera

Home Page: http://virtualhumans.mpi-inf.mpg.de/VisTracker/

Python 99.72% Shell 0.28%
3d-reconstruction human-object-interaction

vistracker's People

Contributors

xiexh20


Forkers

bruinxiong

vistracker's Issues

Issue about hand pose

Many thanks for your great work!
I've run the demo scripts successfully and generated the video. However, the human hand pose is not predicted correctly: the hands appear to stay in their initial pose the whole time. I also noticed that you predict only 25 body keypoints with OpenPose, without hand keypoints. Is this caused by FrankMocap, or do you simply not fit the hand pose?
Thanks again and looking forward to your reply!
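A quick way to check whether the hand data is even present is to inspect one of the OpenPose JSON files directly. A minimal sketch; the file path is a placeholder for wherever the preprocessed keypoints live in your setup:

    import json
    from pathlib import Path

    # Hypothetical path to one OpenPose output file; adjust to your layout.
    kpt_file = Path("keypoints/frame_000000_keypoints.json")

    data = json.loads(kpt_file.read_text())
    for person in data["people"]:
        body = person.get("pose_keypoints_2d", [])
        left = person.get("hand_left_keypoints_2d", [])
        right = person.get("hand_right_keypoints_2d", [])
        # BODY_25 stores 25 keypoints as (x, y, confidence) triples.
        print(f"body joints: {len(body) // 3}, "
              f"left hand: {len(left) // 3}, right hand: {len(right) // 3}")

If the hand arrays are empty, OpenPose was run without its hand detector, which would be consistent with the hands never leaving their initial pose.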

Object template mesh

Great work!

Maybe I am missing something, but how/where is the object template loaded? It's not specified in the dataset preparation section.

Also, does it need to be textured? Are there any other conventions that need to be followed?
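For context, object templates in pipelines like this are usually plain watertight meshes that are loaded once and then posed per frame. Below is a minimal sketch using trimesh; the file path and the origin-centering convention are illustrative assumptions, not necessarily what VisTracker actually does:

    import trimesh

    # Hypothetical template location; the real path depends on the dataset layout.
    template = trimesh.load("objects/chairwood/chairwood.obj", force="mesh")

    # A common convention is to center the template at its own origin so the
    # per-frame object pose is a clean rotation plus translation.
    template.apply_translation(-template.centroid)

    print("vertices:", template.vertices.shape, "faces:", template.faces.shape)
    print("watertight:", template.is_watertight)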

psbody library

Hello everyone!

I'm trying to install all the dependencies, but when I follow the requirements in the psbody-mesh library README, I get this error:

"Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
libboost-all-dev is already the newest version (1.74.0.3ubuntu7).
0 upgraded, 0 newly installed, 0 to remove and 42 not upgraded.
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.

apt install python3.10-venv
You may need to use sudo with that command. After installing the python3-venv
package, recreate your virtual environment.

Failing command: /content/mesh/psbody-mesh-namespace/my_venv/bin/python3

/bin/bash: line 1: my_venv/bin/activate: No such file or directory
Cloning into 'psbody.mesh'...
fatal: could not read Username for 'https://github.com/': No such device or address
[Errno 2] No such file or directory: 'psbody.mesh'
/content/mesh/psbody-mesh-namespace
make: *** No rule to make target 'all'. Stop."

Can anybody help me?

About implementation details on InterCap dataset.

Thanks for your great work!

In your paper, "Visibility Aware Human-Object Interaction Tracking from Single RGB Camera", you train the model on the InterCap dataset.

We train our model on sequences from subject 01-08 (173 sequences) and test on sequences from subject 09-10 (38 sequences).

After downloading the InterCap dataset, I found there are 40 sequences from subjects 09-10. Did you skip some sequences? If so, could you tell me why they were skipped?
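For anyone double-checking the count, here is one way to tally sequences per subject. This is only a sketch: the flat one-folder-per-sequence layout and the naming scheme are assumptions about how the data was unpacked, not something documented here.

    from collections import Counter
    from pathlib import Path

    # Hypothetical layout: one directory per sequence, named "<subject>_<seq>".
    root = Path("InterCap")
    counts = Counter(p.name.split("_")[0] for p in root.iterdir() if p.is_dir())

    test_total = counts.get("09", 0) + counts.get("10", 0)
    print(counts)
    print("subjects 09-10 total:", test_total)  # 40 here vs. the paper's 38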

Some test sequences cannot be found

Thanks for your great work!

I'm currently running your code to evaluate on the BEHAVE dataset.
I downloaded all the files following the instructions (https://github.com/xiexh20/VisTracker#dataset-preparation).

But I cannot find the following test sequences from 'behave-test-30fps.json' in the provided data:
"Date03_Sub04_boxtiny_part2", "Date03_Sub04_yogaball_play2", "Date03_Sub05_chairwood_part2"

Is there additional data to download? Or do I need some other pre-processing?

Here are the test sequences listed in 'behave-test-30fps.json':

{
  "seqs": [
    "Date03_Sub03_backpack_back",
    "Date03_Sub03_backpack_hand",
    "Date03_Sub03_backpack_hug",
    "Date03_Sub03_boxlarge",
    "Date03_Sub03_boxlong",
    "Date03_Sub03_boxmedium",
    "Date03_Sub03_boxsmall",
    "Date03_Sub03_boxtiny",
    "Date03_Sub03_chairblack_hand",
    "Date03_Sub03_chairblack_lift",
    "Date03_Sub03_chairblack_sit",
    "Date03_Sub03_chairblack_sitstand",
    "Date03_Sub03_chairwood_hand",
    "Date03_Sub03_chairwood_lift",
    "Date03_Sub03_chairwood_sit",
    "Date03_Sub03_monitor_move",
    "Date03_Sub03_plasticcontainer",
    "Date03_Sub03_stool_lift",
    "Date03_Sub03_stool_sit",
    "Date03_Sub03_suitcase_lift",
    "Date03_Sub03_suitcase_move",
    "Date03_Sub03_tablesmall_lean",
    "Date03_Sub03_tablesmall_lift",
    "Date03_Sub03_tablesmall_move",
    "Date03_Sub03_tablesquare_lift",
    "Date03_Sub03_tablesquare_move",
    "Date03_Sub03_tablesquare_sit",
    "Date03_Sub03_toolbox",
    "Date03_Sub03_trashbin",
    "Date03_Sub03_yogaball_play",
    "Date03_Sub03_yogaball_sit",
    "Date03_Sub03_yogamat",
    "Date03_Sub04_backpack_back",
    "Date03_Sub04_backpack_hand",
    "Date03_Sub04_backpack_hug",
    "Date03_Sub04_boxlarge",
    "Date03_Sub04_boxlong",
    "Date03_Sub04_boxmedium",
    "Date03_Sub04_boxsmall",
    "Date03_Sub04_boxtiny",
    "Date03_Sub04_boxtiny_part2",
    "Date03_Sub04_chairblack_hand",
    "Date03_Sub04_chairblack_liftreal",
    "Date03_Sub04_chairblack_sit",
    "Date03_Sub04_chairwood_hand",
    "Date03_Sub04_chairwood_lift",
    "Date03_Sub04_chairwood_sit",
    "Date03_Sub04_monitor_hand",
    "Date03_Sub04_monitor_move",
    "Date03_Sub04_plasticcontainer_lift",
    "Date03_Sub04_stool_move",
    "Date03_Sub04_stool_sit",
    "Date03_Sub04_suitcase_ground",
    "Date03_Sub04_suitcase_lift",
    "Date03_Sub04_tablesmall_hand",
    "Date03_Sub04_tablesmall_lean",
    "Date03_Sub04_tablesmall_lift",
    "Date03_Sub04_tablesquare_hand",
    "Date03_Sub04_tablesquare_lift",
    "Date03_Sub04_tablesquare_sit",
    "Date03_Sub04_toolbox",
    "Date03_Sub04_trashbin",
    "Date03_Sub04_yogaball_play",
    "Date03_Sub04_yogaball_play2",
    "Date03_Sub04_yogaball_sit",
    "Date03_Sub04_yogamat",
    "Date03_Sub05_backpack",
    "Date03_Sub05_boxlarge",
    "Date03_Sub05_boxlong",
    "Date03_Sub05_boxmedium",
    "Date03_Sub05_boxsmall",
    "Date03_Sub05_boxtiny",
    "Date03_Sub05_chairblack",
    "Date03_Sub05_chairwood",
    "Date03_Sub05_chairwood_part2",
    "Date03_Sub05_monitor",
    "Date03_Sub05_plasticcontainer",
    "Date03_Sub05_stool",
    "Date03_Sub05_suitcase",
    "Date03_Sub05_tablesmall",
    "Date03_Sub05_tablesquare",
    "Date03_Sub05_toolbox",
    "Date03_Sub05_trashbin",
    "Date03_Sub05_yogaball",
    "Date03_Sub05_yogamat"
  ]
}
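For reference, a quick check that lists exactly which entries of the split are missing locally. A minimal sketch, assuming the sequences were extracted into one folder per sequence; the two paths are placeholders to adjust for your setup:

    import json
    from pathlib import Path

    # Hypothetical paths; point these at the split file and the sequence root.
    split_file = Path("splits/behave-test-30fps.json")
    seq_root = Path("behave/sequences")

    seqs = json.loads(split_file.read_text())["seqs"]
    missing = [s for s in seqs if not (seq_root / s).is_dir()]

    print(f"{len(missing)} of {len(seqs)} test sequences missing:")
    for s in missing:
        print("   ", s)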
