
BodyMAP

Official implementation of BodyMAP.

BodyMAP takes a depth image and a pressure image of a person lying in bed, possibly covered by blankets, and jointly predicts the body mesh (3D pose & shape) and the 3D pressure map of the pressure distributed along the human body.


Installation

  1. Install the Python requirements (a quick sanity check is sketched after this list):
pip install -r requirements.txt
  2. (Optional) Follow the instructions from shapy for the shape-metric calculation.
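
A quick post-install sanity check. This is only a sketch and assumes the requirements include PyTorch and NumPy, which are not listed explicitly above:

    # sanity_check.py: minimal sketch, assuming PyTorch and NumPy are in requirements.txt
    import numpy as np
    import torch

    print("numpy:", np.__version__)
    print("torch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())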

Data Setup

  1. Follow the instructions from BodyPressure to download and set up the SLP and BodyPressureSD datasets.

  2. Download the 3D pressure maps for the two datasets and put them in BodyPressure/data_BP. ([Link](Public drive link: TODO))

  3. Download the SMPL human models. (Link). Place the models (SMPL_MALE.pkl and SMPL_FEMALE.pkl) in the BodyMAP/smpl_models/smpl directory.

  4. Download the parsed data (part-segmented face indices, v2vP 1EA and v2vP 2EA indices) and put it in BodyPressure/data_BP. ([Link](Public drive link: TODO))

  5. Change the BASE_PATH constant in constants.py to match your file structure; a minimal sketch follows the directory tree below. The BASE_PATH folder should look like:

    BodyPressure
    ├── data_BP
    │   ├── SLP
    │   │   └── danaLab
    │   │       ├── 00001
    │   │       .
    │   │       └── 00102
    │   │   
    │   ├── slp_real_cleaned
    │   │   ├── depth_uncover_cleaned_0to102.npy
    │   │   ├── depth_cover1_cleaned_0to102.npy
    │   │   ├── depth_cover2_cleaned_0to102.npy
    │   │   ├── depth_onlyhuman_0to102.npy
    │   │   ├── O_T_slp_0to102.npy
    │   │   ├── slp_T_cam_0to102.npy
    │   │   ├── pressure_recon_Pplus_gt_0to102.npy
    │   │   └── pressure_recon_C_Pplus_gt_0to102.npy
    │   │   
    │   ├── SLP_SMPL_fits
    │   │   └── fits
    │   │       ├── p001
    │   │       .
    │   │       └── p102
    │   │   
    │   ├── synth
    │   │   ├── train_slp_lay_f_1to40_8549.p
    │   │   .
    │   │   └── train_slp_rside_m_71to80_1939.p
    │   │   
    │   ├── synth_depth
    │   │   ├── train_slp_lay_f_1to40_8549_depthims.p
    │   │   .
    │   │   └── train_slp_rside_m_71to80_1939_depthims.p
    │   │   
    │   ├── GT_BP_DATA
    │   │   ├── bp2
    │   │   │   ├── train_slp_lay_f_1to40_8549_gt_pmaps.npy
    │   │   │   ├── train_slp_lay_f_1to40_8549_gt_vertices.npy
    │   │   │   .
    │   │   │   ├── train_slp_rside_m_71to80_1939_gt_pmaps.npy
    │   │   │   └── train_slp_rside_m_71to80_1939_gt_vertices.npy
    │   │   └── slp2
    │   │       ├── 00001
    │   │       .
    │   │       └── 00102
    │   └── parsed
    │       ├── segmented_mesh_idx_faces.p
    │       ├── EA1.npy
    │       └── EA2.npy
    .
    .
    └── BodyMAP
        ├── assets
        ├── data_files
        ├── model_options
        ├── PMM
        ├── smpl
        └── smpl_models
            └── smpl 
                ├── SMPL_MALE.pkl
                └── SMPL_FEMALE.pkl
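
A minimal sketch of step 5, assuming constants.py stores the path as a plain string. BASE_PATH and constants.py are from the repo; the example path and the optional check are placeholders:

    # constants.py (excerpt): minimal sketch of step 5.
    # BASE_PATH is assumed to point at the BodyPressure folder shown above;
    # the path below is only a placeholder.
    import os

    BASE_PATH = '/home/user/BodyPressure'

    # Optional sanity check (not part of the repo): fail early if the tree is wrong.
    assert os.path.isdir(os.path.join(BASE_PATH, 'data_BP')), 'data_BP not found under BASE_PATH'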
    

Model Training

cd PMM
python main.py FULL_PATH_TO_MODEL_CONFIG

Config files for BodyMAP-PointNet and BodyMAP-Conv are provided in the model_config folder. Trained models are saved in PMM_exps/normal (outside the BodyMAP directory) by default.
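
The layout inside PMM_exps/normal is not spelled out here, so the sketch below is only an illustration for locating finished runs: it assumes each run gets its own subdirectory containing the exp.json options file (used later in Model Testing) and PyTorch weight files, assumed to be named *.pt or *.pth:

    # list_runs.py: illustrative only; the run-directory layout and weight file
    # names (*.pt / *.pth) are assumptions, not documented behaviour.
    import glob
    import os

    EXP_ROOT = os.path.expanduser('~/PMM_exps/normal')  # placeholder location

    for run_dir in sorted(glob.glob(os.path.join(EXP_ROOT, '*'))):
        opts = os.path.join(run_dir, 'exp.json')
        weights = sorted(glob.glob(os.path.join(run_dir, '*.pt')) + glob.glob(os.path.join(run_dir, '*.pth')))
        if os.path.isfile(opts) and weights:
            print('opts_path :', opts)
            print('model_path:', weights[0])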

Model Training Without Supervision

  1. Train the mesh regressor used for BodyMAP-WS:
cd PMM
python main.py ../model_config/WS_mesh.json
  2. Update the path of the saved model weights in the model_config/WS_Pressure.json file (a programmatic way to do this is sketched after this list).

  3. Train the BodyMAP-WS 3D pressure map regressor:

python main.py ../model_config/WS_Pressure.json

Trained models are saved in PMM_exps/normal (outside the BodyMAP directory) by default.
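
A minimal sketch of step 2, assuming the weights path lives under a single key in WS_Pressure.json. The key name 'mesh_model_path' is hypothetical, so check the actual field name in the config file:

    # update_ws_config.py: illustrative; 'mesh_model_path' is a guessed key name.
    import json

    CONFIG = '../model_config/WS_Pressure.json'
    WEIGHTS = '/full/path/to/saved/WS_mesh_weights.pt'  # placeholder path

    with open(CONFIG) as f:
        opts = json.load(f)

    opts['mesh_model_path'] = WEIGHTS  # hypothetical key; inspect the JSON for the real one

    with open(CONFIG, 'w') as f:
        json.dump(opts, f, indent=2)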

Model Testing

  1. Save model inferences on the real data
cd PMM && python save_inference.py --model_path FULL_PATH_TO_MODEL_WEIGHTS --opts_path FULL_PATH_TO_MODEL_EXP_JSON --save_path FULL_PATH_TO_SAVE_INFERENCES
  • model_path: Full path of model weights.
  • opts_path: Full path of the exp.json file created when the model is trained.
  • save_path: Full path of the directory to save model inferences.
  2. Calculate 3D Pose, 3D Shape and 3D Pressure Map metrics.
cd ../scripts && python metrics.py --files_dir FULL_PATH_OF_SAVED_RESULTS_DIR --save_path FULL_PATH_TO_SAVE_METRICS
  • files_dir: Full path of the directory where model inferences are saved (save_path argument from step 1).
  • save_path: Full path of the directory to save metric results. The metric results are saved in a tensorboard file in this directory.
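
Since the metric results land in a TensorBoard event file, they can be read back with tensorboard --logdir FULL_PATH_TO_SAVE_METRICS or programmatically. A minimal sketch using TensorBoard's EventAccumulator; the scalar tag names depend on what metrics.py logs, so nothing is assumed about them:

    # read_metrics.py: illustrative; tag names depend on what metrics.py writes.
    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    LOG_DIR = '/full/path/to/saved/metrics'  # the --save_path given to metrics.py

    acc = EventAccumulator(LOG_DIR)
    acc.Reload()

    for tag in acc.Tags()['scalars']:
        values = [event.value for event in acc.Scalars(tag)]
        print(tag, values)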

Visualization

To visualize the body mesh and the 3D applied pressure map for the SLP dataset:

cd scripts && python plot.py --save_path FULL_PATH_TO_SAVE_VIZ --cover_type COVER_TYPE --p_idx PARTICIPANT_IDX --pose_idx POSE_IDX --viz_type image --files_dir FULL_PATH_OF_MODEL_INFERENCES
  • save_path: Full path of the directory to save visualization results.
  • cover_type: Blanket cover configuration. Default: cover1. Choices: uncover, cover1 and cover2.
  • p_idx: Participant number to visualize. Default: 81. Choices: p_idx should be between 81 and 102 (included).
  • pose_idx: Pose number to visualize. Default: 1. Choices: pose_idx should be between 1 and 45 (included).
  • viz_type: Visualization Type. Default: imae. Choices: image and video.
  • files_dir: Full path of the directory where model inferences are saved. When this argument is passed it plots the model inferences. Otherwise, it plots the ground truth data.

Example visualization for participant 81, pose 1, cover_type cover1.
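
To render several poses in one pass, the documented plot.py flags can be driven from a short loop. A convenience sketch, run from the scripts directory, with placeholder paths:

    # batch_plot.py: convenience sketch built on the documented plot.py flags.
    # All paths below are placeholders.
    import subprocess

    SAVE_PATH = '/full/path/to/save/viz'
    FILES_DIR = '/full/path/of/model/inferences'  # omit --files_dir to plot ground truth data

    for pose_idx in range(1, 46):  # poses 1..45 for one participant
        subprocess.run([
            'python', 'plot.py',
            '--save_path', SAVE_PATH,
            '--cover_type', 'cover1',
            '--p_idx', '81',
            '--pose_idx', str(pose_idx),
            '--viz_type', 'image',
            '--files_dir', FILES_DIR,
        ], check=True)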

Trained Models

The trained BodyMAP-PointNet and BodyMAP-Conv models are available for research purposes. ([Link](Public drive link: TODO))

Acknowledgements

We are grateful to the BodyPressure project, from which we have borrowed parts of the code base.


bodymap's Issues

Confused by the insight and pipeline

Hello Abhishek! Congratulations on your new CVPR paper! I am very interested in BodyMAP, but I am still confused by its pipeline.

In the paper, you mention the importance of obtaining the applied 3D pressure map and propose a network that predicts the 3D human mesh and the 3D pressure map simultaneously. However, the 3D pressure map ground truth is generated (projected) from the 2D pressure map collected by a pressure-sensing mat (or from simulation), and the 2D pressure map is one of the network inputs. Why do we need to predict the 3D pressure map instead of simply predicting the human mesh and then projecting the 2D pressure map input onto the mesh in the same way the 3D ground truth is obtained (a pipeline similar to Bodies at Rest and BodyPressure, but with both input modalities)? If the pressure distribution of the predicted 3D pressure map is not consistent with the 2D input (which is prone to happen), which one should I trust?

In short, given that the 3D pressure map ground truths are generated from the 2D pressure maps during training, why not directly generate the 3D pressure map outputs from the 2D inputs instead of predicting them with the network?

Thanks very much for your reply!
