ollieboyne / found

Official code for FOUND: Foot Optimisation with Uncertain Normals for Surface Deformation using Synthetic Data

Home Page: http://www.ollieboyne.com/FOUND/

License: MIT License

Python 100.00%
3d-reconstruction computer-vision surface-normals surface-normals-estimation surface-reconstruction

found's Introduction

This repository contains the code for 3D shape fitting to predicted surface normals, as shown in our paper:

FOUND: Foot Optimisation with Uncertain Normals for Surface Deformation using Synthetic Data
Winter Conference on Applications of Computer Vision 2024
Oliver Boyne, Gwangbin Bae, James Charles, and Roberto Cipolla
[arXiv] [project page]

Quickstart

  1. git clone --recurse-submodules http://github.com/OllieBoyne/FOUND
  2. Install dependencies: pip install -r requirements.txt
  3. Download the pretrained FIND model to data/find_nfap
  4. Download our benchmark foot dataset to data/scans
  5. Fit a single scan:
python FOUND/fit.py --exp_name <exp_name> --data_folder <data_folder>

You can pass --cfg <file>.yaml to set parameters from a config file. See args.py for all arguments, and example-cfg.yaml for an example config file.
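As an illustration, a minimal config might look like the sketch below; the real schema lives in args.py and example-cfg.yaml, so the two keys shown here (mirroring the --exp_name and --data_folder flags) are assumptions rather than the documented format:

# illustrative config sketch - see example-cfg.yaml for the actual schema
exp_name: my_fit                  # assumed to mirror --exp_name
data_folder: data/scans/scan_01   # assumed to mirror --data_folder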

  6. Evaluate all of our reconstruction dataset:
python FOUND/eval.py --data_folder <data_folder> --gpus <gpu_indices>

<gpu_indices> is a space-separated list, e.g. --gpus 0 1 2 3
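For example, assuming the benchmark scans were downloaded to data/scans as in step 4, a two-GPU run could look like:

python FOUND/eval.py --data_folder data/scans --gpus 0 1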

Data

We provide our synthetic foot dataset, SynFoot, which contains 50K synthetic foot scans, with RGB, normals, and masks.

We also provide a multi-view benchmark evaluation dataset, Foot3D.
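As a rough sketch of how a single synthetic view could be loaded, assuming one folder per view containing rgb.png, normals.png, and mask.png (the file names and layout are assumptions, not the dataset's documented structure):

from pathlib import Path

import numpy as np
from PIL import Image

def load_view(view_dir: Path):
    # File names below are assumed; adjust to the actual dataset layout.
    rgb = np.asarray(Image.open(view_dir / "rgb.png"))          # H x W x 3 colour image
    normals = np.asarray(Image.open(view_dir / "normals.png"))  # surface normals encoded as an image
    mask = np.asarray(Image.open(view_dir / "mask.png")) > 0    # boolean foreground mask
    return rgb, normals, mask

rgb, normals, mask = load_view(Path("data/synfoot/0001"))  # hypothetical path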

Related work

Please check out all of our projects that fed into this work!

Citation

If you use our work, please cite:

@inproceedings{boyne2024found,
    title={FOUND: {F}oot {O}ptimisation with {U}ncertain {N}ormals for Surface {D}eformation using Synthetic Data},
    author={Boyne, Oliver and Bae, Gwangbin and Charles, James and Cipolla, Roberto},
    booktitle={Winter Conference on Applications of Computer Vision (WACV)},
    year={2024}
}

Troubleshooting

If you have any issues with trimesh and shapely, see misc/shapely.md.

found's People

Contributors

ollieboyne


found's Issues

Requirements.

py-fbx==0.1.1 depends on sip==4.19.7.
sip v4 is quite outdated.

Get colmap.json and keypoints.json

Hello, I want to ask about the COLMAP parameters used by this project.
I notice the camera model may be SIMPLE_RADIAL or SIMPLE_RADIAL_FISHEYE, because colmap.json only has f, cx, cy, k, and the camera model is shared across all images.

Thank you very much!😊

This is my project.ini:

log_to_stderr=true
random_seed=0
log_level=0
database_path=C:/Users/ESA/Desktop/NTPU/Colmap/database/synthesis_0035.db
image_path=C:/Users/ESA/Desktop/NTPU/Colmap/rgb/synthesis_0035
[ImageReader]
single_camera=true
single_camera_per_folder=false
single_camera_per_image=false
existing_camera_id=-1
default_focal_length_factor=1.2
mask_path=
camera_model=SIMPLE_RADIAL
camera_params=
camera_mask_path=
[SiftExtraction]
use_gpu=false
estimate_affine_shape=false
upright=false
domain_size_pooling=false
num_threads=-1
max_image_size=3200
max_num_features=8192
first_octave=-1
num_octaves=4
octave_resolution=3
max_num_orientations=2
dsp_num_scales=10
peak_threshold=0.0066699999999999997
edge_threshold=10
dsp_min_scale=0.16667000000000001
dsp_max_scale=3
gpu_index=-1
[SiftMatching]
use_gpu=false
cross_check=true
guided_matching=false
num_threads=-1
max_num_matches=32768
max_ratio=0.80000000000000004
max_distance=0.69999999999999996
gpu_index=-1
[TwoViewGeometry]
multiple_models=false
compute_relative_pose=false
min_num_inliers=15
max_num_trials=10000
max_error=4
confidence=0.999
min_inlier_ratio=0.25
[SequentialMatching]
quadratic_overlap=true
loop_detection=false
overlap=10
loop_detection_period=10
loop_detection_num_images=50
loop_detection_num_nearest_neighbors=1
loop_detection_num_checks=256
loop_detection_num_images_after_verification=0
loop_detection_max_num_features=-1
vocab_tree_path=
[SpatialMatching]
is_gps=true
ignore_z=true
max_num_neighbors=50
max_distance=100
[BundleAdjustment]
refine_focal_length=true
refine_principal_point=false
refine_extra_params=true
refine_extrinsics=true
max_num_iterations=100
max_linear_solver_iterations=200
function_tolerance=0
gradient_tolerance=0
parameter_tolerance=0
[Mapper]
ignore_watermarks=false
multiple_models=true
extract_colors=true
ba_refine_focal_length=true
ba_refine_principal_point=false
ba_refine_extra_params=true
fix_existing_images=false
tri_ignore_two_view_tracks=true
min_num_matches=15
max_num_models=50
max_model_overlap=20
min_model_size=10
init_image_id1=-1
init_image_id2=-1
init_num_trials=200
num_threads=-1
ba_min_num_residuals_for_multi_threading=50000
ba_local_num_images=6
ba_local_max_num_iterations=25
ba_global_images_freq=500
ba_global_points_freq=250000
ba_global_max_num_iterations=50
ba_global_max_refinements=5
ba_local_max_refinements=2
snapshot_images_freq=0
init_min_num_inliers=100
init_max_reg_trials=2
abs_pose_min_num_inliers=30
max_reg_trials=3
tri_max_transitivity=1
tri_complete_max_transitivity=5
tri_re_max_trials=1
min_focal_length_ratio=0.10000000000000001
max_focal_length_ratio=10
max_extra_param=1
ba_local_function_tolerance=0
ba_global_images_ratio=1.1000000000000001
ba_global_points_ratio=1.1000000000000001
ba_global_function_tolerance=0
ba_global_max_refinement_change=0.00050000000000000001
ba_local_max_refinement_change=0.001
init_max_error=4
init_max_forward_motion=0.94999999999999996
init_min_tri_angle=16
abs_pose_max_error=12
abs_pose_min_inlier_ratio=0.25
filter_max_reproj_error=4
filter_min_tri_angle=1.5
local_ba_min_tri_angle=6
tri_create_max_angle_error=2
tri_continue_max_angle_error=2
tri_merge_max_reproj_error=4
tri_complete_max_reproj_error=4
tri_re_max_angle_error=5
tri_re_min_ratio=0.20000000000000001
tri_min_angle=1.5
snapshot_path=
[PatchMatchStereo]
geom_consistency=true
filter=true
allow_missing_files=false
write_consistency_graph=false
max_image_size=2000
window_radius=5
window_step=1
num_samples=15
num_iterations=5
filter_min_num_consistent=2
depth_min=-1
depth_max=-1
sigma_spatial=-1
sigma_color=0.20000000298023224
ncc_sigma=0.60000002384185791
min_triangulation_angle=1
incident_angle_sigma=0.89999997615814209
geom_consistency_regularizer=0.30000001192092896
geom_consistency_max_cost=3
filter_min_ncc=0.10000000149011612
filter_min_triangulation_angle=3
filter_geom_consistency_max_cost=1
cache_size=32
gpu_index=-1
[StereoFusion]
use_cache=false
num_threads=-1
max_image_size=-1
min_num_pixels=5
max_num_pixels=10000
max_traversal_depth=100
check_num_images=50
max_reproj_error=2
max_depth_error=0.0099999997764825821
max_normal_error=10
cache_size=32
mask_path=
[Render]
adapt_refresh_rate=true
image_connections=false
min_track_len=3
refresh_rate=1
projection_type=0
max_error=2
[ExhaustiveMatching]
block_size=50
[VocabTreeMatching]
num_images=100
num_nearest_neighbors=5
num_checks=256
num_images_after_verification=0
max_num_features=-1
vocab_tree_path=
match_list_path=
[TransitiveMatching]
batch_size=1000
num_iterations=3
[ImagePairsMatching]
block_size=1225
[PoissonMeshing]
depth=13
num_threads=-1
point_weight=1
color=32
trim=10
[DelaunayMeshing]
num_threads=-1
max_proj_dist=20
max_depth_dist=0.050000000000000003
visibility_sigma=3
distance_sigma_factor=1
quality_regularization=1
max_side_length_factor=25
max_side_length_percentile=95

The result of exporting the sparse model as text (cameras.txt):

# Camera list with one line of data per camera:
#   CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]
# Number of cameras: 1
1 SIMPLE_RADIAL 480 640 489.51368606257324 240 320 0.0068949103364020492

Synthesis data 0035 RGB (colmap.json):

{
    "camera": {
        "width": 480,
        "height": 640,
        "f": 488.9267622633821,
        "cx": 240.0,
        "cy": 320.0,
        "k": 0.007923233772083796
    },...
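For reference, a minimal sketch of mapping a SIMPLE_RADIAL entry from cameras.txt onto the camera fields shown in the colmap.json snippet above (COLMAP's SIMPLE_RADIAL model stores f, cx, cy, k in that order); the output key names simply copy that snippet:

import json

# One SIMPLE_RADIAL line from cameras.txt: CAMERA_ID MODEL WIDTH HEIGHT f cx cy k
line = "1 SIMPLE_RADIAL 480 640 489.51368606257324 240 320 0.0068949103364020492"
_, model, width, height, f, cx, cy, k = line.split()
assert model == "SIMPLE_RADIAL"

camera = {
    "width": int(width),
    "height": int(height),
    "f": float(f),
    "cx": float(cx),
    "cy": float(cy),
    "k": float(k),
}
print(json.dumps({"camera": camera}, indent=4))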

Can I use Colab?

How can I use this in Google Colab or elsewhere?
The requirements do not install, even after switching to Python 3.9.
Or is this only a Colab platform error?
