
DBARF: Deep Bundle-Adjusting Generalizable Neural Radiance Fields

Official Implementation of our CVPR 2023 paper: "DBARF: Deep Bundle-Adjusting Generalizable Neural Radiance Fields"

[Project Page | arXiv]

1. Installation

conda create -n dbarf python=3.9
conda activate dbarf

# Install PyTorch
# For CUDA 10.2:
# conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=10.2 -c pytorch

# CUDA 11.3
conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch

# HLoc is used for extracting keypoints and matching features.
git clone --recursive https://github.com/cvg/Hierarchical-Localization/
cd Hierarchical-Localization/
python -m pip install -e .
cd ..

pip install opencv-python matplotlib easydict tqdm networkx einops imageio visdom tensorboardX configargparse lpips
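To confirm the pip dependencies above import cleanly, a small helper like the following can be run (a minimal sketch; `check_modules` is an illustrative name, not part of this repository):

```python
import importlib

def check_modules(names):
    """Return the subset of module names that fail to import."""
    failed = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            failed.append(name)
    return failed

# After installation, the following should return []:
# check_modules(["cv2", "matplotlib", "easydict", "tqdm", "networkx",
#                "einops", "imageio", "visdom", "tensorboardX",
#                "configargparse", "lpips"])
```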

2. Preprocessing

1) Extracting Scene Graph

After installing HLoc, we can extract the scene graph for each scene:

python3 -m scripts.preprocess_dbarf_dataset \
    --dataset_dir $image_dir --outputs $output_dir \
    --gpu_idx 0 --min_track_length 2 --max_track_length 15 \
    --recon False --disambiguate False --visualize False

For debugging, we can also enable incremental SfM with --recon True (not required for DBARF, since our method does not rely on ground-truth camera poses), remove ambiguous wrong matches with --disambiguate True, and visualize reconstruction results with --visualize True.

2) Post-processing COLMAP Model

We also need to convert COLMAP's model into the .npy format with post-processing:

python3 -m scripts.colmap_model_to_poses_bounds --input_dir $colmap_model_dir
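The resulting poses_bounds.npy follows the LLFF convention: one row of 17 floats per image, i.e. a flattened 3x5 matrix (a 3x4 camera-to-world pose with a [height, width, focal] column appended) followed by the near/far depth bounds. A minimal loader sketch (the function name is illustrative, not part of this repository):

```python
import numpy as np

def load_poses_bounds(path):
    """Load an LLFF-style poses_bounds.npy file.

    Returns:
        poses:  (N, 3, 5) array; each 3x5 matrix is a 3x4 camera-to-world
                pose with a [height, width, focal] column appended.
        bounds: (N, 2) array of per-image [near, far] depth bounds.
    """
    data = np.load(path)                    # shape: (N, 17)
    poses = data[:, :15].reshape(-1, 3, 5)  # per-image 3x5 matrices
    bounds = data[:, 15:17]                 # per-image [near, far]
    return poses, bounds
```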

3. Dataset Structure

IBRNet                
├── train
│   ├── real_iconic_noface
│   │   ├── airplants
│   │   │   ├── images/
│   │   │   ├── images_4/
│   │   │   ├── images_8/
│   │   │   ├── database.db
│   │   │   ├── poses_bounds.npy
│   │   │   ├── VG_N_M.g2o
│   │   ├── ...
│   ├── ibrnet_collected_1
│   │   ├── ...
│   ├── ibrnet_collected_2
│   │   ├── ...
│   ├── ...     
├── eval
│   ├── nerf_llff_data
│   │   ├── ...
│   ├── ibrnet_collected_more
│   ├── ...   

4. Training & Evaluation

Once your data is prepared, you can train IBRNet and DBARF. First, edit the corresponding configuration file for the dataset you want to train on. Note that for IBRNet, we train only a coarse NeRF instead of a coarse NeRF plus a fine NeRF. To reproduce our results, we recommend finetuning from our pretrained model.

Training

cd scripts/shell

# Train coarse-only IBRNet
./train_coarse_ibrnet.sh pretrain False 0 # For pretraining
./train_coarse_ibrnet.sh finetune False 0 # For finetuning

# Train DBARF
./train_dbarf.sh pretrain False 0 # For pretraining
./train_dbarf.sh finetune False 0 # For finetuning

Evaluation

cd scripts/shell
ITER_NUMBER=200000
GPU_IDX=0
# For coarse-only IBRNet
./eval_coarse_llff_all.sh $ITER_NUMBER $GPU_IDX
./eval_coarse_scannet.sh $ITER_NUMBER $GPU_IDX

# For DBARF
./eval_dbarf_llff_all.sh $ITER_NUMBER $GPU_IDX
./eval_dbarf_ibr_collected_all.sh $GPU_IDX
./eval_dbarf_scannet.sh $ITER_NUMBER $GPU_IDX

Rendering videos

cd scripts/shell
ITER_NUMBER=200000
GPU_IDX=0
# For coarse ibrnet
./render_coarse_llff_all.sh $ITER_NUMBER $GPU_IDX

# For dbarf
./render_dbarf_llff_all.sh $ITER_NUMBER $GPU_IDX

Citation

If you find our code useful for your research, please consider citing our paper:

@InProceedings{Chen_2023_CVPR,
    author    = {Chen, Yu and Lee, Gim Hee},
    title     = {DBARF: Deep Bundle-Adjusting Generalizable Neural Radiance Fields},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {24-34}
}

dbarf's People

Contributors: aibluefisher


dbarf's Issues

extract_features file missing

I am getting the following error after commenting out the disambiguation module dependency:

ImportError: cannot import name 'extract_features' from 'scripts' (/home/nishantjn/Dbarf/dbarf/scripts/__init__.py)

Got Error of Extracting Scene Graph

Hi @AIBluefisher, when I ran the preprocess_dbarf_dataset.py, I got the error:

Traceback (most recent call last):
  File "/import/networks/lzl_workspace/dbarf/./scripts/preprocess_dbarf_dataset.py", line 54, in <module>
    main()
  File "/import/networks/lzl_workspace/dbarf/./scripts/preprocess_dbarf_dataset.py", line 40, in main
    view_graph_path, database_path, num_view_pairs = extract_relative_poses.main(args=args)
  File "/import/networks/lzl_workspace/dbarf/scripts/extract_relative_poses.py", line 195, in main
    import_features(image_ids, database_path, local_features_path)
  File "/import/networks/lzl_workspace/dbarf/Hierarchical-Localization/hloc/triangulation.py", line 64, in import_features
    keypoints = get_keypoints(features_path, image_name)
  File "/import/networks/lzl_workspace/dbarf/Hierarchical-Localization/hloc/utils/io.py", line 36, in get_keypoints
    dset = hfile[name]['keypoints']
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "/homes/al315/anaconda3/envs/dbarf/lib/python3.9/site-packages/h5py/_hl/group.py", line 357, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5o.pyx", line 189, in h5py.h5o.open
KeyError: 'Unable to synchronously open object (component not found)'

Questions on Dataset

I would like to ask which datasets are used for training, and how the sampling proportions are allocated during pretraining, in order to reproduce the good results of the pretrained model you provide on your website.

Cannot find the 'disambiguation' module

I followed the README to install the environment, but I cannot find the 'disambiguation' module in the code. Is the open-sourced code missing this module, or does 'disambiguation' come from the HLoc project? Thanks for your help!
