zju3dv / neusc

A Temporal Voyage: Code for "Neural Scene Chronology" [CVPR 2023]

Home Page: https://zju3dv.github.io/neusc

License: Other

Python 92.49% C++ 0.11% Cuda 7.04% C 0.36%
4d-reconstruction view-synthesis cvpr2023


Neural Scene Chronology

(teaser figure)

Neural Scene Chronology
Haotong Lin, Qianqian Wang, Ruojin Cai, Sida Peng, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely
CVPR 2023

Installation

Set up the python environment

conda create -n nsc python=3.10
conda activate nsc
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
pip install git+https://github.com/haotongl/kaolin.git@ht 

Set up datasets

0. Set up workspace

The workspace is the disk directory that stores datasets, training logs, checkpoints and results. Please ensure it has enough space.

export workspace=$PATH_TO_YOUR_WORKSPACE

1. Pre-trained model

Download the pretrained model from this link.

Or you can use the following command to download it.

cd $workspace
mkdir -p trained_model/nsc/5pointz/base
cd trained_model/nsc/5pointz/base
gdown 1edfa_pYk1m_wxC7dmiHcs40BA89z-kW6

2. Chronology dataset (Optional; download it only if you want to do training)

Download the 5PointZ dataset from this link.

Or you can use the following command to download it.

cd $workspace
mkdir chronology
gdown 1ytPDh5s5bzVnPLU01jcQ4xOA9_DDgliD 
unzip 5pointz.zip -d chronology
rm 5pointz.zip

3. Running neural scene chronology on your custom chronology dataset

Make sure you have prepared images with timestamps. The timestamp is encoded as the prefix of the image filename; please refer to the 5PointZ example dataset for the exact format.
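As a minimal sketch, the timestamp prefix can be parsed like this (the exact filename pattern below, a Unix-time prefix before the first underscore, is an assumption for illustration; check the 5PointZ example dataset for the real convention):

```python
from datetime import datetime, timezone

def parse_timestamp(filename: str) -> datetime:
    """Extract the timestamp prefix from an image filename.

    Assumes names like "1349566800_photo.jpg", where the part before the
    first underscore is a Unix timestamp (an assumed convention; verify
    against the 5PointZ example dataset).
    """
    prefix = filename.split("_", 1)[0]
    return datetime.fromtimestamp(int(prefix), tz=timezone.utc)

# Example: a photo with a 2012 Unix-time prefix.
print(parse_timestamp("1349566800_photo.jpg").year)  # → 2012
```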

  1. Run COLMAP SfM and undistort the images, then run dense reconstruction to obtain meshed-poisson.ply. Use MeshLab to select the region of interest from meshed-poisson.ply and save it as meshed-poisson-clean.ply.
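A sketch of this COLMAP pipeline using the standard CLI (the directory layout under ${CUSTOM_DATA_PATH} is an assumption; adapt paths to your dataset):

```shell
# Assumed layout: ${CUSTOM_DATA_PATH}/images holds the timestamped photos.
colmap feature_extractor --database_path ${CUSTOM_DATA_PATH}/database.db --image_path ${CUSTOM_DATA_PATH}/images
colmap exhaustive_matcher --database_path ${CUSTOM_DATA_PATH}/database.db
mkdir -p ${CUSTOM_DATA_PATH}/sparse
colmap mapper --database_path ${CUSTOM_DATA_PATH}/database.db --image_path ${CUSTOM_DATA_PATH}/images --output_path ${CUSTOM_DATA_PATH}/sparse
# Undistort, then run dense reconstruction to get meshed-poisson.ply.
colmap image_undistorter --image_path ${CUSTOM_DATA_PATH}/images --input_path ${CUSTOM_DATA_PATH}/sparse/0 --output_path ${CUSTOM_DATA_PATH}/dense
colmap patch_match_stereo --workspace_path ${CUSTOM_DATA_PATH}/dense
colmap stereo_fusion --workspace_path ${CUSTOM_DATA_PATH}/dense --output_path ${CUSTOM_DATA_PATH}/dense/fused.ply
colmap poisson_mesher --input_path ${CUSTOM_DATA_PATH}/dense/fused.ply --output_path ${CUSTOM_DATA_PATH}/dense/meshed-poisson.ply
```

After this, open meshed-poisson.ply in MeshLab, crop the region of interest, and save it as meshed-poisson-clean.ply.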

  2. Generate semantic maps. The purpose of this step is to segment the sky for training the environment map, and to segment pedestrians and vehicles to avoid these pixels during training. Please download the semantic model from this link.

python scripts/semantic/prepare_data.py --root_dir ${CUSTOM_DATA_PATH} --gpu 0 
  3. Generate annotations.

First, generate cam_dict.npy and train.txt, which store the camera poses and the training image list, respectively. You can modify train.txt to select specific time periods, or hold out some images for testing.

python scripts/colmap/gen_annots.py --input ${CUSTOM_DATA_PATH}
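Selecting a time period from train.txt can be sketched as below (assuming, as an illustration, that train.txt lists one filename per line with a Unix-time prefix before the first underscore; this format is an assumption, not the script's documented output):

```python
def filter_by_period(lines, start, end):
    """Keep only images whose Unix-time prefix falls in [start, end).

    Assumes filenames like "1300000000_b.jpg" (assumed convention).
    """
    kept = []
    for name in lines:
        ts = int(name.split("_", 1)[0])
        if start <= ts < end:
            kept.append(name)
    return kept

# Hypothetical train.txt contents spanning 2010-2014.
train = ["1262304000_a.jpg", "1300000000_b.jpg", "1388534400_c.jpg"]
# Keep only images taken during 2011 (bounds are 2011-01-01 and 2012-01-01 UTC).
print(filter_by_period(train, 1293840000, 1325376000))  # → ['1300000000_b.jpg']
```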

Then, generate trash.txt to filter out particularly noisy images: for example, images in which more than two-thirds of the pixels are portraits, or images with very few points registered during SfM. Such images usually contribute little to the model.

python scripts/colmap/gen_trash.py --data_root ${CUSTOM_DATA_PATH}
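The two filtering criteria above can be sketched as a simple predicate (the thresholds and inputs are illustrative assumptions, not the actual gen_trash.py logic):

```python
def is_trash(person_fraction, num_registered_points,
             person_thresh=2 / 3, min_points=50):
    """Flag an image if people dominate the frame or SfM registered few points.

    person_fraction: fraction of pixels labeled as people (from semantic maps).
    num_registered_points: 3D points registered to this image during SfM.
    min_points is a hypothetical threshold chosen for illustration.
    """
    return person_fraction > person_thresh or num_registered_points < min_points

print(is_trash(0.8, 500))  # mostly portrait → True
print(is_trash(0.1, 10))   # too few SfM points → True
print(is_trash(0.1, 500))  # keep → False
```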

Inference and Training

Reproducing the demo video

The following command will render the demo video with the specified camera path and time.

python run.py --type evaluate --cfg_file configs/exps/nsc/5pointz_renderdemo.yaml test_dataset.input_ratio 1.

Training a model

python train_net.py --cfg_file configs/exps/nsc/5pointz.yaml

Our code also supports multi-GPU training. The published pretrained model was trained for 400k iterations on 2 A6000 GPUs.

python -m torch.distributed.launch --nproc_per_node=2 train_net.py --cfg_file configs/exps/nsc/5pointz.yaml distributed True gpus 0,1

Visualization

python run.py --type evaluate --cfg_file configs/exps/nsc/5pointz.yaml save_result True

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{lin2023neural,
  title={Neural Scene Chronology},
  author={Lin, Haotong and Wang, Qianqian and Cai, Ruojin and Peng, Sida and Averbuch-Elor, Hadar and Zhou, Xiaowei and Snavely, Noah},
  booktitle={CVPR},
  year={2023}
}

Acknowledgement

We would like to thank Shangzhan Zhang for the helpful discussions. Some of the NGP-related code in this repo is borrowed from torch-ngp; thanks to Jiaxiang Tang!


neusc's Issues

How to run colmap sfm on a collection of images with different timestamps

In my understanding, sparse reconstruction models vary for images captured at different timestamps. I don't understand how to run COLMAP Structure-from-Motion (SfM) on such an image dataset. Should all images with different timestamps be placed in a single folder?

Code Release

Hi Haotong,
Thank you for the great work.
Any estimate on the code release date?

How does Colmap perform dense reconstruction with a collection of images captured at different timestamps?

When using COLMAP for reconstruction, images captured at the same location but at different timestamps can vary. How can dense reconstruction be achieved in this case, and will the point clouds from different time points overlap?
I ran COLMAP dense reconstruction, and patterns that should have appeared at the same position were instead reconstructed at different locations.

data processing scripts

I'm interested in obtaining the data processing scripts mentioned in the paper. Could you please tell me where I can find or access these scripts?
Thank you very much.
