dylanhu7 / eznerf

An organized, documented, and educational implementation of NeRF in PyTorch

License: MIT License


EZNeRF: Neural Radiance Fields Explained!

What is a NeRF?

In simplest terms, a neural radiance field (NeRF) represents a 3D scene as a neural network which can then be queried to synthesize novel views of the scene. In other words, given a set of images of a scene captured from different angles, a NeRF learns to represent the scene and can produce images of the scene from any angle. This is how we generate the videos of the scenes above from a limited set of images!
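
To make "querying the scene from any angle" concrete, here is a minimal sketch (not EZNeRF's actual code) of how a camera pose turns a pixel into a world-space ray that can then be marched through the scene. It uses NumPy for brevity and assumes the pinhole camera convention from the original NeRF code, where the camera looks down its local -z axis:

```python
import numpy as np

def pixel_to_ray(i, j, width, height, focal, c2w):
    """Map pixel (i, j) to a world-space ray (origin, unit direction).

    c2w is a 4x4 camera-to-world pose matrix; the camera looks down -z.
    """
    # Direction through this pixel, in camera coordinates.
    d_cam = np.array([(i - width / 2) / focal,
                      -(j - height / 2) / focal,
                      -1.0])
    # Rotate into world coordinates and normalize to unit length.
    d_world = c2w[:3, :3] @ d_cam
    d_world /= np.linalg.norm(d_world)
    # Every ray from this camera shares the same origin: the camera center.
    origin = c2w[:3, 3]
    return origin, d_world
```

Rendering a novel view then amounts to evaluating the network at sample points `origin + t * direction` along each pixel's ray and compositing the predicted colors.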

More specifically, NeRF learns to predict the color and volume density corresponding to any point in space and any viewing direction. By then accumulating color and volume density over samples along rays in space, we can volume render novel views of the scene.
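
The accumulation step can be sketched as follows. This is a simplified, single-ray NumPy version of the discrete volume-rendering quadrature from the paper; EZNeRF's actual implementation operates on batches of rays with PyTorch tensors, but the math is the same:

```python
import numpy as np

def composite(sigmas, rgbs, deltas):
    """Alpha-composite per-sample densities and colors along one ray.

    sigmas: (N,) volume densities, rgbs: (N, 3) colors,
    deltas: (N,) distances between adjacent samples.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i).
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * trans  # contribution of each sample to the pixel
    return (weights[:, None] * rgbs).sum(axis=0)
```

For example, a fully opaque first sample hides everything behind it, so the pixel takes that sample's color; zero density everywhere yields black.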

NeRF was first introduced by Mildenhall et al. in the seminal paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. The original code for the paper can be found here.

What is EZNeRF?

EZNeRF is an implementation of NeRF that largely follows the original paper, designed with the goal of providing organized, documented, and easy-to-understand code.

Markdown explanations corresponding to each code file are provided; they cover both the higher-level intuition and the per-line intent of each part of the implementation.

NeRF is not only promising in the fields of computer vision and computer graphics, with important applications such as AR/VR, but it is also a fantastic opportunity to learn about and integrate many concepts from computer graphics, computer vision, deep learning, and statistics.

The goal of EZNeRF is to thoroughly communicate all of these ideas so that a reader (who may be a student, researcher, engineer, or any curious individual) may understand how all of these concepts tie together and produce the results you see above.

EZNeRF does not attempt to be the most efficient or performant implementation of NeRF, nor does it attempt to be the most complete. In fact, EZNeRF currently only supports synthetic scenes with provided camera poses. If you are looking for implementations that are suited for real applications, there are many other implementations and variants of NeRF that produce better results much faster. Check out Nerfstudio and InstantNGP for some great examples.

Getting Started

EZNeRF is implemented in Python with PyTorch and a couple of other libraries. We recommend using Conda for managing dependencies in a virtual environment. Miniforge is recommended, but any Conda installation should work. Alternatively, you can install the dependencies manually with pip.

Installing Dependencies

After cloning the repository, you can install the required dependencies by navigating to the root directory of the repository and running:

conda env create -f environment.yaml

This environment.yaml file includes pytorch-cuda so that machines with NVIDIA GPUs may leverage those resources. However, if this package cannot be found or fails to install on your system (for example, on a Mac), you can remove it from the environment.yaml file.

After creating the environment, activate it by running:

conda activate eznerf

Downloading Data

We use the same data as the original implementation. From the root directory of the repository, run:

./util/download_example_data.sh

There should now be a data directory at the root of the repository.

Running EZNeRF

eznerf.py

In most cases, all you will need to run is the combined eznerf.py script, which accepts a variety of arguments. Config files are also supported, and we provide a basic config file for training the example LEGO scene in config/lego.yaml.

Example Usage

To train a model on the example data, run:

python eznerf.py --config config/lego.yaml

WandB

Once you initiate a training run, the logging tool WandB will provide you with a link to a dashboard where you can monitor the training progress and view the results of the model. WandB may prompt you to create an account if you do not already have one, but you can proceed anonymously if you wish.

Pre-trained Weights

To be made available soon.

Contributing

Contributions are welcome! Please open an issue or submit a pull request if you have any suggestions or would like to contribute.

