
NerfBridge

For a complete video see https://youtu.be/EH0SLn-RcDg.

Introduction

This package implements a bridge between the Robot Operating System (ROS) and the excellent Nerfstudio package. Our goal with NerfBridge is to provide a minimal and flexible starting point for robotics researchers to explore possible applications of neural implicit representations.

In our experience, when it comes to software in robotics, solutions are rarely one-size-fits-all. To that end, we cannot provide meticulous installation and implementation instructions that we can guarantee will work for every robotics platform. Rather, we will try to outline the core components that you will need to get ROS working with Nerfstudio, and hopefully that will get you started on the road to creating some cool NeRFs with your robot.

The core functionality of NerfBridge is fairly simple. At runtime the user provides some basic information about the camera sensor, the name of a ROS topic that publishes images, and the name of a ROS topic that publishes a camera pose corresponding to each image. Using this information NerfBridge starts an instance of Nerfstudio, initializes a ROS node that listens to the image and pose topics, and pre-allocates two PyTorch tensors of fixed size (one for the images and one for the corresponding poses). As training commences, images and poses received by the NerfBridge node are copied into the pre-allocated data tensors, and in turn pixels are sampled from these tensors and used to train a NeRF with Nerfstudio. This process continues until the limit of the pre-allocated tensors is reached, at which point NerfBridge stops copying in new images and training proceeds on the fixed data until completion.
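
To make this buffering scheme concrete, below is a minimal, illustrative Python sketch of the idea. This is not the actual NerfBridge implementation (the class and method names here are hypothetical); in NerfBridge the equivalent logic lives in the ROS dataloader, which fills the tensors from image/pose message callbacks.

import torch


class ImagePoseBuffer:
    """Illustrative fixed-size buffer for streamed training data.

    Hypothetical sketch of NerfBridge's buffering scheme, not the actual
    implementation: real NerfBridge fills tensors like these from ROS
    image/pose message callbacks inside its dataloader.
    """

    def __init__(self, num_images: int, height: int, width: int):
        # Pre-allocate fixed-size tensors once, up front.
        self.images = torch.zeros(num_images, height, width, 3)
        self.poses = torch.zeros(num_images, 4, 4)
        self.num_filled = 0  # number of slots holding real data

    def insert(self, image: torch.Tensor, pose: torch.Tensor) -> bool:
        # Copy a newly received posed image into the next free slot.
        # Returns False once the buffer is full; from then on training
        # simply continues on the fixed data already collected.
        if self.num_filled >= self.images.shape[0]:
            return False
        self.images[self.num_filled].copy_(image)
        self.poses[self.num_filled].copy_(pose)
        self.num_filled += 1
        return True

    def sample_pixels(self, rays_per_batch: int):
        # Sample random pixels (and their poses) from the filled slots.
        # Assumes at least one image has been inserted.
        cam = torch.randint(0, self.num_filled, (rays_per_batch,))
        row = torch.randint(0, self.images.shape[1], (rays_per_batch,))
        col = torch.randint(0, self.images.shape[2], (rays_per_batch,))
        return self.images[cam, row, col], self.poses[cam]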

Requirements

  • A Linux machine (tested with Ubuntu 22.04)
    • This machine should also have a CUDA-capable GPU and a fairly strong CPU.
  • ROS2 Humble installed on your Linux machine
  • A camera that is compatible with ROS2 Humble
  • Some means by which to estimate the pose of the camera (SLAM, a motion capture system, etc.)

Installation

The first step to getting NerfBridge working is to install just the dependencies for Nerfstudio using the installation guide. Once the dependencies are installed, install NerfBridge (and Nerfstudio v1.0) using pip install -e . in the root of this repository.

This will add NerfBridge to the list of available methods in the Nerfstudio CLI. To test whether NerfBridge has been registered by the CLI after installation, run ns-train -h; if installation was successful then you should see ros-nerfacto in the list of available methods.

These instructions assume that ROS2 is already installed on the machine that you will be running NerfBridge on, and that the appropriate ROS packages to provide a stream of color images and poses from your camera are installed and working. For details on the packages that we use to provide this data see the section below on Our Setup.

Installing NerfBridge is a rather involved process because it requires using Anaconda alongside ROS2. To that end, below is a step-by-step guide for installing NerfBridge on an x86_64 Ubuntu 22.04 machine with an NVIDIA GPU.

  1. Install ROS2 Humble using the installation guide.

  2. Install Miniconda (or Anaconda) using the installation guide.

  3. Create a conda environment for NerfBridge. Take note of the optional procedure for completely isolating the conda environment from your machine's base Python site-packages. For more details see this StackOverflow post and PEP-0370.

    conda create --name nerfbridge -y python=3.10
    
    # OPTIONAL, but recommended when using 22.04
    conda activate nerfbridge
    conda env config vars set PYTHONNOUSERSITE=1
    conda deactivate
  4. Activate the conda environment and install Nerfstudio dependencies.

    # Activate conda environment, and upgrade pip
    conda activate nerfbridge
    python -m pip install --upgrade pip
    
    # PyTorch, Torchvision dependency
    pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
    
    # CUDA dependency (by far the easiest way to manage cuda versions)
    conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
    
    # TinyCUDANN dependency (takes a while!)
    pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
    
    # GSPLAT dependency (takes a while!)
    # Avoids building at runtime of first splatfacto training.
    pip install git+https://github.com/nerfstudio-project/gsplat.git
  5. Clone, and install NerfBridge.

    # Clone the NerfBridge Repo
    git clone https://github.com/javieryu/nerf_bridge.git
    
    # Install NerfBridge (and Nerfstudio as a dependency)
    # Make sure your conda env is activated!
    cd nerf_bridge
    pip install -e . 

Now you should be set up to run NerfBridge. The next section is a basic tutorial on training your first NeRF using NerfBridge.

Example using a ROSBag

This example simulates streaming data from a robot by replaying a ROS2 bag, and using NerfBridge to train a Nerfacto model on that data. This example is a great way to check that your installation of NerfBridge is working correctly.

These instructions are designed to be run in two different terminal sessions, referred to below as Terminals 1 and 2.

  1. [Terminal 1] Download the example desk rosbag using the provided download script. This script creates a folder nerf_bridge/rosbags, and downloads a rosbag to that directory.

    # Activate nerfbridge conda env
    conda activate nerfbridge
    
    # Run download script
    cd nerf_bridge
    python scripts/download_data.py
  2. [Terminal 2] Start the desk rosbag paused. The three important ROS topics in this rosbag are an image topic (/camera/color/image_raw/compressed), a depth topic (/camera/aligned_depth_to_color/image_raw), and a camera pose topic (/visual_slam/tracking/odometry). These will be the data streams that will be used to train the Nerfacto model using NerfBridge.

    # NOTE: for this example, Terminal 2 does not need to use the conda env.
    
    # Play the rosbag, but start it paused.
    cd nerf_bridge/rosbags
    ros2 bag play desk --start-paused
  3. [Terminal 1] Start NerfBridge using the Nerfstudio ns-train CLI. Notice that the command includes some parameters that must be changed from the base NerfBridge configuration to accommodate the scene. The function of these parameters is outlined in the next section.

    ns-train ros-depth-nerfacto --data configs/desk.json --pipeline.datamanager.use-compressed-rgb True --pipeline.datamanager.dataparser.scene-scale-factor 0.5 --pipeline.datamanager.data-update-freq 8.0

    After some initialization, a message will appear stating (NerfBridge) Images received: 0. At this point you should open the Nerfstudio viewer in a browser tab to visualize the Nerfacto model as it trains. Training will start after completing the next step.

  4. [Terminal 2] Press the SPACE key to start playing the rosbag. Once the pre-training image buffer is filled (defaults to 10 images), training should commence, and the usual Nerfstudio print messages will appear in Terminal 1. After a few seconds the Nerfstudio viewer should start to show the received images as camera frames, and the Nerfacto model should begin to fill in.

  5. After the rosbag in Terminal 2 finishes playing, NerfBridge will continue training the Nerfacto model on all of the data that it has received, but no new data will be added. You can use CTRL+C to kill NerfBridge after you are done inspecting the Nerfacto model in the viewer.

Running and Configuring NerfBridge

The design and configuration of NerfBridge is heavily inspired by Nerfstudio, and our recommendation is to become familiar with how that repository works before jumping into your own custom implementation of NerfBridge.

Nerfstudio needs three key sources of data to train a NeRF: (1) color images, (2) camera poses corresponding to the color images, and (3) a camera model matching that of the camera being used. NerfBridge expects that (1) and (2) are published to corresponding ROS image and pose topics, and that the names of these topics as well as (3) are provided in a JSON configuration file at launch. A sample NerfBridge configuration JSON is provided at nerf_bridge/configs/desk.json (this is the config used for the example above). We recommend using the camera_calibration package to determine the camera model parameters.
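
For reference, a minimal configuration JSON has roughly the following shape. This is an illustrative sketch, not a verbatim copy of configs/desk.json: the intrinsics below are placeholder values, the field names follow the example configurations quoted elsewhere in this document, and depth-supervised methods additionally require the depth-related fields.

{
    "camera_model": "PINHOLE",
    "fx": 600.0,
    "fy": 600.0,
    "cx": 320.0,
    "cy": 240.0,
    "W": 640,
    "H": 480,
    "image_topic": "/camera/color/image_raw",
    "pose_topic": "/visual_slam/tracking/odometry"
}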

At present we support two Nerfstudio architectures out of the box: nerfacto and splatfacto. For each of these architectures we support both RGB-only training and RGBD training. To use any of the methods, simply provide the correct JSON configuration file (if using a depth-supervised model, then also specify the appropriate depth-related configurations). The method names provided by NerfBridge are ros-nerfacto (Nerfacto RGB), ros-depth-nerfacto (Nerfacto RGBD), ros-splatfacto (Splatfacto RGB), and ros-depth-splatfacto (Splatfacto RGBD); an example invocation is shown below.
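
For example, the depth-supervised Splatfacto variant can be launched on the example rosbag with the same flags used with ros-depth-nerfacto in the tutorial above:

ns-train ros-depth-splatfacto --data configs/desk.json --pipeline.datamanager.use-compressed-rgb True --pipeline.datamanager.dataparser.scene-scale-factor 0.5 --pipeline.datamanager.data-update-freq 8.0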

Configuring the runtime functionality of NerfBridge is done through the Nerfstudio CLI configuration system, and information about the various settings can be found using the command ns-train ros-nerfacto -h. However, since this returns the configurable settings for both Nerfstudio and NerfBridge, we provide a brief outline of the NerfBridge-specific settings below.

  • msg-timeout — Before training starts, NerfBridge will wait for a set number of posed images to be published on the topics specified in the configuration JSON. This value sets the time (in seconds) that NerfBridge will wait before timing out and aborting training. Default: 300.0 (float, s).
  • num-msgs-to-start — Number of messages (images and poses) that must have been successfully streamed to Nerfstudio before training will start. Default: 10 (int).
  • pipeline.datamanager.data-update-freq — Frequency, in Hz, at which images are added to the training set (allows for sub-sampling of the ROS stream). Default: 5.0 (float, Hz).
  • pipeline.datamanager.num-training-images — The final size of the training dataset. Default: 500 (int).
  • pipeline.datamanager.slam-method — Used to select the correct coordinate transformations and message types for the streamed poses, with options [cuvslam, mocap, orbslam3]. Note: currently only tested on ROS2 with the cuvslam option while using the /visual_slam/tracking/odometry topic from ISAAC VSLAM. Default: cuvslam (string).
  • pipeline.datamanager.topic-sync — Selects between [exact, approx], which correspond to the variety of TimeSynchronizer used to subscribe to topics (see the sketch after this list). Default: exact (string).
  • pipeline.datamanager.topic-slop — If approximate time synchronization is used, this parameter controls the allowable slop, in seconds, between the image and pose topics for them to still be considered a match. Default: 0.05 (float, s).
  • pipeline.datamanager.use-compressed-rgb — Whether the RGB image topic is a CompressedImage topic or not. Default: False (bool).
  • pipeline.datamanager.dataparser.scene-scale-factor — How much to scale the origins by to ensure that the scene fits within the [-1, 1] box used by the Nerfacto models. Default: 1.0 (float).
  • pipeline.model.depth_seed_pts — [ONLY for ros-depth-splatfacto] Configures how many Gaussians to create from each depth image. Default: 2000 (int).
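
To illustrate what topic-sync and topic-slop control, here is a minimal rclpy sketch (illustrative only, not NerfBridge's actual subscriber) that pairs an image topic with an odometry topic using the standard message_filters package:

import rclpy
from rclpy.node import Node
import message_filters
from sensor_msgs.msg import Image
from nav_msgs.msg import Odometry


class PosedImageListener(Node):
    """Pairs an image topic with an odometry topic, in the way the
    topic-sync option does (illustrative sketch, not NerfBridge code)."""

    def __init__(self):
        super().__init__("posed_image_listener")
        image_sub = message_filters.Subscriber(self, Image, "/camera/color/image_raw")
        pose_sub = message_filters.Subscriber(self, Odometry, "/visual_slam/tracking/odometry")
        # topic-sync `exact` corresponds to message_filters.TimeSynchronizer
        # (timestamps must match exactly); `approx` corresponds to
        # ApproximateTimeSynchronizer, where `slop` is the allowable
        # timestamp difference in seconds (NerfBridge's topic-slop).
        self.sync = message_filters.ApproximateTimeSynchronizer(
            [image_sub, pose_sub], queue_size=10, slop=0.05
        )
        self.sync.registerCallback(self.on_posed_image)

    def on_posed_image(self, image: Image, odom: Odometry) -> None:
        # A matched (image, pose) pair arrives here; NerfBridge copies
        # such pairs into its pre-allocated training tensors.
        self.get_logger().info("Received a synchronized image/pose pair.")


def main():
    rclpy.init()
    rclpy.spin(PosedImageListener())


if __name__ == "__main__":
    main()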

To launch NerfBridge we use the Nerfstudio CLI with the command below.

ns-train [METHOD-NAME] --data /path/to/config.json [OPTIONS]

After Nerfstudio initializes, NerfBridge will show a prompt indicating that it is waiting to receive the appropriate number of images before training starts. When that goal has been reached, another prompt will indicate the beginning of training, and then it's off to the races!

To set the options above replace [OPTIONS] with the option name and value. For example:

ns-train ros-nerfacto --data /path/to/config.json --pipeline.datamanager.data_update_freq 1.0

will set the data update frequency to 1 Hz.

Our Setup

The following is a description of the setup that we at the Stanford Multi-robot Systems Lab have been using to train NeRFs online with NerfBridge from images captured by a camera mounted to a quadrotor.

Camera

We use an Intel RealSense D455 camera to provide our stream of images. This camera has quite a few features that make it really nice for working with NeRFs.

  • Integrated IMU: Vision-only SLAM tends to be highly susceptible to visual artifacts and lighting conditions, requiring more "babysitting". Nicely, the D455 has an integrated IMU which can be used in conjunction with the camera to provide much more reliable pose estimates.
  • Global Shutter: We haven't done a lot of tests with rolling shutter cameras, but since we are interested in drone mounted cameras this helps with managing motion blur from vibrations and movement of the camera.
  • A Great ROS Package: The manufacturers of this camera provide a ROS package, Intel RealSense ROS, that is reasonably easy to work with and easy to install.
  • Widely Used: RealSense cameras are widely used throughout the robotics industry which means there are a lot of already available resources for troubleshooting and calibration.

We currently use this camera mounted on a quadrotor that publishes images via WiFi to a ground station where the bulk of the computation takes place (Visual SLAM and NeRF training).

Alternatives: RealSense cameras are expensive, and require more powerful computers to run. For a more economical choice, see the Arducam/UCTronics OV9782. This is a bare-bones USB 2.0 camera which can be used in conjunction with the usb_cam ROS2 package. It has the same RGB sensor that is used in the D455, but at a fraction of the cost. However, using this camera will require a different SLAM algorithm than the one that is used in our setup.

SLAM

To pose our images we use the ISAAC ROS Visual SLAM package from NVIDIA. This choice is made both out of necessity and for performance. At the time of writing, there are few existing, well-documented Visual(-Inertial) Odometry packages available for ROS2. Our choice of ISAAC VSLAM is mainly motivated by a few reasons.

  • Relatively easy to install, and works out of the box with the D455.
  • In our experience, ISAAC VSLAM provides relatively robust performance under a variety of lighting conditions when used with a D455.
  • Directly leverages the compute capability of our on-board computer (see section below for more details).

Training Machine and On-board Computer

Our current computing setup is composed of two machines, a training computer and an on-board computer, which are connected via WiFi. The training computer is used to run NerfBridge, and is a powerful workstation with a Ryzen 9 5900X CPU, an NVIDIA RTX 3090, and 32 GB of DDR4 RAM. The on-board computer is an NVIDIA Jetson Orin Nano DevKit directly mounted on a custom quadrotor, and is used to run the D455 and the SLAM algorithm. At runtime, the on-board computer and training computer communicate over a local wireless network.

Alternatively, everything can run on a single machine with a camera, where the machine runs both the SLAM algorithm and the NerfBridge training. Due to compute requirements this setup will likely not be very "mobile", but can be a good way to verify that everything is running smoothly before testing on robot hardware.

Drone Workflow

In our typical workflow, we deploy the drone and start the RealSense and SLAM on-board. Then, once the drone is in a steady hover, we start NerfBridge on the training machine and begin the mapping flight, orienting the camera towards areas of interest.

Acknowledgements

NerfBridge is entirely enabled by the first-class work of the Nerfstudio Development Team and community.

Citation

If you use NerfBridge as a starting point for any research, please cite both Nerfstudio and this repository.

# --------------------------- Nerfstudio -----------------------
@article{nerfstudio,
    author = {Tancik, Matthew and Weber, Ethan and Ng, Evonne and Li, Ruilong and Yi,
            Brent and Kerr, Justin and Wang, Terrance and Kristoffersen, Alexander and Austin,
            Jake and Salahi, Kamyar and Ahuja, Abhik and McAllister, David and Kanazawa, Angjoo},
    title = {Nerfstudio: A Modular Framework for Neural Radiance Field Development},
    journal = {arXiv preprint arXiv:2302.04264},
    year = {2023},
}


# --------------------------- NerfBridge ---------------------
@article{yu2023nerfbridge,
  title={NerfBridge: Bringing Real-time, Online Neural Radiance Field Training to Robotics},
  author={Yu, Javier and Low, Jun En and Nagami, Keiko and Schwager, Mac},
  journal={arXiv preprint arXiv:2305.09761},
  year={2023}
}


nerf_bridge's Issues

Issue loading checkpoints

Hello!

While trying to use the viewer to load trained models, it gives an error that the directory does not exist. Even after specifying the checkpoint file directly, it is not able to load. Perhaps it is a basic error with the directory path, but the path seems to be correct.
Would appreciate any insight on this!

Do you have any idea how to update the camera view in the latest nerfstudio?

I tried to install the nerfstudio pinned in your repo, but the requirements for torch and CUDA might be old, and I cannot successfully install tiny-cuda-nn for it, so the older version cannot run.
So I installed the latest nerfstudio, and it runs as well, except for the viewer's camera-updating logic, because the ViewerState code in nerfstudio has changed so much. I haven't found a proper way to update the cameras so far.

ros2 humble with turtlebot3 sim and rosbag

I'm trying to test the package with a Gazebo sim before moving on to an actual bot, and I'm facing the following issue.
This is my JSON file for the sim:

{
    "camera_model": "PINHOLE",
    "fx": 1696.802685832259,
    "fy": 1696.802685832259,
    "cx": 960.5,
    "cy": 540.5,
    "W": 1920,
    "H": 1080,
    "image_topic": "/camera/image_raw",
    "pose_topic": "/odom"
}

It loads only the first image and a blank white frame in the middle.

Things I tried:

  • syncing the image and odom topics (did not work)
  • tuning update_freq, topic_sync (to approx), and topic_slop
  • custom rosbag data; it loads two images as shown below:
{
    "camera_model": "PINHOLE",
    "fx": 532.9841918945312,
    "fy": 532.9841918945312,
    "cx": 550.3889770507812,
    "cy": 303.86602783203125,
    "W": 1104,
    "H": 621,
    "image_topic": "/zed_node/rgb/image_rect_color",
    "pose_topic": "/zed_node/odom"
}


Here's the bag I used.

Can't get a right NeRF result

Hi, I'm confused: when I run orb_slam3_ros with nerf_bridge, the viewer's image stays still in the center and no other image ever shows up, and the NeRF never works well. Is there anything I'm doing wrong? Or is there any rosbag that could verify the configuration is right?

Checkpoints are not being saved

Hi,
While config.yml files are created in the correct locations, checkpoints are not being saved. Training is running fine, but I cannot view previous runs.

The nerfstudio_models directory is not being created.

Any help would be appreciated!
thank you!

Hello, about the choice of camera

I am a student from China and I cannot buy the camera you introduced. Could you please tell me what other types of cameras I can choose?

ROS Bag?

When trying to download the ROS bag file to test the build of this repo on my machine, the provided Python file ends up creating a .db3 file. Is there a simple way around this, or is there any other place to download a ROS bag?

Thanks

Rendering Results

Thank you for your good work! I have two questions:
1) When I run orbslam3 with nerfstudio, only two image frames are displayed, but it seems to be able to render the approximate scene. Is this normal?

2) Can the intrinsic parameters provided by the dataset be copied directly into the JSON file?

Thank you very much if you can help answer.

Problems related to ros2 execution

Hi.
I am trying to execute nerf-bridge in ros2 using the ros2 branch that was recently posted in this repo. I already have the json filled with the intrinsic camera parameters as well as the topics. I am publishing the topics:

  • "image_topic": "/camera/color/image_raw", that is a message of type sensor_msgs/Image.msg
  • "pose_topic": "/visual_slam/tracking/odometry", that is a message of type nav_msgs/Odometry.msg

Both topics are published at the same time, once every second (for now, only for testing purposes). The info for these topics is obtained from the poster dataset in nerfstudio (the json containing camera intrinsics, paths to the images, and transformation matrices).

I use ns-train ros-nerfacto --data /path/to/config.json as you recommend in the readme.

It shows "Killed" before even starting to train:

By doing some logging, I've found that the process dies when it does the self.pipeline config setup inside trainer.py (inside nerfstudio itself), in the setup function.
This is what the data parser shows in the config:

Any hint on what could be going on? Could it be related to the version of nerfstudio I'm using? (I just installed it, so it should be the newest available.) Or could it be related to how I'm launching the process? Or the data?

Thanks :)

Do you plan to update to ROS2?

Hi, I was wondering if you have any plans to update this to ROS2, as ROS1 will eventually become deprecated. Thanks!

Getting error while trying to start training: "Unrecognized arguments: ros_nerfacto"

Hi!

  • Cloned the project
  • Initialized "Nerfstudio" submodule
  • Installed all of Nerfstudio's dependencies
  • Executed "pip install -e ." inside the nerfstudio directory
  • Run: "python ros_train.py ros_nerfacto --data nsros_config_sample.json"
  • Getting parsing error:

It seems that the ros_nerfacto method is not recognized.
Does anybody know the fix for this?

Much appreciated!

Quality after exporting pointcloud or mesh

Great work! It seems that Nerfstudio's point cloud or mesh export does not retain the quality seen in the web viewer. Can you show me the exported mesh file of the demo video? I need it to display in mesh file readers like MeshLab or three.js.
Thanks.

No module named 'nerfstudio.data.scene_box'

Hi, thanks for your great work.
I'm trying to deploy NerfBridge on an edge-computing NVIDIA Jetson AGX Orin 64GB platform.
First, I cloned the repo with

git clone --recursive https://github.com/javieryu/nerf_bridge.git

and then installed the nerfstudio module successfully.
However, when I continued working I ran into some trouble.
To my understanding, I need to run

roslaunch orb_slam3_ros euroc_mono_inertial.launch
rosbag play ~/data/MH_01_easy.bag

, and then

python ros_train.py ros_nerfacto --data /home/hello/code/nerf_bridge/ns_orb3_euroc.json

but I got this error

(nerf_bridge) hello@hello-desktop:~/code/nerf_bridge$ python ros_train.py ros_nerfacto --data /home/hello/code/nerf_bridge/ns_orb3_euroc.json 
/home/hello/mambaforge/envs/nerf_bridge/lib/python3.8/site-packages/tinycudann/modules.py:53: UserWarning: tinycudann was built for lower compute capability (86) than the system's (87). Performance may be suboptimal.
  warnings.warn(f"tinycudann was built for lower compute capability ({cc}) than the system's ({system_compute_capability}). Performance may be suboptimal.")
Traceback (most recent call last):
  File "ros_train.py", line 29, in <module>
    from nsros.method_configs import AnnotatedBaseConfigUnion
  File "/home/hello/code/nerf_bridge/nsros/method_configs.py", line 17, in <module>
    from nerfstudio.models.nerfacto import NerfactoModelConfig
  File "/home/hello/code/nerf_bridge/nerfstudio/nerfstudio/models/nerfacto.py", line 35, in <module>
    from nerfstudio.fields.density_fields import HashMLPDensityField
  File "/home/hello/code/nerf_bridge/nerfstudio/nerfstudio/fields/density_fields.py", line 26, in <module>
    from nerfstudio.data.scene_box import SceneBox
ModuleNotFoundError: No module named 'nerfstudio.data.scene_box'

Could you give me some suggestions? Thanks very much.

Please write some detailed deployment process

So far I have only uploaded the nerf_bridge code to the server and installed Ubuntu 20.04 and ROS Noetic on the Raspberry Pi in the robot car. What should I do next? How do I configure orb_slam3_ros? How do I connect nerf_bridge to nerfstudio?
Please help me!

(NSROS) Waiting for for image streaming to begin ....

Thanks for building this interesting project, and thanks for open-sourcing it.

When I try to deploy this with a RealSense D455 camera and VINS-Fusion for localization, nerf_bridge shows (NSROS) Waiting for for image streaming to begin ....


Since VINS-Fusion doesn't publish a stamped pose by default, I manually modified the source code to make it publish a stamped pose at the same frequency as the images published by the realsense camera node (30 Hz). (Check out how to modify the pose publish rate for VINS-Fusion here: HKUST-Aerial-Robotics/VINS-Fusion#92 (comment))


My environment:
  • Ubuntu with ROS1 Noetic
  • the exact version of Nerfstudio given in the README

renderer disconnected

After running ros_train it trains, but I cannot visualize the rendering. I am running nerf_bridge in a Docker container. Is any solution available?

Error Encountered in ros-depth-splatfacto Due to Changes in gsplat Package Behavior

Hi, thank you for your great work!

While utilizing the ros-depth-splatfacto, I encountered an issue triggered by the latest gsplat package when executing the following command:

ns-train ros-depth-splatfacto --data configs/desk.json --pipeline.datamanager.use-compressed-rgb True --pipeline.datamanager.dataparser.scene-scale-factor 0.5 --pipeline.datamanager.data-update-freq 8.0

This resulted in the following error:

AssertionError: block_width must be between 2 and 16

This error has been addressed in the issue raised on the gsplat GitHub repository, indicating changes in the code behavior.

The solution is to modify the installation command from:

pip install git+https://github.com/nerfstudio-project/gsplat.git

to

pip install git+https://github.com/nerfstudio-project/[email protected]

as documented in the last line of step 4 of the Installation section.

GSPLAT Torch Module Missing

Hello, when trying to train the NerfBridge model using the command given for the ROS2 bag example, I get an error like the one pasted below. To install gsplat I used the recommended pip command to install the repo from source. Is there any way to resolve this?

(nerfbridge) charl@charl-XPS-8950:~$ ns-train ros-depth-nerfacto --data configs/desk.json --pipeline.datamanager.use-compressed-rgb True --pipeline.datamanager.dataparser.scene-scale-factor 0.5 --pipeline.datamanager.data-update-freq 8.0
Traceback (most recent call last):
  File "/home/charl/anaconda3/envs/nerfbridge/bin/ns-train", line 5, in <module>
    from nerfstudio.scripts.train import entrypoint
  File "/home/charl/anaconda3/envs/nerfbridge/lib/python3.10/site-packages/nerfstudio/scripts/train.py", line 62, in <module>
    from nerfstudio.configs.method_configs import AnnotatedBaseConfigUnion
  File "/home/charl/anaconda3/envs/nerfbridge/lib/python3.10/site-packages/nerfstudio/configs/method_configs.py", line 50, in <module>
    from nerfstudio.engine.trainer import TrainerConfig
  File "/home/charl/anaconda3/envs/nerfbridge/lib/python3.10/site-packages/nerfstudio/engine/trainer.py", line 45, in <module>
    from nerfstudio.viewer.viewer import Viewer as ViewerState
  File "/home/charl/anaconda3/envs/nerfbridge/lib/python3.10/site-packages/nerfstudio/viewer/viewer.py", line 35, in <module>
    from nerfstudio.models.splatfacto import SplatfactoModel
  File "/home/charl/anaconda3/envs/nerfbridge/lib/python3.10/site-packages/nerfstudio/models/splatfacto.py", line 28, in <module>
    from gsplat._torch_impl import quat_to_rotmat
ModuleNotFoundError: No module named 'gsplat._torch_impl'

error during training

optimizer is no longer specified in the CameraOptimizerConfig, it is now defined with the rest of the param groups
inside the config file under the name 'camera_opt'

/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros/method_configs.py:42: FutureWarning: above message coming from
camera_optimizer=CameraOptimizerConfig(

CameraOptimizerConfig has been moved from the DataManager to the Model.

/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros/method_configs.py:36: FutureWarning: above message coming from
datamanager=ROSDataManagerConfig(

CameraOptimizerConfig has been moved from the DataManager to the Model.

/usr/local/lib/python3.8/dist-packages/tyro/_calling.py:245: FutureWarning: above message coming from
return unwrapped_f(*positional_args, **kwargs), consumed_keywords # type: ignore
[01:16:16] Using --data alias for --data.pipeline.datamanager.dataparser.data ros_train.py:56
──────────────────────────────────────────────────────── Config ────────────────────────────────────────────────────────
ROSTrainerConfig(
_target=<class 'nsros.ros_trainer.ROSTrainer'>,
output_dir=PosixPath('outputs'),
method_name='ros_nerfacto',
experiment_name=None,
project_name='nerfstudio-project',
timestamp='2024-02-27_011616',
machine=MachineConfig(seed=42, num_devices=1, num_machines=1, machine_rank=0, dist_url='auto', device_type='cuda'),
logging=LoggingConfig(
relative_log_dir=PosixPath('.'),
steps_per_log=10,
max_buffer_size=20,
local_writer=LocalWriterConfig(
_target=<class 'nerfstudio.utils.writer.LocalWriter'>,
enable=True,
stats_to_track=(
<EventName.ITER_TRAIN_TIME: 'Train Iter (time)'>,
<EventName.TRAIN_RAYS_PER_SEC: 'Train Rays / Sec'>,
<EventName.CURR_TEST_PSNR: 'Test PSNR'>,
<EventName.VIS_RAYS_PER_SEC: 'Vis Rays / Sec'>,
<EventName.TEST_RAYS_PER_SEC: 'Test Rays / Sec'>,
<EventName.ETA: 'ETA (time)'>
),
max_log_size=10
),
profiler='basic'
),
viewer=ViewerConfig(
relative_log_filename='viewer_log_filename.txt',
websocket_port=None,
websocket_port_default=7007,
websocket_host='0.0.0.0',
num_rays_per_chunk=20000,
max_num_display_images=512,
quit_on_train_completion=False,
image_format='jpeg',
jpeg_quality=75,
make_share_url=False,
camera_frustum_scale=0.1,
default_composite_depth=True
),
pipeline=VanillaPipelineConfig(
_target=<class 'nerfstudio.pipelines.base_pipeline.VanillaPipeline'>,
datamanager=ROSDataManagerConfig(
_target=<class 'nsros.ros_datamanager.ROSDataManager'>,
data=None,
masks_on_gpu=False,
images_on_gpu=False,
dataparser=ROSDataParserConfig(
_target=<class 'nsros.ros_dataparser.ROSDataParser'>,
data=PosixPath('/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros_config_sample.json'),
scale_factor=1.0,
aabb_scale=0.8
),
train_num_rays_per_batch=4096,
train_num_images_to_sample_from=-1,
train_num_times_to_repeat_images=-1,
eval_num_rays_per_batch=4096,
eval_num_images_to_sample_from=-1,
eval_num_times_to_repeat_images=-1,
eval_image_indices=(0,),
collate_fn=<function nerfstudio_collate at 0x7fa967f4e670>,
camera_res_scale_factor=1.0,
patch_size=1,
camera_optimizer=CameraOptimizerConfig(
_target=<class 'nerfstudio.cameras.camera_optimizers.CameraOptimizer'>,
mode='SO3xR3',
trans_l2_penalty=0.01,
rot_l2_penalty=0.001,
optimizer=AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.0006,
eps=1e-08,
max_norm=None,
weight_decay=0.01
),
scheduler=None
),
pixel_sampler=PixelSamplerConfig(
_target=<class 'nerfstudio.data.pixel_samplers.PixelSampler'>,
num_rays_per_batch=4096,
keep_full_image=False,
is_equirectangular=False,
ignore_mask=False,
fisheye_crop_radius=None,
rejection_sample_mask=True,
max_num_iterations=100
),
publish_training_posearray=True,
data_update_freq=5.0,
num_training_images=500
),
model=NerfactoModelConfig(
_target=<class 'nerfstudio.models.nerfacto.NerfactoModel'>,
enable_collider=True,
collider_params={'near_plane': 2.0, 'far_plane': 6.0},
loss_coefficients={'rgb_loss_coarse': 1.0, 'rgb_loss_fine': 1.0},
eval_num_rays_per_chunk=32768,
prompt=None,
near_plane=0.05,
far_plane=1000.0,
background_color='last_sample',
hidden_dim=64,
hidden_dim_color=64,
hidden_dim_transient=64,
num_levels=16,
base_res=16,
max_res=2048,
log2_hashmap_size=19,
features_per_level=2,
num_proposal_samples_per_ray=(256, 96),
num_nerf_samples_per_ray=48,
proposal_update_every=5,
proposal_warmup=5000,
num_proposal_iterations=2,
use_same_proposal_network=False,
proposal_net_args_list=[
{'hidden_dim': 16, 'log2_hashmap_size': 17, 'num_levels': 5, 'max_res': 128, 'use_linear': False},
{'hidden_dim': 16, 'log2_hashmap_size': 17, 'num_levels': 5, 'max_res': 256, 'use_linear': False}
],
proposal_initial_sampler='piecewise',
interlevel_loss_mult=1.0,
distortion_loss_mult=0.002,
orientation_loss_mult=0.0001,
pred_normal_loss_mult=0.001,
use_proposal_weight_anneal=True,
use_appearance_embedding=True,
use_average_appearance_embedding=True,
proposal_weights_anneal_slope=10.0,
proposal_weights_anneal_max_num_iters=1000,
use_single_jitter=True,
predict_normals=False,
disable_scene_contraction=False,
use_gradient_scaling=False,
implementation='tcnn',
appearance_embed_dim=32,
average_init_density=1.0,
camera_optimizer=CameraOptimizerConfig(
_target=<class 'nerfstudio.cameras.camera_optimizers.CameraOptimizer'>,
mode='SO3xR3',
trans_l2_penalty=0.01,
rot_l2_penalty=0.001,
optimizer=None,
scheduler=None
)
)
),
optimizers={
'proposal_networks': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.01,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': None
},
'fields': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.01,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': None
}
},
vis='viewer',
data=PosixPath('/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros_config_sample.json'),
prompt=None,
relative_model_dir=PosixPath('nerfstudio_models'),
load_scheduler=True,
steps_per_save=30000,
steps_per_eval_batch=500,
steps_per_eval_image=500,
steps_per_eval_all_images=25000,
max_num_iterations=30000,
mixed_precision=True,
use_grad_scaler=False,
save_only_latest_checkpoint=True,
load_dir=None,
load_step=None,
load_config=None,
load_checkpoint=None,
log_gradients=False,
gradient_accumulation_steps={},
msg_timeout=60.0,
num_msgs_to_start=3,
draw_training_images=False
)
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[01:16:16] Saving config to: outputs/unnamed/ros_nerfacto/2024-02-27_011616/config.yml experiment_config.py:136
Saving checkpoints to: outputs/unnamed/ros_nerfacto/2024-02-27_011616/nerfstudio_models trainer.py:136
Traceback (most recent call last):
  File "ros_train.py", line 90, in <module>
    entrypoint()
  File "ros_train.py", line 80, in entrypoint
    main(
  File "ros_train.py", line 67, in main
    trainer.setup()
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros/ros_trainer.py", line 50, in setup
    super().setup(test_mode=test_mode)
  File "/usr/local/lib/python3.8/dist-packages/nerfstudio/engine/trainer.py", line 149, in setup
    self.pipeline = self.config.pipeline.setup(
  File "/usr/local/lib/python3.8/dist-packages/nerfstudio/configs/base_config.py", line 54, in setup
    return self._target(self, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/nerfstudio/pipelines/base_pipeline.py", line 254, in __init__
    self.datamanager: DataManager = config.datamanager.setup(
  File "/usr/local/lib/python3.8/dist-packages/nerfstudio/configs/base_config.py", line 54, in setup
    return self._target(self, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/nerfstudio/data/datamanagers/base_datamanager.py", line 422, in __init__
    super().__init__()
  File "/usr/local/lib/python3.8/dist-packages/nerfstudio/data/datamanagers/base_datamanager.py", line 181, in __init__
    self.setup_train()
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros/ros_datamanager.py", line 77, in setup_train
    self.train_ray_generator = RayGenerator(
TypeError: __init__() takes 2 positional arguments but 3 were given
(nerfstudio2) root@eiyike-Precision-5820-Tower-X-Series:/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge#

Some confusion about "self.pipeline.datamanager.train_image_dataloader.msg_status".

Hello, there are some parts of the code that I don't understand, and I hope to get your help. At line 60 of ros_trainer.py, "self.pipeline.datamanager.train_image_dataloader.msg_status" is called; msg_status is a method of the ROSDataloader class. The ROSDataManager class instantiates a ROSDataloader as train_image_dataloader in its setup_train method, but I don't understand why train_image_dataloader can be accessed through datamanager, because there doesn't seem to be any connection between datamanager and train_image_dataloader.

ns-export

Is it possible to export the mesh or point cloud after reconstruction?

NerfBridge won't train


(nerfstudio) root@eiyike-Precision-5820-Tower-X-Series:/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge# python ros_train.py --method_name ros_nerfacto --data /home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros_config_sample.json --pipeline.datamanager.data_update_freq 1.0

optimizer is no longer specified in the CameraOptimizerConfig, it is now defined with the rest of the param groups
inside the config file under the name 'camera_opt'

/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros/method_configs.py:42: FutureWarning: above message coming from
camera_optimizer=CameraOptimizerConfig(

CameraOptimizerConfig has been moved from the DataManager to the Model.

/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros/method_configs.py:36: FutureWarning: above message coming from
datamanager=ROSDataManagerConfig(

CameraOptimizerConfig has been moved from the DataManager to the Model.

/root/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/tyro/_calling.py:242: FutureWarning: above message coming from
return unwrapped_f(*positional_args, **kwargs), consumed_keywords # type: ignore
[04:23:25] Using --data alias for --data.pipeline.datamanager.dataparser.data ros_train.py:56
──────────────────────────────────────────────────────── Config ────────────────────────────────────────────────────────
ROSTrainerConfig(
_target=<class 'nsros.ros_trainer.ROSTrainer'>,
output_dir=PosixPath('outputs'),
method_name='ros_nerfacto',
experiment_name=None,
project_name='nerfstudio-project',
timestamp='2023-11-23_042325',
machine=MachineConfig(seed=42, num_devices=1, num_machines=1, machine_rank=0, dist_url='auto', device_type='cuda'),
logging=LoggingConfig(
relative_log_dir=PosixPath('.'),
steps_per_log=10,
max_buffer_size=20,
local_writer=LocalWriterConfig(
_target=<class 'nerfstudio.utils.writer.LocalWriter'>,
enable=True,
stats_to_track=(
<EventName.ITER_TRAIN_TIME: 'Train Iter (time)'>,
<EventName.TRAIN_RAYS_PER_SEC: 'Train Rays / Sec'>,
<EventName.CURR_TEST_PSNR: 'Test PSNR'>,
<EventName.VIS_RAYS_PER_SEC: 'Vis Rays / Sec'>,
<EventName.TEST_RAYS_PER_SEC: 'Test Rays / Sec'>,
<EventName.ETA: 'ETA (time)'>
),
max_log_size=10
),
profiler='basic'
),
viewer=ViewerConfig(
relative_log_filename='viewer_log_filename.txt',
websocket_port=None,
websocket_port_default=7007,
websocket_host='0.0.0.0',
num_rays_per_chunk=20000,
max_num_display_images=512,
quit_on_train_completion=False,
image_format='jpeg',
jpeg_quality=90,
make_share_url=False,
camera_frustum_scale=0.1,
default_composite_depth=True
),
pipeline=VanillaPipelineConfig(
_target=<class 'nerfstudio.pipelines.base_pipeline.VanillaPipeline'>,
datamanager=ROSDataManagerConfig(
_target=<class 'nsros.ros_datamanager.ROSDataManager'>,
data=None,
masks_on_gpu=False,
images_on_gpu=False,
dataparser=ROSDataParserConfig(
_target=<class 'nsros.ros_dataparser.ROSDataParser'>,
data=PosixPath('/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros_config_sample.json'),
scale_factor=1.0,
aabb_scale=0.8
),
train_num_rays_per_batch=4096,
train_num_images_to_sample_from=-1,
train_num_times_to_repeat_images=-1,
eval_num_rays_per_batch=4096,
eval_num_images_to_sample_from=-1,
eval_num_times_to_repeat_images=-1,
eval_image_indices=(0,),
collate_fn=<function nerfstudio_collate at 0x7f814dae9310>,
camera_res_scale_factor=1.0,
patch_size=1,
camera_optimizer=CameraOptimizerConfig(
_target=<class 'nerfstudio.cameras.camera_optimizers.CameraOptimizer'>,
mode='SO3xR3',
trans_l2_penalty=0.01,
rot_l2_penalty=0.001,
optimizer=AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.0006,
eps=1e-08,
max_norm=None,
weight_decay=0.01
),
scheduler=None
),
pixel_sampler=PixelSamplerConfig(
_target=<class 'nerfstudio.data.pixel_samplers.PixelSampler'>,
num_rays_per_batch=4096,
keep_full_image=False,
is_equirectangular=False
),
publish_training_posearray=True,
data_update_freq=1.0,
num_training_images=500
),
model=NerfactoModelConfig(
_target=<class 'nerfstudio.models.nerfacto.NerfactoModel'>,
enable_collider=True,
collider_params={'near_plane': 2.0, 'far_plane': 6.0},
loss_coefficients={'rgb_loss_coarse': 1.0, 'rgb_loss_fine': 1.0},
eval_num_rays_per_chunk=32768,
prompt=None,
near_plane=0.05,
far_plane=1000.0,
background_color='last_sample',
hidden_dim=64,
hidden_dim_color=64,
hidden_dim_transient=64,
num_levels=16,
base_res=16,
max_res=2048,
log2_hashmap_size=19,
features_per_level=2,
num_proposal_samples_per_ray=(256, 96),
num_nerf_samples_per_ray=48,
proposal_update_every=5,
proposal_warmup=5000,
num_proposal_iterations=2,
use_same_proposal_network=False,
proposal_net_args_list=[
{'hidden_dim': 16, 'log2_hashmap_size': 17, 'num_levels': 5, 'max_res': 128, 'use_linear': False},
{'hidden_dim': 16, 'log2_hashmap_size': 17, 'num_levels': 5, 'max_res': 256, 'use_linear': False}
],
proposal_initial_sampler='piecewise',
interlevel_loss_mult=1.0,
distortion_loss_mult=0.002,
orientation_loss_mult=0.0001,
pred_normal_loss_mult=0.001,
use_proposal_weight_anneal=True,
use_average_appearance_embedding=True,
proposal_weights_anneal_slope=10.0,
proposal_weights_anneal_max_num_iters=1000,
use_single_jitter=True,
predict_normals=False,
disable_scene_contraction=False,
use_gradient_scaling=False,
implementation='tcnn',
appearance_embed_dim=32,
camera_optimizer=CameraOptimizerConfig(
_target=<class 'nerfstudio.cameras.camera_optimizers.CameraOptimizer'>,
mode='SO3xR3',
trans_l2_penalty=0.01,
rot_l2_penalty=0.001,
optimizer=None,
scheduler=None
)
)
),
optimizers={
'proposal_networks': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.01,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': None
},
'fields': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.01,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': None
}
},
vis='viewer',
data=PosixPath('/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros_config_sample.json'),
prompt=None,
relative_model_dir=PosixPath('nerfstudio_models'),
load_scheduler=True,
steps_per_save=30000,
steps_per_eval_batch=500,
steps_per_eval_image=500,
steps_per_eval_all_images=25000,
max_num_iterations=30000,
mixed_precision=True,
use_grad_scaler=False,
save_only_latest_checkpoint=True,
load_dir=None,
load_step=None,
load_config=None,
load_checkpoint=None,
log_gradients=False,
gradient_accumulation_steps=1,
msg_timeout=60.0,
num_msgs_to_start=3,
draw_training_images=False
)
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[04:23:26] Saving config to: outputs/unnamed/ros_nerfacto/2023-11-23_042325/config.yml experiment_config.py:141
Saving checkpoints to: outputs/unnamed/ros_nerfacto/2023-11-23_042325/nerfstudio_models trainer.py:134
Traceback (most recent call last):
  File "ros_train.py", line 90, in <module>
    entrypoint()
  File "ros_train.py", line 80, in entrypoint
    main(
  File "ros_train.py", line 67, in main
    trainer.setup()
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros/ros_trainer.py", line 50, in setup
    super().setup(test_mode=test_mode)
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nerfstudio/nerfstudio/engine/trainer.py", line 147, in setup
    self.pipeline = self.config.pipeline.setup(
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nerfstudio/nerfstudio/configs/base_config.py", line 54, in setup
    return self._target(self, **kwargs)
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nerfstudio/nerfstudio/pipelines/base_pipeline.py", line 264, in __init__
    self.datamanager: DataManager = config.datamanager.setup(
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nerfstudio/nerfstudio/configs/base_config.py", line 54, in setup
    return self._target(self, **kwargs)
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nerfstudio/nerfstudio/data/datamanagers/base_datamanager.py", line 428, in __init__
    super().__init__()
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nerfstudio/nerfstudio/data/datamanagers/base_datamanager.py", line 189, in __init__
    self.setup_train()
  File "/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge/nsros/ros_datamanager.py", line 77, in setup_train
    self.train_ray_generator = RayGenerator(
TypeError: __init__() takes 2 positional arguments but 3 were given
(nerfstudio) root@eiyike-Precision-5820-Tower-X-Series:/home/eiyike/NERF_BRIDGE_WRKSPACE/nerf_bridge#

ros_train.py: error: unrecognized arguments: ros_nerfacto

Hello, I have started the SLAM algorithm and played back the rosbag with images. Then I entered the command 'python ros_train.py ros_nerfacto --data /home/pf/CODE/NERF/nerf_bridge/nsros_config_sample.json', but I received an error message saying 'ros_train.py: error: unrecognized arguments: ros_nerfacto'. The image publishing frequency is 15 Hz and the odometry frequency is 10 Hz. I have also modified the camera intrinsic parameters and topics accordingly in the JSON file. What are some possible solutions to this issue? Thank you!
