vcl3d / deepdepthdenoising

This repo includes the source code of the fully convolutional depth denoising model presented in https://arxiv.org/pdf/1909.01193.pdf (ICCV19)

Home Page: https://vcl3d.github.io/DeepDepthDenoising/

License: MIT License

Language: Python 100.00%
Topics: depth-denoising, rgbd, rgb-d, autoencoder, self-supervised-learning, multi-view-learning, realsense2

Self-supervised Deep Depth Denoising


Created by Vladimiros Sterzentsenko*, Leonidas Saroglou*, Anargyros Chatzitofis*, Spyridon Thermos*, Nikolaos Zioulis*, Alexandros Doumanoglou, Dimitrios Zarpalas, and Petros Daras from the Visual Computing Lab @ CERTH


About this repo

This repo includes the training and evaluation scripts for the fully convolutional autoencoder presented in our paper "Self-Supervised Deep Depth Denoising" (to appear in ICCV 2019). The autoencoder is trained in a self-supervised manner, exploiting RGB-D data captured by Intel RealSense D415 sensors. During inference, the model is used for depth map denoising without the need for RGB data.

Installation

The code has been tested with the following setup:

  • PyTorch 1.0.1
  • Python 3.7.2
  • CUDA 9.1
  • Visdom

Model Architecture


Encoder: 9 CONV layers; the input is downsampled three times before reaching the latent space, with the number of channels doubled after each downsampling.

Bottleneck: 2 residual blocks, each with a pre-activation ELU-CONV-ELU-CONV structure.

Decoder: 9 CONV layers; the input is upsampled three times using interpolation followed by a CONV layer.
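As a rough illustration of the downsampling arithmetic described above (the base channel width and input resolution below are assumptions for the example, not values taken from the paper), each of the three downsamplings halves the spatial resolution while the channel count doubles:

```python
def encoder_shapes(h, w, base_channels=32, downsamples=3):
    """Track (channels, height, width) through the encoder:
    each downsampling halves the spatial size and doubles channels."""
    shapes = [(base_channels, h, w)]
    c = base_channels
    for _ in range(downsamples):
        h, w, c = h // 2, w // 2, c * 2
        shapes.append((c, h, w))
    return shapes

# Example: a 640x480 depth map with an assumed base width of 32 channels
for c, h, w in encoder_shapes(480, 640):
    print(f"{c:4d} x {h:4d} x {w:4d}")
# 32 x 480 x 640 -> 64 x 240 x 320 -> 128 x 120 x 160 -> 256 x 60 x 80
```

The decoder mirrors this, recovering the input resolution via three interpolation + CONV upsampling steps.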

Train

To see the available training parameters:

python train.py -h

Training example:

python train.py --batchsize 2 --epochs 20 --lr 0.00002 --visdom --visdom_iters 500 --disp_iters 10 --train_path /path/to/train/set

Inference

The weights of pretrained models can be downloaded from here:

  • ddd --> trained with multi-view supervision (as presented in the paper)
  • ddd_ae --> the same model architecture without multi-view supervision (for comparison purposes)

To denoise a RealSense D415 depth sample using a pretrained model:

python inference.py --model_path /path/to/pretrained/model --input_path /path/to/noisy/sample --output_path /path/to/save/denoised/sample

To save the input (noisy) and output (denoised) samples as point clouds, add the following flag when running the inference script:

--pointclouds True

To denoise a sample using the pretrained autoencoder (same model trained without splatting) add the following flag to the inference script (and make sure you load the "ddd_ae" model):

--autoencoder True

Benchmarking: the mean inference time on a GeForce GTX 1080 GPU is 11ms.

Citation

If you use this code and/or models, please cite the following:

@inproceedings{sterzentsenko2019denoising,
  author       = "Vladimiros Sterzentsenko and Leonidas Saroglou and Anargyros Chatzitofis and Spyridon Thermos and Nikolaos Zioulis and Alexandros Doumanoglou and Dimitrios Zarpalas and Petros Daras",
  title        = "Self-Supervised Deep Depth Denoising",
  booktitle    = "ICCV",
  year         = "2019"
}

License

Our code is released under the MIT License (see the LICENSE file for details).

deepdepthdenoising's People

Contributors

kosuke55, leosarog, tofis, vladsterz, zokin, zuru


deepdepthdenoising's Issues

How to train on my own dataset?

Hello, thank you very much for the code. Is it possible to train the model with only a single view and depth map? If so, how should I load the dataset and modify the code?

Weight Load

Hi, first of all, thanks for sharing this great code.
I'd like to follow the code, but I couldn't understand the pretrained weight format.
You provided these six files:
ddd
DeepDepthDenoising-ddd.tar.gz
DeepDepthDenoising-ddd.zip
ddd_ae
DeepDepthDenoising-ddd_ae.tar.gz
DeepDepthDenoising-ddd_ae.zip
But I don't know the format of ddd and ddd_ae, and which one is the weight file I have to torch.load.
Sorry for the easy question, but I'd appreciate your help.
Thanks.

RuntimeError

Hello,
I am receiving errors when I test (using inference.py) with custom images of different sizes. The error is as follows:

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 15 and 90 in dimension 2 at c:\a\w\1\s\tmp_conda_3.7_104535\conda\conda-bld\pytorch_1550400486030\work\aten\src\thc\generic/THCTensorMath.cu:83

This particular time I had an image with dimensions 216x120, but I have had similar errors with other image sizes as well, some comparatively bigger.
Could you kindly tell me what is causing this error and how I can get rid of it?

InteriorNet Dataset (Noise)

Hey guys, great work you have here, and thank you for providing the code. I would just like to ask if you could upload the scripts you used for adding artificial noise to the InteriorNet dataset, so that other researchers can benefit from your work.

I would be very grateful if you could do it as soon as possible!

Thank you!

Error with the ddd model

Thanks for your code and weights.
When I try to use your ddd weights, the following errors occur:

…shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
size mismatch for decoder_conv_id_1.0.weight: copying a param with shape torch.Size([16, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
size mismatch for decoder_deconv1_2.conv.weight: copying a param with shape torch.Size([8, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 32, 3, 3]).
size mismatch for decoder_deconv1_1.conv.weight: copying a param with shape torch.Size([1, 8, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 16, 3, 3]).

Is there anything wrong with my operation, or have you now provided the code for the ddd weights?
One more thing: the ddd_ae model works, but the resulting .ply file looks ugly.

InteriorNet (test split, example image)

Hey,

Thanks for the code.

I would like to ask whether the reported results were evaluated on a test split from within the first 300 InteriorNet scenes, or on other scenes?

Could you also share which scene Figure 11 in the paper was taken from, since there are millions of images in the dataset? :)

Thank you!

Scaling of learning rate

In train.py there is code that scales the learning rate:

utils.opt.adjust_learning_rate(optimizer, epoch)

However, in the comments you wrote that it is scaled by 1/10 every 30 epochs, but your scale is set to 2 by default. Is this the setting used in the research paper? Is the scale 2 epochs or 30 epochs?

def adjust_learning_rate(optimizer, epoch, scale=2):
    # Sets the learning rate to the initial LR decayed by 10 every 30 epochs
    for param_group in optimizer.param_groups:
        lr =  param_group['lr']
        lr = lr * (0.1 ** (epoch // scale))
        param_group['lr'] = lr
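For reference, replaying that function's arithmetic on a plain float (a standalone sketch of the same logic, not the repo's code) shows that the decay compounds across calls once epoch // scale is nonzero:

```python
import math

def decayed(lr, epoch, scale=2):
    # Same formula as adjust_learning_rate, applied to a bare float:
    # the *current* lr is multiplied by 0.1 ** (epoch // scale) each call.
    return lr * (0.1 ** (epoch // scale))

lr = 2e-5
for epoch in range(5):  # epochs 0..4, one call per epoch as in train.py
    lr = decayed(lr, epoch, scale=2)
    print(epoch, lr)
# With scale=2 the per-epoch factor is 1, 1, 0.1, 0.1, 0.01 over epochs 0-4,
# so the lr shrinks far faster than "1/10 every 30 epochs" would suggest.
```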
