Depth-Aware-Endoscopy-SR

This repository is the official PyTorch implementation of the paper "Dynamic Depth-Aware Network for Endoscopy Super-Resolution" [paper] from JBHI 2022.

Environment

Please install the dependencies listed in requirements.txt.
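
A minimal environment sketch (assumes Python 3 with pip; the exact package versions come from requirements.txt, and a conda environment works just as well):

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt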

Dataset

In this work, we use the Kvasir and the EndoScene datasets.

  1. We provide the HR and LR images (factor = 8) for the Kvasir dataset, which can be downloaded from Google Drive. This includes the ground-truth (GT) depth maps of the LR images (LR_depth.tar.gz). For factor = 2 or 4, please manually downscale the HR images to the target factor (a downscaling sketch follows this list).
  2. To download the EndoScene dataset, please see here. The corresponding depth maps for its LR images can be generated with the depth estimation step described below.
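
As referenced in item 1, a sketch of downscaling for factor = 2 or 4. Using ImageMagick here is our assumption, not the paper's stated protocol (SR work commonly uses MATLAB's imresize, and the repository ships some MATLAB code), so verify the kernel matches before evaluating:

# HR/ and LR_x4/ are placeholder directories; use -resize 50% for factor = 2
mkdir -p LR_x4
for f in HR/*.png; do
  # Catmull-Rom is a bicubic-family kernel
  convert "$f" -filter Catrom -resize 25% "LR_x4/$(basename "$f")"
done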

Training & Testing the Model

Here, we give an example of training the SR model on the x8 Kvasir dataset; a consolidated command sketch follows the steps.

  1. Configuration:
    Please modify the data paths dataroot_GT, dataroot_LQ, and dataroot_depthMap in codes/options/train/train_depthNet_SEAN_depthMask_x8.yml.
  2. Training:
sh ./launch/train.sh
  3. Testing:
    Please modify the model path pretrain_model_G and the data paths dataroot_GT, dataroot_LQ, and dataroot_depthMap in codes/options/test/test_depthNet.yml, then run:
sh ./launch/test.sh
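
As noted above, a consolidated sketch of the x8 workflow; every data and checkpoint path below is a placeholder, and only the key names come from the configs referenced in the steps:

# 1. Edit codes/options/train/train_depthNet_SEAN_depthMask_x8.yml and set, e.g.:
#      dataroot_GT:       /data/Kvasir/HR
#      dataroot_LQ:       /data/Kvasir/LR_x8
#      dataroot_depthMap: /data/Kvasir/LR_depth

# 2. Train
sh ./launch/train.sh

# 3. Edit codes/options/test/test_depthNet.yml: point pretrain_model_G at your
#    trained checkpoint and set the same three data keys, then test
sh ./launch/test.sh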

Depth Estimation

We pre-trained a depth estimator based on monodepth2 and use this model to generate the depth maps that serve as ground truth for our proposed method.
Please download the pre-trained model, extract weights_19.tar.gz, and place it in ./codes/depth_estimation/pretrained_models/weights_19.

cd ./codes/depth_estimation

Please set --image_path data_root/img_path in ./launch/test.sh, then run the following command to obtain the depth maps:

sh ./launch/test.sh
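
Put together, a sketch of the depth-estimation steps (the image path is a placeholder; adjust the tar step if the archive layout differs from a top-level weights_19 folder):

# Unpack the pre-trained weights into the expected folder, i.e.
# ./codes/depth_estimation/pretrained_models/weights_19
mkdir -p codes/depth_estimation/pretrained_models
tar -xzf weights_19.tar.gz -C codes/depth_estimation/pretrained_models

cd ./codes/depth_estimation
# In ./launch/test.sh, set e.g.: --image_path /data/Kvasir/LR_x8
sh ./launch/test.sh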

Cite

If you find our work useful in your research or publication, please cite our work:

@article{chen2022dynamic,
  title={Dynamic depth-aware network for endoscopy super-resolution},
  author={Chen, Wenting and Liu, Yifan and Hu, Jiancong and Yuan, Yixuan},
  journal={IEEE Journal of Biomedical and Health Informatics},
  volume={26},
  number={10},
  pages={5189--5200},
  year={2022},
  publisher={IEEE}
}

Contact

Please contact us here if you have any questions.
