
matsuren / crownconv360depth

Stars: 53 · Watchers: 53 · Forks: 7 · Size: 55 KB

360° Depth Estimation from Multiple Fisheye Images with Origami Crown Representation of Icosahedron (IROS2020)

License: MIT License

Python 100.00%

crownconv360depth's People

Contributors

matsuren


crownconv360depth's Issues

How to get depth image from prediction of model

Hi,
I'm trying to run your code. I can run it successfully and see the prediction result, but the prediction is just a one-dimensional array. How can I obtain a depth image from the model's prediction? I tried to find the code that converts a depth index into a depth image, but I couldn't find it.
Please advise.
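
A minimal sketch of one way such a conversion could look, assuming the network predicts, for each icosahedron vertex, an index into a set of uniformly sampled inverse-depth hypotheses; the parameter names ndisp and min_depth and the sampling scheme are assumptions for illustration, not taken from this repository:

import numpy as np

def index_to_depth(pred_index, ndisp=48, min_depth=0.5):
    """Convert a predicted inverse-depth index to metric depth.

    Assumes ndisp inverse-depth hypotheses sampled uniformly between
    0 and 1/min_depth; both values are placeholders, so check the
    training configuration for the real ones.
    """
    eps = np.finfo(np.float32).eps
    max_invdepth = 1.0 / min_depth
    invdepth = pred_index / (ndisp - 1) * max_invdepth + eps  # index -> inverse depth
    return 1.0 / invdepth                                     # inverse depth -> depth

# Usage: apply to the flat per-vertex prediction, then visualize or
# re-project the resulting depth values as needed.
# depth = index_to_depth(prediction.astype(np.float32))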

How to adapt to different sensor setups?

Hi,
First, thanks for making the code available, it's a very cool method!

I'm wondering if it could be adapted to less than 360° setups or other arrangements of cameras (e.g. single/multiple binocular setup). Looking at the code, it's not obvious to me whether it would work as is or where adaptations would be required. Would you be able to share a few pointers on that?
Thanks!

Depth value acquisition

Dear author, I am amazed by your experimental results, but I found that the officially provided data is in TIFF format. Would it be convenient for you to share the data in PNG format as well? Thank you very much for your reply!
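
If converting the data locally is an option, here is a minimal sketch, assuming the TIFF files store a single-channel floating-point depth map; the file names and the millimetre scale factor are placeholders rather than values from the dataset:

import cv2
import numpy as np

# Read the floating-point TIFF depth map (placeholder file name).
depth = cv2.imread("depth.tiff", cv2.IMREAD_ANYDEPTH)
if depth is None:
    raise FileNotFoundError("could not read depth.tiff")

# Scale to an assumed millimetre encoding and save as a 16-bit PNG.
depth_mm = np.clip(depth * 1000.0, 0, 65535).astype(np.uint16)
cv2.imwrite("depth.png", depth_mm)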

About pre-trained models

Hi,

I appreciate this nice work.

I was about to try your code, and I noticed that you have not provided any pre-trained models.

Could you please push one of the checkpoints you have?

Or, at least, could you let me know the required GPU memory and share some more information about training?

Thank you.

Issues with importing test dataset

I downloaded the linked datasets to try out the model, and I get the following error

TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/timtitan/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/timtitan/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/timtitan/anaconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/timtitan/Documents/10-19-Research-Projects/16-Spatial Fingerprinting/16.02 Spatial Mapping/crownconvolution/crownconv360depth/dataloader/icosahedron_dataset.py", line 82, in __getitem__
    sample = self.root_dataset[idx]
  File "/home/timtitan/Documents/10-19-Research-Projects/16-Spatial Fingerprinting/16.02 Spatial Mapping/crownconvolution/crownconv360depth/dataloader/omnistereo_dataset.py", line 59, in __getitem__
    sample['idepth'] = load_invdepth(depth_path)
  File "/home/timtitan/Documents/10-19-Research-Projects/16-Spatial Fingerprinting/16.02 Spatial Mapping/crownconvolution/crownconv360depth/dataloader/omnistereo_dataset.py", line 76, in load_invdepth
    invdepth = (invd_value / 100.0) / (min_depth * 655) + np.finfo(np.float32).eps
TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'

This happens both when I train the model from scratch and when I use the downloadable checkpoints. It looks like a data-loading issue: the dataloader calls invd_value = cv2.imread(filename, cv2.IMREAD_ANYDEPTH), and for some files this returns None.
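
One way to narrow this down is to scan the dataset for depth files that OpenCV cannot read. A minimal sketch, assuming the depth maps sit under one root directory and use the .tiff extension (the path and glob pattern are placeholders):

import glob
import os

import cv2

DATA_ROOT = "/path/to/dataset"  # placeholder: point this at the downloaded data
bad_files = []
for path in glob.glob(os.path.join(DATA_ROOT, "**", "*.tiff"), recursive=True):
    if cv2.imread(path, cv2.IMREAD_ANYDEPTH) is None:  # unreadable or truncated file
        bad_files.append(path)

print(f"{len(bad_files)} depth files could not be read")
for p in bad_files[:10]:
    print(p)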

Question for a PhD student

I sincerely apologise for abusing 'Issues' to ask a personal question. It's just that I really want to know more about doing research, and I think this is the easiest way to get a response from a PhD student directly. I'm 20 and will only enter university next year, but I have been learning deep learning and computer vision on my own (nothing great). My question is:

I have been going through code from several papers on GitHub, and they only have .py scripts. At the same time (because I'm using a MacBook Air), there seem to be only Jupyter-Notebook-based cloud solutions (e.g. AWS SageMaker, Paperspace, Google Colab, Kaggle Kernels). I want to learn to code the way researchers do now, specifically with regard to how you and other researchers structure a project: it is typical to see separate folders for the model and the utilities. This is tedious with Jupyter notebooks, since you can't simply import code from another Python script. Could you share some details of your coding workflow for your research projects? I would also appreciate it if you could point me to resources where you may have learned this yourself.
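
For illustration, a minimal sketch of the kind of layout such repositories typically use and how it can still be reused from a notebook; all folder and function names here are hypothetical, not the structure of this repository:

# Typical layout (hypothetical):
#
#   project/
#       dataloader/   # dataset classes
#       models/       # network definitions
#       utils/        # shared helpers
#       train.py      # command-line entry point
#
# From a notebook (e.g. on Colab), the same modules can be imported after
# cloning the repository and adding it to the Python path:
import sys
sys.path.append("/content/project")  # placeholder path to the cloned repo

from models import build_model      # hypothetical module and helper
from utils import visualize_depth   # hypothetical module and helper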

Once again, I am sorry for abusing 'Issues'. If there is another way to communicate with researchers on GitHub, please do tell.

Yours Sincerely,
A person aspiring to be a deep learning researcher
Gordon
