matsuren / omnimvs_pytorch
An unofficial PyTorch implementation of ICCV 2019 paper "OmniMVS: End-to-End Learning for Omnidirectional Stereo Matching"
Hi,
Thank you for sharing the code. I think they've made a mistake in their datasets: for the cameras' extrinsics, config.yaml matches the data better than poses.txt does.
For example, pose_cam2 should be: [0.000000, 1.570796326794897, 0.000000, 60.00000, 0.000000, 0.000000] (the translation should be in centimetres, by the way)
rather than: [0.000000 1.570796326794897 0.000000 20.000000 0.000000 -20.000000]
Cheers
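A minimal sketch of how such a pose line could be interpreted, assuming (this is an assumption, not the dataset's documented convention) that the first three entries are XYZ Euler angles in radians composed as Rz @ Ry @ Rx, and the last three are a translation in centimetres:

```python
import numpy as np

def pose_to_matrix(pose):
    """pose: [rx, ry, rz, tx, ty, tz]; angles in radians, translation in cm.
    Euler order and composition are illustrative assumptions."""
    rx, ry, rz, tx, ty, tz = pose
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx  # assumed composition order
    T[:3, 3] = [tx, ty, tz]
    return T

# The cam2 pose quoted above: a 90-degree rotation about Y, 60 cm along X.
T2 = pose_to_matrix([0.0, 1.570796326794897, 0.0, 60.0, 0.0, 0.0])
```

With this reading, the camera sits 60 cm along the rig's X axis, which is consistent with the config.yaml values mentioned above.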
The given gt depth maps are now in .tiff format, but when I read them with the official code, the depth map comes back as 3-channel uint8. Could you share the PNG version you previously downloaded?
Hi,
I modified your implementation to apply it to binocular systems; I hope you don't mind my borrowing it.
In haste
The loss value saturates at 0.0010 after one epoch and does not change after that. How can I resolve this issue? Please help.
In the dataset, several image names do not match omnithings_train.txt. For example, the text file lists 00002.png, but the image is named 0002.png.
Change either the text file or the image file names to avoid this problem.
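The rename fix above can be sketched as a small script; the 5-digit padding width and the assumption that every listed name is purely numeric plus `.png` are taken from the example in the post, not verified against the whole dataset:

```python
import os
import re

def pad_name(name, width=5):
    """Zero-pad a numeric PNG filename, e.g. 0002.png -> 00002.png.
    Non-matching names are returned unchanged."""
    m = re.fullmatch(r"(\d+)\.png", name)
    return f"{int(m.group(1)):0{width}d}.png" if m else name

def rename_images(directory, width=5):
    """Rename every image in `directory` to the padded form in-place."""
    for name in os.listdir(directory):
        fixed = pad_name(name, width)
        if fixed != name:
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, fixed))
```

Renaming the images (rather than editing omnithings_train.txt) keeps the text file shared with other users intact.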
Thank you very much for sharing.
I encountered a problem: the predicted disparity maps come out completely black when training without the pre-trained weight file. By printing the data, I found that all values in the tensor "gt_invd_idx" from the training dataset were 0, while the values in the tensor "pred", the network output, ranged from 0 to 1. Can I ask what is wrong? Thank you!
Thank you very much for sharing.
I encountered a problem: the loss stays at 1.0000 or 0.5000 when I train the network with the default parameters. I tried modifying the learning rate, but it doesn't work.
Hello, I want to know where the dataset should be placed; the code keeps saying that the DATA_DIR path does not exist.
Thank you very much for your reply
Hi, I want to test this network, but downloading the dataset is too slow. Could you provide a pre-trained model so I can test on my own data? Thanks a lot :-)
Hi, I'm having trouble with the omnihouse ground truth. Sorry to bother you, but I can't contact the dataset's author.
I downloaded the omnihouse dataset from omnistereo, but the ground truth I got is in tiff format, as follows:
The values are around 0-255, not 0-65500 as described in the dataset.
And I have tried to use your code:
import cv2
import numpy as np

def load_invdepth(filename, min_depth=55):
    '''
    min_depth in [cm]
    '''
    # IMREAD_ANYDEPTH keeps the original bit depth (16-bit expected here)
    invd_value = cv2.imread(filename, cv2.IMREAD_ANYDEPTH)
    invdepth = (invd_value / 100.0) / (min_depth * 655) + np.finfo(np.float32).eps
    invdepth *= 100  # convert inverse depth from 1/cm to 1/m
    return invdepth
but I got the wrong result.
Is there something I'm doing wrong? If I downloaded the wrong ground truth, could you tell me where to find the right one?
Thanks a lot!
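A quick diagnostic for cases like the two issues above: the loader expects a single-channel 16-bit image (values up to ~65500), so an 8-bit or 3-channel array means the file was decoded or exported wrongly. A hedged sketch of the check, shown on synthetic arrays (with a real file, `cv2.imread(path, cv2.IMREAD_UNCHANGED)` would give the raw array to pass in):

```python
import numpy as np

def looks_like_16bit_gt(img):
    """True if `img` is a single-channel uint16 array, as the loader expects."""
    arr = np.asarray(img)
    return arr.ndim == 2 and arr.dtype == np.uint16

# Illustrative arrays standing in for a correct and a mis-decoded file.
good = np.zeros((160, 640), dtype=np.uint16)
bad = np.zeros((160, 640, 3), dtype=np.uint8)
```

If the check fails on the downloaded tiff, the file itself only contains 8-bit data and re-reading it with different flags will not recover the original 16-bit values.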
The parameters (image size, output depth size, number of disparities) can't be set to the values reported in the original paper due to GPU memory consumption. Does anyone know how to reduce GPU memory consumption for this model?
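A back-of-the-envelope calculation shows why: memory is dominated by the 4D cost volume of shape (C, D, H, W) in float32 (4 bytes per value), so it scales linearly in the disparity count and quadratically in resolution. The numbers below (C=32 feature channels, D=192 disparities, 160x640 output) are illustrative assumptions, not the repo's exact configuration:

```python
def cost_volume_mb(channels, disparities, height, width, bytes_per=4):
    """Size in MiB of one float32 activation tensor of shape (C, D, H, W)."""
    return channels * disparities * height * width * bytes_per / 2**20

full = cost_volume_mb(32, 192, 160, 640)  # one activation at full settings
half = cost_volume_mb(32, 96, 160, 640)   # halving D halves the memory
```

On top of shrinking these dimensions, mixed-precision training and gradient checkpointing (both available in PyTorch) are common ways to trade compute for memory in 3D-convolution models, though whether they suffice here is untested.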
Thanks,
I have a question about the depth image format.
I cannot find depth_train_640/ in the omnithings directory downloaded from here. But I can see the gt depth images in the Omnidirectional Stereo Dataset; that depth format looks like tiff.
Is your depth_train_640 meant to be those depth images? In that case, should I convert them from tiff to png myself?
BTW, thank you for all of your awesome work!
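If the downloaded tiff already holds the full 16-bit values, conversion is a straight dtype-preserving re-encode (e.g. `cv2.imwrite('out.png', arr)` on the raw array). Only if the export was squashed to 8-bit would rescaling be needed, and that is lossy; a hedged sketch of the rescaling case, with the 65500 maximum taken from the dataset description quoted earlier:

```python
import numpy as np

def rescale_to_uint16(arr_uint8, max_value=65500):
    """Stretch an 8-bit export back to the 16-bit range (lossy: the
    original precision cannot be recovered)."""
    arr = np.asarray(arr_uint8, dtype=np.float64)
    return np.round(arr / 255.0 * max_value).astype(np.uint16)

out = rescale_to_uint16(np.array([[0, 255]], dtype=np.uint8))
```

Given the loss of precision, finding a genuine 16-bit download is preferable to rescaling whenever possible.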
Thanks for your contribution!
Have you tried your implementation on the Synthetic Urban Datasets? I found the results are poor with your default configuration. Can you provide a training config file or a pretrained model for this dataset?
Hi,
I would like to know under what license you would like to publish this code, to know if/how it can be used.
Thanks!