omnihouse gt · omnimvs_pytorch · 8 comments · Closed

matsuren commented on July 23, 2024
omnihouse gt

Comments (8)

imuncle commented on July 23, 2024

I think the ground truth can't be used now; it's broken. I also tried the method in #5, but I still can't train to obtain a good result. However, the dataset at http://vclab.kaist.ac.kr/cvpr2021p1/ works, so the problem is not in my code.

Do you still have the ground truth you downloaded in PNG format? Could you share it with me?

wangyushen commented on July 23, 2024

> I think the ground truth can't be used now; it's broken. I also tried the method in #5, but I still can't train to obtain a good result. However, the dataset at http://vclab.kaist.ac.kr/cvpr2021p1/ works, so the problem is not in my code.
>
> Do you still have the ground truth you downloaded in PNG format? Could you share it with me?

I have the same problem as you. OmniMVS (CVPR) provides code to extract the TIFF-format ground truth, but the disparity map looks wrong after using it. You can see the GT in the second layer of the TIFF files in the current dataset, as in the example.

imuncle commented on July 23, 2024

> > I think the ground truth can't be used now; it's broken. I also tried the method in #5, but I still can't train to obtain a good result. However, the dataset at http://vclab.kaist.ac.kr/cvpr2021p1/ works, so the problem is not in my code.
> >
> > Do you still have the ground truth you downloaded in PNG format? Could you share it with me?
>
> I have the same problem as you. OmniMVS (CVPR) provides code to extract the TIFF-format ground truth, but the disparity map looks wrong after using it. You can see the GT in the second layer of the TIFF files in the current dataset, as in the example.

Sorry, what do you mean by 'the second layer'?

wangyushen commented on July 23, 2024

> > > I think the ground truth can't be used now; it's broken. I also tried the method in #5, but I still can't train to obtain a good result. However, the dataset at http://vclab.kaist.ac.kr/cvpr2021p1/ works, so the problem is not in my code.
> > >
> > > Do you still have the ground truth you downloaded in PNG format? Could you share it with me?
> >
> > I have the same problem as you. OmniMVS (CVPR) provides code to extract the TIFF-format ground truth, but the disparity map looks wrong after using it. You can see the GT in the second layer of the TIFF files in the current dataset, as in the example.
>
> Sorry, what do you mean by 'the second layer'?

A TIFF file can contain multiple images, and the TIFF files in this dataset contain two: the first is a 3-channel image and the second is a single-channel image. The second image looks very much like a disparity map, but it is treated as a depth map in OmniMVS (CVPR), and the image processed that way does not look like a correct disparity map.
(screenshot: QQ capture, 2021-12-14)
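
A minimal way to confirm this two-page layout is to iterate over the pages of one ground-truth file with Pillow and print each page's shape (the filename below is just an example):

```python
from PIL import Image, ImageSequence
import numpy as np

# '00001.tiff' is an example ground-truth file from the dataset
with Image.open('00001.tiff') as tif:
    for i, page in enumerate(ImageSequence.Iterator(tif)):
        arr = np.array(page)
        # expected: page 0 is 3-channel (H, W, 3), page 1 is single-channel (H, W)
        print(f'page {i}: shape={arr.shape}, dtype={arr.dtype}')
```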

imuncle commented on July 23, 2024

@wangyushen Great! You have solved my problem. Thanks bro!

wangyushen commented on July 23, 2024

> @wangyushen Great! You have solved my problem. Thanks bro!

How did you solve this problem? Could you please share your data-loading code?

imuncle commented on July 23, 2024

The second layer is exactly the inverse depth (invdepth), and it does not need any further transformation.

```python
from PIL import Image
import numpy as np

gt = Image.open('00001.tiff')
gt.seek(1)         # get the second layer
gt = np.array(gt)  # this is the invdepth
# by using gt = 1/gt you can get the depth in meters
```
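
If you want metric depth, note that the inverse depth can contain zeros, so it is safer to guard the division. A small sketch, assuming zero marks invalid (or infinitely far) pixels:

```python
from PIL import Image
import numpy as np

gt = Image.open('00001.tiff')
gt.seek(1)
invdepth = np.array(gt).astype(np.float32)

depth = np.zeros_like(invdepth)
valid = invdepth > 0                  # assumption: 0 marks invalid / infinitely far pixels
depth[valid] = 1.0 / invdepth[valid]  # depth in meters where the inverse depth is defined
```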

What's more, the distance between each camera and the center of the rig is 0.6 m, not 0.2 m. So config.yaml is correct, and poses.txt is wrong.

I verified it by warping the ground truth from the equidistant (equirectangular) projection to the fisheye projection. When I assume the distance is 0.6 m, I get the following result:

(screenshot: the warped ground-truth depth overlaid on the fisheye image)

As you can see, the depth overlaps perfectly with the image.
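
For reference, the warping check can be sketched roughly as follows. This is only a sketch, not the exact code used here: it assumes the ground truth is an equidistant (equirectangular) inverse-depth panorama centered on the rig, that the fisheye lens follows an equidistant model (r = f * theta), and that R, t (with ||t|| = 0.6 m), f, cx, cy and out_hw are placeholder calibration values.

```python
import numpy as np

def warp_gt_depth_to_fisheye(invdepth, R, t, f, cx, cy, out_hw):
    """Splat the panorama depth into one fisheye view for a visual overlay check."""
    H, W = invdepth.shape
    v, u = np.mgrid[0:H, 0:W]

    # equirectangular pixel -> unit ray in the rig frame (y axis pointing down)
    lon = (u + 0.5) / W * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / H * np.pi
    ray = np.stack([np.cos(lat) * np.sin(lon),
                    -np.sin(lat),
                    np.cos(lat) * np.cos(lon)], axis=-1)

    # inverse depth -> metric 3D points around the rig center
    depth = np.zeros(invdepth.shape)
    m = invdepth > 0                              # assumption: 0 marks invalid pixels
    depth[m] = 1.0 / invdepth[m]
    pts = ray * depth[..., None]

    # rig frame -> camera frame; this is where the 0.6 m offset matters
    pc = pts @ R.T + t

    # equidistant fisheye projection: image radius proportional to the angle from the optical axis
    rng = np.linalg.norm(pc, axis=-1)
    theta = np.arccos(np.clip(pc[..., 2] / np.maximum(rng, 1e-9), -1.0, 1.0))
    psi = np.arctan2(pc[..., 1], pc[..., 0])
    uf = cx + f * theta * np.cos(psi)
    vf = cy + f * theta * np.sin(psi)

    # nearest-pixel splat of the depths into the fisheye image plane
    out = np.zeros(out_hw)
    ok = (depth > 0) & (uf >= 0) & (uf < out_hw[1] - 1) & (vf >= 0) & (vf < out_hw[0] - 1)
    out[np.round(vf[ok]).astype(int), np.round(uf[ok]).astype(int)] = depth[ok]
    return out
```

Overlaying the returned map on the corresponding fisheye image (for example with alpha blending) should reproduce the kind of check shown in the screenshot above when the 0.6 m translation is used.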

matsuren commented on July 23, 2024

Sorry for the late reply, and thank you for the great discussion!
It would be better to mention the wrong extrinsic parameters in #8. I should have been more skeptical about them.
