
Comments (13)

oliver-batchelor commented on June 8, 2024

I've uploaded the half-resolution version of the images from this to here:

https://drive.google.com/drive/folders/1JnoUS4rPdubV-6dh94t69PXC0aiaLa58?usp=sharing

from calico.

amy-tabb commented on June 8, 2024

Hi, thanks for the images -- I did not have access, so I requested it.

The --focal-px flag is only needed if you want to initialize the intrinsic matrix to something other than max dimension * 1.2.

--focal-px=[float]            Initial focal length in pixels for the camera.  Default is max dimension * 1.2 

I'm using the OpenCV calibrateCamera function, with the CALIB_USE_INTRINSIC_GUESS flag on. Technically, one could go into the code in file camera_calibration.cpp and turn it off.

But I think the takeaway for your scenario is: if you know the value of focal-px, indicate it. Here, you would use --focal-px=4600.

We might have a little back and forth, because there may be some things I'm not picking up on in your email. Another thing to try would be to turn off some of the radial distortion parameters and see if the default settings for focal-px do better for your images. Here, leaving the later radial distortion parameters on (--non-zero-k3 --non-zero-tangent) gives me a bigger RMS.

Finally, if the RMS is still large -- and once I have access to the images I can play around -- I'll see if fiddling with the CALIB_USE_INTRINSIC_GUESS flag changes anything. If it does, I can push a new version in a week or so as I am working on calico at the moment.


oliver-batchelor commented on June 8, 2024

Hi!

Given that calico uses OpenCV, and I can successfully calibrate intrinsics on the same images (with OpenCV), this seems strange. I'll have a closer look.

I've granted you access, sorry about that - I thought they were accessible with the link only.

Cheers,
Oliver


amy-tabb commented on June 8, 2024

Hi,

So I was able to run the dataset you sent over. First thing -- the dataset you sent had missing images. Calico expects that the acquired images will match across folders, so if there is an image '0000.jpg' in cam1, there should be a corresponding image '0000.jpg' in folder cam2. There is no test for this. So I filled in the blanks in the directories with blank images. To be pedantic -- the cameras need to snap their images at the same time for each pattern pose, or be synchronized.

Ok, so given that, I was able to get this dataset to calibrate fine. You said that focal pixel length was 4460, and that these were resized by a factor of 2, so that should give 2230. I get 2246.21, 2244.49, 2249.36, 2247.66, etc., with rms per camera at 0.291874 and 0.332593, which is pretty close. The views were not optimal, so I thought this was ok.

Then, the reprojection error for calibrating the whole set is 9.19083, and the reconstruction accuracy error is 0.553381 mm. I think the reprojection error can be improved with better, and more, views of the pattern. So moving back from the cameras, and acquiring images from different distances, could help.

This is what the 6 camera positions look like with respect to the pattern, at one of its positions.
[image: snapshot00]

Here are the results computed on my machine, using

./calico --network --output /home/atabb/DemoData/calico-data/oliver-data/results/ --input /home/atabb/DemoData/calico-data/oliver-data/pattern22x16

(I did fix a little OpenMP bug that caused occasional crashes here, so you might want to clone again. I also tested using the Docker container, and I got identical results.)

Here is the altered dataset to generate those results, with the missing files filled in.

Let me know if you have any questions with the above.



oliver-batchelor commented on June 8, 2024

I found at least one offending image, in report/data/cam1/ext52.png - would this be caused by a bad charuco detection?


amy-tabb commented on June 8, 2024

Hi,

Ok, so glad you got it to work. Going back to last month's issue with LAPACK, I'm wondering how things will go if you re-pull this repo / Docker image and try to run on the 1300 images. As to why ... I don't know. The minimization is more efficient in the last update.

Going back to comment 4 -- good catch, yes, the aruco tag was misidentified. I have had that happen very occasionally. It is likely throwing off the estimation some; if you have enough data, it will just be reflected in the error. But you could place a big black rectangle over the part of the image with the misidentified tag. If you don't mind sending me the original image, I might add it to my problem dataset and over the next 6 months create some tests to validate the IDs.
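That masking step is a one-liner; a sketch (the coordinates are placeholders, and with a real file the frame would come from cv2.imread / go back out via cv2.imwrite):

```python
# Sketch: zeroing out the image region that contains a misidentified tag so
# the detector cannot report the bad ID. Coordinates are made up.
import numpy as np

img = np.full((1080, 1920, 3), 255, np.uint8)  # stand-in for the loaded frame
x0, y0, x1, y1 = 300, 200, 520, 420            # bounding box of the bad tag
img[y0:y1, x0:x1] = 0                          # paint a black rectangle over it
```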

Then, where to find the information -- I'm planning to update the README, because in hindsight this is not obvious.

individual intrinsic camera calibration info, per camera: output-directory/data/cam#/cali_results.txt

rms 0.291874 is the reprojection root mean square error from intrinsic camera calibration using OpenCV, nothing fancy.
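For concreteness, that statistic can be sketched in plain numpy (this is the usual definition, not calico's code):

```python
# Sketch: reprojection RMS -- the root of the mean squared pixel distance
# between detected corners and corners reprojected with the calibration.
import numpy as np

def reprojection_rms(detected, reprojected):
    residuals = detected - reprojected           # (N, 2) pixel residuals
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```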

Then to find the error of the multi-camera calibration using calico, see output-directory/total_results.txt.

Algebraic error cost function error, averaged by number of FRs is equation 16 on page 5 of the current paper on arXiv (note the code is ahead of the document at the moment).

incremental: Reprojection error, rrmse is equation 17 in the paper. So this is the reprojection error if we use the calibration computed with calico's multi-camera calibration.

Then RAE is reconstruction accuracy error, equations 18 and 19, or section 5.1.3 in the paper. Units are mm, and what happens is: take the calibration computed by calico, and all of the corners detected on the patterns. Reconstruct the corners in 3D space. Then, since the location of the pattern is known (we set it to the world coordinate system to start the calibration in the first place), calculate the distance between the reconstructed 3D points and the ideal 3D points. Report average, standard dev, and median. During the first draft of the paper, I had some unknown issues in the data (similar to your bad detections, but yet different), and the average was really bad. So I also reported the median. Now the average is comparable to the median.
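A numpy sketch of those RAE statistics (just the statistic as described, not calico's implementation):

```python
# Sketch: reconstruction accuracy error statistics -- per-corner distances
# (in mm) between reconstructed 3D corners and their ideal positions on the
# pattern, summarized by mean, standard deviation, and median.
import numpy as np

def rae_stats(reconstructed, ideal):
    d = np.linalg.norm(reconstructed - ideal, axis=1)  # per-corner error
    return d.mean(), d.std(), np.median(d)
```

A few bad detections inflate the mean much more than the median, which is why both are reported.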

FINALLY, to read off the calibration resulting from the multi-camera calibration, go to output-directory/cameras-incremental/variables.txt. The first variables will be the camera variables (extrinsic matrices). You would grab the intrinsic information from the data folder as described above. You can also see in this folder the reprojection of the corners as a result of the multi-camera calibration. I have it set so that the default maximum number of corners used is 10 (the points with circles are the ones chosen for the minimization), but since your pattern is closer to the camera, you may want to use more (flag --k=20, say). As you found, the data/cam#/ext#.png images show the corners detected and the reprojection results from the per-camera calibration.

That was a lot, but thanks for sending me some data and letting me know how this works for real people out there :) Let me know how it goes.

Best A


amy-tabb commented on June 8, 2024

And Google announced some changes to Google Photos -- it might be that your 'low interest' photos were pruned away. I couldn't find the right reference for this. Anyway, a zip file might be the way to get around this in the future.

Distance from the camera -- I have also noticed that what seems blurry to me sometimes does fine for computer vision tools. Some of them apply Gaussian filters to start anyway.


oliver-batchelor commented on June 8, 2024

Thanks for all this - I have discovered some interesting things since this last correspondence.

  1. Camera 4 in the data I sent you (aside from missing half the frames) is almost certainly out of sync (how this happened we're still not sure!)

  2. Rolling shutter sometimes affects the data from these cameras, leading to less impressive reprojection error - it's hard to know whether this actually degrades the calibration much (when most camera shots are static enough).

First, I have implemented most of the algorithm in your paper (which was very simple and easy to follow), using Python and scipy least-squares, which seems to work well (perhaps not as fast as Ceres). (It's at github.com/saulzar/multical - not documented at all currently - sorry that I have not credited you yet!) I've been playing around with some rolling shutter compensation/motion models, which seem promising, and have built an interactive UI for visualising the results, plus tools for comparing two different calibrations - time consuming, but quite valuable for debugging. If I get time it would be good to import calico calibrations.

I also have a couple of questions:
Are the datasets in your paper available anywhere? (found them)

I would be curious to verify the algorithm and see whether we get similar results. For initialisation, I haven't added the hand-eye mode yet, but the overlapping case should be good to go. I had more luck with a clustering/averaging algorithm for relative poses than the least-squares method you cited - it seems to need a scale normalisation (it uses rotations and translations, but they're not always of the same magnitude if you use mm compared to meters).

The second one might be obvious... but in your paper title, what is the "asynchronous" part? I think out-of-sync camera calibration is possible, but this doesn't seem to be the context - does it refer to cameras whose views don't overlap?

Thanks!
Oliver


amy-tabb commented on June 8, 2024

Hi Oliver,

Glad you found the datasets.

The second one might be obvious... but in your paper title, what is the "asynchronous" part? I think out-of-sync camera calibration is possible, but this doesn't seem to be the context - does it refer to cameras whose views don't overlap?

Ok, the async part was the initial framing of the paper. I do calibrate async cameras. To do so: place the calibration object, run the program to acquire images from all cameras. Place the object in a new location. Press the button to acquire images. Repeat about 20 times, and you're done. That's why this method does not need 1,000 images. The images themselves are synced - because the object is still - but the cameras are not. I am using cheap webcams; it takes about 2 minutes to grab all of the images I need with this approach.
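That acquisition loop can be sketched like this (the grab/save callables and file layout are assumptions; in practice they'd be something like cv2.VideoCapture.read and cv2.imwrite):

```python
# Sketch of the still-object acquisition loop: the cameras are not
# hardware-synchronized, but because the pattern is stationary at each pose,
# grabbing one frame per camera per pose yields matched image sets.
import os

def acquire_pose(grab, save, cam_ids, out_root, pose_idx):
    """Grab one frame from every camera for the current (stationary) pose."""
    for cam in cam_ids:
        frame = grab(cam)                    # e.g. lambda c: caps[c].read()[1]
        path = os.path.join(out_root, f"cam{cam}", f"{pose_idx:04d}.jpg")
        os.makedirs(os.path.dirname(path), exist_ok=True)
        save(path, frame)                    # e.g. cv2.imwrite

# Outer loop: move the pattern, call acquire_pose, repeat ~20 times.
```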

Hope that makes sense.

For citing this work, at this stage since your repo is public, you could put "reimplementation of https://github.com/amy-tabb/calico" in your README or something similar, that would do the trick.

Best
A


amy-tabb commented on June 8, 2024

Resolved this over email.

