
Comments (9)

juliangaal commented on July 21, 2024

Thank you for those great pictures! Definitely gives me ideas for mounting

from cam_lidar_calibration.

darrenjkt commented on July 21, 2024

Hi thanks for your interest. In our group, we just haven't had to use this repo for non-fisheye cameras. If you have other types of distortion, you'd have to add some code to undistort it first.


juliangaal commented on July 21, 2024

thanks for the quick reply.

What are your plans for this repo? I am asking because of #33 and more changes I am working on in my branch for added safety and user friendliness. Are you interested in PRs?

The main difference in my branch is usability:

  • Pressing Capture now processes the last received scan: no need to press play on the paused bagfile. A small change, but useful in handheld scenarios or when the markers move, since the current code effectively processes the next scan instead.
  • While the bagfile is paused, parameter changes to the bounded cloud are applied to the current/latest cloud and republished. This lets the user retry fitting a plane as often as they want.
  • Compiler warnings removed or mitigated.
  • Planned: there is some issue in the assess script where the image with statistics is not displayed. It may be user error, or something in the code.
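The first change (capturing the last received scan) boils down to caching the most recent message. A minimal sketch of that pattern, with hypothetical names (`ScanCache`, `on_scan`, `on_capture`) that are not from the actual repo:

```python
import threading

class ScanCache:
    """Keep the most recent scan so pressing Capture can reuse it,
    even while the bagfile is paused (hypothetical sketch)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._last_scan = None

    def on_scan(self, scan):
        # Subscriber callback: remember every incoming cloud.
        with self._lock:
            self._last_scan = scan

    def on_capture(self):
        # Capture uses the cached scan instead of waiting for the next one.
        with self._lock:
            return self._last_scan

cache = ScanCache()
cache.on_scan({"points": [1, 2, 3]})
print(cache.on_capture())  # → {'points': [1, 2, 3]}
```

The lock matters because the subscriber callback and the UI button handler typically run on different threads.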


darrenjkt commented on July 21, 2024

Thanks, the PR looks really useful! We are currently making some improvements internally that we have not yet pushed to the main branch. We plan to do that soon and review your PR concurrently. We'll keep you updated.


juliangaal commented on July 21, 2024

Nice. By "internally", do you mean changes to the minimization problem, or just general structure, code smell, etc.?


jclinton830 commented on July 21, 2024

@juliangaal the internal changes @darrenjkt is talking about are in the calib-v2 branch.

The changes made there mainly introduce some user-friendliness to the way the chessboard is extracted and its dimensions are registered. In the previous method, the feature extraction sliders had to be used to create a very fine filter that removed everything other than the board.

In calib-v2 we take more of a hybrid approach:

  • You use the feature extraction sliders to create a filter (it can be as big as you want). The scene inside it needs to be static, with no moving items.
  • You capture this static scene, without the chessboard, using the capture background button.
  • You then place the chessboard facing the camera and lidar, as centred and perpendicular as possible to the sensors and the ground respectively. You should see background subtraction occurring in every loop of the program, detecting every new detail that was not in the static scene.
  • Finally, you take your first sample of the chessboard using the capture sample button.
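The background-subtraction step described above can be sketched, very roughly, with a voxel-grid occupancy check. This is a hypothetical illustration of the idea (`voxel_keys`, `subtract_background` are made-up names), not the calib-v2 implementation:

```python
def voxel_keys(points, voxel=0.1):
    """Quantise 3-D points into voxel-grid cells (voxel size in metres)."""
    return {(round(x / voxel), round(y / voxel), round(z / voxel))
            for x, y, z in points}

def subtract_background(scan, background, voxel=0.1):
    """Keep only scan points whose voxel was empty in the background scene."""
    occupied = voxel_keys(background, voxel)
    return [p for p in scan
            if (round(p[0] / voxel), round(p[1] / voxel), round(p[2] / voxel))
            not in occupied]

background = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # captured static scene
scan = background + [(2.0, 0.0, 0.0)]             # board appears at x = 2 m
print(subtract_background(scan, background))      # → [(2.0, 0.0, 0.0)]
```

Anything that was already in the static scene is filtered out; only the newly placed board survives.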

The capture sample routine has also been improved: after capture is triggered, it takes 5 consecutive frames and computes the running average of the board sample's parameters. You repeat the capture process until you think you have a good amount of data, then press the optimise button to run the optimisation and generate the calibration parameters.
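The running-average step amounts to an element-wise mean over the per-frame board parameters. A hypothetical sketch (the helper name and the sample numbers are illustrative):

```python
def running_average(samples):
    """Element-wise mean of board parameters (e.g. centre coordinates,
    normal, dimensions) over consecutive frames."""
    n = len(samples)
    return [sum(vals) / n for vals in zip(*samples)]

# Five hypothetical (width, height) measurements of the board, in metres:
frames = [(0.81, 0.60), (0.79, 0.61), (0.80, 0.59), (0.82, 0.60), (0.78, 0.60)]
print([round(v, 3) for v in running_average(frames)])  # → [0.8, 0.6]
```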

In addition, I have reformatted a lot of files using Clang for consistency and have added pre-commit to the project, so you will see that a lot of files have changed in this branch.

I have also fixed some issues in the Python script visualise_results.py, to avoid sudden fluctuations of the angles when they are close to ±2π.
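A standard way to remove such jumps is phase unwrapping: whenever two consecutive angles differ by roughly a multiple of 2π, fold that multiple back out. A small sketch of the general technique (not the actual fix in visualise_results.py):

```python
import math

def unwrap(angles):
    """Remove jumps of (multiples of) 2*pi between consecutive angles,
    so a plotted angle sequence stays continuous."""
    out = [angles[0]]
    for a in angles[1:]:
        d = a - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # fold jump into (-pi, pi]
        out.append(out[-1] + d)
    return out

# A trajectory crossing the -pi/+pi boundary no longer jumps by ~2*pi:
print(unwrap([3.0, 3.1, -3.1, -3.0]))
```

NumPy's `numpy.unwrap` does the same thing and is what a plotting script would typically use.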

These changes were requested by one of our own members, @chinitaberrio, and were implemented by me.

With regard to your PR: it will almost certainly conflict with calib-v2, since I have reformatted most of the files in the package. I noticed that your changes in #33 are not big; any chance you can add them to calib-v2? @chinitaberrio is meant to test calib-v2, but she has been very busy, so I am not sure if she has had time to do this. Once she is happy with the changes, we can merge it into master.

If you have any questions keep commenting here. Thanks.


juliangaal commented on July 21, 2024

This sounds like a very nice improvement! In my dev branch I introduced mainly the changes mentioned above, but I guess most won't be compatible with v2. That's fine, though; I have gotten very good results with v1.

The main issue I see for my setup: I am not mounting the board to anything, I am holding it, and I have to adjust the filter parameters very carefully. Without a fixed mounting position, using consecutive scans may introduce too much noise. But it works well enough as a proof of concept.

A couple questions:

  • I am happy to test the v2 branch! Maybe #33 won't be necessary with v2 anyway, but I will have to record new data (current setup: Intel RealSense/ZED/FLIR + VLP-16/OS1-32). Do you have any test data you're willing to share? Maybe the bag file from your video demo?

  • You have to use the feature extraction sliders to create a filter (can be as big as you want). This scene needs to be a static scene with no moving items. Then you capture this static scene

    I'm not sure I understand the need for both. If the static scene is subtracted from every incoming scan, why is an additional manual filter necessary?

  • The capture sample routine has been improved by taking 5 consecutive frames after capture is done and taking the running average of the board sample's parameters

    Assuming a static scene, wouldn't it make more sense to accumulate all points to form a "dense" board (which may be especially important with a low-density lidar, or at greater distances), to increase the chance of a successful fit? Maybe add a statistical outlier removal step beforehand, to clean up the edges?

  • Is v2 more precise, in your experience? Or does that remain to be seen?

  • Could you tell me a little more about how you're mounting the board to what looks like a tripod? I couldn't find anything in the paper.
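The dense-board idea from the list above could look something like this: accumulate all frames of the static scene, then apply a PCL-style statistical outlier removal. A hypothetical pure-Python sketch (`accumulate` and `sor_filter` are illustrative names, and the brute-force O(n²) neighbour search is only for demonstration):

```python
import math
import statistics

def accumulate(frames):
    """Merge consecutive frames of a static scene into one dense cloud."""
    return [p for frame in frames for p in frame]

def sor_filter(points, k=3, std_mult=1.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours is unusually large."""
    mean_dists = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        mean_dists.append(sum(dists[:k]) / k)
    mu = statistics.mean(mean_dists)
    sd = statistics.pstdev(mean_dists)
    return [p for p, m in zip(points, mean_dists) if m <= mu + std_mult * sd]

frames = [[(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)],
          [(0.0, 0.1, 0.0), (0.1, 0.1, 0.0)]]
cloud = accumulate(frames) + [(5.0, 5.0, 5.0)]   # one stray "edge" point
print(sor_filter(cloud))  # the stray point is removed, the board survives
```

PCL's `StatisticalOutlierRemoval` (with `meanK` and a stddev multiplier) implements the same idea efficiently with a k-d tree.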


jclinton830 commented on July 21, 2024

@juliangaal apologies, I did not realise your setup was handheld.

Yes, we can provide you with some of our data. We are setting up some new lidars on one of our vehicles this week and will collect some new data. So bear with me for a few days.

I am actually very new to this project so I haven't got a lot of experience with the results of this package. After testing it briefly, @chinitaberrio did mention that V2 gave much better results. However, we believe that it needs to be further tested. We will be testing the new updates in the next couple of weeks and will let you know how we are going.

The reason we still use the feature extraction UI is to create a region where the point cloud is static. We work in a small section of a big lab and there is always a lot going on: people walking, robots moving, etc. The feature extraction UI lets us create a small 5 x 5 x 4 m view that is static. You are right, though: with a handheld setup this becomes difficult.
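The static working volume described here is essentially an axis-aligned crop box over the incoming cloud. A hypothetical sketch of what the filter sliders effectively configure (`crop_box` is an illustrative name, not the repo's API):

```python
def crop_box(points, bounds):
    """Keep only points inside an axis-aligned box, e.g. a 5 x 5 x 4 m
    static working volume around the calibration area."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    return [(x, y, z) for x, y, z in points
            if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax]

region = ((0.0, 5.0), (0.0, 5.0), (0.0, 4.0))   # the 5 x 5 x 4 m volume
pts = [(1.0, 1.0, 1.0), (6.0, 1.0, 1.0), (2.0, 2.0, 5.0)]
print(crop_box(pts, region))  # → [(1.0, 1.0, 1.0)]
```

In PCL this corresponds to the `CropBox` filter; everything outside the box (passers-by, other robots) never reaches the board-extraction stage.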

Yes, I will post a photo of our board tomorrow. I am not in the lab today.


jclinton830 commented on July 21, 2024

@juliangaal Here are a couple of pictures of our board and tripod. The tripod has a custom swivel mechanism that adds an additional degree of freedom.

[photo: IMG_20230607_103557]
[photo: IMG_20230607_103608]

