
Comments (10)

RaviBeagle commented on August 24, 2024

To my understanding, the evaluate_iterative_single_CALIB.py script uses the testRT random translation and rotation errors to do the evaluation. For the online use case, starting from an estimated pose, we would predict the transform to the actual pose, then use that pose in the next step, successively.

Initialize: set P0 <- initial estimated pose of the LiDAR (based on converting GPS to local coordinates)
1) Set P_Estimated = P0

Loop at 100 ms:
2) Set P_real = pose after applying CMRNet's predicted correction to P_Estimated
3) Set P_Estimated = P_real
4) Go to step 2
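A minimal sketch of this loop in Python, assuming hypothetical helpers: get_camera_image, build_local_map, and cmrnet_predict are placeholder names, not functions from the CMRNet repository.

```python
import numpy as np

def run_online_localization(P0, get_camera_image, build_local_map,
                            cmrnet_predict, n_steps=100):
    """Iteratively refine a global pose estimate with CMRNet.

    P0               -- 4x4 homogeneous matrix, initial pose estimate.
    get_camera_image -- callable returning the current RGB frame (hypothetical).
    build_local_map  -- callable: pose -> local map in camera frame (hypothetical).
    cmrnet_predict   -- callable: (image, local_map) -> 4x4 correction (hypothetical).
    """
    P_estimated = np.asarray(P0, dtype=np.float64).copy()
    for _ in range(n_steps):  # one iteration every ~100 ms
        image = get_camera_image()
        local_map = build_local_map(P_estimated)
        RT_predicted = cmrnet_predict(image, local_map)
        # "P_real": the estimate after CMRNet's predicted correction;
        # it becomes the starting estimate for the next cycle.
        P_estimated = P_estimated @ RT_predicted
    return P_estimated
```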


cattaneod commented on August 24, 2024

Your reasoning sounds correct to me.
However, I believe you'll need to change how the map is loaded and processed. The provided code assumes that the point clouds are expressed in the camera reference frame (i.e., the ground-truth pose is the identity matrix). In the online setting, however, you don't know the pose of the camera, so you'll probably need to load the full map point cloud and use "global" poses instead of local ones.
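As a rough illustration of what that might look like (a sketch, not code from the repository; full_map is assumed to be an N×3 array of map points in global coordinates, P_est the estimated 4×4 camera pose):

```python
import numpy as np

def crop_local_map(full_map, P_est, radius=100.0):
    """Crop the global map around an estimated camera pose and express
    the points in the (estimated) camera frame, so that the crop's
    ground-truth pose is roughly the identity, as the code expects."""
    center = P_est[:3, 3]
    keep = np.linalg.norm(full_map - center, axis=1) < radius
    local = full_map[keep]
    # Homogeneous transform of the kept points into the camera frame.
    ones = np.ones((local.shape[0], 1))
    local_cam = (np.linalg.inv(P_est) @ np.hstack([local, ones]).T).T
    return local_cam[:, :3]
```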


RaviBeagle commented on August 24, 2024

Hello @cattaneod ,

Yes, I do load the entire point cloud map and compute the local maps in the camera reference frame (with the appropriate axis shifting) online. The tr_error and rot_error are also calculated randomly per pose. The code is complete and working well; however, I still take the poses from a file. I will send the code by mail.

I did not understand your point about "global" versus "local" poses. I currently don't see this as a problem, since in the CARLA simulation where I am testing, the provided poses are always global. In the vehicle we might have to convert GPS to local coordinates.

My next attempt is to use CMRNet to directly estimate the odometry, i.e. using the loop I mentioned:

Initialize: set P0 <- initial estimated pose of the LiDAR (based on converting GPS to local coordinates)
1) Set P_Estimated = P0

Loop at 100 ms:
2) Set P_real = pose after applying CMRNet's predicted correction to P_Estimated
3) Set P_Estimated = P_real
4) Go to step 2

This is not working yet.


RaviBeagle commented on August 24, 2024

> Your reasoning sounds correct to me. However, I believe you'll need to change how the map is loaded and processed. The provided code assumes that the point clouds are expressed in the camera reference frame (i.e., the ground-truth pose is the identity matrix). In the online setting, however, you don't know the pose of the camera, so you'll probably need to load the full map point cloud and use "global" poses instead of local ones.

Hello @cattaneod ,
After some analysis, I think the only remaining problem is this: how can we convert the "local map" transforms provided by CMRNet (the RT_predicted) into global transforms in the map reference frame? With that, we will be able to "correct" the estimated global LiDAR pose to the predicted pose, which can then be used in the next cycle.


cattaneod commented on August 24, 2024

You just need to apply the predicted transformation to the initial global pose.
RT_global = RT_init @ RT_predicted
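With both poses as 4×4 homogeneous transforms, this composition is a single matrix product (a sketch of the idea, not repository code):

```python
import numpy as np

def apply_correction(RT_init, RT_predicted):
    """Compose the initial global pose estimate with the correction
    predicted by CMRNet; both are 4x4 homogeneous transforms."""
    return RT_init @ RT_predicted
```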


RaviBeagle commented on August 24, 2024

> RT_global = RT_init @ RT_predicted

Do you mean that if RT_predicted is applied successively to the global pose (the camera global pose or the LiDAR global pose?), it should be enough?

# Apply at each step
RT_global_predicted = RT_global_estimated @ RT_predicted

But when I read the code in evaluate_iterative_single_CALIB.py, I see that RT_predicted is a transform that applies to the local point cloud map in a different axis representation (i.e., x-right, y-forward, z-down?). Can we use it directly on global poses in a different frame with different axes?

Thank you


cattaneod commented on August 24, 2024

You might be right: RT_predicted might be in a different axis representation. In that case you should additionally apply the transformation that converts the axes of the local map (X-forward, Y-right, Z-down) to the axes used in your global map.
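One way to write that conversion (a hedged sketch: the permutation matrix A below is an assumption for a global map with X-forward, Y-left, Z-up axes, and must be verified against the actual conventions of both maps):

```python
import numpy as np

# Hypothetical change of basis A from the local-map axes
# (X-forward, Y-right, Z-down) to global axes assumed here to be
# X-forward, Y-left, Z-up; check the signs for your own setup.
A = np.array([[1.,  0.,  0., 0.],
              [0., -1.,  0., 0.],
              [0.,  0., -1., 0.],
              [0.,  0.,  0., 1.]])

def correction_in_global_axes(RT_init, RT_predicted):
    """Conjugate the local-axis correction by the axis change,
    then compose it with the global pose estimate."""
    RT_predicted_global = A @ RT_predicted @ np.linalg.inv(A)
    return RT_init @ RT_predicted_global
```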


RaviBeagle commented on August 24, 2024

> You might be right: RT_predicted might be in a different axis representation. In that case you should additionally apply the transformation that converts the axes of the local map (X-forward, Y-right, Z-down) to the axes used in your global map.

Hello @cattaneod

Yes! I used this and it's working now. I am getting the following values as the Euclidean distances (in meters):

1.0016057015709217
0.7632609130110889
1.6000662965492867
0.6335314340250894
1.4316691714409067
0.4104338104571504
0.5412108653227982
0.8677367236067993
0.06451896243708906
0.689013571125916
0.18936646651593728
1.1538748902937705
1.6963246365141917
1.5277596833268183
0.7666823448536583
1.516898214231317
1.4226326021778204
1.7708160155482124
1.203460323053111

And the values look much better!


RaviBeagle commented on August 24, 2024

Hello @cattaneod

While the values look better, I was expecting to always get values < 20 cm at least, but there are lots of jumps and inconsistencies. I am running iteratively with all three KITTI models (provided by you) on sequence 00. I think the CMRNet output is not usable as an odometry source.


cattaneod commented on August 24, 2024

As written in the README and in our paper, CMRNet achieves a 20 cm MEDIAN translation error, not a maximum error, which means that for half of the frames the error will be higher than 20 cm, and for the other half it'll be lower than 20 cm.
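A quick numpy check on the distances posted above (rounded to centimeters) makes the distinction concrete:

```python
import numpy as np

# The Euclidean distances posted earlier in this thread, in meters.
errors = np.array([1.00, 0.76, 1.60, 0.63, 1.43, 0.41, 0.54,
                   0.87, 0.06, 0.69, 0.19, 1.15, 1.70, 1.53,
                   0.77, 1.52, 1.42, 1.77, 1.20])
print(np.median(errors))  # the median is the statistic the paper reports
print(errors.max())       # the maximum can be far larger than the median
```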

CMRNet was designed to perform localization; I never tested it for odometry, and I'm sorry to hear that it doesn't perform as well as expected. The evolved CMRNet++ version might improve things, but its code is not publicly available at the moment.

