Comments (11)
Calculating the Jacobian of (B + RAR^T)^-1 is very complicated. I did it (you can find the code at the links below), but it was very slow and impractical:
https://github.com/SMRT-AIST/fast_gicp/blob/87cd6288d14bd155e8b7a2144f68bb5246aecc52/include/fast_gicp/gicp/gicp_loss.hpp
https://github.com/SMRT-AIST/fast_gicp/blob/87cd6288d14bd155e8b7a2144f68bb5246aecc52/include/fast_gicp/gicp/gicp_derivatives.hpp
In practice, we approximate RAR^T as a constant matrix during each optimization iteration. Then dr/dR is simply given by (B + RAR^T)^-1 * dRa/dR. This approximation doesn't affect the accuracy while keeping the derivatives simple and fast.
from fast_gicp.
Thanks for your help @koide3!
Could you please give some advice on how you calculated the Jacobian? Thanks very much for your help!
Hi @narutojxl ,
- In this work, we used 3D (XYZ) residuals that result in the same objective function as the scalar one.
- In the paper, $C^*$ are 3x3 covariance matrices, and thus the inverse is taken of 3x3 matrices. In the code, we used expanded 4x4 matrices to take advantage of SSE optimization, and we filled the bottom-right corner with 1 before taking the inverse to obtain a reasonable result.
Thanks very much @koide3 :)
BTW, shouldn't it be skew(Ra) according to the left perturbation formula? I see in the code it is skew(Ra + t):
Js[count].block<3, 3>(0, 0) = RCR_inv.block<3, 3>(0, 0) * skew(transed_mean_A.head<3>());
It's a trick to calculate the Jacobian of the expmap. While the Jacobian of the expmap around r = 0 is simply given by the skew-symmetric function, the Jacobian at an arbitrary point is not easy to obtain. To avoid the complicated calculation, we calculate the Jacobian at r = 0 with the transformed point (p = Ra + t) instead of calculating the Jacobian at r = R with the original point p.
I referred to Section 4.3.4 (Perturbation Model) of this book.
@narutojxl, see Section 3.3.5 (the subsubsection after the one you referenced) of the same book, or eq. (94) of Eade.
Question about the covariance.
In the linearization process, why does directly multiplying M^{-1} * d_i as the residual function work? From my point of view, I think maybe we should apply an LDLT decomposition to the M^{-1} matrix and then build the update function from the factored residual?
Hi Dr. @koide3,
I have a question about the objective function: why can the log term (shown in the red box) be ignored when it also includes the optimized variable?
As explained at #20 (comment), we fix the fused covariance matrix at the linearization point. This approximation makes the log term constant and negligible during optimization.
Got it, #20 (comment). Thanks for your reply!