openhumanoids / pronto

Robot Motion Estimator

License: GNU Lesser General Public License v2.1

Languages: C++ 64.15%, CMake 20.00%, MATLAB 8.40%, Python 4.81%, C 1.22%, Makefile 0.94%, Shell 0.48%

pronto's People

Contributors

andybarry, christian-rauch, gizatt, gtinchev, kuindersma, mauricefallon, patmarion, raluca-scona, rdeits, simalpha, wxmerkt

pronto's Issues

how to accumulate lidar

in "pronto/motion_estimate/src/cloud_accumulate/cloud_accumulate.hpp", it seems that you named a "boost::circular_buffer<std::shared_ptr<bot_core::planar_lidar_t>> messages_buffer_", but how do you use it, what is this for?

Extend VO integration to pronto

  • Get the module to run on @simalpha 's computer, or have dls-fusion run on my computer with the HyQ kinematics for a trot. This is a key step.
  • Ask @MarcoCar whether dls-fusion extends pronto-distro or overwrites it.
  • Preferably his changes shouldn't modify pronto itself; changing pronto-distro is fine.
    • Have him use the most recent version of pronto with the VO fusion mode.
  • Fix issues so that the Multisense channels can be MULTISENSE_SCAN in director etc.
  • Remove the need for utorso in director, for either Husky or HyQ. ACTION: ignore for now.
  • Incorporating Vicon at a high rate has a cost; Vicon corrections should be decimated by 10. Otherwise performance suffers due to re-filtering.
  • In the VO algorithm, remove the image buffer copy in featureAnalysis() when not outputting features.
  • Extend pronto to support orientation corrections from VO; currently only linear translation is supported. Straightforward.

  • Support situations where the VO estimate fails entirely, by resetting the measurement (deltaroot).

  • Don't use bot frames to get the previous pose of the robot (at the top of the VO).

  • IMPORTANT: fix this issue: Eigen::Isometry3d delta_body_from_ref = world_to_body_ * ( deltaroot_body_pose_.inverse() ); (see the sketch after this list)

  • IMPORTANT: mention that the acceleration signals should be filtered during foot impacts, e.g. with a high-gain FIR filter.

  • TODO: get this running on the handheld HyQ logs, or any logs with an IMU and Multisense, e.g. Husky.
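Regarding the delta_body_from_ref item above, here is a hypothetical sketch of how a body-frame delta between a reference pose and the current pose is commonly composed with Eigen. The frame conventions are assumptions, not taken from pronto: both arguments are treated as body poses expressed in the world frame, so this illustrates the usual composition order rather than the confirmed fix.

#include <Eigen/Geometry>

// Hypothetical helper, assuming T_world_body conventions for both poses.
Eigen::Isometry3d deltaBodyFromRef(const Eigen::Isometry3d& world_to_body_now,
                                   const Eigen::Isometry3d& world_to_body_ref) {
  // Relative motion since the reference time, expressed in the reference body
  // frame: T_ref_now = T_world_ref^{-1} * T_world_now.
  return world_to_body_ref.inverse() * world_to_body_now;
}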

Laser Utils linking issue

Apparently openhumanoids/common_utils#12 breaks the linking of the state estimator.

@mauricefallon I'm struggling to find the reason why the following happens:

gtinchev@beatrix:~/rpg-navigation/software/pronto/state-estimator$ make
[ 21%] Built target mav-state-est
[ 24%] Linking CXX shared library ../../lib/libgpf-laser-lib.so
CMakeFiles/gpf-laser-lib.dir/LaserLikelihoodInterface.cpp.o: In function `laser_create_projected_scan_from_planar_lidar_with_interpolation':
LaserLikelihoodInterface.cpp:(.text+0x110): multiple definition of `laser_create_projected_scan_from_planar_lidar_with_interpolation'
CMakeFiles/gpf-laser-lib.dir/laser_gpf_lib.cpp.o:laser_gpf_lib.cpp:(.text+0x12e0): first defined here
CMakeFiles/gpf-laser-lib.dir/LaserLikelihoodInterface.cpp.o: In function `laser_update_projected_scan_with_interpolation':
LaserLikelihoodInterface.cpp:(.text+0x120): multiple definition of `laser_update_projected_scan_with_interpolation'
CMakeFiles/gpf-laser-lib.dir/laser_gpf_lib.cpp.o:laser_gpf_lib.cpp:(.text+0x12f0): first defined here
CMakeFiles/gpf-laser-lib.dir/rbis_gpf_update.cpp.o: In function `laser_create_projected_scan_from_planar_lidar_with_interpolation':
rbis_gpf_update.cpp:(.text+0x6c0): multiple definition of `laser_create_projected_scan_from_planar_lidar_with_interpolation'
CMakeFiles/gpf-laser-lib.dir/laser_gpf_lib.cpp.o:laser_gpf_lib.cpp:(.text+0x12e0): first defined here
CMakeFiles/gpf-laser-lib.dir/rbis_gpf_update.cpp.o: In function `laser_update_projected_scan_with_interpolation':
rbis_gpf_update.cpp:(.text+0x6d0): multiple definition of `laser_update_projected_scan_with_interpolation'
CMakeFiles/gpf-laser-lib.dir/laser_gpf_lib.cpp.o:laser_gpf_lib.cpp:(.text+0x12f0): first defined here
collect2: error: ld returned 1 exit status
src/gpf/CMakeFiles/gpf-laser-lib.dir/build.make:172: recipe for target 'lib/libgpf-laser-lib.so' failed
make[3]: *** [lib/libgpf-laser-lib.so] Error 1
CMakeFiles/Makefile2:288: recipe for target 'src/gpf/CMakeFiles/gpf-laser-lib.dir/all' failed
make[2]: *** [src/gpf/CMakeFiles/gpf-laser-lib.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make[1]: *** [all] Error 2
Makefile:27: recipe for target 'all' failed
make: *** [all] Error 2

I'm unsure why it is trying to define the function laser_create_projected_scan_from_planar_lidar_with_interpolation when it is part of an external library, or whether this is an issue of overlapping header files.

Could you please assist?

P.S. I tried renaming laser_create_projected_scan_from_planar_lidar_with_interpolation to laser_create_projected_scan_from_planar_lidar_with_interpolation_with_timeout and adding an argument for the timeout (in the cpp files); however, the same error occurs. Thus, I think it must be something to do with how the headers are included and linked.
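For context, "multiple definition" link errors like the ones above typically arise when a function is defined (not just declared) in a header that is included by several translation units, so every .o file carries its own copy of the symbol. A minimal illustration under that assumption, unrelated to the actual laser_utils sources:

// laser_interp.h -- hypothetical header, for illustration only.
#ifndef LASER_INTERP_H
#define LASER_INTERP_H

// Defining a function body here would put one copy of the symbol into every
// .cpp that includes this header, and the linker reports "multiple definition":
//   double interpolate(double a, double b, double t) { return a + t * (b - a); }

// Fix 1: declare here and define in exactly one .cpp file.
double interpolate(double a, double b, double t);

// Fix 2: keep the body in the header but mark it inline (or static).
inline double interpolate_inline(double a, double b, double t) {
  return a + t * (b - a);
}

#endif  // LASER_INTERP_H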

wrong acceleration estimation from IMU?

I've noticed from here:

bot_trans_apply_vec(&ins_to_body, msg->accel, body_accel);

the base acceleration seems to be computed by applying the IMU-to-body transform to the acceleration vector from the IMU, e.g.:

http://mathurl.com/hnloc2x

So if the IMU is far away from the base, there would be a big bias in the base acceleration vector.

Shouldn't it instead be:
(rotated IMU acceleration) + (translation of the transform) x (rotational acceleration) - (rotation rate) x (rotation rate x translation)? E.g.:

http://mathurl.com/zljtlcb

Am I missing something?

I've taken the equations from here (Equations 25-26):
https://www.astro.rug.nl/software/kapteyn/_downloads/attitude.pdf
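A hypothetical Eigen sketch of the lever-arm compensation described in this issue. The names and frame conventions are assumptions rather than pronto's actual API: r is taken as the IMU position expressed in the base frame, R rotates IMU-frame vectors into the base frame, and omega / alpha are the angular rate and angular acceleration of the base (alpha would typically have to be estimated, e.g. by differentiating the gyro signal).

#include <Eigen/Dense>

// Hypothetical helper; implements the formula proposed in this issue:
//   a_base = R * a_imu + r x alpha - omega x (omega x r)
Eigen::Vector3d baseAccelFromImu(const Eigen::Matrix3d& R,
                                 const Eigen::Vector3d& r,
                                 const Eigen::Vector3d& a_imu,
                                 const Eigen::Vector3d& omega,
                                 const Eigen::Vector3d& alpha) {
  return R * a_imu + r.cross(alpha) - omega.cross(omega.cross(r));
}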

Fixed bias for imu state

I would like to introduce an INS modality in which I can pre-compute the gyro and accelerometer biases of the IMU and keep them as a fixed, never-changing state.

I know this feature is also interesting for other users in this organization, but before working on it I would like to know which branch I should work from, because pronto-distro points to a very old commit of the master branch of this repository, and if I try to do a submodule update against the latest version, I get compilation errors, probably due to name and lcmtype changes.

Please let me know.

Timestep dt check

It would be nice to have a way to warn the user when the timestep_dt value differs too much from the actual data frequency.

Something like computing the average of the timestamp differences and comparing it to the configured value (see the sketch at the end of this issue).

When switching between different IMUs and drivers, a wrong integration step becomes dangerous when pronto is in the loop.

I've already tried in the past to detect the timestep_dt automatically, but it didn't go well.

Maybe when doing the bias detection we could detect the frequency as well and round to the most reasonable frequency.
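A minimal sketch of the check proposed above, assuming timestamps in microseconds; the class name and interface are hypothetical, not pronto's API:

#include <cstdint>
#include <cstdio>
#include <cmath>
#include <deque>

class TimestepChecker {
 public:
  TimestepChecker(double expected_dt_sec, double tolerance = 0.1)
      : expected_dt_(expected_dt_sec), tolerance_(tolerance) {}

  // Feed each IMU message timestamp (microseconds) as it arrives.
  void addTimestamp(std::int64_t utime) {
    if (last_utime_ >= 0) {
      diffs_.push_back((utime - last_utime_) * 1e-6);
      if (diffs_.size() > 200) diffs_.pop_front();  // keep a sliding window
    }
    last_utime_ = utime;
    check();
  }

 private:
  void check() const {
    if (diffs_.size() < 50) return;  // wait for enough samples
    double sum = 0.0;
    for (double d : diffs_) sum += d;
    const double measured_dt = sum / diffs_.size();
    if (std::fabs(measured_dt - expected_dt_) > tolerance_ * expected_dt_) {
      std::fprintf(stderr,
                   "WARNING: configured timestep_dt = %.4f s but measured average dt = %.4f s\n",
                   expected_dt_, measured_dt);
    }
  }

  double expected_dt_;
  double tolerance_;
  std::int64_t last_utime_ = -1;
  std::deque<double> diffs_;
};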

correctly accumulating lidar

Current approach:

  1. check if the bot frame is correct (pronto_frame_check_tools); if true,
  2. project the scan using laser_utils

PROBLEMS
1. contains hard-coded checks of the Multisense joints (but the scan could come from another lidar, e.g. a SICK)
2. the scan could arrive quite late

Challenges:

  • some frames change (Multisense, body-to-head on Valkyrie), some don't (body-to-head on HyQ, fixed lidar)
  • some frames are updated after the lidar is received, e.g. the state estimate on Husky

HyQ issue:

  • there is no neck, so the pronto_frame_check_tools check just fails
  • To solve the HyQ issue:
    • properly check that the frame is correct

Husky issue:

  • scan received before frame
  • To solve the Husky issue:
    • only lidar scans with a valid frame should be projected
    • try to project when the lidar is first received
    • where this fails, put the scan in a buffer and try again when the next lidar message is received; once the buffer reaches a large size, drop the scan (see the sketch below)
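A hypothetical sketch of the buffer-and-retry approach proposed above. ScanT, frameIsValid() and projectScan() are placeholders, not pronto's actual types or functions, and the drop policy (oldest first) is an assumption:

#include <cstddef>
#include <deque>
#include <memory>

template <typename ScanT>
class ScanAccumulator {
 public:
  explicit ScanAccumulator(std::size_t max_pending = 10) : max_pending_(max_pending) {}

  void onScan(std::shared_ptr<ScanT> scan) {
    pending_.push_back(std::move(scan));
    // Project everything that now has a valid frame, oldest first.
    while (!pending_.empty() && frameIsValid(*pending_.front())) {
      projectScan(*pending_.front());
      pending_.pop_front();
    }
    // If the buffer keeps growing, the frames never arrived: drop old scans.
    while (pending_.size() > max_pending_) {
      pending_.pop_front();
    }
  }

 private:
  bool frameIsValid(const ScanT& scan) {
    (void)scan;
    return true;  // placeholder: e.g. query bot frames at the scan's timestamp
  }
  void projectScan(const ScanT& scan) {
    (void)scan;   // placeholder: e.g. project with laser_utils and accumulate
  }

  std::size_t max_pending_;
  std::deque<std::shared_ptr<ScanT>> pending_;
};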

cmake file needs to explicitly locate boost libs

FYI, when compiling on the Mac I see:

-------------------------------------------
-- visualization
-------------------------------------------
[  0%] Built target lcmgen_c
[  0%] Built target lcmgen_cpp
[  0%] Built target lcmgen_java
[  0%] Built target lcmgen_python
[ 71%] Built target lcmtypes_visualization
[ 80%] Built target lcmtypes_visualization_jar
[ 85%] Built target vs_vis
[ 90%] Built target collections_renderer
Linking CXX executable ../../bin/collections_example
ld: library not found for -lboost_thread
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Incorrect selection of estimator sub-state causes unintended/incorrect updates

The EKF correction is made in RBISIndexedMeasurement::updateFilter, which calls:

  • indexedMeasurement() which calls
  • matrixMeasurementGetKandCovDelta() in rbis.cpp to calculate the Kalman Gain (K)

The state update is then used to create the correction:

  • dstate = RBIS(K * z_resid); ("Updated state estimate" on EKF Wikipedia page)

For a position measurement (for example from ICP):

  • I noticed that K and dstate are non-zero in the velocity and orientation blocks. As a result, the orientation/velocity part of dstate (the change in state) is non-zero, and hence:
  • the orientation and velocity are changed when a position measurement is made. This is not as intended.

I believe this is because the calculation of the Kalman gain in matrixMeasurementGetKandCovDelta() wrongly selects parts of the state space.

@mcamurri @simalpha : I believe this is the reason for the incorrect behaviour when integrating ICP on HyQ
@manuelli @siyuanfeng-tri : I think this could also cause the instability in orientation we are seeing for Valkyrie - but I don't want to jump to conclusions.
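For reference, the standard EKF measurement update that the dstate line above corresponds to (notation as on the cited Wikipedia page; H is the measurement/selection matrix, P the state covariance, R the measurement covariance):

K = P H^\top (H P H^\top + R)^{-1}
z_{resid} = z - H \hat{x}
dstate = K \, z_{resid}
P^{+} = (I - K H) P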
