
Comments (7)

kirbiyik commented on August 17, 2024

Running the SLAM code while recording is problematic for me, because I was only able to install the dependencies on a desktop machine and I can't move it :) I'd like to apply the transformations while capturing input. I guess I'll need to follow K4AInputThread::Start(), apply the transformations there, and save the images instead of sending them to the SLAM application. This is what you meant @puzzlepaint, right?

The RGB and depth dimensions are different. I resized them with ImageMagick just to check whether the error disappears; now it says FATL| CHECK FAILED: min_depth > 0.f (0 > 0) Keyframe min depth must be larger than 0 since the frustum checks do not work properly otherwise., which is fair. Since I will abandon this method, I will close the issue and try to make it work by changing the live input code instead.
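For intuition, the failed check corresponds to taking the minimum over the valid (non-zero) depth values of a keyframe; if resizing or conversion leaves the depth map without any valid values, that minimum is 0 and the check fires. A minimal numpy sketch, not badslam's actual code:

```python
import numpy as np

# Stand-in for a 16-bit depth image; a resizing or conversion mishap
# can easily zero it out or destroy its value range.
depth = np.zeros((480, 640), dtype=np.uint16)

valid = depth[depth > 0]  # a value of 0 encodes "no measurement"
min_depth = float(valid.min()) if valid.size else 0.0
# badslam requires min_depth > 0 for a keyframe; an all-invalid
# depth map therefore triggers the fatal CHECK quoted above.
```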

from badslam.

puzzlepaint commented on August 17, 2024

This is what you meant @puzzlepaint, right?

Yes, that would be one way to approach it. Sorry for not responding earlier, I did not notice your later edits since I don't get notification e-mails for them.


puzzlepaint commented on August 17, 2024

The stack trace unfortunately contains little information, since it seems that the binary was compiled without debug symbols. If you change CMAKE_BUILD_TYPE to RelWithDebInfo and create a new stack trace after recompiling, it should show the location of the error much more precisely.

Apart from that, what I would check is whether the program is able to correctly load the image files, for example by printing the image dimensions, or using the --show_input_images argument (in case the program gets far enough to use that).

In print_calibration(), what are the values of the radial and tangential distortion coefficients? If they are non-zero, then simply extracting the first four values into calibration.txt does not work. One would need to undistort the images first (as the live input code does; see for example K4AInputThread::transform_depth_to_color() and K4AInputThread::undistort_depth_and_rgb()), and then use the values given by cv::getOptimalNewCameraMatrix() for calibration.txt.


kirbiyik commented on August 17, 2024

I've compiled with both RelWithDebInfo and Debug; neither gives a stack trace with source-code info. Should I pass some argument when running the executable? Sorry for the inconvenience, my C++ skills are very rusty.

After the program starts running it crashes, so I can't test --show_input_images.

Regarding the calibration, I've started the live input and then got the new matrix from here. Would that approach work?


puzzlepaint commented on August 17, 2024

Oh, that stack trace probably comes from the fatal log output then, and not from gdb? In that case, you can run the program in the gdb debugger by prepending the following to your usual command line:
gdb --ex run --args
(with a space at the end)

Then, once it crashes, type bt and press return to get a better stack trace. Type q and return to quit.

Another thing to keep in mind with CUDA errors is that they may be reported asynchronously, i.e., attributed to a later call than the one that actually caused them. Since the error here is an invalid pitch argument, I would not expect that to be the case, though.

The approach for the intrinsics should work as long as you record the undistorted images (that are passed by the live input functionality to the SLAM system) and not the original images provided by the camera.


kirbiyik commented on August 17, 2024

Below is the output of bt. Can you infer something from it? Does something go wrong while moving the image array to CUDA? I actually provided the original images. I would expect bad results because of that, but could it crash the program?

I am trying to evaluate the performance of the algorithm on videos I record with an Azure Kinect. So what I do is: the Kinect exports them to an .mkv file, from which I extract the depth and RGB images. Then I build a dataset folder like the one above and open it in Dataset Playback. Do you think there is an easier way for my case? If so, I can adopt that approach as well.

#0  0x00007ffff17fbe97 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007ffff17fd801 in __GI_abort () at abort.c:79
#2  0x00007ffff4287ee9 in loguru::log_message(int, loguru::Message&, bool, bool) (stack_trace_skip=2, message=..., with_indentation=true, abort_if_fatal=true)
    at /home/omer/projects/badslam/libvis/third_party/loguru/loguru.cpp:1296
#3  0x00007ffff4288093 in loguru::log_to_everywhere(int, int, char const*, unsigned int, char const*, char const*) (stack_trace_skip=1, verbosity=-3, file=0x5555557eb548 "/home/omer/projects/badslam/./libvis/src/libvis/cuda/cuda_buffer_inl.h", line=85, prefix=0x7ffff42b9703 "", buff=0x7ffe67d57540 "Cuda Error: invalid pitch argument")
    at /home/omer/projects/badslam/libvis/third_party/loguru/loguru.cpp:1309
#4  0x00007ffff428819f in loguru::log(int, char const*, unsigned int, char const*, ...) (verbosity=-3, file=0x5555557eb548 "/home/omer/projects/badslam/./libvis/src/libvis/cuda/cuda_buffer_inl.h", line=85, format=0x7ffff42b99e0 "%s")
    at /home/omer/projects/badslam/libvis/third_party/loguru/loguru.cpp:1332
#5  0x00007ffff4288cb3 in loguru::StreamLogger::~StreamLogger() (this=0x7fffb6ae0a50, __in_chrg=<optimized out>) at /home/omer/projects/badslam/libvis/third_party/loguru/loguru.cpp:1452
#6  0x00005555555a11c6 in vis::CUDABuffer<unsigned short>::UploadAsync(CUstream_st*, vis::Image<unsigned short> const&) (this=<optimized out>, stream=<optimized out>, data=...)
    at /home/omer/projects/badslam/./libvis/src/libvis/cuda/cuda_buffer_inl.h:83
#7  0x000055555559eb74 in vis::BadSlam::PreprocessFrame(int, vis::CUDABuffer<unsigned short>**, std::shared_ptr<vis::Image<unsigned short> >*) (this=this@entry=0x7fffac655ee0, frame_index=frame_index@entry=0, final_depth_buffer=final_depth_buffer@entry=0x7fffac655fc8, final_cpu_depth_map=final_cpu_depth_map@entry=0x7fffb6ae0f80)
    at /home/omer/projects/badslam/applications/badslam/src/badslam/bad_slam.cc:665
#8  0x000055555559f2eb in vis::BadSlam::ProcessFrame(int, bool) (this=0x7fffac655ee0, frame_index=0, force_keyframe=false)
    at /home/omer/projects/badslam/applications/badslam/src/badslam/bad_slam.cc:187
#9  0x00005555555d3211 in vis::MainWindow::WorkerThreadMain() (this=0x7fffffffc010)
    at /home/omer/projects/badslam/applications/badslam/src/badslam/gui_main_window.cc:1834
#10 0x00007ffff222166f in  () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#11 0x00007ffff3ffb6db in start_thread (arg=0x7fffb6ae3700) at pthread_create.c:463
#12 0x00007ffff18de88f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95


puzzlepaint commented on August 17, 2024

It usually shouldn't crash the program, and definitely not at this point, but if you use the original images I would expect the results to be so bad that I probably wouldn't even bother trying. The images must be processed in the same way the live input does it, in order for the calibration model to fit. The easiest way to achieve this might be to insert a few lines of code into the live input code to directly save the pre-processed images during recording, but that requires running the SLAM program while recording. Otherwise, the transformations must be applied to the saved images afterwards.

Thanks for the new stack trace; it shows that the crash happens when attempting to transfer the depth image to the GPU (in bad_slam.cc:665). Given that this works for the ETH3D dataset, I would guess that something might be wrong with the depth images. Are they 16-bit PNGs with the same image dimensions as the color images?
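Both properties can be checked without badslam by reading each PNG's signature and IHDR chunk directly. A stdlib-only Python sketch (a 16-bit grayscale depth map should report bit depth 16 and color type 0; color images typically report bit depth 8 and color type 2):

```python
import struct

def png_info(path):
    """Return (width, height, bit_depth, color_type) from a PNG's IHDR chunk."""
    with open(path, "rb") as f:
        # Every PNG starts with this fixed 8-byte signature.
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG file")
        # First chunk must be IHDR: 4-byte length, 4-byte type ...
        length, chunk_type = struct.unpack(">I4s", f.read(8))
        if chunk_type != b"IHDR":
            raise ValueError("IHDR is not the first chunk")
        # ... then width, height (4 bytes each), bit depth, color type.
        width, height, bit_depth, color_type = struct.unpack(">IIBB", f.read(10))
        return width, height, bit_depth, color_type
```

Running this over the extracted depth and color folders would immediately show whether the dimensions match and whether the depth maps are really 16-bit.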

