
Comments (9)

nanodust commented on August 16, 2024

Theoretically the Mac's USB 3.0 should work, since the Kinect SDK works on my dual-boot Windows setup - but I can't speak to native Ubuntu on a MacBook. That said, I'm surprised it's fast enough to process (I thought a GPU was required for the Kinect 2 Linux driver).

Questions:

  • Re: Kinect node 1 - did you do intrinsic calibration for the Kinect 1 before the extrinsic calibration of the system? You can assume intrinsic calibration is not necessary for the Kinect 2, but for the Kinect 1 it can help.

  • Did you run calibration refinement?

The point clouds from the different Kinect models might still appear split, but refinement will vastly improve, if not eliminate, splitting in the tracking regardless of Kinect model.


jburkeucla commented on August 16, 2024


legshampoo commented on August 16, 2024

Re: Kinect node 1 - did you do intrinsic calibration for the Kinect 1 before the extrinsic calibration of the system? You can assume intrinsic calibration is not necessary for the Kinect 2, but for the Kinect 1 it can help.

I have not done intrinsic calibration on either Kinect V2. I will try that and see if it helps.

Did you run calibration refinement?

I thought that I did. Is that the step where you walk around the coverage zone and it generates the tracking image, as seen in the screenshot with the green and pink lines? Is it typical for the lines to not match up in the image that's generated?

My biggest question at the moment is: is the 'manual calibration adjustment' workaround sufficient to get accurate tracking? I just want to be sure it's not simply a cosmetic adjustment... that what I see in RViz is in fact the same point cloud that is being tracked, if that makes sense.

The reason I ask is that 'camera_poses.yaml' is copied to each node, but I am only changing the values in 'opt_calibration_results.launch', which is on the master CPU. Do I have to adjust the values in both files?


nanodust commented on August 16, 2024

I have not done intrinsic calibration on either Kinect V2. I will try that and see if it helps.

To clarify, intrinsic calibration is only recommended for the Kinect V1 (normal USB).

The newer Kinect V2 (USB 3.0) does not typically require intrinsic calibration.

If you have two Kinect V2s, you likely do not need to do intrinsic calibration on them.
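If you do want to run intrinsic calibration on a Kinect V1, the usual route is the standard ROS camera_calibration tool. A minimal sketch of the invocation is below; the checkerboard size, square size, and topic names are example values, so substitute your own.

    # Example: ROS intrinsic calibration GUI for a single RGB camera.
    # --size is interior corners (cols x rows), --square is the measured
    # square edge in metres; the topic names depend on your Kinect driver.
    rosrun camera_calibration cameracalibrator.py \
      --size 8x6 --square 0.108 \
      image:=/camera/rgb/image_raw camera:=/camera/rgb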

Indeed! I see - you did run calibration refinement. Unfortunately, you ended up with a far worse result after refinement :(
A good refinement will 'fuse' the different colors into a solid grid (see the reference image); yours has done the reverse.

Observing your initial refinement tracks, a few tips:

  • Make sure only one person is detected in the space during the refinement process (including spurious detections; detection must be well calibrated prior to refinement).
  • Never step in the same place twice from the same direction.
  • I suggest not crossing over like you're doing (the 'x' pattern); it's better to walk a grid as in the reference image. Walk at a steady, constant pace, as densely as possible for the space you're in.

'manual calibration adjustment' workaround sufficient to get accurate tracking

I've never had to edit the calibration files manually to get a system working.


nanodust commented on August 16, 2024

Looking more closely, I will say that you have a challenging space: there are a lot of dynamic objects on the perimeter (bicycles, boxes, a TV, chairs) and a lot of people in the background. Not really a problem... you can avoid detecting them with normal tuning per the docs, though it would be challenging to use background subtraction in your space.
Without background subtraction, it may help to limit the detection distance of the sensors in the configs so that you're not getting spurious tracks from objects in the background... if the 'box' in your refinement image is the perimeter of the open area, then the detections beyond that box during refinement are problematic.
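To illustrate the kind of per-sensor detection setting meant here, a minimal sketch is below; the key names are hypothetical rather than the actual open_ptrack parameters, so check the detection config on each sensor node for the real ones.

    # Hypothetical per-sensor detection config (illustrative key names only,
    # not the real open_ptrack parameters).
    detector:
      min_distance: 0.5   # metres; ignore returns closer than this
      max_distance: 4.0   # metres; drop detections beyond the open area so
                          # background people and objects don't create spurious tracks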

Also, have you tried tracking the space with just one Kinect 2? I would bet it does a fine job with just the one facing the booths. Right now your fields of view cover the same space, so you're not getting much out of the second camera.

  • Also, note that the point clouds will not fuse (any more than normal); refinement only affects the actual tracks.


legshampoo commented on August 16, 2024

Thanks for the feedback - wanted to follow up on this issue.

The setup is now in a dedicated, empty space without all the complications from earlier (bikes, people, etc). However, the calibration issues remain. When I calibrate, the Kinects are consistently about 1 meter off from each other. The calibration refinement process still yields the same results (nothing matches, and it actually makes things much worse).

I have been working around this by manually adjusting the Kinects' x, y, z coordinates in opt_calibration_results.launch until the point clouds visually match up.
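For context, the x, y, z values being adjusted amount to a rigid pose per sensor. As an illustration only (not necessarily the exact format of opt_calibration_results.launch), a tf static_transform_publisher entry expressing such a pose would look like the sketch below; the frame names and numbers are placeholders.

    <!-- Illustrative only: frame names and values are placeholders. -->
    <!-- static_transform_publisher args: x y z yaw pitch roll parent_frame child_frame period_ms -->
    <node pkg="tf" type="static_transform_publisher" name="kinect2_head_broadcaster"
          args="1.85 -0.42 2.60 1.57 0.0 -0.35 /world /kinect2_head_link 100" />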

The data is being sent to a Node.js app and visualized in a simple HTML canvas rendering.

The issue I'm having now is that the tracking positions (x, y, z) have a 'jitter'. You can see some of the jitter in RViz, and it becomes very noticeable once amplified to the scale of our project in the Node app. It seems as if the Kinects are fighting with each other to determine the 'true' centroid, so the centroid is constantly shifting slightly. For example, when someone is standing perfectly still, their centroid has an erratic jitter.

I have tried adjusting the settings in moving_average_filter.yaml. Increasing the window size helps to reduce the jitter, but it creates a delay that is unacceptable for our requirements.
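For reference, a minimal sketch of the kind of setting being adjusted; the key name is illustrative and may not match the actual moving_average_filter.yaml.

    # Illustrative sketch only; the real key name in moving_average_filter.yaml
    # may differ. Larger windows smooth the jitter but add latency.
    moving_average_filter:
      window_size: 5   # number of past track positions averaged per output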

So I'm still wondering why my calibration is off (but consistently 'off' by the same amount), and why calibration refinement has the opposite effect from what it should.

Also, is this 'jitter' to be expected? If the calibration were working correctly, would it go away?

Thanks again for any insight


jburkeucla commented on August 16, 2024

Yes, it seems calibration is not working correctly. Standard issues are non-rigid checkerboards, lots of ambient or changing light, and odd floor reflections causing Kinect noise. But these don't seem apparent in your photos. Did you update the calibration file with the physical measurements of your checkerboard?
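For illustration, this refers to the kind of checkerboard block shown below; the key names are illustrative rather than the exact open_ptrack ones, but the point is that the corner counts and the measured square size must match the printed board.

    # Illustrative only; key names may differ from the actual calibration config.
    checkerboard:
      rows: 6            # inner corners along one side
      cols: 5            # inner corners along the other side
      cell_width: 0.12   # measured square size in metres
      cell_height: 0.12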


legshampoo commented on August 16, 2024

Yeah, I printed the checkerboard at scale on rigid foam board and measured it. As far as I can tell the checkerboard dimensions are correct (I'm using the defaults), but I will check that again. The new space is controlled: an empty room, blacked-out windows, a wood floor. Nothing I can think of that would cause noise.

If it helps I can provide updated details/photos/screenshots of the setup (the images above were from an earlier iteration).

Are you saying that when the calibration is working the jitter is not present?


jburkeucla commented on August 16, 2024

You shouldn't see much jitter under normal circumstances. The fact that calibration refinement is not helping is a signal that something is odd. Updated screenshots would help; I can share them with the developers and see what they think. Also send the output of ntpq -p on all nodes, if possible.
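For clarity, that means running the standard NTP peers query on the master and on every sensor node and comparing the results, since clock offset between machines can make multi-sensor tracks disagree.

    # Run on the master and on each sensor node; compare the 'offset' and
    # 'jitter' columns (in milliseconds) across machines. They should be
    # small and consistent if the nodes' clocks are synchronized.
    ntpq -p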

