luxonis / depthai

DepthAI Python API utilities, examples, and tutorials.

Home Page: https://docs.luxonis.com

License: MIT License

Python 92.94% Shell 0.81% CSS 0.15% PowerShell 0.41% Inno Setup 0.75% QML 4.90% Dockerfile 0.04%
Topics: ai, ml, embedded, cv, spatial, performant

depthai's People

Contributors

ahmadchalhoub, alex-luxonis, azoviktor, daniilpastukhov, dhruvsheth-ai, erol444, geektrove, iamsg95, itsderek23, jakaskerl, jonngai, luxonis-brandon, luxonis-brian, matictonin, moratom, njezersek, pamolloy, peskaf, petrnovota, philnelson, saching13, szabolcsgergely, taras-luxonis, tersekmatija, themarpe, vandavv, yuriilaba, yuriilaba-luxonis, zigimigi, zrezke


depthai's Issues

Hang/Stall if No Color Camera Present

When debugging #40, @PTS93 and I circled back to the hypothesis that the lack of a color camera in his setup (see here) was what was causing his setup to hang even when used with USB2.

(When using USB3, the USB3-with-Pi hangup noted in that issue was the problem.)

So to confirm that the hang/stall does happen when no color camera is connected, I temporarily disconnected the camera on a BW1097 (which is only USB2) and confirmed that this same hang does occur.

To do this I just popped the MIPI connector for the color camera off the back of the board, as below:
(image attachment not shown)

After this I booted the BW1097 (RPi Compute Module Edition) and ran the calibration script.

And here's a video of the lockup/hangup:

https://photos.app.goo.gl/qoRNxocqBPUsvxHi9

@PTS93 - is this what you are seeing?

Thanks,
Brandon

Implement NN-type or NN-class in JSON for No-code running of various supported neural output formats

So as it stands now one can try out their own object detectors without code by using command line options like here. And to use their own models, they just find the folder structure w/ JSON, copy it over and enter any pertinent information according to their model.

So if they've done a custom face-detection model, they literally just use the same JSON and replace the .bin.

For image classifiers (where the output is a tensor of class scores for the image), we do not have a no-code version. For networks with different output formats, different decode_nn and show_nn functions have to be implemented.
For an image classifier, for example:

if args['cnn_model'] == 'age-gender-recognition-retail-0013':
    decode_nn = decode_age_gender_recognition
    show_nn = show_age_gender_recognition
    calc_dist_to_bb = False

So one has to write their own parsing, as seen in emotions-recognition-retail-0003, to parse and display the results (see resources here). You could extend your own functionality in depthai.py; see the decode_emotion_recognition portion here and show_emotion_recognition here.

So the idea is to add another entry in the JSON to mark the generic type, say NN_type or NN_class, and select the decode_nn/show_nn methods based on that.

That way we don't have to hard-code based on names, and any network output structure we support (e.g. object detection, image classification, facial landmarks, etc.) can be used cleanly with no code by simply finding the equivalent folder, making sure the format follows, and running the model.
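A sketch of how that JSON-driven selection could work. The key name (NN_family) and the decoder functions below are illustrative, not existing depthai API:

```python
# Sketch of the proposed JSON-driven selection. The key name ("NN_family")
# and the decoder functions here are illustrative, not existing depthai API.
import json

def decode_object_detection(raw):
    # e.g. mobilenet-ssd style output: list of bounding boxes
    return {"kind": "detection", "raw": raw}

def decode_classification(raw):
    # e.g. age-gender / emotions style output: tensor of class scores
    return {"kind": "classification", "raw": raw}

DECODERS = {
    "object_detection": decode_object_detection,
    "classification": decode_classification,
}

def load_decoder(cfg_json):
    """Pick decode_nn from an 'NN_family' entry in the model's JSON."""
    cfg = json.loads(cfg_json)
    family = cfg.get("NN_family", "object_detection")
    try:
        return DECODERS[family]
    except KeyError:
        raise ValueError("Unsupported NN_family: %r" % family)

# A model JSON carrying the proposed extra entry:
decode_nn = load_decoder('{"NN_family": "classification"}')
```

With this, swapping in a custom classifier means editing one JSON field rather than adding another name-based branch to depthai.py.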

Segmentation Fault with calibrate.py on BW1097

When running calibrate.py on the BW1097, the system segfaults after stepping through the stereo pairs shown for visual inspection.

This seems to be similar to the XLink issue that @VanDavv was seeing when running DepthAI on Raspberry Pi:

python3 calibrate.py -s 2.2 -ih
Using Custom Calibration File: depthai.calib
Using Arguments= {'count': 1, 'square_size_cm': 2.2, 'image_op': 'modify', 'mode': ['capture', 'process'], 'config_overwrite': None, 'board': None, 'field_of_view': 71.86, 'baseline': 9.0, 'swap_lr': True, 'dev_debug': None, 'invert_v': False, 'invert_h': True}
Starting image capture. Press the [ESC] key to abort.
Will take 13 total images, 1 per each polygon.
XLink initialized.
Sending device firmware "cmd_file": /home/pi/Desktop/depthai/depthai.cmd
Successfully connected to device.
Loading config file
Attempting to open stream config_d2h
Successfully opened stream config_d2h with ID #0!
Closing stream config_d2h: ...
Closing stream config_d2h: DONE.
WARNING: Version (dev) does not match (c722ebde932d6627463321816a5654b5be6069e1 & b8cebab238c71263c80087051a70c8165f95d374)
EEPROM data: valid (v3)
watchdog started 1000
  Board name     : BW1097
  Board rev      : R1M1E2
  HFOV L/R       : 71.86 deg
  HFOV RGB       : 68.7938 deg
  L-R   distance : 9 cm
  L-RGB distance : 2 cm
  L/R swapped    : yes
  L/R crop region: center
  Calibration homography:
    1.015737,    0.005940,  -21.193869,
   -0.002906,    1.014818,  -16.099894,
    0.000003,    0.000003,    1.000000,
CNN configurations read: /home/pi/Desktop/depthai/resources/nn/mobilenet-ssd/mobilenet-ssd.json
config_h2d json:
{"_board":{"_homography_right_to_left":[1.0165388584136963,0.0061319186352193356,-20.880294799804688,-0.0021309051662683487,1.0140399932861328,-16.308542251586914,5.620724095933838e-06,3.0937208066461608e-06,1.0]},"_load_inBlob":true,"_pipeline":{"_streams":[{"max_fps":10.0,"name":"left"},{"max_fps":10.0,"name":"right"},{"name":"meta_d2h"}]},"ai":{"calc_dist_to_bb":false},"board":{"clear-eeprom":false,"left_fov_deg":71.86000061035156,"left_to_rgb_distance_m":0.0,"left_to_right_distance_m":0.09000000357627869,"name":"","override-eeprom":true,"revision":"","rgb_fov_deg":69.0,"stereo_center_crop":true,"store-to-eeprom":false,"swap-left-and-right-cameras":true},"depth":{"padding_factor":0.30000001192092896}}
Attempting to open stream config_h2d
Successfully opened stream config_h2d with ID #1!
Writing 1000 bytes to config_h2d
!!! XLink write successful: config_h2d (1000)
Closing stream config_h2d: ...
Closing stream config_h2d: DONE.
Read: 14514560
Attempting to open stream inBlob
Successfully opened stream inBlob with ID #1!
Writing 14514560 bytes to inBlob
!!! XLink write successful: inBlob (14514560)
Closing stream inBlob: ...
Closing stream inBlob: DONE.
depthai: done sending Blob file /home/pi/Desktop/depthai/resources/nn/mobilenet-ssd/mobilenet-ssd.blob
Attempting to open stream outBlob
Successfully opened stream outBlob with ID #0!
Closing stream outBlob: ...
Closing stream outBlob: DONE.
CNN input width: 300
CNN input height: 300
CNN input num channels: 3
Host stream start:meta_d2h
Opening stream for read: meta_d2h
Attempting to open stream meta_d2h
Successfully opened stream meta_d2h with ID #1!
Starting thread for stream: meta_d2h
Started thread for stream: meta_d2h
Host stream start:left
Opening stream for read: left
Attempting to open stream left
Successfully opened stream left with ID #2!
Starting thread for stream: left
Host stream start:right
Opening stream for read: right
Attempting to open stream right
Started thread for stream: left
Successfully opened stream right with ID #0!
Starting thread for stream: right
Started thread for stream: right
depthai: INIT OK!
py: Saved image as: left_p0_0.png
py: Saved image as: right_p0_0.png
py: Saved image as: left_p1_1.png
py: Saved image as: right_p1_1.png
py: Saved image as: left_p2_2.png
py: Saved image as: right_p2_2.png
py: Saved image as: left_p3_3.png
py: Saved image as: right_p3_3.png
py: Saved image as: left_p4_4.png
py: Saved image as: right_p4_4.png
py: Saved image as: left_p5_5.png
py: Saved image as: right_p5_5.png
py: Saved image as: left_p6_6.png
py: Saved image as: right_p6_6.png
py: Saved image as: left_p7_7.png
py: Saved image as: right_p7_7.png
py: Saved image as: left_p8_8.png
py: Saved image as: right_p8_8.png
py: Saved image as: left_p9_9.png
py: Saved image as: right_p9_9.png
py: Saved image as: left_p10_10.png
py: Saved image as: right_p10_10.png
py: Saved image as: left_p11_11.png
py: Saved image as: right_p11_11.png
py: Saved image as: left_p12_12.png
py: Saved image as: right_p12_12.png
Starting image processing

Attempting to read images for left camera from dir: dataset/left/
Attempting to read images for right camera from dir: dataset/right/
Finding chessboard corners for left_p0_0.png and right_p0_0.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p10_10.png and right_p10_10.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p11_11.png and right_p11_11.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p12_12.png and right_p12_12.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p1_1.png and right_p1_1.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p2_2.png and right_p2_2.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p3_3.png and right_p3_3.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p4_4.png and right_p4_4.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p5_5.png and right_p5_5.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p6_6.png and right_p6_6.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p7_7.png and right_p7_7.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p8_8.png and right_p8_8.png...
	[OK]. Took 0 seconds.
Finding chessboard corners for left_p9_9.png and right_p9_9.png...
	[OK]. Took 0 seconds.
13 of 13 images being used for calibration
[OK] Calibration successful w/ RMS error=0.20734621552364482
Rectifying Homography...
[[ 1.0180424e+00  5.1960642e-03 -2.1386580e+01]
 [-1.8764788e-03  1.0149957e+00 -1.6575731e+01]
 [ 5.9488111e-06  1.8393540e-06  1.0000000e+00]]
Calibration file written to ./resources/depthai.calib.
	Took 15 seconds to run image processing.

Rectifying dataset for visual inspection
Using Homography from file, with values: 
[[ 1.0180424e+00  5.1960642e-03 -2.1386580e+01]
 [-1.8764788e-03  1.0149957e+00 -1.6575731e+01]
 [ 5.9488111e-06  1.8393540e-06  1.0000000e+00]]
Average Epipolar Error: 2.9739750158413183
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
Displaying Stereo Pair for visual inspection. Press the [ESC] key to exit.
py: DONE.
Stopping threads: ...
Closing stream left: ...
Closing stream left: DONE.
Thread for left finished.
Closing stream right: ...
Closing stream right: DONE.
Thread for right finished.
E: [global] [    344039] [python3] XLinkReadData:165	addEventWithPerf(&event, &opTime) failed with error: 3
Device get data failed: 3
Closing stream meta_d2h: ...
Closing stream meta_d2h: DONE.
Thread for meta_d2h finished.
E: [global] [    344040] [Scheduler00Thr] dispatcherEventSend:53	Write failed (header) (err -1) | event XLINK_CLOSE_STREAM_REQ

E: [xLink] [    344041] [Scheduler00Thr] sendEvents:1036	Event sending failed
Stopping threads: DONE.
Closing all observer streams: ...
Closing all observer streams: DONE.
Segmentation fault

Remove default.calib file from version control

The resources/default.calib file is currently under version control. I believe this is problematic: the file is overwritten when calibration is performed, so it's easy to git push a board-specific calib file by mistake.

Possible alternative:

  • Change default.calib to default.calib.example. Keep this in version control.
  • Ignore all files with the .calib extension.
  • When running our example scripts, copy default.calib.example => default.calib if it doesn't exist.

/cc @Luxonis-Brandon @Luxonis-Brian
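The copy-if-missing step could be sketched as follows (paths per the proposal above):

```python
# Minimal sketch: keep resources/default.calib.example in version control
# and materialize resources/default.calib on first run, never clobbering a
# board-specific calibration that already exists.
import shutil
from pathlib import Path

def ensure_default_calib(resources_dir="resources"):
    example = Path(resources_dir) / "default.calib.example"
    target = Path(resources_dir) / "default.calib"
    if not target.exists():
        shutil.copyfile(example, target)
    return target
```

Each example script would call this before loading the calibration, so a fresh checkout still works out of the box.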

Add `--config_overwrite` option to test.py

The test.py script doesn't have argument parsing like calibrate.py, which means adjusting the config must be done in code. This presents a couple of problems:

  • During development, it's easy to forget to switch config values back to sensible defaults (see #22).
  • The 1098obc has a distance of 7.5 cm between the stereo cameras, which means users currently have to modify the test.py code to set the correct distance.
  • If you want to verify the stereo calibration, you have to manually modify the code to adjust the streams.

I suggest adding a --config_overwrite option to test.py that behaves the same as calibrate.py for consistency.
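A sketch of what that could look like in test.py. The flag semantics mirror calibrate.py; the merge of the parsed JSON into the pipeline config is omitted, and the short option name is illustrative:

```python
# Sketch: a --config_overwrite flag for test.py, accepting a JSON string
# that would be merged over the default pipeline config (merge omitted).
import argparse
import json

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("-co", "--config_overwrite", default=None,
                        help="JSON-formatted pipeline config to merge over defaults")
    args = parser.parse_args(argv)
    if args.config_overwrite is not None:
        args.config_overwrite = json.loads(args.config_overwrite)
    return args
```

Usage would then look like: python3 test.py -co '{"board": {"left_to_right_distance_m": 0.075}}'.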

test.py hangs after a few frames

I'm now seeing issues similar to #32 with example_run_emotion_recognition.py. The script runs for a few frames, then hangs.

I'm fairly certain both of these scripts worked at the time of the MVP release.

example_run_emotion_recognition.py - NameError: name 'detections' is not defined

The example_run_emotion_recognition.py script appears to be broken with the latest update. I'm seeing the following error when running the script:

Traceback (most recent call last):
  File "example_run_emotion_recognition.py", line 75, in <module>
    emotion = e_states[np.argmax(detections)]
NameError: name 'detections' is not defined
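A defensive sketch of the failing lookup. The helper is hypothetical; in the script, detections is first assigned inside the NN-packet loop, so the display code can run before any packet arrives, and the real fix may simply be initializing it before the loop:

```python
# Defensive sketch: only index into the NN output once it exists, so the
# display code degrades gracefully before the first NN packet arrives.
e_states = ['neutral', 'happy', 'sad', 'surprise', 'anger']  # illustrative labels

def pick_emotion(detections):
    """Return the top emotion label, or None if there is no NN output yet."""
    if detections is None or len(detections) == 0:
        return None
    top = max(range(len(detections)), key=lambda i: detections[i])
    return e_states[top]
```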

Noisy calibration logs

When running the calibration script, DepthAI generates a lot of log messages that don't impact the user.

In this example, I'm running calibration for a single polygon position. This generates 79 log lines (to be clear, it's not 79 per polygon, as a number of these are emitted at start/stop). Error messages that the user needs to see can easily get lost. For example, in the output, the error message on line 55 is very important:

[ERROR] Missing valid image sets for 13 polygons. Re-run calibration with the
'-p 0 1 2 3 4 5 6 7 8 9 10 11 12' argument to re-capture images for these polygons.

...but an additional 24 log lines are emitted by DepthAI as the pipeline is shut down, making the error message hard to find.

Is there a way to adjust the log level of the DepthAI pipeline?
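In the meantime, one host-side mitigation (a sketch; the patterns are illustrative) is to capture the pipeline output and surface only the lines that matter:

```python
# Sketch: filter captured pipeline log lines down to the ones a user must
# act on. The patterns are illustrative, based on the messages seen above.
import re

IMPORTANT = re.compile(r"\[ERROR\]|\[WARNING\]|Segmentation fault")

def filter_log(lines):
    """Keep only log lines a user needs to act on."""
    return [ln for ln in lines if IMPORTANT.search(ln)]
```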

/cc @Luxonis-Brandon

Cloned repo is large (433MB)

.git/objects/pack (295 MB)

The largest chunk of this is from a large file within .git/objects/pack (295 MB):

-r--r--r--    1 dlite  staff   295M Feb  6 06:19 pack-0748331a0ae468ca5a9b5bee41418f549ba5da00.pack
 git verify-pack -v pack-0748331a0ae468ca5a9b5bee41418f549ba5da00.pack \
> | sort -k 3 -n \
> | tail -10
e15384968d47a2e872030a79ea0cb7dced9f55e6 blob   10699686 1693489 176982870
c25229a9a5baacc4d329b19607f57f68d4f5e1e7 blob   11669167 11239986 156585534
34b3a22cea76771375c88d2759aa397b19e41d72 blob   12001915 1873891 173316462
c228dda8b728160ef64ecfc93a2917105e611734 blob   13721024 12389225 264320842
92aa84d7737c0e7f2f18948efb6366b6fa3115dc blob   14480768 10688751 80481154
a27dd3f3b30dccfa84381d2e78db96ec31854af5 blob   14485504 10693136 100915888
453306b0fd84f7100aebe6faada019cbebddb6bb blob   17373318 2765131 167825520
a0b91c133e07bce25fe5391da0d3a3d7f3a7157e blob   17416012 2760363 151965320
a379aad59c03317f33353a71a866e51d1cfaae2b blob   23778880 21935094 276710132
0f249d5573b25944b99aa9a84176928a1b35f696 blob   42642112 39501577 179054474

These appear to be from the mechanical files which have been removed from the repo:

 git rev-list --objects --all | grep e15384968d47a2e872030a79ea0cb7dced9f55e6
e15384968d47a2e872030a79ea0cb7dced9f55e6 Mechanical_Models/BW1099_R3M2E3_KTHSNF.step
Dereks-MacBook-Pro:pack dlite$ git rev-list --objects --all | grep c25229a9a5baacc4d329b19607f57f68d4f5e1e7
c25229a9a5baacc4d329b19607f57f68d4f5e1e7 Mechanical_Models/BW1097_R2M2E2.zip

See this comment from another repo with a similar issue on how to clean up this disk space.
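A sketch of that cleanup using stock git. This uses the older built-in filter-branch (git filter-repo and BFG are the more usual tools today) and assumes the stripped path is unneeded in every revision; rewriting history like this requires everyone to re-clone:

```shell
# Sketch: purge a deleted directory (e.g. Mechanical_Models) from all of
# history so fresh clones shrink. Destructive: rewrites every commit.
purge_path_from_history() {
  # Rewrite all refs, dropping the given path from every commit.
  FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --force --index-filter \
    "git rm -r --cached --ignore-unmatch $1" --prune-empty -- --all
  # Drop the backup refs and repack so the space is actually reclaimed.
  git for-each-ref --format='delete %(refname)' refs/original | git update-ref --stdin
  git reflog expire --expire=now --all
  git gc --prune=now --aggressive
}
```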

nn folder (66 MB)

du -h nn/
 52M	nn//object_detection_4shave
4.8M	nn//object_recognition_4shave/emotion_recognition
9.5M	nn//object_recognition_4shave/landmarks
 14M	nn//object_recognition_4shave
 66M	nn/

/cc @Luxonis-Brandon

Calibration.py writes to same image directories without notifying user

When running Calibration.py back-to-back from one board to the next, the default script writes the images to the same directory, so you end up with images from different boards, and the calibration therefore fails.

We should at the least warn the user of this, and ideally also create a new folder path for the other board and work out of that.
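A sketch of that warn-and-use-a-fresh-folder behaviour (the directory naming scheme is illustrative):

```python
# Sketch: never silently reuse a capture directory that already holds
# images from a previous board; warn and move to dataset_1, dataset_2, ...
from pathlib import Path

def dataset_dir(base="dataset"):
    """Return an empty capture directory, warning if old images exist."""
    base = Path(base)
    if not base.exists() or not any(base.rglob("*.png")):
        base.mkdir(parents=True, exist_ok=True)
        return base
    n = 1
    while Path(f"{base}_{n}").exists():
        n += 1
    fresh = Path(f"{base}_{n}")
    print(f"WARNING: {base} already holds images; capturing into {fresh}")
    fresh.mkdir(parents=True)
    return fresh
```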

Version checking between script, .so, and .cmd

The why of this is for folks (like me) who build DepthAI from source on their machines because of customized environments.

When building from source, it's necessary to rebuild when updating to the latest from master. I forgot to do so, and then forgot that I hadn't done so, and it resulted in needless debugging of what was going wrong, when the actual issue was that the .so and the .cmd were communicating using differing formats because they were built from differing codebases.

Specifically, the .py and the .cmd were up-to-date with Master, but the .so was not - because I forgot to rebuild it.

The .cmd and .so already have versioning information in them, so the 'what' of this improvement is:

  • Add a version info into the script, and check if it matches with .so and .cmd to avoid this kind of error.

The error produced when the .so did not match the .cmd and .py is below. Note that it will change to really any permutation depending on how they don't match and what was changed between builds, so having the check in the script is important to avoid confusing false-alarm debugging (which is what I caused in this case).

(py38) DESKTOP-OGSI4CI:depthai leeroy$ git pull
remote: Enumerating objects: 35, done.
remote: Counting objects: 100% (35/35), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 35 (delta 17), reused 25 (delta 13), pack-reused 0
Unpacking objects: 100% (35/35), done.
From https://github.com/luxonis/depthai
   773f471..cad13e5  master     -> origin/master
Fetching submodule depthai-api
From https://github.com/luxonis/depthai-api
   cb09505..b2e658c  master     -> origin/master
 * [new branch]      refactor   -> origin/refactor
Updating 773f471..cad13e5
Fast-forward
 calibrate.py                               |   5 ++++-
 calibrate_and_test.py                      |   2 ++
 depthai-api                                |   2 +-
 depthai.cmd                                | Bin 5735224 -> 5735872 bytes
 depthai.cpython-36m-x86_64-linux-gnu.so    | Bin 4922816 -> 4929144 bytes
 depthai.cpython-37m-arm-linux-gnueabihf.so | Bin 4496568 -> 4523272 bytes
 depthai.py                                 |  28 ++++++++++++++++++----------
 depthai_supervisor.py                      |   2 ++
 depthai_usb2.cmd                           | Bin 5734808 -> 5735456 bytes
 integration_test.py                        |   4 +++-
 10 files changed, 30 insertions(+), 13 deletions(-)
 mode change 100644 => 100755 calibrate.py
 mode change 100644 => 100755 calibrate_and_test.py
 mode change 100644 => 100755 depthai.py
 mode change 100644 => 100755 depthai_supervisor.py
 mode change 100644 => 100755 integration_test.py
(py38) DESKTOP-OGSI4CI:depthai leeroy$ python3 test.py -s metaout previewout depth_sipp,15 -bb -ff
python3 depthai.py '-s' 'metaout' 'previewout' 'depth_sipp,15' '-bb' '-ff' 
Using Custom Calibration File: depthai.calib
Using Arguments= {'config_overwrite': None, 'board': None, 'field_of_view': 71.86, 'rgb_field_of_view': 68.7938, 'baseline': 9.0, 'rgb_baseline': 2.0, 'swap_lr': True, 'store_eeprom': False, 'clear_eeprom': False, 'override_eeprom': False, 'device_id': '', 'dev_debug': None, 'force_usb2': None, 'cnn_model': 'mobilenet-ssd', 'disable_depth': False, 'draw_bb_depth': True, 'full_fov_nn': True, 'streams': [{'name': 'metaout'}, {'name': 'previewout'}, {'name': 'depth_sipp', 'max_fps': 15.0}], 'video': None}
depthai.__version__ == 0.0.10a
depthai.__dev_version__ == cb09505f04b16734af99b58cbe763f7afcf86faf
XLink initialized.
Sending device firmware "cmd_file": /Users/leeroy/depthai/depthai.cmd
Successfully connected to device.
Loading config file
Attempting to open stream config_d2h
watchdog started 3000
Successfully opened stream config_d2h with ID #0!
Closing stream config_d2h: ...
Closing stream config_d2h: DONE.
WARNING: Version (dev) does not match (30295b558351cd030408e12a220cdd55b5fb450e & cb09505f04b16734af99b58cbe763f7afcf86faf)
EEPROM data: valid (v2)
  Board name     : BW1098OBC
  Board rev      : R0M0E0
  HFOV L/R       : 71.86 deg
  HFOV RGB       : 68.7938 deg
  L-R   distance : 7.5 cm
  L-RGB distance : 3.75 cm
  L/R swapped    : yes
  L/R crop region: top
  Calibration homography:
    1.002324,   -0.004016,   -0.552212,
    0.001249,    0.993829,   -1.710247,
    0.000008,   -0.000010,    1.000000,
Available streams: ['meta_d2h', 'left', 'right', 'disparity', 'depth_sipp', 'metaout', 'previewout', 'jpegout', 'video']
CNN configurations read: /Users/leeroy/depthai/resources/nn/mobilenet-ssd/mobilenet-ssd_depth.json
config_h2d json:
{"_board":{"_homography_right_to_left":[0.9997568726539612,-0.002453562105074525,0.18778356909751892,-0.00013814299018122256,0.9944143295288086,-2.22407865524292,5.636975402012467e-06,-7.338615432672668e-06,1.0]},"_load_inBlob":true,"_pipeline":{"_streams":[{"name":"metaout"},{"name":"previewout"},{"data_type":"uint16","max_fps":15.0,"name":"depth_sipp"}]},"ai":{"calc_dist_to_bb":true,"keep_aspect_ratio":false},"board":{"clear-eeprom":false,"left_fov_deg":71.86000061035156,"left_to_rgb_distance_m":0.019999999552965164,"left_to_right_distance_m":0.09000000357627869,"name":"","override-eeprom":false,"revision":"","rgb_fov_deg":68.7938003540039,"stereo_center_crop":true,"store-to-eeprom":false,"swap-left-and-right-cameras":true},"depth":{"depth_limit_mm":10000,"padding_factor":0.30000001192092896}}
Attempting to open stream config_h2d
Successfully opened stream config_h2d with ID #1!
Writing 1000 bytes to config_h2d
!!! XLink write successful: config_h2d (1000)
Closing stream config_h2d: ...
Closing stream config_h2d: DONE.
Creating observer stream host_capture: ...
Attempting to open stream host_capture
Successfully opened stream host_capture with ID #0!
Creating observer stream host_capture: DONE.
Read: 14514560
Attempting to open stream inBlob
Successfully opened stream inBlob with ID #1!
Writing 14514560 bytes to inBlob
!!! XLink write successful: inBlob (14514560)
Closing stream inBlob: ...
Closing stream inBlob: DONE.
depthai: done sending Blob file /Users/leeroy/depthai/resources/nn/mobilenet-ssd/mobilenet-ssd.blob
Attempting to open stream outBlob
Successfully opened stream outBlob with ID #2!
Closing stream outBlob: ...
Closing stream outBlob: DONE.
CNN input width: 300
CNN input height: 300
CNN input num channels: 3
CNN to depth bounding-box mapping: start(68, 38), max_size(1144, 643)
Host stream start:depth_sipp
Opening stream for read: depth_sipp
Attempting to open stream depth_sipp
E: [global] [    950439] [] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_CREATE_STREAM_REQ

E: [xLink] [    950439] [] sendEvents:1036	Event sending failed
E: [global] [    950439] [] XLinkOpenStream:95	Cannot find stream id by the "depth_sipp" name
E: [global] [    950439] [] XLinkOpenStream:96	Max streamId reached!
Failed to open stream depth_sipp ! Retrying ...
E: [global] [    950493] [] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_CREATE_STREAM_REQ

E: [xLink] [    950493] [] sendEvents:1036	Event sending failed
E: [global] [    950493] [] XLinkOpenStream:95	Cannot find stream id by the "depth_sipp" name
E: [global] [    950493] [] XLinkOpenStream:96	Max streamId reached!
Failed to open stream depth_sipp ! Retrying ...
E: [global] [    950548] [] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_CREATE_STREAM_REQ

E: [xLink] [    950548] [] sendEvents:1036	Event sending failed
E: [global] [    950548] [] XLinkOpenStream:95	Cannot find stream id by the "depth_sipp" name
E: [global] [    950548] [] XLinkOpenStream:96	Max streamId reached!
Failed to open stream depth_sipp ! Retrying ...
E: [global] [    950599] [] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_CREATE_STREAM_REQ

E: [xLink] [    950599] [] sendEvents:1036	Event sending failed
E: [global] [    950599] [] XLinkOpenStream:95	Cannot find stream id by the "depth_sipp" name
E: [global] [    950599] [] XLinkOpenStream:96	Max streamId reached!
Failed to open stream depth_sipp ! Retrying ...
E: [global] [    950649] [] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_CREATE_STREAM_REQ

E: [xLink] [    950649] [] sendEvents:1036	Event sending failed
E: [global] [    950649] [] XLinkOpenStream:95	Cannot find stream id by the "depth_sipp" name
E: [global] [    950649] [] XLinkOpenStream:96	Max streamId reached!
Failed to open stream depth_sipp ! Retrying ...
Stream not opened: depth_sipp
depthai: depth_sipp error;
depthai: INIT OK!
watchdog triggered 
Stopping threads: ...
Stopping threads: DONE 0.000s.
Closing all observer streams: ...
Closing stream host_capture: ...
E: [global] [    952837] [] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_CLOSE_STREAM_REQ

E: [xLink] [    952837] [] sendEvents:1036	Event sending failed
Closing stream host_capture: DONE.
Closing all observer streams: DONE.
Reseting device: 0.
E: [global] [    952837] [] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_RESET_REQ

E: [xLink] [    952837] [] sendEvents:1036	Event sending failed
E: [global] [    952837] [] XLinkResetRemote:243	can't wait dispatcherClosedSem

Reseting: DONE.
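A sketch of the startup check proposed above. The way the .so and .cmd report their versions is assumed here (the actual hooks would need to be exported by those artifacts); the expected-version string is illustrative:

```python
# Sketch: the script hard-codes the version it was written against and
# refuses to run if the loaded .so or the device .cmd report something
# else. Version sources are illustrative.
EXPECTED_VERSION = "0.0.10a"

def check_versions(so_version, cmd_version, expected=EXPECTED_VERSION):
    """Raise with a rebuild hint if any component version disagrees."""
    mismatched = {name: v for name, v in
                  [(".so", so_version), (".cmd", cmd_version)]
                  if v != expected}
    if mismatched:
        details = ", ".join(f"{k}={v}" for k, v in mismatched.items())
        raise RuntimeError(
            f"Version mismatch: script expects {expected} but got {details}. "
            "Rebuild depthai-api from source to update the .so/.cmd.")
```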

Crash when moving out of frame

@Luxonis-Brandon reported that the reduce_rgb_latency branch crashed when he moved out of the frame on the BW1097 when running python3 test.py.

Host exception:

Traceback (most recent call last):
  File "test.py", line 120, in <module>
    y1 = int(e[0]['top'] * img_h)
ValueError: cannot convert float NaN to integer

My guess is e[0]['top'] is NaN. This didn't happen every time he left the frame.
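A guard for the failing conversion could look like this (hypothetical helper; the packet layout follows the traceback above):

```python
# Sketch: skip drawing a bounding box whose coordinates are NaN instead of
# crashing on int(). The entry layout mirrors e[0]['top'] from test.py.
import math

def bbox_top_px(entry, img_h):
    """Return the top edge in pixels, or None if the NN emitted NaN."""
    top = entry['top']
    if top is None or math.isnan(top):
        return None
    return int(top * img_h)
```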

Reduce significant figures in X, Y, Z outputs

The precision isn't real, so we should eliminate it. I think it will also look and feel like it's less noisy as the user won't see the numbers changing as much, even though the apparent precision is lower.

This could be set as a function of distance from the camera that follows the rise in stereo error at increased Z, or it could be set to a fixed value, like 10cm. Here's an example of RMS error increasing with distance from some Intel RealSense devices.

Suggestion:

  • <2m use cm precision
  • 2m to 5m use dm precision
  • >5m use half-meter precision
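The suggested tiers could be sketched as follows (thresholds per the list above; values in meters):

```python
# Sketch of distance-dependent rounding for X/Y/Z outputs: coarser steps
# at longer range, matching the rise in stereo error with distance.
def round_depth(z_m):
    if z_m < 2.0:
        step = 0.01   # cm precision
    elif z_m <= 5.0:
        step = 0.1    # dm precision
    else:
        step = 0.5    # half-meter precision
    return round(z_m / step) * step
```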

Crash w/invalid label index

See:

Traceback (most recent call last):
  File "test.py", line 132, in <module>
    cv2.putText(frame, labels[int(e[0]['label'])], pt_t1, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
IndexError: list index out of range

System info:

./log_system_information.sh 
Linux raspberrypi 4.19.66-v7+ #1253 SMP Thu Aug 15 11:49:46 BST 2019 armv7l GNU/Linux
depthai.__version__ == 0.0.9a
depthai.__dev_version__ == c1082d5a67f724398b0617485076f0e34da1054f
numpy.__version__ == 1.16.2
Python 3.7.3
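A defensive sketch for the failing putText line (the helper is hypothetical): validate the label index before the lookup so a bad NN packet degrades to a placeholder label instead of an IndexError.

```python
# Sketch: clamp/validate the label index before indexing into `labels`.
def label_text(labels, raw_label):
    """Return the label string, or a placeholder for an invalid index."""
    idx = int(raw_label)
    if 0 <= idx < len(labels):
        return labels[idx]
    return f"<invalid label {idx}>"
```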

USB3 Doesn't Work with Raspberry Pi 4

Testing a slew of USB3 cables of various lengths between DepthAI USB3 editions [Onboard Camera or FFC Camera] and a Raspberry Pi 4 results in hanging after the Successfully initialized XLink! message.

With USB2 cables, it always works.

There is some code in XLink host-side that tries to match the USB port numbers before and after boot (the idea is to connect multiple devices to a host). This code is flawed for some non-standard connection schemes, like it seems to be on RPi4:

USB2 (boot) address: 1-1.1
USB3 (app) address: 2-1
(that code tries to match 1.1 with 1)

On my laptop the USB addresses look like:

USB2 (boot) address: 1-1
USB3 (app) address: 2-1
So the matching works

The failing code is here:
https://github.com/opencv/dldt/blob/2019_R3.1/inference-engine/thirdparty/movidius/XLink/pc/usb_boot.c#L321

We could implement a workaround (better placed in XLink itself, but also possible in the host app code) to resolve this. I already tried hardcoding the expected port number, and depthai USB boot works fine.

Referring to the XLink SingleStream example, it looks like a new API is used that no longer requires mapping between USB port addresses (pre- and post-boot). Using this API in the host app will resolve our issue.

So with this root cause (port-number matching) identified, and with the new API available (which doesn't do port-number matching), we should be able to solve this problem.

We will just need to switch to the new API and make sure it doesn't break anything else.

Capability to Select AutoFocus Type

Start with the why:

Right now DepthAI uses continuous autofocus, which is a bit autofocus-happy in that it corrects itself often. Neural networks don't care, so this works well for them.

But if DepthAI is being used for human consumption, this jumpy focusing behavior can be distracting.

The 'what':
We can support other focusing modes, including the capability for the host to trigger a focus on-demand (i.e. the host can tell DepthAI when it wants a refocus).

These include:
  1. Continuous (current default; depthai is constantly searching for the best focus)
  2. Host-triggered (host tells depthai when to attempt a refocus)
  3. Macro (UNTESTED)
  4. EDOF - Extended Depth of Field (UNTESTED)

The 'how':
Implement host-side parameter that allows configuring depthai into the focus mode of interest.

For 2 above, implement a trigger capability from the host to tell depthai to refocus.
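A hypothetical host-side sketch of the 'how'. The config key (af_mode) and the structure are invented for illustration; they are not an existing depthai parameter:

```python
# Hypothetical sketch: validate and build a focus-mode config entry that a
# host-side parameter could pass down to depthai. "af_mode" is illustrative.
ALLOWED_AF_MODES = {"continuous", "host_triggered", "macro", "edof"}

def af_config(mode="continuous"):
    if mode not in ALLOWED_AF_MODES:
        raise ValueError("unknown autofocus mode: %r" % mode)
    return {"camera": {"rgb": {"af_mode": mode}}}
```

For host-triggered mode, a separate one-shot "refocus now" message from host to device would complete the interface.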

ROI and Depth Only Approximately Mapped Currently

When using DepthAI with 3D output, one can notice regions where the depth information does not align with the bounding box. This is because, in an effort to ship CrowdSupply on time, we crudely approximated (i.e. we 'eye-balled') the mapping between the depth stream and the color stream on which the region of interest is found.

For the most part, it still works 'OK', and we've yet to have anyone complain, but depending on the hardware model one is using, and the neural model one is using, it can be pretty incorrectly mapped. It's easy to see with face-detection-retail-0004.blob on the BW1098OBC using test.py.

https://photos.app.goo.gl/QxKBHCfGkxQkg3ZP8

So now that we have calibration saved to on-board eeprom (#56), and we're about to have board configs (#73), we should do a more rigorous (non-eye-balled) mapping between the cameras, fields of view, and any dependence on the aspect ratio of the neural model used.
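As an illustration of one piece of a non-eye-balled mapping, here is a minimal sketch that rescales a normalized bounding box between two streams assumed to be co-centered and differing only in horizontal field of view; the real mapping would also need the camera baseline and aspect-ratio handling from the board config:

```python
import math

def map_bbox_color_to_depth(bbox, color_hfov_deg, depth_hfov_deg):
    """Map a normalized [0, 1] bounding box from the color frame to the
    depth frame, assuming co-centered cameras that differ only in
    horizontal FOV. A simplification: the full mapping also needs the
    camera baseline and vertical FOV from the board config (#73)."""
    # Ratio of the half-FOV tangents gives the horizontal scale factor.
    scale = math.tan(math.radians(color_hfov_deg / 2)) / \
            math.tan(math.radians(depth_hfov_deg / 2))
    xmin, ymin, xmax, ymax = bbox
    # Re-center around 0.5, scale, shift back.
    return (0.5 + (xmin - 0.5) * scale, ymin,
            0.5 + (xmax - 0.5) * scale, ymax)
```

With identical FOVs the box is unchanged; a narrower color FOV pulls the box toward the center of the wider depth frame.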

Transfer this repo to the "luxonis" GitHub org & re-org the repo

This repo is currently under @Luxonis-Brandon's GitHub account. It makes sense to move this under the Luxonis GitHub org.

Why now?

The RPi Compute Editions ship with this repo checked out so that calibration can be performed prior to shipping. We want to ensure their local git repo points to a valid URL, so users can simply type "git pull" to fetch updates vs. having to update their .git/config as well.

Additionally:

  • Change the name to depthai-python-extras.
  • Remove the /images folder and top-level README.
  • Move the contents of the python-api folder to the top-level

Why not call the repo depthai-python-api?

Two primary reasons:

  1. While the repo contains the shared-library files (.so extension) that hold the depthai Python module, in the medium term we plan to release a pip-installable version of depthai. When that happens, the .so files will no longer be used, and this repo will contain nothing API-specific: just utilities, examples, and tutorials.

  2. This repo does not contain the depthai Python module source code, which is what I'd expect for a repo with the name depthai-python-api. Instead, it's primarily scripts that load the depthai module.

How will this repo be organized?

Eventually we'll have 3 primary folders:

  1. utils
  2. examples
  3. tutorials (the code used for more in-depth walkthroughs on our docs and blog)

I don't believe we can adopt this structure now because of how resource paths and resources are loaded (scripts assume they are executed from the top-level directory, with the above dependencies in nested folders). This will change with the pip version of depthai.

For now, it's just the contents of the existing python-api folder.

What if I already have this repo checked out locally?

I'll provide instructions for updating your local repo after the transfer.

How do you transfer?

The repo owner, @Luxonis-Brandon, can do this in the repository settings "Danger Zone" area:

image

Creating a pipeline with a blob file config that doesn't exist results in obscure error

I switched to a new detection network and, upon doing so, made a typo in blob_file_config, which presented me with the very obscure error seen below. It would be helpful if an error were thrown when the blob or blob config file does not exist.

Using Custom Calibration File: depthai.calib
XLink initialized.
Sending device firmware "cmd_file": /home/pi/depthai/depthai.cmd
Successfully connected to device.
Loading config file
watchdog started Attempting to open stream config_d2h
3000
Successfully opened stream config_d2h with ID #0!
Closing stream config_d2h: ...
Closing stream config_d2h: DONE.
WARNING: Version (dev) does not match (c722ebde932d6627463321816a5654b5be6069e1 & 30295b558351cd030408e12a220cdd55b5fb450e)
EEPROM data: invalid / unprogrammed
watchdog triggered
Stopping threads: ...
Stopping threads: DONE 0.000s.
Closing all observer streams: ...
Closing all observer streams: DONE.
Reseting device: 0.
E: [global] [    215801] [Scheduler00Thr] dispatcherEventSend:53        Write failed (header) (err -4) | event XLINK_RESET_REQ

E: [xLink] [    215801] [Scheduler00Thr] sendEvents:1036        Event sending failed
Reseting: DONE.
XLink initialized.
Sending device firmware "cmd_file": /home/pi/depthai/depthai.cmd
Successfully connected to device.
Loading config file
Attempting to open stream config_d2h
Successfully opened stream config_d2h with ID #0!
Closing stream config_d2h: ...
Closing stream config_d2h: DONE.
WARNING: Version (dev) does not match (c722ebde932d6627463321816a5654b5be6069e1 & 30295b558351cd030408e12a220cdd55b5fb450e)
EEPROM data: invalid / unprogrammed
terminate called after throwing an instance of 'nlohmann::detail::parse_error'
  what():  [json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - unexpected end of input; expected '[', '{', or a literal
Aborted

Example code:

depthai.init_device(consts.resource_paths.device_cmd_fpath)

config = {
    'streams': ['metaout', 'previewout'],
    'depth':
    {
        'calibration_file': consts.resource_paths.custom_calib_fpath,
        'padding_factor': 0.3
    },
    'ai':
    {
        'blob_file': "networks/person-2/face-detection-adas-0001.blob",
        # Notice the typo here with networks-2 instead of person-2
        'blob_file_config': "networks-2/person/face-detection-adas-0001_depth.json",
        'calc_dist_to_bb': True
    }
}

pipeline = depthai.create_pipeline(config)
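Until the error is raised device-side, a small host-side guard can fail fast with a clear message. A sketch, assuming the config layout shown above (the function name is made up):

```python
import os

def validate_ai_paths(config):
    """Fail fast with a clear message if the blob or its config file is
    missing, instead of letting the device choke on an empty read."""
    ai = config.get('ai', {})
    for key in ('blob_file', 'blob_file_config'):
        path = ai.get(key)
        if path is not None and not os.path.isfile(path):
            raise FileNotFoundError(f"{key} does not exist: {path}")
```

Calling validate_ai_paths(config) just before depthai.create_pipeline(config) would have surfaced the typo immediately.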

Example scripts have blurry video output

The video output for emotion / landmarks is very blurry and the text overlay is large compared to the video output size:

image

From @yuriilaba-luxonis via Slack:

The image in previewout has the same size as the input tensor for the neural network (in the case of emotion recognition it is 64x64; for landmark recognition it is 48x48).
We use it because it's cheap (resource-wise) to get.
So we just rescale previewout to 300x300; that's why the image is blurry. We can reduce the text size.

Strange artifacts showing on right side of preview image

I just converted vehicle-detection-adas-0002 to the blob format and tried running it through a similar pipeline as the second tutorial. It partially works and it does manage to recognize vehicles quite accurately, but it seems to show some weird green and pink bars on the right-hand side of the preview image:
image

I did re-compile the depthai-api module to run on my board, which is a 64-bit version of Linux: Linux NanoPi-Fire3 4.4.172-s5p6818 #1 SMP PREEMPT Mon Nov 11 11:24:09 CST 2019 aarch64 aarch64 aarch64 GNU/Linux

I am also using Python 3.7 and OpenCV 3.4.9, if that makes any difference.

Here is the output from depthai after running the python script:

python3 vehicle_detection.py
Using Custom Calibration File: depthai.calib
XLink initialized.
Sending device firmware "cmd_file": /home/pi/depthai-python-extras/depthai.cmd
Successfully connected to device.
Loading config file
watchdog started Attempting to open stream config_d2h
1000
Successfully opened stream config_d2h with ID #0!
Closing stream config_d2h: ...
Closing stream config_d2h: DONE.
WARNING: Version (dev) does not match (unknown & 8c6328c8acd1086542b47d09b4b192ec1143914c)
CNN configurations read: vehicle-detection-adas-0002.json
depthai: Calibration file is not specified, will use default setting;
config_h2d json:
{"_board":{"_homography_right_to_left":[0.988068163394928,0.0029474012553691864,5.067617416381836,-0.008765067905187607,0.9921473264694214,-8.795275688171387,-8.449587767245248e-06,-3.603489403758431e-06,1.0]},"_load_inBlob":true,"_pipeline":{"_streams":[{"name":"metaout"},{"name":"previewout"},{"name":"meta_d2h"}]},"ai":{"calc_dist_to_bb":false},"board":{"clear-eeprom":false,"left_fov_deg":69.0,"left_to_rgb_distance_m":0.0,"left_to_right_distance_m":0.03500000014901161,"override-eeprom":false,"store-to-eeprom":false,"swap-left-and-right-cameras":false},"depth":{"padding_factor":0.30000001192092896}}
Attempting to open stream config_h2d
Successfully opened stream config_h2d with ID #1!
Writing 1000 bytes to config_h2d
!!! XLink write successful: config_h2d (1000)
Closing stream config_h2d: ...
Closing stream config_h2d: DONE.
Read: 4248000
Attempting to open stream inBlob
Successfully opened stream inBlob with ID #2!
Writing 4248000 bytes to inBlob
!!! XLink write successful: inBlob (4248000)
Closing stream inBlob: ...
Closing stream inBlob: DONE.
depthai: done sending Blob file vehicle-detection-adas-0002.blob
Attempting to open stream outBlob
Successfully opened stream outBlob with ID #0!
Closing stream outBlob: ...
Closing stream outBlob: DONE.
CNN input width: 672
CNN input height: 384
CNN input num channels: 3
Host stream start:meta_d2h
Opening stream for read: meta_d2h
Attempting to open stream meta_d2h
Successfully opened stream meta_d2h with ID #4!
Starting thread for stream: meta_d2h
Started thread for stream: meta_d2h
Host stream start:metaout
Opening stream for read: metaout
Attempting to open stream metaout
Successfully opened stream metaout with ID #5!
Starting thread for stream: metaout
Started thread for stream: metaout
Host stream start:previewout
Opening stream for read: previewout
Attempting to open stream previewout
Successfully opened stream previewout with ID #0!
Starting thread for stream: previewout
Started thread for stream: previewout
depthai: INIT OK!

Segmentation fault w/test_cnn.py

After a period of time (from a couple of minutes to an hour), test_cnn.py will fail with a Segmentation fault. Then, a module restart is required to re-init xlink.

I haven't narrowed down what events trigger the segfault.

example_run_object_detection.py - missing labels

When running example_run_object_detection.py after the latest update, annotations are applied to the video stream, but they are generically named (e.g. label1).

I assume these should be friendlier category labels?

Add parameter to specify whether get_available_nnet_and_data_packets() is blocking or not

Start with the why:
Currently pipeline.get_available_nnet_and_data_packets() is non-blocking (it returns even if no data is available), causing a high polling rate in the Python script and increased CPU usage. On depthai.py, running the script with default options, I measured a rate of 330 calls/s.

The 'what':
So it would be great to pass a parameter to get_available_nnet_and_data_packets() indicating whether to block waiting for new data or not. The non-blocking behavior may still be useful when there is additional custom processing in the Python script's main loop (single-threaded model).
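Until a blocking variant exists, the polling rate (and the CPU cost) can be capped host-side. A sketch of such a throttle; the blocking=True signature mentioned in the comment is the hypothetical proposal, not an existing parameter:

```python
import time

def poll_packets(pipeline, max_rate_hz=30.0):
    """Generator that throttles the non-blocking poll to a fixed rate,
    cutting CPU use until a true blocking variant exists. A blocking
    call might eventually look like
    pipeline.get_available_nnet_and_data_packets(blocking=True)
    (hypothetical signature)."""
    period = 1.0 / max_rate_hz
    while True:
        start = time.monotonic()
        yield pipeline.get_available_nnet_and_data_packets()
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)
```

The main loop then becomes `for nnet_packets, data_packets in poll_packets(p): ...` with no other changes.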

AttributeError: 'NoneType' object has no attribute 'get_available_nnet_and_data_packets'

If I run:

python test_cnn.py

and CTRL-C a couple of times to exit the script, eventually I get the following error when attempting to run the script again:

pi@raspberrypi:~/projects/DepthAI/python-api $ python3 test_cnn.py
depthai.__version__ == 0.0.4a
CNN configurations read: ./resources/mobilenet_ssd.json
Changing streaminfo sizes to: 300x300x3
depthai: before xlink init;
Device Found name 1.2- 
about to boot device with "cmd_file": ./depthai_ai.cmd
not xlink success
depthai: Error initalizing xlink;
Traceback (most recent call last):
  File "test_cnn.py", line 43, in <module>
    nnet_packets, data_packets = p.get_available_nnet_and_data_packets()
AttributeError: 'NoneType' object has no attribute 'get_available_nnet_and_data_packets'

I'm assuming the Python code breaks because of:

not xlink success
depthai: Error initalizing xlink;

...which might be triggered by not gracefully exiting the script.

System info:

$ ./log_system_information.sh 
Linux raspberrypi 4.19.66-v7+ #1253 SMP Thu Aug 15 11:49:46 BST 2019 armv7l GNU/Linux
depthai.__version__ == 0.0.4a
numpy.__version__ == 1.16.2
Python 3.7.3

Auto-Restart doesn't work with `ctrl-c` on hanging example script

#30 helped with resetting when the host app dies.

However, I'm seeing another case where the device doesn't reset/cleanup on failure. To reproduce:

  • Run python3 example_run_landmarks_recognition.py. It should hang (see #32).
  • ctrl-c 2x
  • Run python3 example_run_landmarks_recognition.py or any other script. The script will fail to start with an error like below:
python3 test.py
Using Custom Calibration File: depthai.calib
Using Arguments= {'config_overwrite': None}
depthai.__version__ == 0.0.9a
depthai.__dev_version__ == 35018f82c3451287cc6661c04faa764a7a58b4bc
depthai: before xlink init;
Device Found name 1.2- 
about to boot device with "cmd_file": /home/pi/Desktop/depthai-python-extras/depthai.cmd
not xlink success
depthai: Error initalizing xlink;
Error initializing device. Try to reset it.

example_run_landmarks_recognition.py hangs

When running python3 example_run_landmarks_recognition.py , the video stream hangs immediately after displaying the first frame. I'm seeing this on the RPi Compute and we have a report of the same issue from another user (unknown board).

See the log.

Check if calibration is being run on Pi, Set Framerate to 10FPS

The why of this is to reduce latency on the Pi. During calibration the Pi is doing extra work in the background (reducing the stream resolution from 2x 1280x720 so both streams fit on a 1920x1080 display), so it struggles to keep up and the streams become latent, with high jitter in that latency. This is especially painful when the streams aren't mirrored (see enhancement #63): you move the checkerboard in the wrong direction, and the screen takes a while to update, so by the time you realize it, you've moved the board WAY in the wrong direction.

On faster hosts this isn't necessary, since they can easily keep up, but the Pi does start to struggle with 2x 1280x720 video feeds at 30FPS each while also resizing them smaller to fit the display.

So what would be great is to detect if calibration is being run on a Pi and automatically reduce the framerate to 10FPS.

To do this, it looks like (from here) you can just use import os and os.uname(), then pull out the nodename result and check whether it is raspberrypi.

Then we could use the following to specify the framerate of the streams coming from DepthAI (but changed for left and right):

# 'streams': [{'name': 'depth_sipp', "max_fps": 12.0}, {'name': 'previewout', "max_fps": 12.0}, ],
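Putting the two pieces together, a sketch of the detection plus framerate selection (the stream names and 'max_fps' option follow the commented example above; the function name is made up):

```python
import os

def calibration_stream_config(max_fps_on_pi=10.0, default_fps=30.0):
    """Lower the calibration stream framerate when running on a Pi.
    Detection via os.uname() nodename, as suggested above; 'max_fps'
    mirrors the stream option in the commented-out example."""
    on_pi = hasattr(os, 'uname') and os.uname().nodename == 'raspberrypi'
    fps = max_fps_on_pi if on_pi else default_fps
    return {'streams': [{'name': 'left', 'max_fps': fps},
                        {'name': 'right', 'max_fps': fps}]}
```

Note that nodename checking only works if the user hasn't renamed the host, so a /proc/cpuinfo check could serve as a fallback.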

Disparity Confidence Threshold

Background:
Currently we hard-code in firmware the disparity confidence threshold to be 200 out of a configurable range of 0 to 255.

Improvement:
Allow this to be set from the host as a parameter in the depth entry of the config:

'depth':
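A sketch of what the completed depth entry might look like (the key name 'confidence_threshold' is hypothetical; 200 matches the value currently hard-coded in firmware, with a valid range of 0 to 255):

```python
def depth_config(confidence_threshold=200):
    """Build the 'depth' config entry with a host-selectable disparity
    confidence threshold. The key name is a hypothetical proposal; the
    default of 200 is the value currently hard-coded in firmware."""
    if not 0 <= confidence_threshold <= 255:
        raise ValueError("confidence threshold must be in [0, 255]")
    return {
        'depth': {
            'padding_factor': 0.3,
            'confidence_threshold': confidence_threshold,  # hypothetical key
        }
    }
```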

Problem with `face-detection-retail-0004` with openvino_2020.1.023

I would like to use the latest OpenVINO, but I have a problem running the face-detection-retail-0004 example. I'm following this tutorial and using this script. I can see the video capture from the camera and the script doesn't report errors, but the inference doesn't work. When I print out print(str(e['label']) + " " + str(e['conf'])) I get constant values:

1.0 0.04296875
1.0 0.04296875
1.0 0.04296875
1.0 0.04296875
1.0 0.04296875
.
.
.

Can I somehow debug further, or should I just switch to the older OpenVINO release?

Specify a board type in calibration, writing to EEPROM, running DepthAI

Now that we have the capability to write board information to eeprom, including calibration data, information on camera baselines, etc. it makes sense to have standard board configurations for the two models with on-board cameras:

  1. BW1097 - DepthAI Raspberry Pi Edition
  2. BW1098OBC - DepthAI Onboard Camera Edition

We should also allow the user to specify which board they have when running DepthAI, for those who don't have such information loaded into the eeprom, and for those who want to over-ride for some reason.
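A sketch of such standard board configurations, using only the camera spacings quoted elsewhere in this document (BW1097: 9cm left-to-right with the RGB camera 2cm from the left camera; BW1098OBC: RGB centered 3.25cm from each mono camera). The field names mirror the 'board' entry visible in the config_h2d logs; treat the values as illustrative, not authoritative:

```python
# Per-board defaults; values are illustrative, taken from the camera
# spacings quoted in this document rather than from a datasheet.
BOARD_CONFIGS = {
    'BW1097': {   # DepthAI Raspberry Pi Compute Module Edition
        'left_to_right_distance_m': 0.09,
        'left_to_rgb_distance_m': 0.02,
    },
    'BW1098OBC': {  # DepthAI Onboard Camera Edition
        'left_to_right_distance_m': 0.065,
        'left_to_rgb_distance_m': 0.0325,
    },
}

def board_config(name, overrides=None):
    """Select a standard board config, allowing user overrides for
    boards with unprogrammed EEPROM or unusual setups."""
    cfg = dict(BOARD_CONFIGS[name])
    cfg.update(overrides or {})
    return cfg
```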

calibrate.py sometimes does nothing when hitting space bar

On the BW1097, calibrate.py sometimes does nothing at all when hitting the space bar: no green display of it working, no error, etc. It just sits there for a while.

We should correct this. If the system is stuck processing, we should display that so the user knows what's going on.

Adjustment of Board Dimensions for Calibration

The process below describes the steps of modifying calibration_utils.py to use a different chessboard size during calibration. Note that width and height are the number of internal corners per chessboard row and column. A useful link for creating custom chessboards can be found here: https://calib.io/pages/camera-calibration-pattern-generator.

Navigate to the location of calibration_utils.py and make a copy of the original script if you're interested in comparing:

cd <INSTALLDIR>/depthai-python-extras/depthai_helpers/
cp calibration_utils.py calibration_utils.py.bak

Use the stream editor sed to replace the desired values in the script:

sed -i  '/self.objp\|Chessboard/{
  N
  s/9/<WIDTH>/g
  s/6/<HEIGHT>/g
}' calibration_utils.py

Compare the results to confirm the correct lines were replaced:
diff -u calibration_utils.py.bak calibration_utils.py

The calibrate.py script should now work for the new board size. Alternatively, the StereoCalibration class could take the board dimensions as an argument.
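For the alternative of passing the board dimensions as an argument, the hard-coded object-point grid could be parameterized along these lines (a sketch following the standard OpenCV chessboard setup; the function name is made up):

```python
import numpy as np

def make_object_points(width=9, height=6, square_size_cm=2.5):
    """Build the 3-D chessboard reference points that calibration_utils.py
    hard-codes for a 9x6 board, parameterized so StereoCalibration could
    accept the board dimensions instead. width/height count the inner
    corners per row/column, as in the sed instructions above."""
    objp = np.zeros((width * height, 3), np.float32)
    # Grid of (x, y) corner coordinates, z = 0 (planar board).
    objp[:, :2] = np.mgrid[0:width, 0:height].T.reshape(-1, 2)
    return objp * square_size_cm
```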

Make Calibration Windows Not Appear on Top of Each-other

Start with the why:

One bit of feedback: I was confused for a while during calibration because the left/right stream windows opened up directly on top of each other, so I only saw the right stream and didn't notice the left stream behind it. I couldn't tell why it kept saying 'chessboard not found' because of this.

So we should at least stagger the left/right windows a bit so it's obvious to the first-time user of the calibration system that there are 2x windows.

Add Option to Overlay Bounding Box On `depth_sipp`

The why of this is three-fold:

  1. It helps the user make sure the correct board config (camera baselines, etc.) is entered, by showing the object detector's bounding box directly on the depth data (i.e. that the correct config from #73 is set/being used).
  2. It helps in making sure the mapping between the color camera object detector and the depth stream is properly aligned, through visual inspection (and/or scripted inspection), which should help with #74.
  3. It looks really cool and is super helpful when visually inspecting how detectable objects are, and what the depth looks like on the object. This is really useful for initial debugging.

Floating point exception after running test.py for 8 hours

Doing long-duration tests on the BW1097 (Raspberry Pi Compute Module Edition), I saw this 'Floating point exception' after 8 hours of running.

On another 12 hour run I did not see it (I ended this run as I needed to do some work with the system and it hadn't yet crashed).

Image from iOS (23)

Mirror the mono camera videos during calibration

The 'why' of this: if the video is not mirrored, your instincts are to move the calibration checkerboard in the wrong direction. It's like riding a bike with the handlebars reversed (it's really hard; see here).

This is why FaceTime and other tools that show you yourself mirror the video: it's way more intuitive. (FaceTime mirrors the view it shows back to you, but doesn't mirror the video going out to the other party.)

So we should do the same thing: mirror on display only, and pass the original frames as-is to the calibration routine.

I think the host (even the Pi) should be fast enough to do the flipping, but if not, we can probably flip the streams in the Myriad X easily.
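A host-side sketch of the display-only mirroring (the function name is made up; the key point is that the original frame, not the flipped copy, goes to the calibration routine):

```python
import numpy as np

def mirror_for_display(frame):
    """Return a horizontally flipped copy for display only; the caller
    keeps the original frame for the calibration routine. Equivalent to
    cv2.flip(frame, 1), without modifying the original."""
    return np.ascontiguousarray(frame[:, ::-1])
```

In the capture loop this would be cv2.imshow('left', mirror_for_display(frame)), while frame itself is handed to the chessboard detection.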

Video pause in between every time while running test.py

2020-01-17-104635_1920x1080_scrot

Here is the image that shows up while running test.py; the video pauses in between and the code shows an error like:

Device get data failed: 7
E: [ 0] [python] XLinkReadDataWithTimeOut:1468 /home/felistrs/projects/libs/intel_mdk/R9.8/mdk/common/components/XLink/shared/XLink.c:1468
Assertion Failed: link != NULL

I am using Raspbian Buster and a Pi 4.

Re-map disparity such that it is from `Left` grayscale instead of `Right`

As it stands now, the disparity is mapped such that the 'Right' camera (marked RIGHT or R on the PCB) is the center of the disparity.

On the BW1097, the RGB camera (IMX378) is actually furthest from this R camera (7cm away), so there exists an offset between the bounding box from object detection (when object detection is being run on the RGB camera instead of the grayscale camera(s)) and where the object actually is in the disparity- (and therefore depth-) map. This can result in inaccuracy in object position. An example of this offset from the BW1097 is below:

image

Note the upper left of the chair, for example.

On this same BW1097, the L camera (as marked on the PCB) is quite close (2cm) to the RGB camera, so this offset will be significantly reduced (effectively not noticeable) when the LEFT camera is used as the center of the disparity.

BW1097 PCB for reference below:
image

As you can see above, we had intended for the L camera to be the reference (center) for disparity, but misinterpreted how the flow functioned and how the referencing would work out, so the code as it stands implements the opposite of the intended hardware use (using R instead, which produces the worst-case offset).

(On the BW1098OBC, using LEFT or RIGHT results in the same offset given that the system is symmetric, with 3.25cm between both the LEFT and RIGHT and the RGB cameras).

Running just previewout stream (without AI) breaks DepthAI

I want to display frames from the previewout stream, without any AI context at the moment, but I'm unable to, since depthai throws errors.

Code that fails (cannot reproduce on depthai.py since it always adds some nnet by default):

import consts.resource_paths
import cv2
import depthai

if not depthai.init_device(consts.resource_paths.device_cmd_fpath):
    raise RuntimeError("Error initializing device. Try to reset it.")

p = depthai.create_pipeline(config={
    "streams": ["previewout"]
})


while True:
    _, data_packets = p.get_available_nnet_and_data_packets()

    for packet in data_packets:
        if packet.stream_name == 'previewout':
            data = packet.getData()
            data0 = data[0, :, :]
            data1 = data[1, :, :]
            data2 = data[2, :, :]
            frame = cv2.merge([data0, data1, data2])
            cv2.imshow('previewout', frame)

    if cv2.waitKey(1) == ord('q'):
        break

del p
depthai.deinit_device()

Error log from run

Stopping threads: ...
E: [global] [    581628] [python] XLinkReadData:165	addEventWithPerf(&event, &opTime) failed with error: 3
Device get data failed: 3
Closing stream previewout: ...
Closing stream previewout: DONE.
E: [global] [    581629] [Scheduler00Thr] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_CLOSE_STREAM_REQ

Thread for previewout finished.
E: [xLink] [    581629] [Scheduler00Thr] sendEvents:1036	Event sending failed
Stopping threads: DONE 3.749s.
Closing all observer streams: ...
Closing stream host_capture: ...
E: [global] [    581629] [Scheduler00Thr] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_CLOSE_STREAM_REQ

E: [xLink] [    581629] [Scheduler00Thr] sendEvents:1036	Event sending failed
Closing stream host_capture: DONE.
Closing all observer streams: DONE.
Reseting device: 0.
E: [global] [    581629] [Scheduler00Thr] dispatcherEventSend:53	Write failed (header) (err -4) | event XLINK_RESET_REQ

E: [xLink] [    581629] [Scheduler00Thr] sendEvents:1036	Event sending failed
Reseting: DONE.

Calibration Fails on Mac OS X with Segmentation Fault

When running calibration on Mac OS X, it seems to SegFault on or around saving the 13th image.

Tested on #66 and also Master (49aad66).

git submodule update --init
./depthai-api/build_py_module.sh
grep: /proc/meminfo: No such file or directory
MemAvailable:  MB
Unable to get MemAvailable
[BUILD depthai-api/host/py_module/build] make -j4
[ 14%] Built target nlohmann_json_schema_validator
[100%] Built target depthai
[BUILD depthai-api/host/py_module/build] cp depthai.cpython-37m-darwin.so ../../../../

python3 calibrate.py -ih
No calibration file. Using Calibration Defaults.
Using Arguments= {'count': 1, 'square_size_cm': 2.5, 'image_op': 'modify', 'mode': ['capture', 'process'], 'config_overwrite': None, 'field_of_view': 71.86, 'baseline': 9.0, 'swap_lr': True, 'dev_debug': None, 'invert_v': False, 'invert_h': True}
Starting image capture. Press the [ESC] key to abort.
Will take 13 total images, 1 per each polygon.
XLink initialized.
Sending device firmware "cmd_file": /Users/leeroy/depthai-python-extras/depthai.cmd
Successfully connected to device.
Loading config file
Attempting to open stream config_d2h
watchdog started 1000
Successfully opened stream config_d2h with ID #0!
Closing stream config_d2h: ...
Closing stream config_d2h: DONE.
WARNING: Version (dev) does not match (unknown & 62b4db70b327f51f6066af9620187ee4ae3878b8)
CNN configurations read: /Users/leeroy/depthai-python-extras/resources/nn/object_detection_4shave/object_detection.json
depthai: Calibration file is not specified, will use default setting;
config_h2d json:
{"_board":{"_homography_right_to_left":[0.988068163394928,0.0029474012553691864,5.067617416381836,-0.008765067905187607,0.9921473264694214,-8.795275688171387,-8.449587767245248e-06,-3.603489403758431e-06,1.0]},"_load_inBlob":true,"_pipeline":{"_streams":[{"name":"left"},{"name":"right"},{"name":"meta_d2h"}]},"ai":{"calc_dist_to_bb":false},"board":{"clear-eeprom":false,"left_fov_deg":71.86000061035156,"left_to_rgb_distance_m":0.0,"left_to_right_distance_m":0.09000000357627869,"override-eeprom":false,"store-to-eeprom":false,"swap-left-and-right-cameras":true},"depth":{"padding_factor":0.30000001192092896}}
Attempting to open stream config_h2d
Successfully opened stream config_h2d with ID #1!
Writing 1000 bytes to config_h2d
!!! XLink write successful: config_h2d (1000)
Closing stream config_h2d: ...
Closing stream config_h2d: DONE.
Read: 14514560
Attempting to open stream inBlob
Successfully opened stream inBlob with ID #1!
Writing 14514560 bytes to inBlob
!!! XLink write successful: inBlob (14514560)
Closing stream inBlob: ...
Closing stream inBlob: DONE.
depthai: done sending Blob file /Users/leeroy/depthai-python-extras/resources/nn/object_detection_4shave/mobilenet_ssd.blob
Attempting to open stream outBlob
Successfully opened stream outBlob with ID #0!
Closing stream outBlob: ...
Closing stream outBlob: DONE.
CNN input width: 300
CNN input height: 300
CNN input num channels: 3
Host stream start:meta_d2h
Opening stream for read: meta_d2h
Attempting to open stream meta_d2h
Successfully opened stream meta_d2h with ID #1!
Starting thread for stream: meta_d2h
Host stream start:left
Opening stream for read: left
Attempting to open stream left
Started thread for stream: meta_d2h
Successfully opened stream left with ID #2!
Starting thread for stream: left
Started thread for stream: left
Host stream start:right
Opening stream for read: right
Attempting to open stream right
Successfully opened stream right with ID #3!
Starting thread for stream: right
depthai: INIT OK!
Started thread for stream: right
py: Saved image as: left_p0_0.png
py: Capture failed, unable to find chessboard! Fix position and press spacebar again
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full right:
Data queue is full left:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
py: Saved image as: left_p0_0.png
py: Saved image as: right_p0_0.png
py: Saved image as: left_p1_1.png
py: Capture failed, unable to find chessboard! Fix position and press spacebar again
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
Data queue is full left:
Data queue is full right:
py: Saved image as: left_p1_1.png
py: Saved image as: right_p1_1.png
py: Saved image as: right_p2_2.png
py: Saved image as: left_p2_2.png
py: Saved image as: left_p3_3.png
py: Saved image as: right_p3_3.png
py: Saved image as: left_p4_4.png
py: Saved image as: right_p4_4.png
py: Saved image as: left_p5_5.png
py: Saved image as: right_p5_5.png
py: Saved image as: left_p6_6.png
py: Saved image as: right_p6_6.png
py: Saved image as: right_p7_7.png
py: Saved image as: left_p7_7.png
py: Saved image as: left_p8_8.png
py: Saved image as: right_p8_8.png
py: Saved image as: left_p9_9.png
py: Saved image as: right_p9_9.png
py: Saved image as: left_p10_10.png
py: Saved image as: right_p10_10.png
py: Saved image as: left_p11_11.png
py: Saved image as: right_p11_11.png
py: Saved image as: right_p12_12.png
py: Saved image as: left_p12_12.png
Segmentation fault: 11
