
aideck-gap8-examples's Introduction

AI-deck examples repository

Check out the documentation for starting guides.

aideck-gap8-examples's People

Contributors

ataffanel, bananazooka, danieldennett, evoggy, gemenerik, hmllr, knmcguire, krichardsson, ravenlite, supern1ck, tobbeanton, valeriyvan, whoenig, williamleong, yaooooo, zeroos


aideck-gap8-examples's Issues

Face detection example compile error with gapsdk 3.6

Hello,
I am using Ubuntu 18.04 with gap_sdk version 3.6. My issue is that when I run this example I keep getting this error:

Makefile:48: recipe for target 'GenTile' failed
make: *** [GenTile] Error 1

Could you let me know how to solve this problem?
Thank you

Unexpected Behavior When Using Camera

Hey,
I have a problem when using the camera of the AI-deck that I couldn't fix properly. I tested the test_camera example for the GAP8 chip and got some weird results (and as far as I can tell, this also applies to other examples).

When I try to capture an image it gets tiled in a weird way like so:

[image: first captured frame, incorrectly tiled]

What got me thinking is that this only occurs for the first image. If you wrap the capture in a loop and continuously grab images, as in the wifi streaming example or the GAP8-IO example, the problem is gone from the second image onward. So I investigated what caused this issue and how to solve it. So far the only solution that worked for me is to start and stop the camera once before the actual capture process, but after setting the registers for the orientation and other parameters (for instance around line 95):

/* Work around the tiled first frame: cycle the camera once after the
   register/orientation setup, without calling capture in between. */
pi_camera_control(&camera, PI_CAMERA_CMD_START, 0);
pi_camera_control(&camera, PI_CAMERA_CMD_STOP, 0);

Doing so mimics exactly what happens when capturing images continuously, just with the capture call left out.

This resolves the issue and produces a correct image:

[image: correctly captured frame]

But it doesn't seem like the correct or proper solution. This also happened on two different AI-decks, so I don't think my specific board is faulty.

So this results in a couple of open questions:

  • Does this only happen for our AI-Deck(s)? Or did anyone else experience a similar issue?
  • Is there a better way to fix the problem? I also looked at another repository providing an implementation for interfacing the camera but couldn't see any major differences.
  • Is this a general issue which should be posted in the gap_sdk repository?

Cheers,
-Nick

Testing the Himax camera

Hi,
When I try to run the "Testing the Himax camera" example, it always stops at a certain point, like below:

[screenshot: program output where execution halts]

So I tried to find out at which statement the program stops, and found:

[screenshot: code location where the program hangs]

But 3 of my teammates can run this example successfully, and we have the same virtual environment.
So I want to know: could this problem be caused by my computer's configuration, or are there other possible causes?
By the way, my teammates and I all used the official docker, and we followed this page: https://www.bitcraze.io/documentation/repository/AIdeck_examples/master/test-functions/test-camera/
Regards,
Mathilda

Setting up a Docker Hub image

We are currently working on a Docker Hub container so that it is much easier to retrieve the container (people won't need to build it themselves all the time).

This will be done without the autotiler, as that one requires user registration, so people will need to install it separately in the downloaded container. But since the first build step alone takes almost an hour, this will save a lot of time!

https://github.com/bitcraze/docker-aideck

add docker files for building and flashing both nina and gap8

Installing the gap_sdk, the Espressif toolchains, and so on can be quite a difficult job, especially when so many libraries rely on them. The idea is to provide Dockerfiles so that people can build their own small environment, with containers that have all the correct libraries installed, and build and flash the examples from there without having to install everything on their local machine (a bit like a small virtual machine).

Improve WiFi JPEG streaming demo

Improve the WiFi demo so that it supports connection after the AI deck has been started, re-connection and streaming full JPEG images.

nntool not working for gap_sdk version 3.7 anymore

Hey all,

in the release version 3.7 of the gap sdk there is a hard-coded download from a master branch of tensorflow, see here: https://github.com/GreenWaves-Technologies/gap_sdk/blob/d33d6c90c6a6d1e87572515f61ee8642e09847f0/tools/nntool/Makefile#L56

Unfortunately, this breaks the nntool for version 3.7 when using the docker. A quick workaround that worked for me: inside the docker, replace master with some older commit like 6be604aaacd9d270de01c37ec6e9a9a077397848, rebuild the nntool, and commit my changes to the container. But I guess that is not a sustainable solution for everybody.
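A minimal sketch of that workaround as a patch script. The tools/nntool/Makefile path and the commit hash come from this issue; the naive string replace is an assumption (it would also touch any other occurrence of 'master' in the file, so check the diff before rebuilding):

```python
from pathlib import Path

# Known-good TensorFlow commit mentioned in this issue.
PINNED = "6be604aaacd9d270de01c37ec6e9a9a077397848"

def pin_nntool_tf_ref(sdk_root: str, commit: str = PINNED) -> bool:
    """Replace the hard-coded 'master' ref in tools/nntool/Makefile
    with a fixed commit. Returns True if the file was changed.
    NOTE: blindly replaces every 'master' in the file; review the
    result before running make."""
    makefile = Path(sdk_root) / "tools" / "nntool" / "Makefile"
    text = makefile.read_text()
    if "master" not in text:
        return False
    makefile.write_text(text.replace("master", commit))
    return True
```

After patching, rebuild nntool inside the container and `docker commit` the result, as described above.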

FYI, in the release version 3.7.2 of the gap sdk this was fixed (https://github.com/GreenWaves-Technologies/gap_sdk/blob/a934747441481ea3d9c029719d721780cdff9e46/tools/nntool/Makefile#L58) but building the docker with version 3.7.2 results in a different issue which was already addressed here: GreenWaves-Technologies/gap_sdk#174

Best,
-Nick

Bluetooth example

Hi! I was wondering if there are some examples regarding the Bluetooth module present on the NINA-W102.

Thank you for the support.

Manuel

problem uploading to ai deck

Hello Kimberley,
I installed gap_sdk v3.7 and ran the test_camera example; it worked, and I was surprised to get a color picture. Then I installed the face detection example, but when I ran it I couldn't get it to work. I changed the setting in the Makefile to USE_CAMERA, but it still doesn't work. So I tried to run the test_camera example again, and now I keep getting this message:
Info: gdb port disabled
Info: halt timed out, wake up GDB
error: timed out while waiting for target halted
I cannot flash any other example.
I noticed that only the NINA LED comes on.
Could you let me know what I can do to fix this problem?
Regards,
Rodrigo

Cannot flash wi-fi jpeg streamer example

Hi,
I'm working on a project on the AIdeck which only implements GAP8 chip + NINA module functionalities. I bought an Olimex ARM-USB-TINY-H with ARM JTAG 20 to 10 pin adapter, as suggested, to program the board.

Sadly, I couldn't get it to work, either by building the GAP_SDK myself or by using the docker image provided.

I will post the output I receive from every GAP8 example I tried:

Open On-Chip Debugger 0.10.0+dev-00002-ga9347474 (2020-11-14-17:27)
Licensed under GNU GPL v2
For bug reports, read
  http://openocd.org/doc/doxygen/bugs.html
DEPRECATED! use 'adapter speed' not 'adapter_khz'
DEPRECATED! use 'adapter driver' not 'interface'
Warn : Interface already configured, ignoring
TARGET create
Info : core 0 selected
0
Info : gap8_adv_debug_itf tap selected
Info : adv_dbg_unit debug unit selected
Info : Option 7 is passed to adv_dbg_unit debug unit
GAP8 INIT TARGET
Info : clock speed 1500 kHz
Error: JTAG scan chain interrogation failed: all zeroes
Error: Check JTAG interface, timings, target power, etc.
Error: Trying to use configured scan chain anyway...
Error: gap8.cpu: IR capture error; saw 0x00 not 0x01
Warn : Bypassing JTAG setup events due to errors
GAP8 examine target
Init jtag
Initialising GAP8 JTAG TAP
Info : adv debug unit is configured with option BOOT MODE JTAG
Info : adv debug unit is configured with option ADBG_USE_HISPEED
Warn : Burst read timed out
Warn : Burst read timed out
Warn : Burst read timed out
Warn : Burst read timed out
Warn : Burst read timed out

I'm using the latest release of gap_sdk on an Ubuntu 20.04 machine.
(Note: out of the box, when powered, the AI-deck sets up the Wi-Fi hotspot and streams the camera images as expected.)

Thank you in advance for your time.

Best regards,

Lorenzo Gualniera

HyperFlash and HyperRAM

Hey,
I'm somewhat confused regarding memory/RAM on the AI-deck. On the product/GitHub page the AI-deck is advertised as having 512 Mbit HyperFlash and 64 Mbit HyperRAM, but when I look at the GAP8 product page it's much less. Is this a typo? Then again, when I look at the schematics of the AI-deck, it's also mentioned there.
So, if this hyper memory exists, is there any example code showing how to use the extended memory?

Thanks,
-Nick

Is there any advice for flashing the nina module using the docker on the macOS?

Hello, I first used the docker to flash the NINA under the official Ubuntu virtual machine, but the command didn't work as expected; there was some trouble with the 'cd' command in the Dockerfile.
So I turned to using docker in my local environment, macOS.
But I ran into another problem when trying to execute the 'docker run' command:

docker run --rm -it -v $PWD:/module/ --device /dev/ttyUSB0 --privileged -P espidf:3.3.1 /bin/bash -c "make menuconfig; make clean; make all; /openocd-esp32/bin/openocd -f interface/ftdi/olimex-arm-usb-tiny-h.cfg -f board/esp-wroom-32.cfg -c 'program_esp32 build/partitions_singleapp.bin 0x8000 verify' -c 'program_esp32 build/bootloader/bootloader.bin 0x1000 verify' -c 'program_esp32 build/ai-deck-jpeg-streamer-demo.bin 0x10000 verify reset exit'"

As you know, there are no /dev/ttyUSB* device files on macOS, so I can't set the value of --device. Is there any advice for solving this?
Thanks!

Update: I finished the work by following the Docker NINA instructions.

Native Installation on Ubuntu 20.04

I tried following the instructions at https://www.bitcraze.io/documentation/repository/AIdeck_examples/master/getting-started/getting-started/ on Ubuntu 20.04. I tried different versions of the SDK (3.8.1, 4.0.0) and usually get a

cc1plus: error: bad value (‘tigerlake’) for ‘-march=’ switch

when making the SDK.

For the steps, I followed the docker instructions in https://github.com/bitcraze/docker-aideck/blob/master/src/Dockerfile as well as the readme for the official SDK.

Is the recommended way here to use Docker, or did anybody succeed at using a native installation? The current Docker image uses SDK 4.0.0 and not 3.8.1 as stated in the documentation, correct?

USB access through docker is unstable

The docker image for the GAP8 displays unstable behavior when a programmer is connected through USB. During run or flash, the USB connection seems to drop several times.

If this is happening to you, just install gap_sdk locally on your machine...

update stream examples to work with gapsdk 3.6 and up

A lot of examples, especially those that use the streamer, only work in 3.5 and not in 3.6 (they block). Moreover, the CNN examples seem to be broken even if I go back to 3.4. It might be because the autotiler has been updated.

ai deck

Hello,
I have uploaded the face detection image to the AI-deck. Now I need to know how to run this code and receive images on the laptop. Could you explain how I can achieve this? Do I need to build an application to run on the laptop?
Thank you,
Regards,
Rodrigo Calvo

Error with GAP8 Docker

While trying to set up the GAP8 Docker container I ran into several errors.

The first of these I was able to work around, and I created a pull request to fix it.

The next one I haven't been able to sort out yet. After running

sudo docker build --tag gapsdk:${GAP_SDK_VERSION} --build-arg GAP_SDK_VERSION=$GAP_SDK_VERSION .

I eventually get the error

E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/universe/libk/libkml/libkmlbase1_1.3.0-5_amd64.deb  Undetermined Error [IP: 91.189.88.152 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
The command '/bin/sh -c apt-get install -y libopencv-dev python3-opencv' returned a non-zero code: 100

Could this perhaps be fixed with the new SDK version?

Standalone streamer demo / NINA flashing documentation

Recently got an AI-deck.

  • Can the AI-deck be used standalone, without a Crazyflie, by powering it via VCOM with 3.7 V? If so, I could not see a WiFi AP when powering it standalone.
  • While flashing the NINA, the JTAG one-liner mentioned in the readme of the NINA directory doesn't work out of the box. Consider expanding the documentation a bit.

Had to do the following before it could flash:

  1. Add adapter_khz 20000 line to the olimex-arm-usb-tiny-h.cfg
  2. Copy the tcl folder from openocd-esp32 to the NINA/firmware dir
  3. Run the 'make flash' command in the 'NINA/firmware dir'
  4. Rename program_esp32 in the one-liner to program_esp:
~/esp/openocd-esp32/src/openocd -f interface/ftdi/olimex-arm-usb-tiny-h.cfg \
  -f board/esp-wroom-32.cfg \
  -c "program_esp build/partitions_singleapp.bin 0x8000 verify" \
  -c "program_esp build/bootloader/bootloader.bin 0x1000 verify" \
  -c "program_esp build/ai-deck-jpeg-streamer-demo.bin 0x10000 verify reset exit"

Improve and add more documentation

  • Insert an image that shows the workflow from a TensorFlow script to GAP8 application code, including important stepping stones such as nntool and the autotiler.

Add files for wifi demo

  • Add nina firmware files
  • Add gap8 firmware files
  • Start readme with flashing instructions for NINA
  • Start readme with flashing instructions for Gap8 wifi example

Automatic exposure gain control doesn't work

From what I understand, the Himax camera should be able to adjust the exposure continuously to adapt to different lighting conditions. In the WiFi streaming example, this is not the case. If you turn on the camera in a dark room and then turn the lights on, the image becomes completely washed out and the camera does not adjust to the new lighting conditions. If you start the camera in a room that is already bright, the exposure is set correctly. It seems that the exposure is only set at the time the camera is turned on, and then stays the same.
I have the AI deck v1.1

Problems running Test Himax Camera

Hello,
I'm trying to test the AI-deck's camera function. When I flash the AIdeck_examples/GAP8/test_functionalities/test_camera module, my flash program blocks at "Opening Himax camera". The detailed output is as follows.

Initialising GAP8 JTAG TAP
Info : adv debug unit is configured with option BOOT MODE JTAG
Info : adv debug unit is configured with option ADBG_USE_HISPEED
Info : gdb port disabled
Loading binary through JTAG
Info : tcl server disabled
Info : telnet server disabled
Warn : Burst read timed out
*** PMSIS Camera with LCD Example ***
Entering main controller
Testing normal camera capture
Opening Himax camera
^Z

Furthermore, when I flashed and tested the wifi stream example, viewer.py blocked at "socket connected" and no image was displayed.

I don't think it's a hardware problem because I've tried on two different ai-deck boards. And I'm wondering if you could give me some idea on how to fix this issue.

Thanks,
David

relay char delivery from GAP8 through NINA to CF2

The UART1 connection will be conflicting with the lighthouse deck. Moreover, the UART connection with the GAP8 will be disrupted when the cluster is turned on, which might cause problems for users in the end anyway.

Manual exposure control

Hey guys!

I was trying to work with AI-deck and the himax camera without the AEG turned on. We'd like to be able to fly the crazyflie in very low lighting conditions and wanted to turn off the AEG, fix integration time (exposure), analog gain, and digital gain to have more reproducible images.

However, it seems like changing the appropriate parameters does not result in any change in the captured images. I've created an MWE to better visualize what I mean:
https://github.com/bitcraze/AIdeck_examples/compare/master...zeroos:mwe/manual_exposure?expand=1
I would expect that running this code would result in images with constantly changed exposure, but all the images are exactly the same.

Am I missing something or is it a bug in HIMAX?
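For anyone reproducing this: a quick PC-side sanity check of whether exposure/gain settings actually take effect is to compare the mean pixel value of successive captures of the same scene. A hypothetical helper (the 8-bit grayscale raw-dump format and the threshold are assumptions, not part of the MWE):

```python
def mean_brightness(raw: bytes) -> float:
    """Mean pixel value of an 8-bit grayscale raw frame.
    If exposure/gain changes take effect, successive captures of the
    same static scene should show clearly different means."""
    if not raw:
        raise ValueError("empty frame")
    return sum(raw) / len(raw)

def exposure_changed(frame_a: bytes, frame_b: bytes,
                     threshold: float = 5.0) -> bool:
    """True if the two frames differ in mean brightness by more than
    `threshold` gray levels (threshold is an arbitrary assumption)."""
    return abs(mean_brightness(frame_a) - mean_brightness(frame_b)) > threshold
```

Running this over the frames produced by the MWE would at least distinguish "registers ignored" from "change too small to see by eye".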

wifi_streaming example returns half-corrupted images

Hi!
I am working on the AIdeck and I am trying to make the wifi streaming example work.

I am using the last bitcraze commit and last gap_sdk version 3.8.1.

My problem is that when I run the viewer.py script to visualize the streamed images, the bottom part of the image is either out of sync or black.
I debugged the GreenWaves streamer code thoroughly, but I didn't find the problem.

On the AI-deck side: after the camera acquisition I dump the raw image, and it looks correct. So the camera works.
If I run the test camera program (which writes the image to the PC via the debug interface), I always get the right image, which again proves the camera works.

The problem, I think, is either in the JPEG encoder, in the transmission, or a buffer size problem on the receiving side.

In the viewer.py file I tried many buffer sizes (compared to the original value of 512). I went down to 100 bytes and up to 10000 bytes, but the same problem occurs.

Does anyone have the same problem that I do?
refer here for an example image: https://forum.bitcraze.io/viewtopic.php?f=21&t=4643
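One receiver-side pitfall worth ruling out: a single recv() call returns at most one buffer's worth of data, not a full frame, so no choice of buffer size helps unless the viewer loops until the complete frame has arrived. A minimal sketch; the 4-byte little-endian length prefix is an assumed framing, so check what the streamer firmware actually sends:

```python
import socket

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping over recv() until done."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf.extend(chunk)
    return bytes(buf)

def recv_frame(sock: socket.socket) -> bytes:
    """Receive one length-prefixed JPEG frame.
    ASSUMPTION: each frame is preceded by a 4-byte little-endian size."""
    size = int.from_bytes(recv_exactly(sock, 4), "little")
    return recv_exactly(sock, size)
```

If the viewer already accumulates partial reads like this, the corruption is more likely on the encoder or transmission side.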

On board object detection

Hi,

Is object detection possible on board the AI-deck? Have you tested this? Does it support TF2 model zoo SSD models? Or do you suggest off-board detection?

Thank you

JPEG streamer as client

Hello!
I have many Crazyflies and I want to make them stream images to my PC over my home wifi. My idea is to stitch the images together with OpenCV on my computer, but for that I need the Crazyflies to be wifi clients rather than servers. How can I do that?
The problem with the JPEG streamer in the examples is that I cannot receive the images with OpenCV, nor do I know how to get several images at the same time. Any ideas?
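The examples don't ship a client-mode streamer, so having the Crazyflies connect out would mean modifying the NINA firmware. If that were done, the PC side could be an ordinary multi-client TCP server; a hypothetical sketch of the receiving end (the 4-byte little-endian length prefix is an assumed framing, not the demo's actual protocol, and JPEG decoding with cv2.imdecode is left to the caller):

```python
import socket
import threading

def handle_client(conn: socket.socket, addr, frames: dict) -> None:
    """Collect length-prefixed frames from one client, keyed by its
    address; `frames[addr]` always holds the latest frame."""
    try:
        while True:
            header = b""
            while len(header) < 4:          # read the assumed 4-byte size prefix
                chunk = conn.recv(4 - len(header))
                if not chunk:
                    return
                header += chunk
            size = int.from_bytes(header, "little")
            data = b""
            while len(data) < size:         # read the full JPEG payload
                chunk = conn.recv(size - len(data))
                if not chunk:
                    return
                data += chunk
            frames[addr] = data             # decode later, e.g. cv2.imdecode
    finally:
        conn.close()

def serve(host: str, port: int, frames: dict) -> socket.socket:
    """Accept any number of streaming clients, one thread each.
    Returns the listening socket so the caller can close it."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()

    def accept_loop():
        while True:
            try:
                conn, addr = srv.accept()
            except OSError:                 # listening socket was closed
                return
            threading.Thread(target=handle_client,
                             args=(conn, addr, frames),
                             daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

With one dict shared across threads, stitching the latest frame from each Crazyflie with OpenCV becomes a simple loop over `frames.items()`.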
