axiscommunications / acap-computer-vision-sdk-examples

Axis Camera Application Platform (ACAP) version 4 example applications that provide developers with the tools and knowledge to build their own solutions based on the ACAP Computer Vision SDK

License: Apache License 2.0


acap-computer-vision-sdk-examples's Introduction

Copyright (C) 2022, Axis Communications AB, Lund, Sweden. All Rights Reserved.

ACAP Computer Vision SDK examples

Mission

Our mission is to provide an excellent development experience by enabling developers to build new AI/ML applications for a smarter and safer world.

Axis video analytics example applications

Video analytics ensures that video surveillance systems become smarter, more accurate, more cost-effective and easier to manage. The most scalable and flexible video analytics architecture is based on ‘intelligence at the edge’, that is, processing as much of the video as possible in the network cameras or video encoders themselves.

This not only uses the least amount of bandwidth but also significantly reduces the cost and complexity of the network. Open application development platforms such as Axis Camera Application Platform (ACAP) facilitate the integration of compatible third-party solutions, resulting in a quickly growing variety of applications – general as well as specialized for different industries. The growing number of video analytics applications creates new end-user benefits and opens new business possibilities.

Getting started

This repository contains a set of example applications that aim to enrich the developer's analytics experience. All examples use the Docker framework, and each has a README file in its directory with an overview, the example's directory structure, and step-by-step instructions on how to run the application on the camera.

Requirements

Supported architectures

The examples support the following architectures:

  • armv7hf
  • aarch64

Example applications for video analytics

Below is a list of examples available in the repository:

  • hello-world-python
    • A Python example which builds a simple hello-world application.
  • minimal-ml-inference
    • A Python example to build a minimal machine learning inference application.
  • object-detector-python
    • A Python example which implements object detection on a video stream from the camera.
  • opencv-qr-decoder-python
    • A Python example which detects and decodes QR codes in the video stream using OpenCV.
  • parameter-api-python
    • A Python example which reads camera parameters using the beta version of the Parameter-API.
  • pose-estimator-with-flask
    • A Python example which implements pose estimation on a video stream from the camera and publishes the output as a video stream using Flask.
  • web-server
    • A C++ example which runs a Monkey web server on the camera.

Docker Hub images

The examples are based on the ACAP Computer Vision SDK. This SDK is an image that contains APIs and tooling to build computer vision apps to run on camera, with support for Python. Additionally, there is the ACAP Native SDK, which is more geared towards building ACAPs that use AXIS-developed APIs directly, primarily in C/C++.

How to work with the GitHub repository

You can help make this repository better by contributing as follows:

  1. Fork the repository
  2. Create your feature branch: git checkout -b <contr/my-new-feature>
  3. Commit your changes: git commit -a
  4. Push to the branch: git push origin <contr/my-new-feature>
  5. Submit a pull request

License

Apache 2.0

acap-computer-vision-sdk-examples's People

Contributors

carlcn, corallo, daniel-falk, deepikas20, ecosvc-dockerhub, github-axiscommunications-ecosystem, hussanm, johanxmodin, jonisuominen, kimraxis, kristoffer-github-anderson, mattias-kindborg-at-work, mikaelli-axis, mirzamah, pataxis, petterwa, sebaxis, shreyasatwork, xiaoxyzero


acap-computer-vision-sdk-examples's Issues

web-server example is not working

Describe the bug

web-server example not working

To reproduce

Follow the steps in the README.

Screenshots

(screenshot attached)

Environment

  • Axis device model: [e.g. Q1615 Mk III]
  • Axis device firmware version: [10.11.65]
  • Stack trace or logs: [e.g. Axis device system logs]
    Monkey HTTP Server v1.5.6
    Built : May 12 2022 12:11:47 (gcc 9.4.0)
    Home : http://monkey-project.com
    [+] Process ID is 7
    [+] Server socket listening on Port 2001
    [+] 2 threads, 134217724 client connections per thread, total 268435448
    [+] Transport layer by liana in http mode
    [+] Linux Features: TCP_FASTOPEN SO_REUSEPORT
    [2022/05/13 11:56:47] [ Error] Segmentation fault (11), code=1, addr=0x4
    Aborted (core dumped)
  • OS and version: [ Ubuntu 20.04 LTS]

Problem loading custom model

Hi, I have the same issue when loading my own tflite model.
I have already replaced the path to my model in Dockerfile.model and env.aarch64.artpec8:

Dockerfile.model:

# Get custom model
ADD ./models/yolov4/train-608_best.tflite models/
ADD ./models/yolov4/labels.txt models/

env.aarch64.artpec8:

MODEL_PATH=/models/train-608_best.tflite
OBJECT_LIST_PATH=/models/labels.txt
INFERENCE_SERVER_IMAGE=axisecp/acap-runtime:aarch64-containerized
INFERENCE_SERVER_ARGS="-m /models/train-608_best.tflite -j 12" 

Here is the error I encountered (screenshot attached):

error message:

ERROR in Inference: Failed to load model train-608_best.tflite (Could not load model: Asynchronous connection has been closed)

<_InactiveRpcError of RPC that terminated with:
object-detector-python_1 | status = StatusCode.UNAVAILABLE
object-detector-python_1 | details = "failed to connect to all addresses"
object-detector-python_1 | debug_error_string = "{"created":"@1659425957.027178280","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3217,"referenced_errors":[{"created":"@1659425957.027173640","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":165,"grpc_status":14}]}"

Environment

  • Axis device model: P3265-LVE
  • Axis device firmware version: 10.11.76 (also tried 10.10.73 but didn't work)
  • SDK VERSION: 1.2.1
  • docker daemon with Compose: 1.2.3

Please help me, thanks in advance.

Originally posted by @JENNSHIUAN in #50 (comment)

Build external library and integrate with ACAP computer vision

Issue description

I'm trying to build external libraries and integrate them with the object-detector-cpp example in the ACAP Computer Vision SDK, but I get this error:

(screenshot: integration_1_error)

External Libraries

  • oatpp
  • libconfig

Dockerfile


# syntax=docker/dockerfile:1

ARG ARCH=armv7hf
ARG REPO=axisecp
ARG SDK_VERSION=1.4
ARG UBUNTU_VERSION=22.04

FROM arm32v7/ubuntu:${UBUNTU_VERSION} as runtime-image-armv7hf
FROM arm64v8/ubuntu:${UBUNTU_VERSION} as runtime-image-aarch64

FROM ${REPO}/acap-computer-vision-sdk:${SDK_VERSION}-${ARCH} as cv-sdk-runtime
FROM ${REPO}/acap-computer-vision-sdk:${SDK_VERSION}-${ARCH}-devel as cv-sdk-devel

# Setup proxy configuration
ARG HTTP_PROXY
ENV http_proxy=${HTTP_PROXY}
ENV https_proxy=${HTTP_PROXY}

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y -f libglib2.0-dev libsystemd0 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN mkdir -p /tmp/devel /tmp/runtime /build-root /target-root

# Download the target libs/headers required for compilation
RUN apt-get update && apt-get install --reinstall --download-only -o=dir::cache=/tmp/devel -y -f libglib2.0-dev:$UBUNTU_ARCH \
    libsystemd0:$UBUNTU_ARCH \
    libgrpc++-dev:$UBUNTU_ARCH \
    libprotobuf-dev:$UBUNTU_ARCH \
    libc-ares-dev:$UBUNTU_ARCH \
    libgoogle-perftools-dev:$UBUNTU_ARCH \
    libssl-dev:$UBUNTU_ARCH \
    libcrypto++-dev:$UBUNTU_ARCH \
    libgcrypt20:$UBUNTU_ARCH

RUN for f in /tmp/devel/archives/*.deb; do dpkg -x $f /build-root; done
RUN cp -r /build-root/lib/* /build-root/usr/lib/ && rm -rf /build-root/lib

# Separate the target libs required during runtime
RUN apt-get update && \ 
    apt-get install --reinstall --download-only -o=dir::cache=/tmp/runtime -y -f libgrpc++:$UBUNTU_ARCH \
    '^libprotobuf([0-9]{1,2})$':${UBUNTU_ARCH} \
    libc-ares2:$UBUNTU_ARCH

RUN for f in /tmp/runtime/archives/*.deb; do dpkg -x $f /target-root; done
RUN cp -r /target-root/lib/* /target-root/usr/lib/ && rm -rf /target-root/lib

WORKDIR /build-root
RUN git clone https://github.com/oatpp/oatpp.git && \
    cd oatpp && \
    mkdir build && cd build && \
    cmake -D CMAKE_CXX_COMPILER=/usr/bin/arm-linux-gnueabihf-g++ \
    -D OATPP_DISABLE_ENV_OBJECT_COUNTERS=ON \
    -D OATPP_BUILD_TESTS=OFF .. && \
    make install

ARG BUILDDIR=/build-root/oatpp/build
RUN cp -r ${BUILDDIR}/src/liboatpp* /target-root/usr/lib

WORKDIR /build-root
RUN git clone https://github.com/hyperrealm/libconfig.git
WORKDIR /build-root/libconfig
RUN autoreconf -i && ./configure &&\
    gCFLAGS=' -O2 -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a9 -fomit-frame-pointer' \
    CC=arm-linux-gnueabihf-gcc cmake -G"Unix Makefiles" -D CMAKE_CXX_COMPILER=/usr/bin/arm-linux-gnueabihf-g++ . && \
    make -j;

ARG BUILDDIR=/build-root/libconfig/out
RUN cp -r ${BUILDDIR}/libconfig.so* /target-root/usr/lib &&\
    cp -r ${BUILDDIR}/libconfig++.so* /target-root/usr/lib

COPY app/Makefile /build/
COPY app/src/ /build/src/
WORKDIR /build
RUN make

FROM runtime-image-${ARCH}
# Copy the libraries needed for our runtime
COPY --from=cv-sdk-devel /target-root /

# Copy the compiled object detection application
COPY --from=cv-sdk-devel /build/objdetector /usr/bin/

# Copy the precompiled opencv libs
COPY --from=cv-sdk-runtime /axis/opencv /

# Copy the precompiled openblas libs
COPY --from=cv-sdk-runtime /axis/openblas /

CMD ["/usr/bin/objdetector"]


Makefile


# Application Name
TARGET := objdetector

# Function to recursively find files in directory tree
rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))

# Find all .o files compiled from protobuf files
PROTO_O := $(call rwildcard, /axis/tfproto, *.o)

# Determine the base path
BASE := $(abspath $(patsubst %/,%,$(dir $(firstword $(MAKEFILE_LIST)))))

# Find cpp files
OBJECTS := $(patsubst %.cpp, %.o, $(wildcard $(BASE)/src/*.cpp))
OTHEROBJS = $(BASE)/src/*/*.cpp

CXX = $(TARGET_TOOLCHAIN)-g++
CXXFLAGS += -I/usr/include -I/usr/include/grpcpp/security -I/axis/tfproto -I/axis/openblas/usr/include -I/axis/opencv/usr/include -I/build-root/usr/include
CXXFLAGS += -I/build-root/oatpp/src
CXXFLAGS += -I/build-root/libconfig/lib

CPPFLAGS = --sysroot=/build-root $(ARCH_CFLAGS) -Os -pipe -std=c++17
STRIP=$(TARGET_TOOLCHAIN)-strip

SHLIB_DIR = /target-root/usr/lib
LDFLAGS = -L$(SHLIB_DIR) -Wl,--no-as-needed,-rpath,'$$ORIGIN/lib'
LDLIBS += -L $(BASE)/lib \
 -L /axis/opencv/usr/lib \
 -L /axis/openblas/usr/lib
LDLIBS += -lm -lopencv_core -lopencv_imgproc -lopencv_imgcodecs -lopencv_videoio -lopenblas -lgfortran
LDLIBS += -lprotobuf -lz -lgrpc -lgpr -lgrpc++ -lssl -lcrypto -lcares -lprofiler -lrt
LDLIBS += -lvdostream -lfido -lcapaxis -lstatuscache 
SHLIBS += -loatpp -lconfig++

.PHONY: all clean

all: $(TARGET)

$(TARGET): $(OBJECTS)
	$(CXX) $< $(CPPFLAGS) $(CXXFLAGS) $(LDFLAGS) $(LDLIBS) $(SHLIBS) $(OTHEROBJS) $(PROTO_O) -o $@ && $(STRIP) --strip-unneeded $@

clean:
	$(RM) *.o $(TARGET)

"Error -1 getting parameter"

Describe the bug

object-detector-python (tag v1.2.1) example fails with error:

...
Creating object-detector-python_inference-server_1       ... done
Attaching to object-detector-python_inference-server_1, object-detector-python_acap_dl-models_1, object-detector-python_object-detector-python_1
acap_dl-models_1          | COPYRIGHT
acap_dl-models_1          | coco_labels.txt
acap_dl-models_1          | ssd_mobilenet_v2_coco_quant_postprocess.tflite
acap_dl-models_1          | ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
inference-server_1        | #Error -1 getting parameter 'root.Acapruntime.Verbose'!
inference-server_1        | #Error -1 getting parameter 'root.Acapruntime.IpPort'!
object-detector-python_acap_dl-models_1 exited with code 0
inference-server_1        | #Error -1 getting parameter 'root.Acapruntime.ChipId'!
object-detector-python_1  | object-detector-python connect to: unix:///tmp/acap-runtime.sock
...

Docker ACAP 1.2.3 is built from tag 1.2.3.

To reproduce

Install and run object-detector-python example according to the steps.

Environment

  • Axis device model: Q3538-LVE
  • Axis device firmware version: 10.11.65
  • OS and version: Ubuntu 20.04 LTS

Additional context

This is the next step after #72 was fixed (thanks!).

Object detection sample mounting error: Are you trying to mount a directory onto a file (or vice-versa)?

Minimum Debug Steps

Before opening the issue, try to load the model on the device using this command:

Please provide this log together with your issue.

brock@ubuntu:~/Desktop/Axis/acap-computer-vision-sdk-examples/object-detector-python$ larod-client -g model_path -c cpu-tflite
larod-client: command not found
brock@ubuntu:~/Desktop/Axis/acap-computer-vision-sdk-examples/object-detector-python$ journalctl -u larod
-- No entries --

I am attempting to go through the steps for the object-detector-python sample, but the application fails to run. I was able to run the hello-world sample, but for object-detector-python I get an error on the docker compose command:

Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/usr/lib/libvdostream.so.1" to rootfs at "/usr/lib/libvdostream.so.1": mount /usr/lib/libvdostream.so.1:/usr/lib/libvdostream.so.1 (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

To reproduce

cd ~/Desktop/Axis/acap-computer-vision-sdk-examples/object-detector-python
export ARCH=armv7hf
export CHIP=tpu
export APP_NAME=acap4-object-detector-python
export MODEL_NAME=acap-dl-models
docker run --rm --privileged multiarch/qemu-user-static --credential yes --persistent yes
docker build --tag $APP_NAME --platform linux/arm/v7 --build-arg ARCH .
docker build --file Dockerfile.model --tag $MODEL_NAME --platform linux/arm/v7 --build-arg ARCH .
docker save $APP_NAME | docker load
docker save $MODEL_NAME | docker load
docker compose --env-file ./config/env.$ARCH.$CHIP up

See error:

WARN[0000] /home/brock/Desktop/Axis/acap-computer-vision-sdk-examples/object-detector-python/docker-compose.yml: version is obsolete
[+] Running 2/0
✔ Container object-detector-python-inference-server-1 Created 0.0s
✔ Container object-detector-python-acap_dl-models-1 Created 0.0s
Attaching to acap_dl-models-1, inference-server-1, object-detector-python-1
inference-server-1 | exec /opt/app/acap_runtime/acapruntime: exec format error
acap_dl-models-1 | exec /bin/sh: exec format error
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/usr/lib/libvdostream.so.1" to rootfs at "/usr/lib/libvdostream.so.1": mount /usr/lib/libvdostream.so.1:/usr/lib/libvdostream.so.1 (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

Environment

  • Axis device model: Axis P3255-LVE
  • Axis device firmware version: 11.9.60
  • Stack trace or logs: [e.g. Axis device system logs]
  • OS and version: Ubuntu 20.04 LTS via VMware (same error when running Windows 11 natively)
  • Version: See above

Additional context

Add any other context about the problem here.

latest opencv-qr-decoder-python can't work on Q3538 os 11.9.60

Describe the bug

opencv-qr-decoder-python doesn't work on a Q3538 with OS 11.9.60; it reports the following error:
(screenshot attached)

I remember this sample worked with a previous OS and a previous CV SDK on a Q1656.

Environment

  • Axis device model: [e.g. Q3538]
  • Axis device firmware version: [e.g. 11.9.60]

Deploying YOLOv7 / any type of YOLO model (v5,v4,etc)

Support for YOLOv7 / any type of YOLO model (v5,v4,etc)

I was wondering if the ACAP team could please confirm if anyone has managed to deploy any type of YOLO object detector model (v7,v5,v4) to any ARTPEC-8 & aarch64 camera derivates / if this would be at all supported from a hardware perspective for these model types in the case these are quantized to either fp16 / int8?

For reference, we have seen several other issues mentioning that quantization to fp16/int8 is not supported (#65, #62). For background, our own tests attempting to run fp16- and int8-quantized YOLOv7 models show that they fail to load into the AXIS runtime via this object-detector-python inference server image, per the log output below:

inference-server_1        | ERROR in Inference: Failed to load model yolov7_fp16_quant.tflite (Could not load model: Asynchronous connection has been closed)

If this would not be possible, could the ACAP team please advise if there would be any other object detection weight files apart from those provided in the object-detector-python MobileNet examples folder that the team would recommend taking a look at? Any pointers on this would be much appreciated if possible, thank you.

Containers crashing after extended period of time due to RPC error

Please do not disclose security vulnerabilities as issues. See our security policy for responsible disclosures.

Describe the bug

I built the object-detector-python example and pushed it to a P3255-LVE dome camera. I have noticed that the containers crash after a few hours, and I have to restart them using the docker-compose up command. The issue looks to be a problem with sending a frame to the inference server and the request timing out.

I expect it to continue running until the camera is shut down or the docker-compose down command is issued.

To reproduce

Build container:
docker build . -t obj-app --build-arg axisecp --build-arg armv7hf --build-arg arm32v7/ubuntu:20.04
Save the container and load it on the camera:
docker save obj-app -o opencv.tar
docker -H tcp://192.168.0.2:2375 load -i opencv.tar
Run containers:
docker-compose -H tcp://192.168.0.2:2375 --env-file ./config/env.armv7hf.tpu up

Wait a day or two, and the containers will crash and exit.

I looked at the logs of the obj-app container and it showed an RPC error, which I pasted in the Stack trace or logs section.

Environment

  • Axis device model: P3255-LVE Dome camera
  • Axis device firmware version: 10.10.69
  • Stack trace or logs: exception.txt
  • OS and version: Windows 10
  • Version: axisecp/acap-runtime:0.6-armv7hf

Additional context

Models does not detect objects CHIP_ID=12 SDK=1.2.1

Describe the bug

I have a detection model built using tf models (tf 1.15.5, to have per-tensor quantization). The model worked fine on 10.8.x firmware with SDK=1.2. After upgrading to 10.11.x and SDK=1.2.1, I see the following: the model works as expected on the CPU (CHIP_ID=2) but does not detect anything (low scores, random boxes) on artpec8 (CHIP_ID=12), whereas the object-detection-python example works fine in my environment for both CPU and artpec8.
When I just change CHIP_ID=12 to CHIP_ID=2, my application starts working.
There are no crashes or errors in the logs with CHIP_ID=12; everything looks the same as with CHIP_ID=2.

Environment

  • Axis device model: Q3538-LVE
  • Axis device firmware version: 10.11.x
  • OS and version: Ubuntu 20.04 LTS
  • Version: SDK=1.2.1, axisecp/acap-runtime:aarch64-containerized

Additional context

Could you advise what I should check to find the cause of the issue? Is there any test I can run to get more information?

No space on device when loading hello-world on P3265

Hi there,
I'm trying to install the hello-world computer vision example: https://github.com/AxisCommunications/acap-computer-vision-sdk-examples/tree/master/hello-world
But the installation fails with the error "no space on device". I attach SSH session screenshots showing the camera's available space before and after trying to install the example, together with the Docker log from the camera.
Camera firmware is 10.10.73, Docker daemon 1.2.3.

Thank you very much
(screenshot attached)

An issue when setting up the environment for running a sample

Hello
How are you?
Thanks for contributing to this project.
I tried to run the "minimal-ml-inference" sample on my AXIS camera device.
I followed the steps below on an Ubuntu 20.04 desktop, as described in the README.

export ARCH=armv7hf
export CHIP=tpu

export AXIS_TARGET_IP=x.x.x.x
export DOCKER_PORT=2376
docker --tlsverify -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT system prune -af

But I ran into the following issue:
unable to resolve docker endpoint: open /home/osboxes/.docker/ca.pem: no such file or directory

Where and how should I generate ca.pem file?
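
For reference, Docker's --tlsverify flow expects ca.pem, cert.pem and key.pem (by default in ~/.docker). Below is a minimal sketch of generating a CA and a client certificate with openssl; the file names and CN values are examples, not Axis-specific requirements, and the full flow (including the server-side certificate) is described in the official Docker TLS documentation:

```shell
# Sketch only: generate a CA and a client certificate for Docker TLS.
# CN values and file names are illustrative; adapt them to your setup.
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 \
    -subj "/CN=example-ca" -out ca.pem
openssl genrsa -out key.pem 4096
openssl req -new -key key.pem -subj "/CN=client" -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem \
    -CAkey ca-key.pem -CAcreateserial -extfile extfile.cnf -out cert.pem
# Docker looks for ca.pem, cert.pem and key.pem in ~/.docker by default.
```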

Unable to run custom model in inference server

Hi!

I recently tried to run a custom model with the inference client, but it gave me the following error:

larod: Session 55: Could not run job: Could only read 196608 out of 786432 bytes from file descriptor

I used the Python API, with the minimal-inference-server as a reference code base.
Inference on the model through Tensorflow Lite on my pc works fine.
Any tips on how to solve this?

acap_dl-models_1 | exec /bin/sh: exec format error pose-estimator_1 | exec /usr/bin/python3: exec format error

Describe the bug

Explain the behavior you would expect and the actual behavior.

To reproduce

Please provide as much context as possible and describe the reproduction steps that someone else can follow to recreate the issue on their own.

A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no obvious way to reproduce the issue, the team will ask you for those steps. Bugs without steps will not be addressed until they can be reproduced.

Steps to reproduce the behavior:

  1. Set up '...'
  2. Do this '...'
  3. See error

Screenshots

If applicable, add screenshots to help explain your problem.

Environment

  • Axis device model: [e.g. Q1615 Mk III]
  • Axis device firmware version: [e.g. 10.5]
  • Stack trace or logs: [e.g. Axis device system logs]
  • OS and version: [e.g. macOS v12.2.1, Ubuntu 20.04 LTS, Windows build 21996.1]
  • Version: [version of the application, SDK, runtime environment, package manager, interpreter, compiler, depending on what seems relevant]

Additional context

Add any other context about the problem here.

Unable to load custom model on firmware 11

Description

Thank you for your attention. I've trained a custom ssdlite_mobiledet model using the TensorFlow API. Following previous work, I made changes to the Dockerfile.model and env.aarch64.artpec8 paths and was able to run it successfully in the following environment:

  • Axis device model: P3265-LVE
  • Axis device firmware version: 10.11.76
  • SDK VERSION: 1.2.1
  • docker daemon with Compose: 1.2.3
  • acap-runtime : 1.2.0
    (screenshot attached)

However, when I upgraded the Axis firmware to version 11, I encountered the following issue during inference:

(screenshot attached)

inference-server_1            | ERROR in Inference: Failed to load model model.tflite (Could not send message: Transport endpoint is not connected)
object-detector-api-python_1  | <_InactiveRpcError of RPC that terminated with:
object-detector-api-python_1  |         status = StatusCode.CANCELLED
object-detector-api-python_1  |         details = ""
object-detector-api-python_1  |         debug_error_string = "{"created":"@1695725084.818779200","description":"Error received from peer unix:/tmp/acap-runtime.sock","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"","grpc_status":1}"

Issue environment

  • Axis device model: P3265-LVE
  • Axis device firmware and CV SDK: 11.3.70 vs 1.9
    (also tried 11.2.68 vs 1.6, 11.4.63 vs 1.8, 11.5.64 vs 1.10)
  • docker daemon with Compose: 1.2.3
  • acap-runtime : 1.2.0 (also tried 1.3.1)

Please help me, thanks in advance.

Frequently asked questions

This thread collects common questions that developers have asked about our examples.

Which cameras can I use to run these examples?

Only Artpec7 cameras equipped with a TPU and Artpec8 cameras are supported at the moment.

Can I use a different model?

Yes! Ideally you can use any model; however, you have to make sure it is compatible with the hardware you want to run it on.

On CPU, you can run any model in the .tflite format.

On the EdgeTPU, you have to verify that your model is compatible with the EdgeTPU. The easiest way is to try converting it to the EdgeTPU format with the EdgeTPU compiler and check the result. You can also check the supported operations when you build your model to see which operations can be used.

On Artpec8, you can in theory run any tflite model quantized in int8 format. We recommend quantizing the model per tensor rather than per channel to obtain better performance; if you are interested, take a look at our guide on how to train and quantize a model for Artpec8. If some parts of your model can't be executed by the DLPU accelerator, they are automatically sent to the CPU. This flexibility comes with a drop in performance: if a model jumps between the DLPU and the CPU too often, execution will be very slow. There is also a limit on how many times inference can be handed over from the DLPU to the CPU, currently set to 16.
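
The handover limit can be pictured with a small sketch. Everything below is illustrative: the placement list and the helper function are made up for explanation and are not produced by the SDK or the compiler.

```python
# Illustrative sketch: count DLPU<->CPU handovers in a hypothetical
# per-layer placement plan. Not an SDK API; for intuition only.
def count_handovers(placements):
    """Count transitions between execution units in a layer placement plan."""
    return sum(1 for prev, cur in zip(placements, placements[1:]) if prev != cur)

MAX_HANDOVERS = 16  # the Artpec8 limit mentioned above

# A made-up placement: each entry is where one layer of the graph runs.
plan = ["DLPU", "DLPU", "CPU", "DLPU", "DLPU", "CPU", "CPU", "DLPU"]
print(count_handovers(plan))                  # → 4
print(count_handovers(plan) <= MAX_HANDOVERS) # → True
```

A model whose graph keeps all accelerator-incompatible ops grouped at the start or end of the graph would score low here; ops scattered through the middle of the graph drive the count up and, per the FAQ above, slow execution down.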

Can I run models in other formats like ONNX or PyTorch?

Unfortunately, no. You could install the pytorch or onnx pip packages in your application, but these libraries won't have access to the hardware accelerators and will run your model only on the CPU.

SSD MobileNet performance is not good enough; can I run EfficientDet or YOLOv5 to improve accuracy?

Yes; however, these two models are heavier than SSD MobileNetV2.

EfficientDet

Some users have successfully used EfficientDet Lite0 from the Coral website on Artpec8, obtaining an inference time of 360 ms.

YOLOv5

We have also seen that it is possible to use the popular YOLOv5; we recommend the official implementation from the Ultralytics repository. In their repository you can also find a script to convert the model to tflite (make sure to use the -int8 flag) and EdgeTPU. Be aware that they quantize their model per channel, which is not the right quantization technique to get the best performance out of Artpec8 cameras.

We have collected some results from tests done with the YOLOv5 model:

  • On Artpec8, using the "small" version of YOLOv5 with input size 640x640, the inference takes 1200 ms.
  • On Artpec7, using the "nano" version of YOLOv5 with input size 224x224, the inference takes 30-40 ms.

How do I make my model faster on Artpec8 cameras?

To make your model faster on Artpec8 cameras, you should make sure that it is optimized for the hardware accelerator.
First, verify that your model doesn't have ops that are executed on the CPU (e.g. dequantize -> float op -> requantize) in the middle of the graph. Layers like this prevent the accelerator from processing your inference, handing the task over to the CPU, which makes your network slower.
Another way to maximize performance is to quantize your network per tensor; see our guide.
Standard Conv2D blocks should be preferred to DepthwiseConv2D.
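
To see the tradeoff behind that last recommendation, here are the standard textbook parameter counts for the two block types (the formulas are generic CNN arithmetic, not SDK measurements): a depthwise-separable block has far fewer parameters, yet on Artpec8 the heavier standard Conv2D tends to map better to the accelerator.

```python
# Illustrative parameter counts (including bias terms) for a single block.
def conv2d_params(c_in, c_out, k):
    """Standard Conv2D: one k x k filter per (input, output) channel pair."""
    return c_in * c_out * k * k + c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv followed by a 1x1 pointwise conv."""
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

print(conv2d_params(64, 128, 3))               # → 73856
print(depthwise_separable_params(64, 128, 3))  # → 8960
```

Fewer parameters do not automatically mean faster inference: what matters on Artpec8 is how well each op maps to the DLPU, which is why the FAQ above prefers standard Conv2D despite its larger size.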

My network has a low accuracy when used on camera frames, why? How do I improve it?

If you are using a network pretrained on the COCO dataset, be aware that the data distribution of that dataset can differ from the distribution of your camera's frames. Thus, it is better to fine-tune your network using some camera frames.

Ask your question

If you didn't find what you were looking for, feel free to open a new thread in the discussion tab!

static-image.yml not up to date

I did docker save axisecp/acap-runtime:1.2.0-aarch64-containerized | docker -H tcp://$DEVICE_IP:$DOCKER_PORT load and it worked.
I now have another error while running the example program, related to accessing the shared library of acap-runtime.

docker-compose --host tcp://$DEVICE_IP:$DOCKER_PORT --file static-image.yml --env-file ./config/env.$ARCH.$CHIP up

gives the following error:

WARNING: The INFERENCE_SERVER_COMMAND variable is not set. Defaulting to a blank string.
Starting object-detector-python_object-detector-python_1 ... done
Starting object-detector-python_inference-server_1 ... done
Starting object-detector-python_acap_dl-models_1 ... done
Attaching to object-detector-python_inference-server_1, object-detector-python_object-detector-python_1, object-detector-python_acap_dl-models_1
acap_dl-models_1 | COPYRIGHT
acap_dl-models_1 | coco_labels.txt
acap_dl-models_1 | ssd_mobilenet_v2_coco_quant_postprocess.tflite
acap_dl-models_1 | ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
inference-server_1 | /opt/app/acap_runtime/acapruntime: error while loading shared libraries: liblarod.so.1: cannot open shared object file: No such file or directory
object-detector-python_inference-server_1 exited with code 127
object-detector-python_acap_dl-models_1 exited with code 0
object-detector-python_1 | object-detector-python connect to: inference-server
object-detector-python_1 | <_InactiveRpcError of RPC that terminated with:
object-detector-python_1 | status = StatusCode.DEADLINE_EXCEEDED
object-detector-python_1 | details = "Deadline Exceeded"
object-detector-python_1 | debug_error_string = "{"created":"@1673593775.919770280","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":81,"grpc_status":4}"

Is it because of some access permission on the files? Thank you in advance.

Originally posted by @amararjun in #124 (reply in thread)

Communication between webpage and c++ App

I was using net_http.h in SDK 3 to communicate from a webpage to C++ via a .cgi file, but found out it is no longer supported in SDK 4. I have also seen that there is a new way of communicating via the Monkey server, which would require a major change to my application architecture. Is there a way to keep using the same .cgi approach in SDK 4?

Issue while uploading docker images with no SD card support

Hello,

I am trying to upload my Docker images to a P3265-LVE camera (with 8 GB flash) running firmware 11.5.64. My Docker images have a total size of < 700 MB. I compiled them into an eap file with ACAP docker-compose installed prior to installing my current app. However, I still cannot upload them to the camera, which has no SD card.

Is there an issue with the way docker-compose handles storage space?

How to load custom model in object-detector-python

Hello,

I'm not able to load a custom model in a modified example of object-detector-python. From the Dockerfile.model I see that the current model is compiled to run on Google Coral, so I transformed my model accordingly (based on yolov4-tiny).
My custom model is fully quantized to tflite int8 and then postprocessed with edgetpu_compiler to be compatible with Google Coral, with success. I also tried, without success, to load the quantized int8 model without compiling it for Google Coral.
But after loading the docker image onto the camera (P3265), it shows the following error:
ERROR in Inference: Failed to load model model.tflite (Could not load model: Could not build an interpreter of the model)
<_InactiveRpcError of RPC that terminated with:
object-detector-python-object-detector-python-1 | status = StatusCode.CANCELLED
object-detector-python-object-detector-python-1 | details = ""
object-detector-python-object-detector-python-1 | debug_error_string = "{"created":"@1657181878.926597840","description":"Error received from peer ipv4:172.29.0.2:8501","file":"src/core/lib/surface/call.cc","file_line":1063,"grpc_message":"","grpc_status":1}"

I don't know if I'm missing some step or the model needs to be converted in a different way.

Environment

  • Axis device model: P3265-LVE
  • Axis device firmware version: 10.10.73
  • SDK VERSION: 1.2
  • docker daemon: 1.2.3

EDIT: I upgraded the camera firmware to 10.11.76 (and reinstalled docker daemon 1.2.3). Now the error is different:
/usr/bin/acap_runtime: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /host/lib/liblarod.so.1)

Thank you very much

object-detector-python fails on tag v1.2

Describe the bug

object-detector-python (tag v1.2) example fails with error:

acap_dl-models_1          | COPYRIGHT
acap_dl-models_1          | coco_labels.txt
acap_dl-models_1          | ssd_mobilenet_v2_coco_quant_postprocess.tflite
acap_dl-models_1          | ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object-detector-python_acap_dl-models_1 exited with code 0
inference-server_1        | /usr/bin/acap_runtime: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /host/lib/liblarod.so.1)
object-detector-python_inference-server_1 exited with code 1
object-detector-python_1  | object-detector-python connect to: inference-server:8501
object-detector-python_1  | <_InactiveRpcError of RPC that terminated with:
object-detector-python_1  | 	status = StatusCode.UNAVAILABLE
object-detector-python_1  | 	details = "DNS resolution failed for service: inference-server:8501"
object-detector-python_1  | 	debug_error_string = "{"created":"@1660143486.605863880","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":1324,"referenced_errors":[{"created":"@1660143486.605859080","description":"DNS resolution failed for service: inference-server:8501","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":359,"grpc_status":14,"referenced_errors":[{"created":"@1660143486.605787880","description":"C-ares status is not ARES_SUCCESS qtype=A name=inference-server is_balancer=0: Domain name not found","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":698}]}]}"

To reproduce

Install and run object-detector-python example according to the steps.

Environment

  • Axis device model: Q3538-LVE, P3265-LV
  • Axis device firmware version: 10.11.76, 10.11.65, 10.12.73
  • OS and version: Ubuntu 20.04 LTS

Additional context

Looks like axisecp/acap-runtime:0.6-aarch64 is outdated.

How can I correctly install third-party Python packages

Hi, I tried to implement the object-detector-python example. It worked successfully and I'm impressed by this wonderful application.
However, I want to use Python libraries such as tqdm, pandas, requests, etc. I've already tried to rewrite the Dockerfile:

ARG ARCH=armv7hf
ARG SDK_VERSION=1.2.1
ARG REPO=axisecp

FROM arm32v7/ubuntu:20.04 as runtime-image-armv7hf
FROM arm64v8/ubuntu:20.04 as runtime-image-aarch64

FROM $REPO/acap-computer-vision-sdk:$SDK_VERSION-$ARCH AS cv-sdk
FROM runtime-image-${ARCH}
COPY --from=cv-sdk /axis/python /
COPY --from=cv-sdk /axis/python-numpy /
COPY --from=cv-sdk /axis/python-tfserving /
COPY --from=cv-sdk /axis/opencv /
COPY --from=cv-sdk /axis/openblas /


WORKDIR /app
COPY app/* /app/
CMD ["pip", "install", "requests"]
CMD ["python3", "detector.py"]

But I still got the error:

ModuleNotFoundError: No module named 'requests'

I wonder how I can correctly install third-party Python packages?
Thanks.

object-detector-python StatusCode.UNIMPLEMENTED

Describe the bug

object-detector-python v1.2.1 example fails with error:

Starting object-detector-python_acap_dl-models_1         ... done
Recreating object-detector-python_inference-server_1     ... done
Starting object-detector-python_object-detector-python_1 ... done
Attaching to object-detector-python_acap_dl-models_1, object-detector-python_object-detector-python_1, object-detector-python_inference-server_1
acap_dl-models_1          | COPYRIGHT
acap_dl-models_1          | coco_labels.txt
acap_dl-models_1          | ssd_mobilenet_v2_coco_quant_postprocess.tflite
acap_dl-models_1          | ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object-detector-python_acap_dl-models_1 exited with code 0
inference-server_1        | #Error -1 getting parameter 'root.Acapruntime.Verbose'!
inference-server_1        | #Error -1 getting parameter 'root.Acapruntime.IpPort'!
inference-server_1        | #Error -1 getting parameter 'root.Acapruntime.ChipId'!
object-detector-python_1  | object-detector-python connect to: unix:///tmp/acap-runtime.sock
object-detector-python_1  | <_InactiveRpcError of RPC that terminated with:
object-detector-python_1  | 	status = StatusCode.UNIMPLEMENTED
object-detector-python_1  | 	details = ""
object-detector-python_1  | 	debug_error_string = "{"created":"@1661189648.821133600","description":"Error received from peer unix:/tmp/acap-runtime.sock","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"","grpc_status":12}"
object-detector-python_1  | >
object-detector-python_1  | Dropping old frames: 820212
object-detector-python_1  | Dropping old frames: 619994
object-detector-python_1  | Dropping old frames: 420254
object-detector-python_1  | <_InactiveRpcError of RPC that terminated with:
object-detector-python_1  | 	status = StatusCode.UNIMPLEMENTED
object-detector-python_1  | 	details = ""
object-detector-python_1  | 	debug_error_string = "{"created":"@1661189649.848179440","description":"Error received from peer unix:/tmp/acap-runtime.sock","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"","grpc_status":12}"

To reproduce

Follow the steps.

Environment

  • Axis device model: AXIS Q3538-LVE
  • Axis device firmware version: 10.11.x, 10.12.x
  • OS and version: Ubuntu 20.04 LTS
  • Version: 1.2.1

Additional context

Can it be caused by the latest axisecp/acap-runtime:aarch64-containerized update?

Is it possible to make the opencv.dnn module available in acap-computer-vision-sdk?

Is it possible to make the opencv.dnn module available in acap-computer-vision-sdk for axis P3265-LVE Dome Camera?
"""
OpenCV modules:
To be built: calib3d core features2d flann imgcodecs imgproc objdetect python3 video videoio
Disabled: world
Disabled by dependency: gapi highgui java_bindings_generator js_bindings_generator ml objc_bindings_generator photo python_tests stitching
Unavailable: dnn java python2 ts
Applications: apps
Documentation: NO
Non-free algorithms: NO
"""

`GLIBC_2.34' not found (required by /host/lib/liblarod.so.1) error with Q1615 Mk III (AXIS OS version 10.11.65)

Describe the bug

pose-estimator-with-flask example not working with Q1615 Mk III (AXIS OS version 10.11.65)

To reproduce

Follow the readme file from this project
$ export ARCH=armv7hf
$ export CHIP=tpu
$ export AXIS_TARGET_IP=192.168.50.114
$ export DOCKER_PORT=2375
$ export APP_NAME=acap4-pose-estimator-python
$ export MODEL_NAME=acap-dl-models
$ docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT system prune -af
$ docker build . -t $APP_NAME --build-arg ARCH
$ docker save $APP_NAME | docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT load
$ docker build . -f Dockerfile.model -t $MODEL_NAME --build-arg ARCH
$ docker save $MODEL_NAME | docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT load
$ docker-compose -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT --env-file ./config/env.$ARCH.$CHIP up

Screenshots

(screenshot omitted)

The same error occurs with the minimal-ml-inference and object-detector-cpp examples (screenshots omitted).

Environment

  • Axis device model: [Q1615 Mk III]
  • Axis device firmware version: [10.11.65]
  • Stack trace or logs:
    /usr/bin/acap_runtime: /lib/arm-linux-gnueabihf/libc.so.6: version `GLIBC_2.34' not found (required by /host/lib/liblarod.so.1)
  • OS and version: [Ubuntu 20.04 LTS]

Run error for object-detector-python with ARTPEC-8 camera

Environment

  • Axis device model: [e.g. P3265-LVE ]
  • Axis device firmware version: [e.g. 10.12.114]

It works for the Q1615 Mk III, but when compiled with aarch64 for ARTPEC-8 it fails. Here is the error:

user@ubuntu:~/Downloads/acap-computer-vision-sdk-examples-main/object-detector-python$ docker-compose --host tcp://192.168.1.234:2375 --env-file ./config/env.aarch64.artpec8 up
Starting object-detector-python_object-detector-python_1 ... done
Recreating object-detector-python_inference-server_1 ... done
Starting object-detector-python_acap_dl-models_1 ... done
Attaching to object-detector-python_object-detector-python_1, object-detector-python_acap_dl-models_1, object-detector-python_inference-server_1
acap_dl-models_1 | COPYRIGHT
acap_dl-models_1 | coco_labels.txt
acap_dl-models_1 | ssd_mobilenet_v2_coco_quant_postprocess.tflite
acap_dl-models_1 | ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object-detector-python_acap_dl-models_1 exited with code 0
object-detector-python_1 | object-detector-python connect to: unix:///tmp/acap-runtime.sock
object-detector-python_1 | Failed to initialize vdo stream
object-detector-python_1 | Traceback (most recent call last):
object-detector-python_1 | File "detector.py", line 136, in
object-detector-python_1 | Detector().run()
object-detector-python_1 | File "detector.py", line 103, in run
object-detector-python_1 | self.run_camera_source()
object-detector-python_1 | File "detector.py", line 114, in run_camera_source
object-detector-python_1 | succeed, bounding_boxes, obj_classes, _ = self.detect(frame)
object-detector-python_1 | File "detector.py", line 39, in detect
object-detector-python_1 | image = image.astype(np.uint8)
object-detector-python_1 | TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
object-detector-python_object-detector-python_1 exited with code 1

Are "edge-tpu" optimized models referenced in sample projects compatible with aarch64/ARTPEC8-based cameras?

Describe the bug

An app based on the sample project "object-detector-cpp" is not able to initialize an inference server when an "edge-tpu optimized" SSD model is enabled by the app on an aarch64 camera (Axis Q1656). No other changes are made to the original code files or Docker files besides changing the reference to the utilized model. It's not entirely clear from the documentation whether these kinds of models are compatible with this aarch64 SoC (ARTPEC8).

To reproduce

Change the model path on this line

MODEL_PATH=/models/ssd_mobilenet_v2_coco_quant_postprocess.tflite
to an edgetpu-optimized model, e.g., ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite, which is originally referenced here
ADD https://github.com/google-coral/edgetpu/raw/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite models/

Otherwise, follow the steps detailed in the tutorial chapter "How to run the code" of the sample project "object-detector-cpp" while using arm64-specific variables. Testing was simplified by temporarily disabling TLS on the Docker daemon in the camera.

Specifically, the inference-server fails to utilize the provided edge-tpu model due to an error:

inference-server_1 | ERROR in Inference: Failed to load model ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite (Could not load model: Could not set VX delegate)

Besides using an original Google/Coral SSD model (i.e., ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite), we also tried our own custom model, optimized with Coral's Edge TPU Compiler, but the end result was identical: initialization of the inference server failed with the same delegate error. We kept the following command line unchanged, as we thought that the flag "-j 12" enables the accelerator chip; is this fine?

INFERENCE_SERVER_COMMAND=/usr/bin/acap_runtime -p 8501 -j 12 -c /models/server.pem -k /models/server.key

Is there a fundamental reason that these types of optimizations are not compatible with aarch64, or specifically the ARTPEC8 SoC, or does this happen because something is missing from the firmware/Docker images?

The project's original configuration (from env.aarch64.artpec8), using the model ssd_mobilenet_v2_coco_quant_postprocess.tflite, did work fine in our tests on this camera (around 40 ms per inference); however, we would prefer to test and run models that are (fully?) optimized for aarch64/ARTPEC8 cameras.

Environment

  • Axis device model: AXIS Q1656 Box Camera

  • Axis device firmware version: 10.10.69
    (Previously the firmware was 10.11.65 but we had to downgrade due to the "GLIBC not found" problem with it - see Issue #28)

  • Stack trace or logs:
    .....
    Attaching to object-detector-cpp_acap_dl-models_1, object-detector-cpp_inference-server_1, object-detector-cpp_object-detector_1
    acap_dl-models_1 | COPYRIGHT
    acap_dl-models_1 | coco_labels.txt
    acap_dl-models_1 | server.key
    acap_dl-models_1 | server.pem
    acap_dl-models_1 | ssd_mobilenet_v2_coco_quant_postprocess.tflite
    acap_dl-models_1 | ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
    inference-server_1 | Server listening on 0.0.0.0:8501
    object-detector-cpp_acap_dl-models_1 exited with code 0
    object-detector_1 | Start: /usr/bin/objdetector /models/server.pem
    object-detector_1 | Create memory mapped file
    object-detector_1 | Caught frame 0 480x320
    object-detector_1 | Connecting to: inference-server:8501
    object-detector_1 | Waiting for response Z
    inference-server_1 | ERROR in Inference: Failed to load model ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite (Could not load model: Could not set VX delegate)
    object-detector_1 | gRPC call return code:
    object-detector_1 | Capture: 22 ms
    object-detector_1 | Inference grpc call: 227 ms
    object-detector_1 | Postprocess: 0 ms
    object-detector_1 |
    object-detector_1 | Caught frame 1 480x320
    object-detector_1 | Connecting to: inference-server:8501
    object-detector_1 | Waiting for response Z
    inference-server_1 | ERROR in Inference: Failed to load model ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite (Could not load model: Could not set VX delegate)
    object-detector_1 | gRPC call return code:
    .....

  • OS and version: Ubuntu 20.04.4 LTS (Microsoft-WSL2 x86_64)

  • Version: the sample project was pulled from the master branch of acap-computer-vision-sdk-examples(/object-detector-cpp), commit 636f437.

How can I interpret this Python exercise? Please help.

Entry – Sampling – Cycles –
Decisions – Strings

  1. Quantity: 5 times
  2. The data to work with are: file, name and surname, age, sex code, category code, category, salary, retirement.
    a) File: 1,2,3… i+1 is automatically generated
    b) Surname: all capital letters.
    c) Name: first capitalized, rest lowercase
    d) Age: values between 21 and 60. Validate.
    e) Sex code: validate F/M.
    f) Job category code: validate A/B/C/D
    g) Enter the number of overtime hours at 50%. – Enter the number of overtime hours at 100%. VALIDATE income, values between 0 and 15
    h) If the sex code is F, the sex is Female, otherwise it is Male.
    i) If the job category code is A, the category is SALESMAN and the salary is 72000, if the job category code is B, the category is CASHIER and the salary is 75000; if the category code is C, the category is ADMINISTRATIVE and the salary is 82000, otherwise the category is MAESTRANZA and the salary is 52000.
    j) Show surname and first name of the oldest person
    k) Show last name and name of the person with the least amount of overtime at 50%
    l) Count number of women, number of men.
    m) Number of people in each category.
    n) Accumulate salaries by category.
    o) Calculate retirement: 11% of salary – Social work: 3% of salary
    p) Calculate the value to be charged according to overtime
    q) Show data and results
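The validations and category mapping in steps d), e), i) and o) can be sketched as small Python helpers (the names and structure are my own illustration; only the values come from the exercise statement):

```python
def valid_age(age):
    """Step d): age must lie between 21 and 60 (inclusive)."""
    return 21 <= age <= 60

def valid_sex_code(code):
    """Step e): only the codes F and M are accepted."""
    return code in ("F", "M")

def category_info(code):
    """Step i): map a job category code to (category, salary).
    Any code other than A/B/C falls through to MAESTRANZA."""
    table = {
        "A": ("SALESMAN", 72000),
        "B": ("CASHIER", 75000),
        "C": ("ADMINISTRATIVE", 82000),
    }
    return table.get(code, ("MAESTRANZA", 52000))

def retirement_and_social(salary):
    """Step o): retirement is 11% of salary, social work is 3%."""
    return salary * 0.11, salary * 0.03
```

The main loop (step 1: repeat 5 times) would prompt for each field, re-prompting while a validator returns False, and then accumulate the per-category counters of steps l) to n).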

Error "connection reset by peer" after docker load

Issue description

We are having issues following the hello-world-python guide; at the "Install the image" step, we run the command:

docker save $APP_NAME | docker --host tcp://$DEVICE_IP:$DOCKER_PORT load

And we get the following error:

error during connect: Post "http://DEVICE_IP:DOCKER_PORT/v1.41/images/load?quiet=0": write tcp 10.0.0.4:43498->DEVICE_IP:DOCKER_PORT: write: connection reset by peer

I've correctly set DEVICE_IP and DOCKER_PORT to those of my device.
The connection works fine running the command of the previous step:

docker --host tcp://$DEVICE_IP:$DOCKER_PORT system prune --all --force

As we get the following output:

Total reclaimed space: 0B 

Any clue on how to solve this?
Thanks

Environment

  • Axis device model: Q1656-LE camera
  • Axis device firmware version: 11.4.63

Changing default detection model to run on Axis camera (Urgent)

Hi everyone,

I was developing a computer vision pipeline on an Axis Q1656-LE Box camera. I installed the Axis ACAP and Axis Computer Vision SDK using Docker, and everything is functional when I use the default detection model, SSD MobileNet V2 (screenshot omitted).

This detection model is configured in the following files, as far as I have learned.

  • app/object-detector-python/Dockerfile.model
ARG ARCH=aarch64

FROM arm64v8/alpine as model-image-aarch64

FROM model-image-${ARCH}

# Get SSD Mobilenet V2
ADD https://github.com/google-coral/edgetpu/raw/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite models/
ADD https://github.com/google-coral/edgetpu/raw/master/test_data/ssd_mobilenet_v2_coco_quant_postprocess.tflite models/
ADD https://github.com/google-coral/edgetpu/raw/master/test_data/coco_labels.txt models/
ADD https://github.com/google-coral/edgetpu/raw/master/test_data/COPYRIGHT models/

CMD /bin/ls /models
  • config/env.aarch64.artpec8
MODEL_PATH=/models/ssd_mobilenet_v2_coco_quant_postprocess.tflite
INFERENCE_SERVER_IMAGE=axisecp/acap-runtime:0.6-aarch64
INFERENCE_SERVER_COMMAND=/usr/bin/acap_runtime -p 8501 -j 12

What I need to do is replace this default detection model with another open-source model, such as EfficientDet-Lite0 (from the Coral AI model zoo here) or any other newly created model. I tried to change the two files above to accommodate these changes, but the Axis device keeps loading the default model SSD MobileNet V2 and returns an error.

  • app/object-detector-python/Dockerfile.model (Modified)
ARG ARCH=aarch64

FROM arm64v8/alpine as model-image-aarch64

FROM model-image-${ARCH}

# Get EfficientDet 0 Model
ADD https://raw.githubusercontent.com/google-coral/test_data/master/efficientdet_lite0_320_ptq_edgetpu.tflite models/
ADD https://raw.githubusercontent.com/google-coral/test_data/master/efficientdet_lite0_320_ptq.tflite models/
ADD https://github.com/google-coral/edgetpu/raw/master/test_data/coco_labels.txt models/
ADD https://github.com/google-coral/edgetpu/raw/master/test_data/COPYRIGHT models/

CMD /bin/ls /models
  • config/env.aarch64.artpec8
MODEL_PATH=/models/efficientdet_lite0_320_ptq_edgetpu.tflite
INFERENCE_SERVER_IMAGE=axisecp/acap-runtime:0.6-aarch64
INFERENCE_SERVER_COMMAND=/usr/bin/acap_runtime -p 8501 -j 12

Then, when I rebuild the object detector according to the instructions and run it, I receive the following error (screenshot omitted):

Thank you for your help in advance.

Deploying YOLOv8 to ARTPEC-8 cameras

Can ARTPEC-8 cameras run YOLOv8?

Could the Axis team please advise whether ARTPEC-8 DLPU chips can support running an int8 per-tensor quantized YOLOv8m? For reference, our team is seeing model loading errors when attempting to load this model into the inference server via the object_detector_python script. The errors point to the model containing more than the maximum of 16 graph partitions, and to certain operations not being supported on ARTPEC-8 cameras (see the docs for a full overview of the YOLOv8 architecture/layers), and we were curious to learn more about why this breaks. Any pointers would be much appreciated, thank you.

inference-server_1        | ERROR in Inference: Failed to load model yolov8m_int8.tflite (Could not load model: Model contains too many graph partitions (137 > 16) and 68 of the graph partitions can't be run on the device. Consider redesigning the model to better utilize the device. (Is it fully integer quantized? Is it using non-supported operations?))

To reproduce

  1. Export YOLOv8m to int8 per-tensor quantized .tflite weights using the exporter.py script made available here
    Quantization parameters:
  • int8
  • per-tensor
  2. Deploy the exported .tflite weights to the camera
  3. Run docker-compose with the below command, leveraging the object_detector_python scripts
docker compose --env-file ./config/env.$ARCH.$CHIP up
# env file:
ARCH=aarch64
CHIP=artpec8
  4. Observe the above failed model loading errors

Environment

  • Axis device model: AXIS P3267-LVE Dome Camera
  • Axis device firmware version: 11.3.70
  • Stack trace or logs: Console output when running
  • OS and version: [e.g. macOS v13.2.1]
  • Client and server application scripts: object_detector_python

Error with docker-compose

Running the command: docker-compose --tlsverify --host tcp://$DEVICE_IP:$DOCKER_PORT --env-file ./config/env.$ARCH.$CHIP up

Gives me an error:

[+] Running 0/1

  • inference-server Error 15.8s
    Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Camera: AXIS Q1615 MK III

Common Issues

This thread collects common issues and solutions that developers encounter when using our examples.
Before you go through this, make sure you are using the latest firmware on your camera; using old firmware can often cause issues.

gRPC UNAVAILABLE

Issue

When executing the example, I get the error:

<_InactiveRpcError of RPC that terminated with:
object-detector-python_1 | status = StatusCode.UNAVAILABLE
object-detector-python_1 | details = "failed to connect to all addresses"
object-detector-python_1 | debug_error_string = "{"created":"@1659425957.027178280","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3217,"referenced_errors":[{"created":"@1659425957.027173640","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":165,"grpc_status":14}]}"

Explanation

The status code UNAVAILABLE indicates that your application can't connect to the inference server (acap-runtime) that performs the inference. That usually happens for one of two reasons:

  1. The inference server failed to load your model.
  2. The inference server is not responding, because it is busy loading your model.

Suggestions

Let it run for 2-3 minutes. The first time a model is loaded it is converted internally, which can take a long time (we usually see about 1 minute to load SSD MobileNet V2 on a Q1656). After that, the model is converted and cached, and the rest of the execution should be smoother.

If that is not the case, run docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT ps
This command lists the containers running on your camera and helps you verify whether acap-runtime is still running.
If the acap-runtime container is missing, check the log of docker-compose up to see whether your inference server crashed; it usually prints something. If you can't find any log, ssh into your camera and run journalctl -u larod to get more information about the crash.
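To make a client tolerate the slow first model load instead of failing immediately, it can poll the inference server's TCP port before opening the gRPC channel. A minimal stdlib sketch (this assumes the server listens on a TCP port such as 8501; a unix-socket setup would need a different check):

```python
import socket
import time

def wait_for_inference_server(host, port, timeout_s=180, interval_s=5):
    """Poll until host:port accepts TCP connections, e.g. while the
    inference server is still converting and caching the model.
    Returns True once reachable, False if the deadline expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval_s):
                return True
        except OSError:
            time.sleep(interval_s)
    return False
```

For example, call `wait_for_inference_server("inference-server", 8501, timeout_s=180)` before constructing the gRPC stub, so the first Predict call doesn't fail with UNAVAILABLE.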

Can't replace default models

Issue

When I replace the models in the Dockerfile.model, I still see the default models being loaded, and my application can't find my model (screenshot omitted).

Explanation

When you terminate an execution, make sure to add the --volumes flag to your docker-compose down command; otherwise your old volume will remain on your camera and won't be overwritten when you update it.

Suggestions

Follow the instructions of the example and run docker compose down with the --volumes flag.
If that doesn't work, try cleaning all the volumes on the camera with: docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT volume prune -f

exec format error - during execution

Issue

When I run my example, I get this error:

object-detector-python-object-detector-python-1 | standard_init_linux.go:219: exec user process caused: exec format error

Explanation

You probably chose the wrong architecture when building the application.

Suggestions

Verify whether your camera is armv7hf or aarch64 and build your application with the correct ARCH flag.

exec format error - during build

Issue

When I build my example, I get this error:

---> Running in 27e79ec59704
standard_init_linux.go:228: exec user process caused: exec format error
The command '/bin/sh -c pip install RUN pip install Flask' returned a non-zero code: 1

Explanation

Docker can't run instructions built for another architecture; you probably didn't set up QEMU properly.

Suggestions

Follow the instructions in the example on how to install QEMU. You have probably missed running the line:
docker run -it --rm --privileged multiarch/qemu-user-static --credential yes --persistent yes

No space left on device

Issue

I get the error "no space left on device" when trying to install an example

Explanation

Storage on the camera is very limited; most cameras can't hold more than one lightweight Docker image.

Suggestions

Make sure to install an SD card in the camera, and specify in your Docker ACAP settings that you want to use it.
Look here for more details

Open camera feed from the object-detector-python example

Hello everyone,
we have an AXIS Q1615 Mk III Network Camera and we are getting an error when running one of the examples from object-detector-python.
Here are the details of how I run the example.

Values of environment variables:
export ARCH=armv7hf
export CHIP=tpu

export APP_NAME=acap4-object-detector-python
export MODEL_NAME=acap-dl-models

export DEVICE_IP=my-Ip-here
export DOCKER_PORT=2375

Command:
docker --host tcp://$DEVICE_IP:$DOCKER_PORT compose --env-file ./config/env.$ARCH.$CHIP up

Log:
✔ Container object-detector-python-inference-server-1 Created 0.0s
✔ Container object-detector-python-object-detector-python-1 Created 0.0s
✔ Container object-detector-python-acap_dl-models-1 Created 0.0s
Attaching to object-detector-python-acap_dl-models-1, object-detector-python-inference-server-1, object-detector-python-object-detector-python-1
object-detector-python-acap_dl-models-1 | COPYRIGHT
object-detector-python-acap_dl-models-1 | coco_labels.txt
object-detector-python-acap_dl-models-1 | ssd_mobilenet_v2_coco_quant_postprocess.tflite
object-detector-python-acap_dl-models-1 | ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object-detector-python-acap_dl-models-1 exited with code 0
object-detector-python-object-detector-python-1 | object-detector-python connect to: unix:///tmp/acap-runtime.sock
object-detector-python-object-detector-python-1 | Failed to initialize vdo stream
object-detector-python-object-detector-python-1 | Traceback (most recent call last):
object-detector-python-object-detector-python-1 | File "detector.py", line 136, in
object-detector-python-object-detector-python-1 | Detector().run()
object-detector-python-object-detector-python-1 | File "detector.py", line 103, in run
object-detector-python-object-detector-python-1 | self.run_camera_source()
object-detector-python-object-detector-python-1 | File "detector.py", line 114, in run_camera_source
object-detector-python-object-detector-python-1 | succeed, bounding_boxes, obj_classes, _ = self.detect(frame)
object-detector-python-object-detector-python-1 | File "detector.py", line 39, in detect
object-detector-python-object-detector-python-1 | image = image.astype(np.uint8)
object-detector-python-object-detector-python-1 | TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
object-detector-python-object-detector-python-1 exited with code 1

There seems to be a problem opening the camera feed through OpenCV; when calling cap.read() I get (False, None).

Any help will be highly appreciated.

Thanks in advance.
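The TypeError in the traceback above is a symptom, not the cause: cap.read() returned (False, None) and the None frame reached numpy's astype(). A small defensive wrapper (my own sketch, not part of the example code) surfaces the real failure, the uninitialized VDO stream, with a clear message:

```python
def read_frame(cap):
    """Read one frame from a capture object (e.g. cv2.VideoCapture) and
    fail loudly if the stream returned nothing, instead of letting a
    None frame reach numpy and raise a confusing TypeError later."""
    ok, frame = cap.read()
    if not ok or frame is None:
        raise RuntimeError(
            "Capture returned no frame: the VDO stream likely failed to "
            "initialize; check stream resolution/format and camera firmware")
    return frame
```

With this guard, the log would point at the failed stream instead of the opaque NoneType message.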

Build fails to install extra libraries with pip

I'm using the object-detector-python project as a base. I've made changes to detector.py that use some additional Python packages, shapely and pandas. I've edited the Dockerfile to include this:
ARG ARCH=armv7hf
ARG SDK_VERSION=1.2
ARG REPO=axisecp
ARG RUNTIME_IMAGE=arm32v7/ubuntu:20.04

FROM $REPO/acap-computer-vision-sdk:$SDK_VERSION-$ARCH AS cv-sdk
FROM ${RUNTIME_IMAGE}
COPY --from=cv-sdk /axis/python /
COPY --from=cv-sdk /axis/python-numpy /
COPY --from=cv-sdk /axis/python-tfserving /
COPY --from=cv-sdk /axis/opencv /
COPY --from=cv-sdk /axis/openblas /

WORKDIR /app
RUN pip install requests
RUN pip install jsonpickle
RUN pip install shapely
RUN pip install pandas
COPY app/* /app/
CMD ["python3", "detector.py"]

When I build the container using this command:
docker build . -t container --build-arg axisecp --build-arg armv7hf --build-arg arm32v7/ubuntu:20.04
The container fails to build; the output is attached.

The container builds successfully with the other two packages I added, requests and jsonpickle. It looks like it fails to install numpy, but there is already a version of numpy copied from /axis/python-numpy. Is there any way to successfully build the container with these packages?

Thanks
output.txt

Building a docker image

Hello
How are you?
Thanks for contributing to this project.
I have a quick question.
Which device and platform should the app Docker image be built on?
I can't see this in the README.
I think it should be a PC platform (e.g., Windows 10 or Ubuntu 20.04), right?

"RUN apt-get" in Dockerfile gives error: "unknown instruction: APT-GET"


Describe the bug

While trying "opencv-image-capture-cpp" and "parameter-api", the docker build gives an error

docker build --tag $APP_NAME --build-arg ARCH .

Sending build context to Docker daemon  17.92kB
Error response from daemon: dockerfile parse error line 15: unknown instruction: APT-GET

To reproduce

docker build --tag $APP_NAME --build-arg ARCH .

Environment

  • Axis device model: Q1656 LE
  • Axis device firmware version: 10.12.114
  • Stack trace or logs: [e.g. Axis device system logs]
  • OS and version: Ubuntu 20.04 LTS,
  • Version: SDK 1.5

Pose estimator - invalid output

Hello everyone.

We have a P1467-LE camera. We're trying the 'pose-estimator-with-flask' example, but when we execute

docker-compose --tlsverify --host tcp://$DEVICE_IP:$DOCKER_PORT --env-file ./config/env.$ARCH.$CHIP up

it returns

Error response from daemon: pull access denied for acap4-pose-estimator-python, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

and, in the camera log, we find:

2023-06-19T17:36:33.498+02:00 axis-b8a44f82b9a9 [ INFO ] dockerdwrapper[12194]: time="2023-06-19T17:36:33.496465080+02:00" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
2023-06-19T17:36:33.498+02:00 axis-b8a44f82b9a9 [ INFO ] dockerdwrapper[12194]: time="2023-06-19T17:36:33.496587200+02:00" level=info msg="Ignoring extra error returned from registry: unauthorized: authentication required"
2023-06-19T17:36:33.607+02:00 axis-b8a44f82b9a9 [ INFO ] dockerdwrapper[12194]: time="2023-06-19T17:36:33.606334960+02:00" level=warning msg="error aborting content ingest" digest="sha256:3fda7495b281a86f22bccae769d752db33a9e89af264570a1853b107390599a6" error="context canceled" remote="docker.io/axisecp/acap-runtime:1.2.0-aarch64-containerized"
2023-06-19T17:36:33.607+02:00 axis-b8a44f82b9a9 [ INFO ] dockerdwrapper[12194]: time="2023-06-19T17:36:33.606507960+02:00" level=warning msg="Error persisting manifest" digest="sha256:3fda7495b281a86f22bccae769d752db33a9e89af264570a1853b107390599a6" error="error writing manifest to content store: failed to send write: EOF: unknown"

Any idea about it?
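The "pull access denied for acap4-pose-estimator-python" error usually means that image does not exist on the device's Docker daemon, so Docker falls back to pulling a repository of that name from Docker Hub, which does not exist. One thing to check (an assumption, since the preceding steps aren't shown) is that the app image was built against the device's daemon before running `up`:

```shell
# Build the app image on the device's Docker daemon first,
# then start the services with the same flags
docker-compose --tlsverify --host tcp://$DEVICE_IP:$DOCKER_PORT \
    --env-file ./config/env.$ARCH.$CHIP build
docker-compose --tlsverify --host tcp://$DEVICE_IP:$DOCKER_PORT \
    --env-file ./config/env.$ARCH.$CHIP up
```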

Thank you very much in advance.

Problem loading yolov4 model

Discussed in #62

Originally posted by **JENNSHIUAN** on August 2, 2022
Hi, I have the same issue when loading my own tflite model.
I have already updated the paths to my model in Dockerfile.model and env.aarch64.artpec8:

Dockerfile.model:

# Get custom model
ADD ./models/yolov4/train-608_best.tflite models/
ADD ./models/yolov4/labels.txt models/

env.aarch64.artpec8:

MODEL_PATH=/models/train-608_best.tflite
OBJECT_LIST_PATH=/models/labels.txt
INFERENCE_SERVER_IMAGE=axisecp/acap-runtime:aarch64-containerized
INFERENCE_SERVER_ARGS="-m /models/train-608_best.tflite -j 12" 

This is the error message I encountered:

ERROR in Inference: Failed to load model train-608_best.tflite (Could not load model: Asynchronous connection has been closed)

<_InactiveRpcError of RPC that terminated with:
object-detector-python_1 | status = StatusCode.UNAVAILABLE
object-detector-python_1 | details = "failed to connect to all addresses"
object-detector-python_1 | debug_error_string = "{"created":"@1659425957.027178280","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3217,"referenced_errors":[{"created":"@1659425957.027173640","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":165,"grpc_status":14}]}"

Environment

  • Axis device model: P3265-LVE
  • Axis device firmware version: 10.11.76 (also tried 10.10.73 but didn't work)
  • SDK version: 1.2.1
  • docker daemon with Compose: 1.2.3

Please help me, thanks in advance.
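Before digging into the server side, it can be worth verifying that the file on the device really is a TensorFlow Lite flatbuffer and not, say, a Git LFS pointer or a truncated download; TFLite files carry the file identifier `TFL3` at byte offset 4. A small stand-alone check (the model path in the comment is taken from the report above):

```python
def looks_like_tflite(path: str) -> bool:
    """Return True if the file starts like a TensorFlow Lite flatbuffer."""
    with open(path, "rb") as f:
        header = f.read(8)
    # The FlatBuffer file identifier "TFL3" sits at bytes 4..8
    return len(header) == 8 and header[4:8] == b"TFL3"

# Usage on the device:
# looks_like_tflite("/models/train-608_best.tflite")  # expect True for a valid model
```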

Originally posted by @JENNSHIUAN in #50 (comment)

Issue obtaining images from camera source using a python script

Hello,

Using a Python script, I'm trying to read an image from the video stream and then write it to disk on my Axis camera device. The objective is to process the image with a DNN, similarly to the object-detector-python example, but my current issue is just reading the image from the video stream.

I have followed the steps described in the object-detector-python example, but for the following scene (a screenshot from the camera's web interface):

[image: screenshot of the scene from the camera's web interface]

I'm getting the following image:

[image: the captured frame, heavily distorted]

Next, I provide my specific case characteristics:

  • Camera model: AXIS P3268-LVE Dome Camera
  • Camera firmware: 10.12.165
  • Camera resolution: 1280x960 (4:3). This is the resolution used for the provided images, but I have already tried other resolutions, with similar results.

I also provide some reproducible code:

import cv2
from vdo_proto_utils import VideoCaptureClient

# Capture a single frame over the gRPC socket and write it to disk
stream_width, stream_height, stream_framerate = (1280, 960, 10)
grpc_socket = 'unix:///tmp/acap-runtime.sock'
capture_client = VideoCaptureClient(socket=grpc_socket,
                                    stream_width=stream_width,
                                    stream_height=stream_height,
                                    stream_framerate=stream_framerate)
frame = capture_client.get_frame()
cv2.imwrite('/app/output_image.jpg', frame)

I'm running that code inside a docker container based on a docker image built using the Dockerfile provided in the object-detector-python example.
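The distortion in the second image looks like a raw-format mismatch; a common cause (an assumption here, since the example's `VideoCaptureClient` normally handles the conversion) is treating an NV12 (YUV 4:2:0) buffer as if it were already BGR/RGB. A minimal NumPy sketch of that conversion using BT.601 coefficients, with hypothetical frame dimensions:

```python
import numpy as np

def nv12_to_rgb(nv12: np.ndarray, width: int, height: int) -> np.ndarray:
    """Convert an NV12 buffer (Y plane followed by interleaved UV rows) to RGB."""
    y = nv12[:height, :].astype(np.float32)
    uv = nv12[height:, :].reshape(height // 2, width // 2, 2).astype(np.float32)
    # Upsample the half-resolution chroma planes to full resolution
    u = np.repeat(np.repeat(uv[:, :, 0], 2, axis=0), 2, axis=1) - 128.0
    v = np.repeat(np.repeat(uv[:, :, 1], 2, axis=0), 2, axis=1) - 128.0
    # BT.601 YUV -> RGB
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)

# Smoke test: a uniform grey NV12 frame (4x4 Y plane + 2 UV rows) stays grey
frame = np.full((6, 4), 128, dtype=np.uint8)
rgb = nv12_to_rgb(frame, width=4, height=4)
```

If the buffer really is NV12, writing `rgb` (converted to BGR for OpenCV) instead of the raw frame should restore a normal-looking image.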

Thank you for your time.
