This thread collects common issues and solutions that developers encounter when using our examples.
Before you go through this list, make sure your camera is running the latest firmware; old firmware is a common source of issues.
gRPC UNAVAILABLE
Issue
When executing the example, I get the error:
<_InactiveRpcError of RPC that terminated with:
object-detector-python_1 | status = StatusCode.UNAVAILABLE
object-detector-python_1 | details = "failed to connect to all addresses"
object-detector-python_1 | debug_error_string = "{"created":"@1659425957.027178280","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3217,"referenced_errors":[{"created":"@1659425957.027173640","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":165,"grpc_status":14}]}"
Explanation
The status code UNAVAILABLE indicates that your application can't connect to the inference server (acap-runtime) that performs the inference. That usually happens for one of two reasons:
- The inference server failed to load your model.
- The inference server is not responding, because it is busy loading your model.
Suggestions
Let it run for 2-3 minutes. The first time a model is loaded it is converted internally, which can take a while (we usually see about 1 minute to load SSD MobileNetV2 on a Q1656). After that, the converted model is cached and the rest of the execution should be smoother.
If that is not the case, run docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT ps
This command lists the containers running on your camera and helps you verify whether acap-runtime is still running.
If the acap-runtime container is missing, check the log of docker-compose up to see whether the inference server crashed; it usually prints something. If you can't find any log, ssh into your camera and run journalctl -u larod to get more information about the crash.
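Since the first model load can take minutes, one option is to retry the first inference call instead of failing on the initial UNAVAILABLE error. This is a minimal sketch, not part of the examples; the call_with_retry helper and its defaults are assumptions, and the gRPC stub call you wrap it around depends on your own application:

```python
import time

def call_with_retry(fn, attempts=6, delay_s=30):
    """Retry a callable while the inference server may still be busy
    converting and loading the model (this can take a few minutes)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:  # e.g. grpc.RpcError with StatusCode.UNAVAILABLE
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay_s)
```

You would then wrap your first inference request, for example call_with_retry(lambda: stub.Predict(request)), where stub and request are whatever your application already uses.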
Can't replace default models
Issue
When I replace the models in Dockerfile.model, I still see the default models being loaded, and my application can't find my model.
Explanation
When you terminate an execution, make sure to add the --volumes flag to your docker-compose down command; otherwise the old volume will remain on your camera, and it won't be overwritten when you update your model.
Suggestions
Follow the instructions of the example and run docker-compose down with the --volumes flag.
If that doesn't work, clean all the volumes on the camera with: docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT volume prune -f
exec format error - during execution
Issue
When I run my example, I get this error:
object-detector-python-object-detector-python-1 | standard_init_linux.go:219: exec user process caused: exec format error
Explanation
You probably chose the wrong architecture while building the application.
Suggestions
Verify whether your camera is armv7hf or aarch64 and build your application with the matching ARCH flag.
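One way to find the right value is to check the camera's CPU with uname -m (for example over ssh) and map it to the ARCH flag. A minimal sketch; the map_arch helper is hypothetical and assumes the two architectures the examples support:

```shell
# map_arch: translate `uname -m` output to the ARCH build flag value.
# Hypothetical helper for illustration only.
map_arch() {
  case "$1" in
    aarch64) echo "aarch64" ;;
    armv7l)  echo "armv7hf" ;;
    *)       echo "unsupported: $1" >&2; return 1 ;;
  esac
}

# On the camera (e.g. via ssh root@$AXIS_TARGET_IP):
#   ARCH=$(map_arch "$(uname -m)")
# then pass ARCH as the build argument your example's build command expects.
```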
exec format error - during build
Issue
When I build my example, I get this error:
---> Running in 27e79ec59704
standard_init_linux.go:228: exec user process caused: exec format error
The command '/bin/sh -c pip install RUN pip install Flask' returned a non-zero code: 1
Explanation
Docker can't run instructions built for another architecture without emulation; you probably didn't set up qemu properly.
Suggestions
Follow the instructions in the example about how to install qemu. You have probably missed running:
docker run -it --rm --privileged multiarch/qemu-user-static --credential yes --persistent yes
No space left on device
Issue
I get the error "no space on device" when trying to install an example
Explanation
Storage on the camera is very limited; most cameras can't hold more than one lightweight Docker image.
Suggestions
Make sure to install an SD card in the camera, and specify in your Docker ACAP settings that you want to use it.
Look here for more details