
project-posenet's People

Contributors

jingw222, mtyka, namburger, scottamain


project-posenet's Issues

pose_camera fails to start with "failed to configure video mode"

Running python3 pose_camera.py --res 640x480 fails as follows:

$ python3 pose_camera.py --res 640x480 
Unable to init server: Could not connect: Connection refused
Unable to init server: Could not connect: Connection refused
Loading model:  models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
Gstreamer pipeline:  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! decodebin ! videoflip video-direction=identity ! tee name=t
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! freezer name=freezer ! rsvgoverlay name=overlay
                  ! videoconvert ! autovideosink
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! videoscale ! video/x-raw,width=641,height=480 ! videobox name=box autocrop=true
                  ! video/x-raw,format=RGB,width=641,height=481 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
            
Error: gst-core-error-quark: GStreamer error: negotiation problem. (7): gstkmssink.c(1059): gst_kms_sink_set_caps (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstKMSSink:autovideosink0-actual-sink-kms:
failed to configure video mode

python3 simple_pose.py seems to work just fine.
I've tested the camera with Motion-Project/motion to verify that it's working.

In case it helps:

$ uname -a
Linux raspberrypi 4.19.97-v7l+ #1294 SMP Thu Jan 30 13:21:14 GMT 2020 armv7l GNU/Linux
$ python3 --version
Python 3.7.3
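For what it's worth, the two "Unable to init server" lines suggest no X/Wayland session was reachable (e.g. running over plain ssh), so autovideosink fell back to kmssink, which then failed to negotiate a mode. One thing to try, sketched below under the assumption that the pipeline string is assembled in gstreamer.py as printed above, is pinning the sink explicitly instead of letting autovideosink pick:

# Hypothetical patch in gstreamer.py, applied to the pipeline string
# before Gst.parse_launch(): force a sink instead of autovideosink.
pipeline = pipeline.replace('autovideosink',
                            'ximagesink sync=false')  # needs an X session
# For a fully headless sanity check, 'fakesink' discards the display branch.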

AttributeError: 'Delegate' object has no attribute '_library' when trying to execute quant_decoder

@ivelin I went through precisely this a while back and am currently updating my implementation. I will do my best to describe what I did.

Does that mean EdgeTPU knows how to resolve the CustomOp reference in the graph and execute it?

Not quite, at least not as far as I can tell. It appears to be provided by the EdgeTPU library.

Is there a way to help TFLite resolve the CustomOp reference in the graph, or is that an EdgeTPU feature only?

I was previously unable to find a way.

Looks like one way to inform TFLite of custom ops is to rebuild it from source. However, that requires the CustomOp implementation to be available at build time.

Correct. Fortunately, it is. As I mentioned above, it appears to be part of the Edge TPU library. I simply build this code into my binary. You can find the op here: https://github.com/google-coral/edgetpu/blob/master/src/cpp/posenet/posenet_decoder_op.cc

Related code that is likely to be needed is available there as well. I have successfully used this CPU-only on Linux, and I am working on a build for Windows. I am not fond of the Bazel build system, particularly because, for whatever reason, TensorFlow does not use up-to-date versions of Bazel, and building on Windows is not easy. Frankly, neither is building on Linux, but at least a Docker image is available there.

Hello jwoolston! I'm trying to do the same right now, but I'm facing the following problem:

(decoder) [ec2-user@ip-172-31-5-112 mobilenet]$ vim run_model.py
(decoder) [ec2-user@ip-172-31-5-112 mobilenet]$ python run_model.py
Traceback (most recent call last):
  File "run_model.py", line 3, in <module>
    tpu = tflite.load_delegate('libedgetpu.so.1')
  File "/home/ec2-user/decoder/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 166, in load_delegate
    delegate = Delegate(library, options)
  File "/home/ec2-user/decoder/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 90, in __init__
    self._library = ctypes.pydll.LoadLibrary(library)
  File "/usr/local/lib/python3.7/ctypes/__init__.py", line 442, in LoadLibrary
    return self._dlltype(name)
  File "/usr/local/lib/python3.7/ctypes/__init__.py", line 364, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libedgetpu.so.1: cannot open shared object file: No such file or directory
Exception ignored in: <function Delegate.__del__ at 0x7fa6194d70e0>
Traceback (most recent call last):
  File "/home/ec2-user/decoder/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 125, in __del__
    if self._library is not None:
AttributeError: 'Delegate' object has no attribute '_library'

When executing this code:
import tflite_runtime.interpreter as tflite

tpu = tflite.load_delegate('../posenet-tflite-convert/data/edgetpu/libedgetpu/direct/aarch64/libedgetpu.so.1')
#posenet = tflite.load_delegate('posenet_decoder.so')
interpreter = tflite.Interpreter(model_path='posenet_mobilenet_v1_075_481_641_quant_decoder.tflite')
interpreter.allocate_tensors()

I used this docker to generate the .so libraries required:
https://github.com/muka/posenet-tflite-convert

But I'm not sure how to connect the libraries with my code or with the tflite_runtime.

Please, could you help me get a better picture of this problem, or tell me more about the solution you used?

I'm working on Amazon Linux 2.

Greetings!

Originally posted by @lupitia1 in #36 (comment)

adjust USB cam frame rate?

I've a lame webcam:

Index       : 1
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUYV 4:2:2
        Size: Discrete 1280x720
                Interval: Discrete 0.100s (10.000 fps)
                Interval: Discrete 0.133s (7.500 fps)
        Size: Discrete 640x480
                Interval: Discrete 0.033s (30.000 fps)
                Interval: Discrete 0.050s (20.000 fps)
                Interval: Discrete 0.067s (15.000 fps)
                Interval: Discrete 0.100s (10.000 fps)
                Interval: Discrete 0.133s (7.500 fps)

How can I adjust the frame rate from 30/1 to 10/1 so I can use 720p?

This is on the Coral dev board.
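The 30/1 rate is hard-coded into the caps string that gstreamer.py places in front of v4l2src (visible in the pipeline dumps elsewhere on this page). A minimal sketch of the change, assuming the caps are assembled roughly like this; the variable name is illustrative, not the repo's own:

# Illustrative source caps for the v4l2src branch: ask the camera for
# 10 fps so the 1280x720 mode listed above can be negotiated.
SRC_CAPS = 'video/x-raw,width={width},height={height},framerate=10/1'
# Then run: python3 pose_camera.py --res 1280x720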

Neural network

I would like to ask about the flow chart and architecture diagram of the neural network, and how to train my own model. Thank you!

Error: no module named 'edgetpu'

Traceback (most recent call last):
  File "simple_pose.py", line 18, in <module>
    from pose_engine import PoseEngine
  File "/home/pi/coral/tflite/python/examples/project-posenet/pose_engine.py", line 20, in <module>
    from edgetpu import __version__ as edgetpu_version
ModuleNotFoundError: No module named 'edgetpu'

Where can I collect the keypoints and do activity recognition?

Hi.
I'm going to use this for a human activity recognition project.
I want to collect the keypoints from each frame into a numpy array;
after getting a sequence of keypoints, I'll feed it into a model to predict human actions.

Where can I add such logic?
It seems that the streaming function just streams out all of the frames (gstreamer.py line 159, pipeline.set_state(Gst.State.PLAYING)).
Would someone give me a hint?
Thanks!
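One place to hang this is the render_overlay callback that pose_camera.py passes to run() (it appears in the tracebacks elsewhere on this page); it already receives the decoded poses for each frame, so you can log them before, or instead of, drawing. A minimal sketch, with the caveat that the attribute names (keypoints, yx, score) are taken from the current pose_engine.py and may differ between revisions:

import numpy as np

keypoint_log = []  # one entry per frame

def collect_keypoints(poses, threshold=0.2):
    # Keep (y, x, score) per keypoint for every detected person.
    frame = [np.array([(kp.yx[0], kp.yx[1], kp.score)
                       for kp in pose.keypoints.values()
                       if kp.score >= threshold])
             for pose in poses]
    keypoint_log.append(frame)

A fixed-length window over keypoint_log (say the last 30 frames) is then what you would feed to the action classifier.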

OpenPose model to Posenet?

Hi,
Is there any way you could recommend to convert an OpenPose model to a PoseNet one, and then optimize it for the Edge TPU?
Thanks!

Many thanks and one question

Thanks very much for the pose estimation APIs for Coral. My question: did you use OpenPose as the backbone network?

Error running ResNet50 on Pi 4 with USB Accelerator

The model I'm currently testing is posenet_resnet_50_640_480_16_quant_edgetpu_decoder.tflite.
Loading the model with the PoseEngine API seems to be fine, but as soon as the model starts to run inferences, it aborts and throws this error:

F :842] transfer on tag 2 failed. Abort. Deadline exceeded: USB transfer error 2 [LibUsbDataOutCallback]

And that's not the case with the MobileNet version posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite at all. What gives?

Hardware:

  • Raspberry Pi 4
  • Raspberry Pi Camera Module v2
  • Coral USB Accelerator (connected to a USB 3 port on the Pi)
  • Official USB-C power supply (3A)

Software:

  • python3-edgetpu v13.0
  • libedgetpu1-std:armhf v13.0

Unable to load/allocate model with TFLite API

Hi, if possible, what is needed to load one of the models using the TFLite API?

If not, what are the differences between the TFLite and Edge TPU APIs when dealing with the CustomOp? Where does it reside for this project?

interpreter = Interpreter(
    model_path='models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
interpreter.allocate_tensors()  # => RuntimeError

=> Encountered unresolved custom op: PosenetDecoderOp.Node number 1

Thank you.
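For reference, later revisions of this repository stopped using BasicEngine and instead load the decoder as a second TFLite delegate alongside the Edge TPU one, using the posenet_lib/<arch>/posenet_decoder.so shipped in the repo (mentioned in another issue on this page). A minimal sketch of that approach; paths assume the repo layout, and the Edge TPU library name varies by platform:

import os
from tflite_runtime.interpreter import Interpreter, load_delegate

model = 'models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite'
decoder = os.path.join('posenet_lib', os.uname().machine, 'posenet_decoder.so')

interpreter = Interpreter(
    model_path=model,
    experimental_delegates=[
        load_delegate('libedgetpu.so.1.0'),  # Edge TPU delegate
        load_delegate(decoder),              # resolves PosenetDecoderOp
    ])
interpreter.allocate_tensors()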

Random errors leading to core dumped

Host: Ubuntu 18.04, x86_64
Edge TPU runtime: Installed libedgetpu1-std from repo deb https://packages.cloud.google.com/apt coral-edgetpu-stable main, version 12-1
Demo: checked out master at 9a7b16c

After running the install_requirements.sh script, I run python3 pose_camera.py and get different errors. I've even tried different USB ports:

Loading model:  models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
Segmentation fault (core dumped)
Loading model:  models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
terminating with uncaught exception of type std::logic_error: basic_string::_M_construct null not valid
Aborted (core dumped)

I've not seen any pattern in how they appear, but the error is thrown at

engine = PoseEngine(model, mirror=args.mirror)
and
BasicEngine.__init__(self, model_path)

Any ideas what could be wrong?

FYI, I have tflite_runtime 2.0.0 installed, and I'm able to run my own tflite models converted to edgetpu models on the same host; therefore, I'm ruling out installation issues (lib location, permissions).

How to stream the video to a file?

I tried adding this line

videoconvert ! x264enc tune=zerolatency  ! mp4mux ! queue ! filesink location=v.mp4 sync=false

to the pipeline, but it looks like the EOS event is never sent to the pipeline, so the file is not playable, as reported by the Ubuntu movie player.
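That diagnosis sounds right: mp4mux only writes the moov index when it receives EOS, so a hard kill leaves the file unplayable. A minimal shutdown sketch, assuming pipeline is the Gst.Pipeline object that gstreamer.py creates with Gst.parse_launch():

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def stop_recording(pipeline):
    # Push EOS through the pipeline and wait for it to drain, so that
    # mp4mux can finalize the file before we tear everything down.
    pipeline.send_event(Gst.Event.new_eos())
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)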

DetectPosesInImage() takes forever to do inference

First of all, thanks for making this package open source. I'm using it on a Raspberry Pi 4B with an Edge TPU to do pose estimation.
The instructions in the README worked perfectly for me, and I was able to get the keypoint positions and confidence scores. However, I'm facing a problem when using the package in my own project: I have a method that uses DetectPosesInImage() to get the pose estimate from an image, and it gets stuck there, taking forever to do inference. I'm unable to find where the problem is.

Here is the code where it gets stuck:

        # All initial image processing starts here
        count += 1
        ok, frame = self.video.read()
        if not ok:
            break
        img = frame

        hsl_img = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
        hsl_img[:, :, 1] = cv2.equalizeHist(hsl_img[:, :, 1])
        img = cv2.cvtColor(hsl_img, cv2.COLOR_HLS2BGR_FULL)

        # All initial image processing ends here

        frame_ = cv2.resize(img, (641, 481))
        poses_, inference_time_ = self.pose_estimator.engine.DetectPosesInImage(frame_)

Additional information:
frame size = (480X640) # (widthxheight)
model = posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite

The method I implemented uses only 1 of the 4 cores available on the Raspberry Pi 4; I'm running it via Python's multiprocessing package.

Any help is much appreciated. Do let me know if additional information is required. Thanks in advance.
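Two things worth checking. First, delegate state generally does not survive a multiprocessing fork, so the PoseEngine should be constructed inside the worker process, not in the parent. Second, timing a single call on a synthetic frame separates engine latency from the OpenCV preprocessing; a minimal sketch, assuming engine is an already-constructed PoseEngine and that the reported time is in milliseconds:

import time
import numpy as np

frame = np.zeros((481, 641, 3), dtype=np.uint8)  # dummy input at model size
start = time.monotonic()
poses, inference_ms = engine.DetectPosesInImage(frame)
wall_ms = (time.monotonic() - start) * 1000
print('wall: %.1f ms, engine-reported: %.1f ms' % (wall_ms, inference_ms))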

gst-stream-error-quark: Internal data stream error on 1280x720

My camera source is 1920x1080 30fps. On a Raspberry Pi 4B aarch64, I have no problem running
python3 pose_camera.py --res 480x360
python3 pose_camera.py --res 640x480
They work seamlessly. However, I get an error on:
python3 pose_camera.py --res 1280x720

Output:

Loading model:  models/mobilenet/posenet_mobilenet_v1_075_721_1281_quant_decoder_edgetpu.tflite
Gstreamer pipeline:  v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=720,framerate=30/1 ! decodebin ! videoflip video-direction=identity ! tee name=t
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! freezer name=freezer ! rsvgoverlay name=overlay
                  ! videoconvert ! autovideosink
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! videoscale ! video/x-raw,width=1281,height=720 ! videobox name=box autocrop=true
                  ! video/x-raw,format=RGB,width=1281,height=721 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)

Any idea?

Can't run simple_pose.py

OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./posenet_lib/x86_64/posenet_decoder.so)

Any idea on this?

I compiled my own posenet_decoder as well, which results in a different error. Does this .so only work with Ubuntu 20?

"self._output_offsets" elements should be integers

I'm running the latest version of PoseNet on a Google Coral TPU connected to a Jetson Nano.
My Python version is 3.6.9

I'm calling the following function in my code:
poses, inference_time = engine.DetectPosesInImage(np.uint8(resized))
Where resized is an image with the following shape: (481, 641, 3)
And I get the following error:

Traceback (most recent call last):
  File "/home/gabby/.vscode-server/extensions/ms-python.python-2020.3.71113/pythonFiles/ptvsd_launcher.py", line 48, in <module>
    main(ptvsdArgs)
  File "/home/gabby/.vscode-server/extensions/ms-python.python-2020.3.71113/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 432, in main
    run()
  File "/home/gabby/.vscode-server/extensions/ms-python.python-2020.3.71113/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 316, in run_file
    runpy.run_path(target, run_name='__main__')
  File "/usr/lib/python3.6/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/gabby/Downloads/project-posenet/testPoint.py", line 584, in <module>
    poses, inference_time = engine.DetectPosesInImage(np.uint8(resized))
  File "/home/gabby/Downloads/project-posenet/pose_engine.py", line 125, in DetectPosesInImage
    return self.ParseOutput(self.run_inference(img.flatten()))
  File "/home/gabby/Downloads/project-posenet/pose_engine.py", line 129, in ParseOutput
    outputs = [output[i:j] for i, j in zip(self._output_offsets, self._output_offsets[1:])]
  File "/home/gabby/Downloads/project-posenet/pose_engine.py", line 129, in <listcomp>
    outputs = [output[i:j] for i, j in zip(self._output_offsets, self._output_offsets[1:])]
TypeError: slice indices must be integers or None or have an index method
Terminated

The error comes from this function:

def ParseOutput(self, output):
    inference_time, output = output
    outputs = [output[i:j] for i, j in zip(self._output_offsets, self._output_offsets[1:])]

The array self._output_offsets has the following elements: [0, 340.0, 510.0, 520.0, 521.0].
It gets its elements from here:

for size in self.get_all_output_tensors_sizes():
    offset += size
    self._output_offsets.append(offset)

I solved the error by casting the elements to integers:
self._output_offsets.append(int(offset))
Now it works perfectly and correctly detects poses.
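For anyone landing here with the same TypeError, the minimal version of the fix described above, applied where pose_engine.py accumulates the offsets (on some tflite_runtime versions get_all_output_tensors_sizes() apparently yields floats, hence the cast):

self._output_offsets = [0]
offset = 0
for size in self.get_all_output_tensors_sizes():
    offset += size
    self._output_offsets.append(int(offset))  # cast keeps slice indices integral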

The latest Edge TPU runtime (v15.0) possibly breaks old models?

Running the models against the latest Edge TPU runtime gave me an error that had previously been caused by package upgrades.

python simple_pose.py
Traceback (most recent call last):
  File "simple_pose.py", line 25, in <module>
    engine = PoseEngine('models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite')
  File "/home/pi/WorkingDirectory/opensource/project-posenet/pose_engine.py", line 85, in __init__
    BasicEngine.__init__(self, model_path)
  File "/home/pi/.pyenv/versions/dash_app/lib/python3.7/site-packages/edgetpu/basic/basic_engine.py", line 92, in __init__
    self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: Internal: Unsupported data type in custom op handler: 1006632960Node number 0 (edgetpu-custom-op) failed to prepare.
Failed to allocate tensors.

Here's the setup on my Pi 4 running Raspbian Buster.

libedgetpu1-max:armhf==15.0
pycoral==1.0.0
tflite-runtime==2.5.0

That was not an issue previously under

libedgetpu1-max:armhf==14.1
python3-edgetpu==??
tflite-runtime==2.1.0

which I can't figure out how to downgrade to.

Can't run simple_pose.py

mendel@tuned-ibis:~/project-posenet$ python3 simple_pose.py

--2020-12-24 14:17:45-- https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Hindu_marriage_ceremony_offering.jpg/640px-Hindu_marriage_ceremony_offering.jpg
Resolving upload.wikimedia.org (upload.wikimedia.org)... 103.102.166.240, 2001:df2:e500:ed1a::2:b
Connecting to upload.wikimedia.org (upload.wikimedia.org)|103.102.166.240|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 83328 (81K) [image/jpeg]
Saving to: ‘couple.jpg’

couple.jpg 100%[======================================================>] 81.38K 509KB/s in 0.2s

2020-12-24 14:17:46 (509 KB/s) - ‘couple.jpg’ saved [83328/83328]

Traceback (most recent call last):
  File "simple_pose.py", line 25, in <module>
    engine = PoseEngine('models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite')
  File "/home/mendel/project-posenet/pose_engine.py", line 85, in __init__
    BasicEngine.__init__(self, model_path)
  File "/home/mendel/.local/lib/python3.7/site-packages/edgetpu/basic/basic_engine.py", line 40, in __init__
    self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 0 (edgetpu-custom-op) failed to prepare.
Failed to allocate tensors.

How to solve this? Thanks, all.

Pi 4 w/ 4GB RAM getting a memory allocation error

I have a USB accelerator and a Raspberry Pi 4 w/ 4GB RAM. I've installed the software with no errors, but when I run python3 pose_camera.py, I get this output:

> python3 pose_camera.py 2>&1 | tee output
Error: gst-resource-error-quark: Failed to allocate required memory. (13): gstv4l2src.c(658): gst_v4l2src_decide_allocation (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Buffer pool activation failed
Loading model:  models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
Gstreamer pipeline:  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! decodebin ! videoflip video-direction=identity ! tee name=t
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! freezer name=freezer ! rsvgoverlay name=overlay
                  ! videoconvert ! autovideosink
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! videoscale ! video/x-raw,width=641,height=480 ! videobox name=box autocrop=true
                  ! video/x-raw,format=RGB,width=641,height=481 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true

I'm not really sure why I'm getting a memory allocation error - I should have more than enough for this application with 4GB RAM, right?

Failed to allocate tensors.

Device: Coral Dev board
After running the install_requirements.sh script, I run python3 simple_pose.py and get different errors.

mendel@bored-horse:~/project-posenet$ python3 simple_pose.py
--2020-01-31 05:12:31-- https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Hindu_marriage_ceremony_offering.jpg/640px-Hindu_marriage_ceremony_offering.jpg
Resolving upload.wikimedia.org (upload.wikimedia.org)... 103.102.166.240, 2001:df2:e500:ed1a::2:b
Connecting to upload.wikimedia.org (upload.wikimedia.org)|103.102.166.240|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 83328 (81K) [image/jpeg]
Saving to: ‘couple.jpg’

couple.jpg 100%[====================================================================================================================>] 81.38K --.-KB/s in 0.1s

2020-01-31 05:12:32 (841 KB/s) - ‘couple.jpg’ saved [83328/83328]

Traceback (most recent call last):
  File "simple_pose.py", line 25, in <module>
    engine = PoseEngine('models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite')
  File "/home/mendel/project-posenet/pose_engine.py", line 85, in __init__
    BasicEngine.__init__(self, model_path)
  File "/usr/lib/python3/dist-packages/edgetpu/basic/basic_engine.py", line 92, in __init__
    self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: Internal: Unsupported data type: 1Node number 0 (edgetpu-custom-op) failed to prepare.
Failed to allocate tensors.

Let me ask you a question.

How many fps can the Coral Dev Board achieve? I need realtime pose reporting, but the Raspberry Pi's performance is lower than I expected.
Thank you.
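As a rough ceiling, you can measure this yourself from the per-frame latency the engine reports (assumed to be milliseconds); the end-to-end camera pipeline will be slower than this model-only number. A sketch, assuming engine and frame are set up as in simple_pose.py:

# Model-only throughput estimate from one inference:
poses, inference_ms = engine.DetectPosesInImage(frame)
print('model-only ceiling: %.1f fps' % (1000.0 / inference_ms))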

Can the included model be loaded by "Edge TPU runtime library" (on mac) and run with the USB Accelerator?

Hi,

I downloaded and unpacked the Edge TPU runtime library on macOS using:


https://dl.google.com/coral/edgetpu_api/edgetpu_runtime_20201105.zip

and I run the fallowing code (from coral detection example) to load the posenet model.


import argparse
import platform
import time
import tensorflow.lite as tflite
import numpy as np

EDGETPU_SHARED_LIB = {
  'Linux': 'libedgetpu.so.1.0',
  'Darwin': 'libedgetpu.1.dylib',
  'Windows': 'edgetpu.dll'
}[platform.system()]

def make_interpreter(model_file):
  model_file, *device = model_file.split('@')
  return tflite.Interpreter(
      model_path=model_file,
      experimental_delegates=[
          tflite.experimental.load_delegate(EDGETPU_SHARED_LIB,
                               {'device': device[0]} if device else {})
      ])

model_path='models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite'
interpreter = make_interpreter(model_path)
interpreter.allocate_tensors()

Then I get the error


RuntimeError                              Traceback (most recent call last)
<ipython-input-…> in <module>
     11 labels = load_labels(labels_path) if labels_path else {}
     12 interpreter = make_interpreter(model_path)
---> 13 interpreter.allocate_tensors()

~/.pyenv/versions/3.8.6/envs/tf_2.3.0/lib/python3.8/site-packages/tensorflow/lite/python/interpreter.py in allocate_tensors(self)
    241   def allocate_tensors(self):
    242     self._ensure_safe()
--> 243     return self._interpreter.AllocateTensors()
    244 
    245   def _safe_to_run(self):
RuntimeError: Encountered unresolved custom op: PosenetDecoderOp.Node number 1 (PosenetDecoderOp) failed to prepare.

I ran the example detection model and it worked well.
Does the latest library not support the PosenetDecoderOp yet?

Stop execution

Hello,

Is it possible to stop execution of the pose_camera.py script dynamically, i.e. not with the CTRL+C command?

Thank you!
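One way, assuming you can reach the Gst.Pipeline object that gstreamer.py builds: send it EOS from whatever trigger you like (a timer, a keypress handler, a network message). EOS unwinds the main loop the same way end-of-stream from a file source would. A minimal sketch:

import threading

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def stop_after(pipeline, seconds):
    # Schedule a clean shutdown: EOS propagates to the sinks and the
    # main loop exits, instead of the process being killed with CTRL+C.
    threading.Timer(seconds,
                    lambda: pipeline.send_event(Gst.Event.new_eos())).start()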

Multi-person detection issues

I've been having some issues when there are multiple people in the frame: the model connects the people together instead of recognizing the separate joints.
According to this article
https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5
there is a separate PoseNet model (or algorithm, it looks like) that works better when there are multiple people in the frame.
Is this pre-trained model (or algorithm) also available for the Coral-optimized code?
Thanks!

no element "glvideoflip" error for coral dev board

pose_camera.py seems to be failing with:

$> python3 pose_camera.py 
Loading model:  models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
Detected Edge TPU dev board.
Gstreamer pipeline:  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! decodebin ! glupload ! glvideoflip video-direction=identity ! tee name=t
               t. ! queue max-size-buffers=1 leaky=downstream ! freezer name=freezer ! glsvgoverlaysink name=overlaysink
               t. ! queue max-size-buffers=1 leaky=downstream ! glfilterbin filter=glbox name=glbox ! video/x-raw,format=RGB,width=641,height=481 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
            
Traceback (most recent call last):
  File "pose_camera.py", line 163, in <module>
    main()
  File "pose_camera.py", line 159, in main
    run(run_inference, render_overlay)
  File "pose_camera.py", line 124, in run
    jpeg=args.jpeg
  File "<...>/project-posenet/gstreamer.py", line 368, in run_pipeline
    pipeline = GstPipeline(pipeline, inf_callback, render_callback, src_size)
  File "<...>/project-posenet/gstreamer.py", line 42, in __init__
    self.pipeline = Gst.parse_launch(pipeline)
GLib.Error: gst_parse_error: no element "glvideoflip" (1)

Is it possible to run the Custom OP on CPU with TFLite?

Coral team, thank you for the great example.

We are working on an open source project that allows users to take advantage of EdgeTPU when available. Otherwise inference falls back to the CPU.

The README for this PoseNet example states that the CustomOP is embedded in the graph itself. Does that mean EdgeTPU knows how to resolve the CustomOp reference in the graph and execute it?

Trying to run the graph on TFLite on CPU without EdgeTPU produces the following error, which is a known limitation of TFLite:

def AllocateTensors(self):
>       return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
E       RuntimeError: Encountered unresolved custom op: PosenetDecoderOp.Node number 32 (PosenetDecoderOp) failed to prepare.

Is there a way to help TFLite resolve the CustomOp reference in the graph, or is that an EdgeTPU feature only?

Looks like one way to inform TFLite of custom ops is to rebuild it from source. However, that requires the CustomOp implementation to be available at build time.

Any guidance would be appreciated.

Thank you,

Ivelin
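Follow-up for later readers: the decoder eventually shipped in this repo as posenet_lib/<arch>/posenet_decoder.so, and on CPU it can be attached as a TFLite delegate, with no rebuild of TFLite required. A minimal sketch, assuming a CPU model file (no _edgetpu suffix) sits next to the Edge TPU ones:

import os
from tflite_runtime.interpreter import Interpreter, load_delegate

decoder = os.path.join('posenet_lib', os.uname().machine, 'posenet_decoder.so')
interpreter = Interpreter(
    model_path='models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder.tflite',
    experimental_delegates=[load_delegate(decoder)])
interpreter.allocate_tensors()  # PosenetDecoderOp now resolves on CPU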

gi.repository.GLib.Error: gst_parse_error: no element "freezer"

(synthesizer.py:3621): GStreamer-WARNING **: 22:02:24.716: Element factory metadata for 'freezer' has no valid long-name field
fluidsynth: warning: Requested a period size of 64, got 444 instead
Loading model:  models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
Gstreamer pipeline:  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! decodebin ! videoflip video-direction=identity ! tee name=t
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! freezer name=freezer ! rsvgoverlay name=overlay
                  ! videoconvert ! autovideosink
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! videoscale ! video/x-raw,width=641,height=480 ! videobox name=box autocrop=true
                  ! video/x-raw,format=RGB,width=641,height=481 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
            
Traceback (most recent call last):
  File "synthesizer.py", line 155, in <module>
    main()
  File "synthesizer.py", line 151, in main
    pose_camera.run(run_inference, render_overlay)
  File "/home/pi/workspace/project-posenet/pose_camera.py", line 124, in run
    jpeg=args.jpeg
  File "/home/pi/workspace/project-posenet/gstreamer.py", line 368, in run_pipeline
    pipeline = GstPipeline(pipeline, inf_callback, render_callback, src_size)
  File "/home/pi/workspace/project-posenet/gstreamer.py", line 42, in __init__
    self.pipeline = Gst.parse_launch(pipeline)
gi.repository.GLib.Error: gst_parse_error: no element "freezer" (1)

I am trying to run this on a Raspberry Pi with an Edge TPU but get the above error. How can I fix it?

Segmentation fault when running on RASPI4 w/ Coral

Is this example supposed to also run on a Raspberry Pi 4, or am I just wasting my time?

Independent of the model used, I get this error.

Loading model: models/mobilenet/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
[1] 18593 segmentation fault python3 anonymizer.py

'PoseEngine' object has no attribute 'run_inference'

I get the error:
AttributeError: 'PoseEngine' object has no attribute 'run_inference' when I run pose_camera.py.
There doesn't seem to be a method named run_inference in pose_engine.py.
Could you please let me know if I'm missing something?

Thanks

gdk_monitor_get_scale_factor: assertion 'GDK_IS_MONITOR (monitor)' failed

I am trying to run python3 pose_camera.py --h264 and I get the following error:

Loading model:  models/posenet_mobilenet_v1_075_481_641_quant_decoder_edgetpu.tflite
Detected Edge TPU dev board.
Gstreamer pipeline:  v4l2src device=rtsp://administrator:@192.168.1.110/defaultPrimary?streamType=m ! video/x-h264,width=640,height=480,framerate=30/1 ! decodebin ! glupload ! glvideoflip video-direction=identity ! tee name=t
               t. ! queue max-size-buffers=1 leaky=downstream ! freezer name=freezer ! glsvgoverlaysink name=overlaysink
               t. ! queue max-size-buffers=1 leaky=downstream ! glfilterbin filter=glbox name=glbox ! video/x-raw,format=RGB,width=641,height=481 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
            

(pose_camera.py:3988): Gdk-CRITICAL **: 11:18:22.098: gdk_monitor_get_scale_factor: assertion 'GDK_IS_MONITOR (monitor)' failed

Note that I am getting the video source from an RTSP link from a camera. The Mendel version and Python API version I am using are:

  • Python API Version ---> 2.13.0 ---> pip3 list

  • Mendel Version --> 4.0 (Mendel Day) ---> cat /etc/mendel_version

GStreamer Segmentation Fault on Raspberry Pi 4

I ran install_requirements.sh but I'm getting this issue when running pose_camera.py on a Raspberry Pi 4 (using VNCViewer over ssh for remote display):

Gstreamer pipeline:  v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480,framerate=30/1 ! decodebin ! videoflip video-direction=identity ! tee name=t
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! freezer name=freezer ! rsvgoverlay name=overlay
                  ! videoconvert ! autovideosink
               t. ! queue max-size-buffers=1 leaky=downstream ! videoconvert ! videoscale ! video/x-raw,width=641,height=480 ! videobox name=box autocrop=true
                  ! video/x-raw,format=RGB,width=641,height=481 ! appsink name=appsink emit-signals=true max-buffers=1 drop=true
            
Segmentation fault

Any suggestions?

pose output order

Does anyone know the ordering criterion for the output poses? To be clear, when I iterate over the poses, which one is first, which is second, etc. with respect to the scene?

Disable video display in pose_camera.py

Hi,
I need to disable the video output displayed by pose_camera.py.
I can't work out where in the code the video is displayed on screen. Could someone please point me to it?
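The display element is the autovideosink at the end of the first tee branch in the pipeline string that gstreamer.py assembles (see the pipeline dumps elsewhere on this page); the appsink branch that feeds inference is separate and unaffected. A hypothetical one-line patch, hedged because the exact string differs between revisions:

# In gstreamer.py, before Gst.parse_launch(pipeline): swap the display
# sink for one that silently discards buffers.
pipeline = pipeline.replace('autovideosink', 'fakesink sync=false')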

Run headless on Dev Board inside a Container

I am trying to run PoseNet inside a Docker container on the Dev Board. For the container, I am using a debian:latest base image and have done an apt-get install weston within it.

Running pose_camera.py on the base Mendel install works fine. However, inside the container it crashes with a glfilterbin error. The weird thing is that if I attach a monitor, it runs fine inside the same container.

Are there some additional configurations needed for the Docker container to make gstreamer happy without a monitor?

Here is the error I am getting:

Loading all_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite with all_models/coco_labels.txt labels.
Detected Edge TPU dev board.
Gstreamer pipeline:  v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! queue max-size-buffers=1 leaky=downstream  ! glupload ! tee name=t
                t. ! queue max-size-buffers=1 leaky=downstream ! glfilterbin filter=glcolorscale
                ! video/x-raw,format=RGBA,width=320,height=180 ! videoconvert ! video/x-raw,format=RGB,width=320,height=180 ! appsink name=appsink sync=false emit-signals=true max-buffers=1 drop=true
            
error: XDG_RUNTIME_DIR not set in the environment.
Error: gst-resource-error-quark: Failed to initialize egl: EGL_NOT_INITIALIZED (3): gstglbasefilter.c(395): gst_gl_base_filter_decide_allocation (): /GstPipeline:pipeline0/GstGLFilterBin:glfilterbin0/GstGLDownloadElement:gldownloadelement0
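The "XDG_RUNTIME_DIR not set" line is the immediate blocker: inside the container nothing has created the per-session runtime directory that Wayland/EGL expect, which a login session provides on the base Mendel install. A minimal sketch of the setup (the path is arbitrary; whether the EGL error then clears depends on the container being able to reach the GPU device nodes):

import os
import pathlib

# Create the runtime dir a session manager would normally provide.
rt = pathlib.Path('/tmp/runtime-posenet')
rt.mkdir(mode=0o700, exist_ok=True)
os.environ['XDG_RUNTIME_DIR'] = str(rt)  # must be set before Gst/EGL init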

Resizing video output on Coral Dev Board with HDMI monitor

I'm trying to get the output video from the pose_camera demo to run full-screen on the Dev Board through an HDMI monitor. Is this supported? How can I modify the demo to change just the output video without increasing the input image size? I'm not interested in resolution; I just want to fill the whole screen on a 4K monitor.

about API 2.11.1 upgrade

As you may know, we cannot upgrade the Dev Board because we cannot reach google.com or similar web sites. I tried copying libedgetpu_arm64.so to /usr/lib/aarchxxxx and using it under C++. The demo says it cannot recognize DecompOP with version 1, which means the Dev Board is still on version 1.0, while the lib is the version 2.11.1 API. How can we work around this without upgrading over the internet?
Thanks for any suggestion.

project-posenet and others not working with enterprise-eagle-20200724205123

We have 32 Coral Dev Boards, and most of the demos now fail after flashing enterprise-eagle-20200724205123 because edgetpu (https://github.com/google-coral/edgetpu) has been branded legacy and replaced by libedgetpu.

mendel@k3s-tpu-09:~/project-posenet$ python3 simple_pose.py
Traceback (most recent call last):
  File "simple_pose.py", line 18, in <module>
    from pose_engine import PoseEngine
  File "/home/mendel/project-posenet/pose_engine.py", line 20, in <module>
    from edgetpu import __version__ as edgetpu_version
ModuleNotFoundError: No module named 'edgetpu'

Are there any plans to update the example code soon?
Can I substitute libedgetpu easily?
