
dusty-nv / jetson-utils

679 stars / 36 watchers / 282 forks / 1.33 MB

C++/CUDA/Python multimedia utilities for NVIDIA Jetson

License: MIT License

CMake 3.89% C++ 62.67% Cuda 7.02% C 25.18% Python 1.23% Shell 0.01%

jetson-utils's Introduction

jetson-utils

C++/CUDA/Python multimedia utilities for NVIDIA Jetson:

/ Filesystem, CSV/JSON/XML parsing, command-line
camera/ GStreamer-based camera capture (V4L2, MIPI CSI)
codec/ GStreamer-based hardware video encoder/decoder
cuda/ CUDA image processing functions
display/ OpenGL window & rendering
image/ Image loading & saving
input/ Human Interface Devices (HID) from /dev/input
network/ Sockets, IPv4/IPv6, WebRTC/RTSP server
python/ Python bindings and examples
threads/ Multithreading, locks, and events
video/ Video streaming interfaces

Documentation

Documentation for jetson-utils can be found in the jetson-inference API Reference.

Building from Source

jetson-utils is typically built as a submodule of jetson-inference, but it can also be compiled/installed standalone:

git clone https://github.com/dusty-nv/jetson-utils
cd jetson-utils
mkdir build
cd build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig

If you're missing dependencies, run the jetson-inference/CMakePreBuild.sh script.


jetson-utils's Issues

Compact format for gstCamera

I am using the gstCamera code to capture images with the Jetson TX2 dev kit's onboard CSI camera. My application is to collect images for deep learning training. My deep learning network's input is 640x480 color images. Since I need to collect a lot of images, I really need a compact format to reduce my data size. The smallest format the gstCamera.cpp code in this repo supports is 1280x720 10-bit RGBA, which is too much for me. Is there a way to capture 640x480 YUY2 format?
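
For reference, requesting a smaller capture size looks like the untested sketch below. It assumes a recent jetson-utils where gstCamera::Capture() is templated on the pixel type; the library converts the sensor output internally, so whether a compact raw format like YUY2 is exposed depends on the camera driver, not on this call.

#include "gstCamera.h"
#include <cstdio>

int main()
{
    // request 640x480 capture; NULL selects the default MIPI CSI sensor
    gstCamera* camera = gstCamera::Create(640, 480, NULL);

    if( !camera || !camera->Open() )
        return 1;

    uchar3* img = NULL;   // RGB8 frame in shared CPU/GPU memory

    if( camera->Capture(&img, 1000) )   // 1000ms timeout
        printf("captured %ux%u frame\n", camera->GetWidth(), camera->GetHeight());

    camera->Close();
    delete camera;
    return 0;
}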

TX2 dev kit CSI camera strong motion blur

I am using the camera code to drive the Jetson TX2 dev kit's onboard CSI camera. The board was on a robocar running at 12-15 mph. I found the captured images had severe motion blur. Please see this video: http://www.artlystyles.com/tmp/cl_full.mp4

I created the camera as

myCamera = gstCamera.create(640, 360);

I noticed that when the camera initialized itself, it printed out some config:

`Available Sensor modes :

2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10

2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10

1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10`

None of them match my settings of width=640, height=360. So which one of the three modes is used in my case? Also, what is the frame rate in my case?

How can I use gstEncoder to push raw H264 over UDP?

Hi professor:
I want to push raw H264 over UDP; how can I do that?
I use a gst pipeline like this:
[gstreamer] appsrc name=mysource is-live=true do-timestamp=true format=3 ! omxh264enc bitrate=2000000 ! video/x-h264 ! udpsink host=172.16.100.240 port=9999 auto-multicast=true

The program runs with no errors. BUT!
When I use VLC to play the stream, I get nothing! With a UDP debug tool I can see hex data arriving. So how can gstEncoder realize this function? Please help me!
@socieboy @dusty-nv
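
For what it's worth, raw H264 in bare UDP packets carries no container or RTP payloading, which is typically why VLC shows nothing even though bytes arrive. jetson-utils can payload the stream as RTP itself via an rtp:// output URI; a hedged, untested sketch (host/port taken from the question above, and VLC then generally needs an .sdp file describing the stream):

#include "videoOutput.h"
#include "cudaMappedMemory.h"

int main()
{
    // an rtp:// output URI makes jetson-utils send RTP-payloaded H264
    videoOutput* output = videoOutput::Create("rtp://172.16.100.240:9999");

    if( !output )
        return 1;

    const int width = 1280, height = 720;
    uchar3* frame = NULL;   // placeholder frame in shared CPU/GPU memory

    cudaAllocMapped((void**)&frame, width * height * sizeof(uchar3));

    for( int n=0; n < 300; n++ )               // ~10s of test frames
        output->Render(frame, width, height);  // encodes and sends each frame

    delete output;
    return 0;
}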

How to send/receive packets over a socket?

Hi, I have an application which requires me to write the client side of a network. In the first part I have to write three functions: one which sets up a socket and connects to the server, one which sends a packet, and one which receives a packet.
How do I use the Socket class in Socket.cpp?
Doesn't this class have a test sample?
It would be useful to know!
Best of luck!
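
The jetson-utils Socket class wraps standard BSD sockets, so in the absence of a sample, here is a plain POSIX sketch of the three requested functions (connect, send, receive). This uses the standard C socket API directly rather than the library's Socket class:

#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>

// open a TCP socket and connect to ip:port, returning the fd (or -1 on error)
int connectToServer( const char* ip, uint16_t port )
{
    const int fd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    return (connect(fd, (sockaddr*)&addr, sizeof(addr)) == 0) ? fd : -1;
}

// send one packet (returns bytes sent, or -1 on error)
ssize_t sendPacket( int fd, const void* data, size_t size )
{
    return send(fd, data, size, 0);
}

// receive one packet (returns bytes received, 0 on close, -1 on error)
ssize_t recvPacket( int fd, void* data, size_t size )
{
    return recv(fd, data, size, 0);
}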

Running video-viewer, GPU usage around 100%, Jetson shuts itself down

ENV:

device1: jetson nano 4G
device2: Sony camera, 2160x2048, FCB-CV7520A
jetson power mode: MAXN
test software or app: video-viewer (jetson-utils/build/)
test tools: gpuGraphTX https://github.com/jetsonhacks/gpuGraphTX

Hello
I am using video-viewer to test camera and video on the Jetson Nano, and I use gpuGraphTX to check the GPU status. When I run "./video-viewer /dev/video*" to display the video stream, the GPU usage goes up to around 100%, and sometimes the Jetson Nano shuts itself down. However, when I use the mplayer tool to display the video stream, e.g. running "mplayer tv:// -tv driver=v4l2:device=/dev/video*", the Jetson works fine and never shuts itself down.

It seems the video device is OK, and I don't know if there is a problem using video-viewer with a 2160x2048 device.

Need to read the stream with only 3 channels (RGB), not 4 channels (RGBA). How to do it?

I'm doing object detection using the TensorFlow object detection API. For that, the detection model accepts an input tensor of shape [<tf.Tensor 'image_tensor:0' shape=(None, None, None, 3) dtype=uint8>], but after using cudaToNumpy the shape is (None, None, None, 4). I tried to convert the 4-channel feed to a 3-channel feed, but there is a loss of information.

Captured by GStreamer, original image with 4 channels:
[screenshot]

After conversion to 3 channels:
[screenshot]

Please tell me if there is any way to convert the 4 channels to 3 channels without losing the information.

For this conversion, I used:
img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)

and passed the numpy array as img.
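
A hedged pointer: jetson-utils provides cudaConvertColor() (cuda/cudaColorspace.h), which repacks RGBA8 into RGB8 on the GPU by simply dropping the alpha channel, so no pixel information is lost; newer builds also expose this to Python as jetson.utils.cudaConvertColor, if your version has it. A C++ sketch (buffer names and the CUDA_SUCCESS check are per this repo's cudaUtility.h conventions):

#include "cudaColorspace.h"
#include "cudaMappedMemory.h"
#include "cudaUtility.h"

// drop the alpha channel on the GPU: RGBA8 -> RGB8, no data lost
bool rgbaToRgb( uchar4* rgba, uchar3** rgb, int width, int height )
{
    if( !cudaAllocMapped((void**)rgb, width * height * sizeof(uchar3)) )
        return false;

    return CUDA_SUCCESS(cudaConvertColor(rgba, IMAGE_RGBA8, *rgb, IMAGE_RGB8, width, height));
}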

TensorRT requests

root@demo:~/test_gpu/jetson-inference/build# make -j4
[ 1%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaColormap.cu.o
[ 2%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaCrop.cu.o
[ 2%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaFilterMode.cu.o
[ 3%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaYUV-YV12.cu.o
[ 4%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaFont.cu.o
[ 5%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaGrayscale.cu.o
[ 6%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaNormalize.cu.o
[ 7%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaOverlay.cu.o
[ 8%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaPointCloud.cu.o
[ 8%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaRGB.cu.o
[ 9%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaResize.cu.o
[ 10%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaWarp-affine.cu.o
[ 11%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaWarp-fisheye.cu.o
[ 12%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaWarp-intrinsic.cu.o
[ 13%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaYUV-NV12.cu.o
[ 13%] Building NVCC (Device) object utils/CMakeFiles/jetson-utils.dir/cuda/jetson-utils_generated_cudaYUV-YUYV.cu.o
Scanning dependencies of target jetson-utils
[ 14%] Building CXX object utils/CMakeFiles/jetson-utils.dir/URI.cpp.o
[ 15%] Building CXX object utils/CMakeFiles/jetson-utils.dir/XML.cpp.o
[ 15%] Building CXX object utils/CMakeFiles/jetson-utils.dir/filesystem.cpp.o
[ 16%] Building CXX object utils/CMakeFiles/jetson-utils.dir/commandLine.cpp.o
[ 17%] Building CXX object utils/CMakeFiles/jetson-utils.dir/logging.cpp.o
[ 18%] Building CXX object utils/CMakeFiles/jetson-utils.dir/timespec.cpp.o
[ 19%] Building CXX object utils/CMakeFiles/jetson-utils.dir/camera/gstCamera.cpp.o
[ 20%] Building CXX object utils/CMakeFiles/jetson-utils.dir/camera/v4l2Camera.cpp.o
/home/demo/test_gpu/jetson-inference/utils/camera/gstCamera.cpp:37:10: fatal error: NvInfer.h: No such file or directory
#include "NvInfer.h"
^~~~~~~~~~~
compilation terminated.
utils/CMakeFiles/jetson-utils.dir/build.make:318: recipe for target 'utils/CMakeFiles/jetson-utils.dir/camera/gstCamera.cpp.o' failed
make[2]: *** [utils/CMakeFiles/jetson-utils.dir/camera/gstCamera.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 21%] Building CXX object utils/CMakeFiles/jetson-utils.dir/codec/gstDecoder.cpp.o
CMakeFiles/Makefile2:713: recipe for target 'utils/CMakeFiles/jetson-utils.dir/all' failed
make[1]: *** [utils/CMakeFiles/jetson-utils.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

video-viewer can only achieve 15fps on a Jetson Xavier NX board

I tested video-viewer with a 1920x1080 .avi file with the h264 codec; it can only achieve 15fps.

But when I used the "gst-launch-1.0 filesrc location=../data/video_test/afternoon_2380_6.avi ! avidemux ! queue ! h264parse ! omxh264dec ! video/x-raw ! fpsdisplaysink text-overlay=false -e -v" command line, the output video gets 25fps.

The JetPack version is 4.4.1.

An issue on a video encoding

Hello
How are you?
Thanks for contributing this project.
I've tried to encode a H264 video with this library on Jetson.
I set up videoOptions as follows:

videoOptions video_option;
video_option.resource = "file:///" + video_file_path;
video_option.width = in_w;
video_option.height = in_h;
video_option.frameRate = fps;
video_option.zeroCopy = true;
video_option.deviceType = videoOptions::DEVICE_FILE;
video_option.ioType = videoOptions::OUTPUT;
video_option.codec = videoOptions::CODEC_H264;
input_writer = gstEncoder::Create(video_option);
input_writer->Open();

Then the rendering is as follows:

cv::VideoCapture cap(0);
cv::Mat img;
cap >> img;
input_writer->Render(img.data, img.cols, img.rows, IMAGE_RGB8);

But when this code is run, the following issue occurs.

[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
[cuda] /home/wakesys/Jin/jetson-utils/cuda/cudaYUV-YV12.cu:257
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
[cuda] /home/wakesys/Jin/jetson-utils/cuda/cudaColorspace.cpp:128
[cuda] an illegal memory access was encountered (error 700) (hex 0x2BC)
[cuda] /home/wakesys/Jin/jetson-utils/codec/gstEncoder.cpp:568
[gstreamer] gstEncoder::Render() -- unsupported image format (rgb8)
[gstreamer] supported formats are:
[gstreamer] * rgb8
[gstreamer] * rgba8
[gstreamer] * rgb32f
[gstreamer] * rgba32f

So the video encoding fails.
Is this because it tries to process an image in CPU memory rather than GPU memory?
Please let me know any solution.
Thanks
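
The error log above is consistent with Render() being handed a CPU pointer: the CUDA kernels need CUDA-accessible memory, and OpenCV also captures BGR rather than RGB. A hedged sketch of the usual fix, staging the Mat through cudaAllocMapped() memory (the helper name is illustrative):

#include "gstEncoder.h"
#include "cudaMappedMemory.h"
#include <opencv2/opencv.hpp>
#include <cstring>

// copy an OpenCV frame (CPU, BGR) into CUDA-mapped memory (RGB) before Render()
bool renderMat( gstEncoder* writer, const cv::Mat& img )
{
    static uchar3* gpuFrame = NULL;   // shared CPU/GPU buffer, allocated once

    if( !gpuFrame && !cudaAllocMapped((void**)&gpuFrame, img.cols * img.rows * sizeof(uchar3)) )
        return false;

    cv::Mat rgb;
    cv::cvtColor(img, rgb, cv::COLOR_BGR2RGB);   // OpenCV frames are BGR

    memcpy(gpuFrame, rgb.data, img.cols * img.rows * sizeof(uchar3));

    return writer->Render(gpuFrame, img.cols, img.rows);   // format inferred as rgb8
}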

Reconnecting to video stream

Hello
How are you?
Thanks for contributing this project.
I tried to reconnect to the video stream after it was disconnected because of a network issue.
After the network issue was fixed, I created a new gstDecoder instance and deleted the old one.
I opened the video stream and tried to capture a frame, but it failed.
Please let me know how to fix this.
Thanks
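
For reference, a minimal reconnect loop as an untested sketch against the videoSource interface in video/ (the retry delay is illustrative):

#include "videoSource.h"
#include <unistd.h>

// tear down a dead stream and keep retrying until a new one comes up
videoSource* reconnect( videoSource* old, const char* uri )
{
    delete old;   // release the dead pipeline before creating a new one

    videoSource* src = NULL;

    while( !src )
    {
        src = videoSource::Create(uri);

        if( !src )
            sleep(5);   // wait for the network to come back
    }

    src->Open();
    return src;
}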

[gstreamer] gstDecoder -- Could not demultiplex stream.

Hi,
I am trying to use detectnet with the model I trained and run inference on my Jetson Nano. I need to get the video stream from a file located on an external hard drive, so I cloned the dev branch and I'm trying to make the detectnet example work. My video is an .avi file with the MJPG codec.

I have an issue when I run the following command:

jetson@Jetson:~/jetson-inference/build/aarch64/bin$ detectnet --prototxt=$NET/deploy.prototxt --model=$NET/snapshot_iter_52000.caffemodel --input_blob=data --output-cvg=coverage --output-bbox=bboxes /media/5C71-CE67/TB/Video/Mouse_short.avi --input-codec=mjpeg display://

I get two errors: the gstDecoder cannot demultiplex the stream, and the plugin creator FlattenConcat_TRT does not register. I don't really understand what that means, but the result is that the frames don't get extracted.

My terminal:

[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for /media/5C71-CE67/TB/Video/Mouse_short.avi
[gstreamer] gstDecoder -- Could not demultiplex stream.
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] filesrc location=/media/5C71-CE67/TB/Video/Mouse_short.avi ! avidemux ! queue ! nvjpegdec ! video/x-raw ! appsink name=mysink
[video] created gstDecoder from file:///media/5C71-CE67/TB/Video/Mouse_short.avi

gstDecoder video options:

-- URI: file:///media/5C71-CE67/TB/Video/Mouse_short.avi
- protocol: file
- location: /media/5C71-CE67/TB/Video/Mouse_short.avi
- extension: avi
-- deviceType: file
-- ioType: input
-- codec: mjpeg
-- width: 0
-- height: 0
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

URI -- using default display device 0
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://

glDisplay video options:

-- URI: display://
- protocol: display
- location:
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0

detectNet -- loading detection network model from:
-- prototxt /home/jetson/jetson-inference/build/aarch64/bin/networks/Mouse_780_1000ep_nosub/deploy.prototxt
-- model /home/jetson/jetson-inference/build/aarch64/bin/networks/Mouse_780_1000ep_nosub/snapshot_iter_52000.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels NULL
-- threshold 0.500000
-- batch_size 1

[TRT] TensorRT version 7.1.0
[TRT] loading NVIDIA plugins...
[TRT] Plugin creator registration succeeded - ::GridAnchor_TRT
[TRT] Plugin creator registration succeeded - ::NMS_TRT
[TRT] Plugin creator registration succeeded - ::Reorg_TRT
[TRT] Plugin creator registration succeeded - ::Region_TRT
[TRT] Plugin creator registration succeeded - ::Clip_TRT
[TRT] Plugin creator registration succeeded - ::LReLU_TRT
[TRT] Plugin creator registration succeeded - ::PriorBox_TRT
[TRT] Plugin creator registration succeeded - ::Normalize_TRT
[TRT] Plugin creator registration succeeded - ::RPROI_TRT
[TRT] Plugin creator registration succeeded - ::BatchedNMS_TRT
[TRT] Could not register plugin creator: ::FlattenConcat_TRT
[TRT] Plugin creator registration succeeded - ::CropAndResize
[TRT] Plugin creator registration succeeded - ::DetectionLayer_TRT
[TRT] Plugin creator registration succeeded - ::Proposal
[TRT] Plugin creator registration succeeded - ::ProposalLayer_TRT
[TRT] Plugin creator registration succeeded - ::PyramidROIAlign_TRT
[TRT] Plugin creator registration succeeded - ::ResizeNearest_TRT
[TRT] Plugin creator registration succeeded - ::Split
[TRT] Plugin creator registration succeeded - ::SpecialSlice_TRT
[TRT] Plugin creator registration succeeded - ::InstanceNormalization_TRT
[TRT] detected model format - caffe (extension '.caffemodel')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/jetson/jetson-inference/build/aarch64/bin/networks/Mouse_780_1000ep_nosub/snapshot_iter_52000.caffemodel.1.1.7100.GPU.FP16.engine
[TRT] loading network plan from engine cache... /home/jetson/jetson-inference/build/aarch64/bin/networks/Mouse_780_1000ep_nosub/snapshot_iter_52000.caffemodel.1.1.7100.GPU.FP16.engine
[TRT] device GPU, loaded /home/jetson/jetson-inference/build/aarch64/bin/networks/Mouse_780_1000ep_nosub/snapshot_iter_52000.caffemodel
[TRT] Deserialize required 5707631 microseconds.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 69
[TRT] -- maxBatchSize 1
[TRT] -- workspace 0
[TRT] -- deviceMemory 47227904
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'data'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3 (SPATIAL)
-- dim #1 480 (SPATIAL)
-- dim #2 752 (SPATIAL)
[TRT] binding 1
-- index 1
-- name 'coverage'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 30 (SPATIAL)
-- dim #2 47 (SPATIAL)
[TRT] binding 2
-- index 2
-- name 'bboxes'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 4 (SPATIAL)
-- dim #1 30 (SPATIAL)
-- dim #2 47 (SPATIAL)
[TRT]
[TRT] binding to input 0 data binding index: 0
[TRT] binding to input 0 data dims (b=1 c=3 h=480 w=752) size=4331520
[TRT] binding to output 0 coverage binding index: 1
[TRT] binding to output 0 coverage dims (b=1 c=1 h=30 w=47) size=5640
[TRT] binding to output 1 bboxes binding index: 2
[TRT] binding to output 1 bboxes dims (b=1 c=4 h=30 w=47) size=22560
[TRT]
[TRT] device GPU, /home/jetson/jetson-inference/build/aarch64/bin/networks/Mouse_780_1000ep_nosub/snapshot_iter_52000.caffemodel initialized.
[TRT] detectNet -- number object classes: 1
[TRT] detectNet -- maximum bounding boxes: 1410
[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvjpegdec0
[gstreamer] gstreamer changed state from NULL to READY ==> queue0
[gstreamer] gstreamer changed state from NULL to READY ==> avidemux1
[gstreamer] gstreamer changed state from NULL to READY ==> filesrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvjpegdec0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> queue0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer stream status CREATE ==> sink
[gstreamer] gstreamer changed state from READY to PAUSED ==> avidemux1
[gstreamer] gstreamer changed state from READY to PAUSED ==> filesrc0
[gstreamer] gstreamer stream status ENTER ==> sink
[gstreamer] gstreamer avidemux1 ERROR Could not demultiplex stream.
[gstreamer] gstreamer Debugging info: gstavidemux.c(4413): gst_avi_demux_stream_header_pull (): /GstPipeline:pipeline0/GstAviDemux:avidemux1:
pull_range flow reading header: eos
detectnet: failed to capture video frame
detectnet: failed to capture video frame
detectnet: failed to capture video frame
.......

How to detect only persons, and how to draw simple bounding boxes?

Dear @dusty-nv sir,
Hope you are taking good care of yourself.
I want to detect only humans and also draw simple rectangular boxes (and control their color), but I can't find the right place in the code to change this.

I am making an AI-based social distance compliance system which detects humans and then calculates distance.

Thank you
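
This is really a jetson-inference (detectNet) question, but for reference, a hedged sketch of the usual approach: request OVERLAY_NONE, filter the returned detections by class name, and draw your own boxes with cudaDrawRect() from this repo's cuda/cudaDraw.h (present in newer versions; exact overloads may differ by release):

#include "detectNet.h"   // from jetson-inference
#include "cudaDraw.h"    // cudaDrawRect() from jetson-utils
#include <cstring>

// run detection, keep only 'person' hits, and draw plain boxes ourselves
void detectPersons( detectNet* net, uchar3* image, int width, int height )
{
    detectNet::Detection* detections = NULL;

    // OVERLAY_NONE disables the built-in overlay so we control the drawing
    const int num = net->Detect(image, width, height, &detections, detectNet::OVERLAY_NONE);

    for( int i=0; i < num; i++ )
    {
        const detectNet::Detection& d = detections[i];

        if( strcmp(net->GetClassDesc(d.ClassID), "person") != 0 )
            continue;   // skip every class except humans

        // plain green box; colors are float4 values in the 0-255 range
        cudaDrawRect(image, width, height, (int)d.Left, (int)d.Top,
                     (int)d.Right, (int)d.Bottom, make_float4(0, 255, 0, 255));
    }
}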

How to include this package from a CMakefile

I am trying to use the gstCamera code in my ROS node. After building this repo, I ran
sudo make install

And I put this line in my ROS node's CMake file:
find_package(gst-camera REQUIRED)

However, when I "#include <gstCamera.h>" in my ROS node's C++, it complains that the gst-camera package cannot be found.

Did I do something wrong?
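
For what it's worth, the CMake package exported by sudo make install is named jetson-utils rather than gst-camera; a hedged sketch along the lines of the jetson-inference docs (the include path and target names below are assumptions to verify against your install):

# consume an installed jetson-utils from another CMake project (sketch)
find_package(jetson-utils REQUIRED)
find_package(CUDA REQUIRED)

include_directories(/usr/include/jetson-utils ${CUDA_INCLUDE_DIRS})

add_executable(my_node src/my_node.cpp)
target_link_libraries(my_node jetson-utils ${CUDA_LIBRARIES})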

An issue when writing a video stream on a docker container

Hello
How are you?
I encoded a video file with your app on a Jetson host.
Here the target bitrate is 10000 kbps and fps is 20.
The real bitrate of the recorded video with H264/H265 codec is 5000 ~ 6000 kbps.
This can be understood.
Now, I am going to encode a video file with your app in a docker container.
Of course, I installed your app in my docker image.
The real bitrate of the video recorded from the docker app is less than 1000 kbps, so the recorded video quality is very bad, and the recorded video is 12 fps.
If I use the VP9 codec, this issue disappears.
How can I understand this?
Please refer to this post about a similar issue:
https://forums.developer.nvidia.com/t/degraded-h-264-encoding-quality-with-docker-and-openmax/112162
Thanks

About the framerate of the video which video viewer played

We use your video-viewer code to simply pull a local 1080p video and play it, but the framerate seems to drop from 24 to about 16 fps. Why does this happen, and how can I improve the framerate? Please tell us the approach. Our device is a Jetson Xavier NX.

jetson.utils.cudaToNumpy not working as expected

https://github.com/dusty-nv/jetson-utils/blob/master/python/examples/cuda-to-numpy.py#LC44

array = jetson.utils.cudaToNumpy(cuda_mem, opt.width, opt.height, opt.depth)

Trying to save the image gives a black image, so I tried to plot the converted array using matplotlib:

plt.imshow(array)

It gave me a blank image with this in the terminal

Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).

I saw a suggested solution:

plt.imshow((array * 255).astype(np.uint8))

This is the image I am able to get, which is wrong:
[screenshot]

This is the image getting saved:
[screenshot]

What is going wrong?

CUDA error when using gstEncoder! Please help!

Hi professor:
I use the gstEncoder API to encode an OpenCV Mat frame, then push the stream with RTMP. I use this gst pipeline:
[screenshot]
and it's OK!
Then I use the Render API to encode a Mat frame, code like this:
[screenshot]
Then I run the program, and I get a CUDA error:
[screenshot]

What's the problem, and what can I do?
Help me, please!

Unable to Update jetson-utils to latest commit

Hi, I wanted to start a project using the Nano and went to practice with the jetson.utils library, but found a problem that I have been unable to resolve.
I noticed that some functions were absent while reading the glDisplay.h file.
I've tried both
git submodule update --remote --merge and git pull origin master
and did

$ cmake ../
$ make
$ sudo make install

in the build directory, but the modules are never updated.

Disable gstreamer log

I am putting the gstreamer camera code inside a ROS node; every time I run it, it prints out a lot of logs, as below.

GST_ARGUS: 2592 x 1944 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 34000, max 550385000;

GST_ARGUS: 2592 x 1458 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 34000, max 550385000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 16.000000; Exposure Range min 22000, max 358733000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 2
Output Stream W = 1280 H = 720
seconds to Run = 0
Frame Rate = 120.000005
GST_ARGUS: PowerService: requested_clock_Hz=12096000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer msg async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0

The logs come from the original gstreamer code. How can I disable them?

Multiple glDisplay streams error

Hello!
I am trying to implement multiple camera streams and show them on one display using the glDisplay class. There are no problems when I have one stream, but when I start displaying the second one I get a segmentation fault on my Jetson Nano after 1-3 frames. I think it may have something to do with multi-threading: every camera has its own thread, and they are being processed and sent to videoOutput simultaneously. Any advice? Is it possible to have multiple streams displayed in real time on my Jetson Nano at all?

Here is part of my code in the Camera class:

videoOutput* m_pScreenOutput = videoOutput::Create(0, nullptr);
cv::cuda::GpuMat	m_gmImgDisplay;	

// streaming loop
{
  // process the frame and put it into m_gmImgDisplay
  m_pScreenOutput->Render((uchar4*) m_gmImgDisplay.data, m_uWidth,  m_uHeight, IMAGE_RGBA8);
  m_pScreenOutput->SetStatus(m_acDevName);
}

cudaWarpAffine

I'm trying to rotate an image saved in a raw uchar4 pointer, but the result I get is a single-color green image.

My transformation matrix is:

const float transform[2][3] = {{0,-1,0},{1,0,0}};
and then I call

uchar4 *temp = NULL, *temp1 = NULL;
cudaWarpAffine( temp, temp1, IMG_ROW, IMG_COL, transform);		

It works when I use the identity matrix, but I don't get what I am doing wrong when calling the function.
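
A hedged guess at the problem above: temp and temp1 are NULL, so the kernel has no allocated memory to read from or write to, and a uniform green result is typical of sampling uninitialized or unmapped data. A sketch with buffers allocated first (note that cudaWarpAffine() takes width before height):

#include "cudaWarp.h"
#include "cudaMappedMemory.h"

// allocate real CUDA buffers before warping; NULL in/out pointers give
// the kernel nothing to read or write
void rotateExample( uint32_t width, uint32_t height )
{
    const float transform[2][3] = { {0,-1,0}, {1,0,0} };   // 90-degree rotation

    uchar4* input  = NULL;
    uchar4* output = NULL;

    cudaAllocMapped((void**)&input,  width * height * sizeof(uchar4));
    cudaAllocMapped((void**)&output, width * height * sizeof(uchar4));

    // ... fill `input` with the source image here ...

    cudaWarpAffine(input, output, width, height, transform);
}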

An issue when reopen and recapture a RTSP video stream

Hello
How are you?
Thanks for the continuous updates to this project.
I have an issue when recapturing a video stream that was disconnected by a network issue.
It opens the video stream but fails to capture from it.
I am using gstDecoder.
What is the reason?
Thanks

gstreamer stalling (part 2)

Hello, I am facing a similar issue to gstreamer stalling?. I am trying to stream 120 fps videos from 2 CSI cameras on the jetson nano using the C++ API. The Capture() call times out continuously for one of the cameras/streams after a while of streaming (~5 mins) and then when I do a SAFE_DELETE() of that input stream, it hangs after [gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL at which point I have to force kill the program. I'm not sure if the problem arises from the same place, i.e. at gst_element_set_state(mPipeline, GST_STATE_NULL);. I tried to use the suggestion posted on the previous issue by commenting out Close() in gstDecoder.cpp, but it didn't resolve the problem.

Add RTSP support

Hi,

Please add RTSP support to jetson.utils - Please merge this PR -> #8

timeAdd() is incorrect resulting in Event::Wait() with timeout never timing out

The timeAdd() function in timespec incorrectly calculates the carry from nsec to sec.

const time_t sec=t.tv_nsec/1e-9;
t.tv_sec+=sec;
t.tv_nsec-=sec*1e-9;

should probably be:

const time_t sec=t.tv_nsec/1e9;
t.tv_sec+=sec;
t.tv_nsec-=sec*1e9;

As this is used with the timeout versions of Event::Wait(), which in turn is used is gstCamera::Capture(), this bug makes the capture practically never time out.
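
For reference, here is the corrected helper written out in full (a sketch along the lines of the proposed fix, not the patched upstream source):

#include <ctime>

// add two timespecs, carrying whole seconds out of the nanosecond field
static timespec timeAdd( const timespec& a, const timespec& b )
{
    timespec t;
    t.tv_sec  = a.tv_sec  + b.tv_sec;
    t.tv_nsec = a.tv_nsec + b.tv_nsec;

    const time_t carry = t.tv_nsec / 1000000000L;   // 1e9 nsec per sec (integer!)
    t.tv_sec  += carry;
    t.tv_nsec -= carry * 1000000000L;

    return t;
}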


gstreamer stalling?

Hi, I'm having a problem while running an RTSP videoSource from the dev branch (the one that allows for authenticated URIs). It happens from time to time (maybe 1/3 of the runs): the videoSource.Capture() method kind of stalls, waiting for some condition on the gstreamer pipeline to finish, and while it can't resolve what's going on in the pipeline, it appears as if it never returns.

This is part of the gstreamer output I see while this condition is active; while this happens, my program basically hangs waiting for something that apparently never happens.

[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videorate3
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter18
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv3
[gstreamer] gstreamer changed state from NULL to READY ==> omxh264dec-omxh264dec8
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse18
[gstreamer] gstreamer changed state from NULL to READY ==> rtph264depay18
[gstreamer] gstreamer changed state from NULL to READY ==> queue8
[gstreamer] gstreamer changed state from NULL to READY ==> rtspsrc8
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline8
[gstreamer] gstreamer changed state from READY to PAUSED ==> videorate3
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter18
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv3
[gstreamer] gstreamer changed state from READY to PAUSED ==> omxh264dec-omxh264dec8
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse18
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtph264depay18
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> queue8
[gstreamer] gstreamer message progress ==> rtspsrc8
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtspsrc8
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline8
[gstreamer] gstreamer message new-clock ==> pipeline8
[gstreamer] gstreamer message progress ==> rtspsrc8
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videorate3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter18
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> omxh264dec-omxh264dec8
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse18
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtph264depay18
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> queue8
[gstreamer] gstreamer message progress ==> rtspsrc8
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtspsrc8
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)11583360, maximum-bitrate=(uint)32654560, bitrate=(uint)15395729;
[gstreamer] gstreamer message progress ==> rtspsrc8
[gstreamer] gstreamer rtspsrc8 ERROR Could not read from resource.
[gstreamer] gstreamer Debugging info: gstrtspsrc.c(5917): gst_rtsp_src_receive_response (): /GstPipeline:pipeline8/GstRTSPSrc:rtspsrc8:
Could not receive message. (System error)
[gstreamer] gstreamer message progress ==> rtspsrc8
[gstreamer] gstreamer message progress ==> rtspsrc8
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)2345800, maximum-bitrate=(uint)20655400, bitrate=(uint)3813294;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)7878000, maximum-bitrate=(uint)33955800, bitrate=(uint)11630612;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)2345800, maximum-bitrate=(uint)20690000, bitrate=(uint)3869547;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)2342200, maximum-bitrate=(uint)20690000, bitrate=(uint)3807197;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)11583360, maximum-bitrate=(uint)32654560, bitrate=(uint)15721958;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)7878000, maximum-bitrate=(uint)33955800, bitrate=(uint)11870816;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)2336400, maximum-bitrate=(uint)20690000, bitrate=(uint)3815523;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)2326800, maximum-bitrate=(uint)20690000, bitrate=(uint)3808952;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)884400, maximum-bitrate=(uint)33955800, bitrate=(uint)11931955;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)870800, maximum-bitrate=(uint)33955800, bitrate=(uint)11945827;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)749800, maximum-bitrate=(uint)33955800, bitrate=(uint)11903179;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)166000, maximum-bitrate=(uint)33955800, bitrate=(uint)11890159;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)2326800, maximum-bitrate=(uint)20690600, bitrate=(uint)3813485;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)2326800, maximum-bitrate=(uint)20693000, bitrate=(uint)3813473;
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (Main\ Profile)", minimum-bitrate=(uint)2313800, maximum-bitrate=(uint)20693000, bitrate=(uint)3806887;

I've already learned to recognize this pattern as an "ok, I'll have to kill it and run it again". It's as if gstreamer has some problem starting the pipeline, but it doesn't give me the chance to reconnect or anything.

I can attach the whole session logs if you need them.
Thank you.

RuntimeError: FATAL: module compiled as little endian, but detected different endianness at runtime

Hi, I am trying to port jetson-inference and jetson-utils to Arch Linux and Yocto Project.
The system has CMake 3.16 and Python 3.8.
I am having a problem with the Python import. When I run python3 /usr/bin/my-detection.py, I get:

root@jetson-nano-qspi-sd:/usr/bin# python3 my-detection.py                                                                                                               
RuntimeError: FATAL: module compiled as little endian, but detected different endianness at runtime                                                                      
Traceback (most recent call last):                                                                                                                                       
  File "my-detection.py", line 24, in <module>                                                                                                                           
    import jetson.inference                                                                                                                                              
  File "/usr/lib/python3.8/site-packages/jetson/inference/__init__.py", line 5, in <module>                                                                              
    import jetson.utils                                                                                                                                                  
  File "/usr/lib/python3.8/site-packages/jetson/utils/__init__.py", line 4, in <module>                                                                                  
    from jetson_utils_python import *                                                                                                                                    
SystemError: initialization of jetson_utils_python raised unreported exception

I found a similar error being solved here: pni-libraries/python-pninexus@924bb45
I was wondering if it is possible to do the same in PyNumpy.cpp, where there is also an import_array call.
Though, I have no clue why this happens on Arch Linux / Yocto but not on Ubuntu 16.04.

Any pointers are greatly appreciated.

An issue when building a docker image

Hello
How are you?
I am making a docker image for Jetson.
The Dockerfile is following.

FROM nvcr.io/nvidia/l4t-tensorflow:r32.4.3-tf2.2-py3
RUN apt-get update && apt-get install -y --no-install-recommends make g++ curl git libcurl4-openssl-dev cmake
RUN git clone https://github.com/dusty-nv/jetson-utils.git
RUN cd jetson-utils && mkdir build && cd build && cmake ../ && make install
CMD ["cmake", "--version"]

When configuring this repo with cmake, the following issue appears:

[screenshot]

gcc version: 7.5.0
cmake version: 3.10.2
cuda version: 10.2
Please let me know the reason asap.
Thanks

Image Crop isn't working

Even when using the example implementation in build/aarch64/bin/cuda-examples.py, it produces invalid images when cropping.
I'm not sure what this is related to, because when I tracked it back to the CUDA implementation, it seems that it just copies the underlying memory, which is just a 1-dimensional array, so I have no idea why the image gets messed up like this:

output[out_y * outWidth + out_x] = input[in_y * inWidth + in_x];

[screenshot]

Note:
Copied issue from dusty-nv/jetson-inference#875
as the code is related to this repo
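
For what it's worth, a hedged sketch of calling cudaCrop() directly. The usual pitfall is that the output is a tightly packed ROI-sized image, so it must be allocated (and later read) with the ROI's own width, not the input's row pitch; the ROI values below are illustrative:

#include "cudaCrop.h"
#include "cudaMappedMemory.h"
#include "cudaUtility.h"

// crop an RGB8 image into a new buffer sized for the ROI
bool cropExample( uchar3* input, int inputWidth, int inputHeight, uchar3** cropped )
{
    const int4 roi = make_int4(128, 128, 384, 384);   // left, top, right, bottom

    const int roiWidth  = roi.z - roi.x;
    const int roiHeight = roi.w - roi.y;

    if( !cudaAllocMapped((void**)cropped, roiWidth * roiHeight * sizeof(uchar3)) )
        return false;

    return CUDA_SUCCESS(cudaCrop(input, *cropped, roi, inputWidth, inputHeight));
}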

Examples for Video Encoding/Decoding

Hello
How are you?
Thanks for contributing this project.
I think the content of the README is poor.
Could you provide some examples of video encoding/decoding in C++?
Thanks.
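
Until the README gains examples, here is a minimal C++ decode-to-encode loop as a hedged, untested sketch against the videoSource/videoOutput interfaces in video/ (file names are placeholders):

#include "videoSource.h"
#include "videoOutput.h"

int main()
{
    videoSource* input  = videoSource::Create("file://input.mp4");   // decoder
    videoOutput* output = videoOutput::Create("file://output.mp4");  // encoder

    if( !input || !output )
        return 1;

    while( true )
    {
        uchar3* image = NULL;

        if( !input->Capture(&image, 1000) )   // 1000ms timeout
            break;                            // EOS or error

        output->Render(image, input->GetWidth(), input->GetHeight());

        if( !input->IsStreaming() || !output->IsStreaming() )
            break;
    }

    delete input;
    delete output;
    return 0;
}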

jetson-utils gstencoder module encoding mp4 file failed

hi dusty, I am new to gstreamer. I use the jetson-utils lib to encode my own video stream with the following parameters:
appsrc name=mysource ! video/x-raw,width=1920,height=1200,format=(string)I420,framerate=30/1 ! omxh264enc ! video/x-h264 ! h264parse ! qtmux ! filesink location=./2020-4-26-16-47-59.mp4
The application encodes normally, but the official NVIDIA doc says the following:
gst-launch-1.0 videotestsrc ! 'video/x-raw, format=(string)I420, width=(int)640, height=(int)480' ! omxh264enc ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! qtmux ! filesink location=test.mp4 -e

So I added the parameter "-e" to my gstreamer launch string, like this:
appsrc name=mysource ! video/x-raw,width=1920,height=1200,format=(string)I420,framerate=30/1 ! omxh264enc ! video/x-h264 ! h264parse ! qtmux ! filesink location=./2020-4-26-16-52-5.mp4 -e
The result: gstEncoder - failed to create pipeline. Can you tell me why?
Another question: I use the gstEncoder module to encode an mp4 file, and when I stop my program and stop encoding the video stream, the saved mp4 file cannot be played. What is wrong?
I use the NVIDIA Jetson TX2 platform and call the jetson-utils lib in a Qt env.

error: ‘at::cudaCheckError’ declared as an ‘inline’ variable

Hi,
First of all, I am sorry if the following question is very basic or a silly mistake, but I am new to using CUDA.
I am using VSCode for my project, and I am trying to resize an image using the function you provided, cudaResize, but when compiling my code an error pops up:

error: ‘at::cudaCheckError’ declared as an ‘inline’ variable
#define CUDA(x) cudaCheckError((x), #x, __FILE__, __LINE__)

I have tried to search for information in forums but have not found anything. Can you please tell me why I am getting this error?

Thank you so much

the full trace error is the following:

In file included from /home/user/Desktop/venturi/src/../include/cudaResize.h:27:0,
from /home/user/Desktop/venturi/src/main.cpp:4:
/home/user/Desktop/venturi/src/../include/cudaUtility.h:39:34: error: ‘at::cudaCheckError’ declared as an ‘inline’ variable
#define CUDA(x) cudaCheckError((x), #x, __FILE__, __LINE__)
^
/home/user/Desktop/venturi/include/libtorch/include/ATen/Context.h:153:41: error: expected primary-expression before ‘s’
static inline DeprecatedTypeProperties& CUDA(ScalarType s) {
^
/home/user/Desktop/venturi/include/libtorch/include/ATen/Context.h:153:41: error: expected ‘)’ before ‘s’
In file included from /home/user/Desktop/venturi/include/libtorch/include/ATen/ATen.h:5:0,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/types.h:3,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/data.h:3,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/all.h:4,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/user/Desktop/venturi/src/main.cpp:5:
/home/user/Desktop/venturi/include/libtorch/include/ATen/Context.h:237:1: error: expected ‘,’ or ‘;’ before ‘}’ token
} // namespace at
^
In file included from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/nn/cloneable.h:5:0,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/nn.h:3,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/all.h:7,
from /home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/user/Desktop/venturi/src/main.cpp:5:
/home/user/Desktop/venturi/include/libtorch/include/torch/csrc/api/include/torch/utils.h:12:11: error: ‘at::manual_seed’ has not been declared

Problem: using gstEncoder with rtmp push

Hi professor:

I use the gstEncoder API to encode an OpenCV Mat frame, then push the stream to an RTMP server with a URI like "rtmp://192.168.1.102:1935/live/livestream". My init code is like this:

[screenshot]

When I run my program, I get an error:

[screenshot]

Why does the rtmp URI in the GST pipeline become short? It just has the IP; "live/livestream" is gone! Why?
What can I do? PLEASE HELP!

Can't handle letters outside typical English when using jetson.utils.cudaFont.OverlayText()

I am trying to make a program that detects objects and then translates the object name into Danish, but when overlaying the translated text on the image, letters not found in English, such as Æ, Ø, and Å, always show up badly (such as fængsel showing up as fÃ|ngsel). On the command line, I print out the text that I'm passing to the function, and it comes out fine in the terminal but bad when overlaid on the image.

headless mode by --headless=true

Hi, I believe the correct way to set headless mode is by passing --headless=true instead of --headless alone, which I've seen used in one of the examples (detectnet-camera at least).

Here is an example session where you can see --headless=true working, while --headless alone fails.

drakorg@drakorg-desktop:~/workspace/nspi$ python3
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> import jetson.utils
>>> vs = jetson.utils.videoSource('rtsp://admin:[email protected]', sys.argv + ['--headless'])
URI -- missing/invalid IP port from rtsp://admin:[email protected], default to port 554
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for admin
nvbuf_utils: Could not get EGL display connection
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder -- failed to discover stream info
[gstreamer] gstDecoder -- try manually setting the codec with the --input-codec option
[gstreamer] gstDecoder -- failed to create decoder for rtsp://admin:[email protected]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
Exception: jetson.utils -- failed to create videoSource device
>>> vs = jetson.utils.videoSource('rtsp://admin:[email protected]', sys.argv + ['--headless=true'])
URI -- missing/invalid IP port from rtsp://admin:[email protected], default to port 554
[gstreamer] gstDecoder -- creating decoder for admin
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder -- discovered video resolution: 1280x720  (framerate 25.000000 Hz)
[gstreamer] gstDecoder -- discovered video caps:  video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)main, width=(int)1280, height=(int)720, framerate=(fraction)25/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] rtspsrc location=rtsp://admin:[email protected] ! queue ! rtph264depay ! h264parse ! omxh264dec ! video/x-raw ! appsink name=mysink
[video]  created gstDecoder from rtsp://admin:[email protected]
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: rtsp://admin:[email protected]
     - protocol:  rtsp
     - location:  admin
     - port:      554
  -- deviceType: ip
  -- ioType:     input
  -- codec:      h264
  -- width:      1280
  -- height:     720
  -- frameRate:  25.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
------------------------------------------------
>>>

That's the reason the detectnet-camera example does not work from the command line despite specifying the headless switch, as seen below:

drakorg@drakorg-desktop:~/workspace/nspi/scripts/jetson-nano/jetson-inference$ detectnet-camera "rtsp://admin:[email protected]" --headless
URI -- missing/invalid IP port from rtsp://admin:[email protected], default to port 554
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for admin
nvbuf_utils: Could not get EGL display connection
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder -- failed to discover stream info
[gstreamer] gstDecoder -- try manually setting the codec with the --input-codec option
[gstreamer] gstDecoder -- failed to create decoder for rtsp://admin:[email protected]
detectnet:  failed to create input stream
drakorg@drakorg-desktop:~/workspace/nspi/scripts/jetson-nano/jetson-inference$ detectnet-camera "rtsp://admin:[email protected]" --headless=true
URI -- missing/invalid IP port from rtsp://admin:[email protected], default to port 554
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstDecoder -- creating decoder for admin
nvbuf_utils: Could not get EGL display connection
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
[gstreamer] gstDecoder -- failed to discover stream info
[gstreamer] gstDecoder -- try manually setting the codec with the --input-codec option
[gstreamer] gstDecoder -- failed to create decoder for rtsp://admin:[email protected]
detectnet:  failed to create input stream

Now, if you look at how detectnet-camera handles the switch, it only adds "--headless" to argv (under some condition), while it should be "--headless=true".

drakorg@drakorg-desktop:~/workspace/nspi/scripts/jetson-nano/jetson-inference$ cat python/examples/detectnet.py
#!/usr/bin/python3
#
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#

import jetson.inference
import jetson.utils

import argparse
import sys

# parse the command line
parser = argparse.ArgumentParser(description="Locate objects in a live camera stream using an object detection DNN.",
                                 formatter_class=argparse.RawTextHelpFormatter, epilog=jetson.inference.detectNet.Usage() +
                                 jetson.utils.videoSource.Usage() + jetson.utils.videoOutput.Usage() + jetson.utils.logUsage())

parser.add_argument("input_URI", type=str, default="", nargs='?', help="URI of the input stream")
parser.add_argument("output_URI", type=str, default="", nargs='?', help="URI of the output stream")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")
parser.add_argument("--overlay", type=str, default="box,labels,conf", help="detection overlay flags (e.g. --overlay=box,labels,conf)\nvalid combinations are:  'box', 'labels', 'conf', 'none'")
parser.add_argument("--threshold", type=float, default=0.5, help="minimum detection threshold to use")
parser.add_argument("--width", type=int, default=1280, help="desired width of camera stream (default is 1280 pixels)")
parser.add_argument("--height", type=int, default=720, help="desired height of camera stream (default is 720 pixels)")

is_headless = ["--headless"] if sys.argv[0].find('console.py') != -1 else [""]

try:
        opt = parser.parse_known_args()[0]
except:
        print("")
        parser.print_help()
        sys.exit(0)

# load the object detection network
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)

# create video sources & outputs
input = jetson.utils.videoSource(opt.input_URI, argv=sys.argv)
output = jetson.utils.videoOutput(opt.output_URI, argv=sys.argv+is_headless)

# process frames until the user exits
while True:
        # capture the next image
        img = input.Capture()

        # detect objects in the image (with overlay)
        detections = net.Detect(img, overlay=opt.overlay)

        # print the detections
        print("detected {:d} objects in image".format(len(detections)))

        for detection in detections:
                print(detection)

        # render the image
        output.Render(img)

        # update the title bar
        output.SetStatus("{:s} | Network {:.0f} FPS".format(opt.network, net.GetNetworkFPS()))

        # print out performance info
        net.PrintProfilerTimes()

        # exit on input/output EOS
        if not input.IsStreaming() or not output.IsStreaming():
                break

Thank you.
Regards.

QWait Event Causing Issues

Hello,

I have a thread (or rather four, for four cameras) that all use the gstCamera class. For each thread, a function (serviceCamera) runs in an endless loop. I have gstreamer pipelines created on the TX2 to capture at 30fps and appsink them. The serviceCamera thread is where I lock and call gstCamera->Capture(). After some time, I will get a wait_result that is not 1 in the gstCamera::Capture() function. I print it out like the following:

    if( !wait_result ) 
    {
        fprintf(stderr, "%s: returning false because waiting on mutex failed!\n", __FUNCTION__);
        return false;
    }

What I have noticed is that when that happens, it never clears; it keeps happening from that point forward until I restart my program. My question is: what would cause the wait() event to fail, and why would it not recover?

If I set my gstreamer pipeline to run at 20fps, the failure takes much, much longer to appear, if it happens at all.

Can this be applied to the Nano?

Hello,
Is this library fully compatible with the Nano? When I use the Nano, I cannot open the camera correctly through C++ and OpenCV.
Camera module: Raspberry Pi Camera V2

gstreamer rtsp lagging badly on nano on big frames

Hi, the new videoSource with the gstreamer backend for RTSP streams appears to lag badly (up to 30 secs) on a single channel, while the Nano is sitting idle except for this. I can see it lag even with the stock camera-viewer app. I will update with more information when I have more time, but I wanted to open the issue so as not to forget about it. With this level of lag I'll be forced to move away from this component again. I'm running JetPack 4.3 without any package updates.

Regards.
Eduardo.

VideoOutput to buffer

videoOutput only renders to the URI specified; can you add an option to render to a (void*) buffer, or point me to one?
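
A hedged workaround sketch in the meantime: Render() takes a frame you already hold in CUDA memory, so that same frame can be copied into a caller-owned buffer alongside (or instead of) rendering. Note this captures only the raw frame, not the encoded bitstream:

#include <cuda_runtime.h>
#include <cstdlib>

// copy the raw RGB8 frame that would be rendered into a caller-owned CPU buffer
void* frameToBuffer( uchar3* image, int width, int height )
{
    const size_t size = width * height * sizeof(uchar3);

    void* buffer = malloc(size);
    cudaMemcpy(buffer, image, size, cudaMemcpyDeviceToHost);

    return buffer;
}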
