aaronyking / deepstream-plugins

Forked from nvidia-ai-iot/deepstream_reference_apps

Samples for TensorRT/DeepStream for Tesla & Jetson

License: MIT
Reference Apps for Video Analytics using TensorRT 5 and DeepStream SDK 3.0

DS3 Workflow

Installing Pre-requisites:

If the target platform is a dGPU, download and install DeepStream 3.0. For Tegra platforms, flash your device with JetPack 3.3 and install DeepStream 1.5.

Install GStreamer pre-requisites using:
$ sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev

Install the Google gflags library using:
$ sudo apt-get install libgflags-dev

To use just the standalone trt-yolo-app, the DeepStream installation can be skipped. However, CUDA 10.0 and TensorRT 5 must be installed. See the Note section for additional installation caveats.
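Before building, it can help to confirm the toolchain is visible. The snippet below is only a sketch; the libnvinfer package name assumes a Debian/Ubuntu TensorRT install.

```shell
# Sketch: check that CUDA and TensorRT are visible before building.
# The "libnvinfer" package name is an assumption (Debian/Ubuntu TensorRT install).
cuda_msg=$(command -v nvcc >/dev/null 2>&1 && nvcc --version | grep -o 'release [0-9.]*' \
    || echo "nvcc not found - add /usr/local/cuda/bin to PATH")
trt_msg=$(dpkg -l 2>/dev/null | grep -q libnvinfer && echo "TensorRT (libnvinfer) found" \
    || echo "TensorRT (libnvinfer) not found")
echo "$cuda_msg"
echo "$trt_msg"
```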

Setup

Update all the parameters in the Makefile.config file present in the root directory.
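For illustration only - the variable names below are assumptions and may differ from the actual Makefile.config shipped with the repo - the file typically selects the platform and points at the CUDA/TensorRT/DeepStream install locations:

```make
# Illustrative sketch - consult the real Makefile.config for the exact variables.
PLATFORM=TESLA                               # TESLA (dGPU) or TEGRA
CUDA_VER=10.0                                # CUDA toolkit version
TENSORRT_INSTALL_DIR=/usr/local/TensorRT-5.0 # adjust to your install path
DEEPSTREAM_INSTALL_DIR=/opt/DeepStream_Release
```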

Building NvYolo Plugin

  1. Go to the data directory and add your YOLO .cfg and .weights files.

    For yolo v2,
    $ wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov2.cfg
    $ wget https://pjreddie.com/media/files/yolov2.weights

    For yolo v2 tiny,
    $ wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov2-tiny.cfg
    $ wget https://pjreddie.com/media/files/yolov2-tiny.weights

    For yolo v3,
    $ wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg
    $ wget https://pjreddie.com/media/files/yolov3.weights

    For yolo v3 tiny,
    $ wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3-tiny.cfg
    $ wget https://pjreddie.com/media/files/yolov3-tiny.weights

  2. Set the right macro in the network_config.h file to choose a model architecture

  3. [OPTIONAL] Update the paths of the .cfg and .weights files and other network parameters in the network_config.cpp file if required.

  4. Add absolute paths of images to be used for calibration in the calibration_images.txt file within the data directory.
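The list can be generated with find; this is only a sketch, and the data/calib_images folder name is a placeholder for wherever your calibration images live:

```shell
# Sketch: build calibration_images.txt from a folder of images.
# "data/calib_images" is a placeholder - point find at your own image set.
mkdir -p data/calib_images
touch data/calib_images/img_000.jpg data/calib_images/img_001.jpg  # stand-ins for real images
find "$(pwd)/data/calib_images" -name '*.jpg' | sort > data/calibration_images.txt
cat data/calibration_images.txt
```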

  5. To build and install the plugin, run the following command from sources/plugins/gst-yoloplugin-tesla for dGPUs or from sources/plugins/gst-yoloplugin-tegra for Tegra devices:

    $ make && sudo make install

Object Detection using DeepStream

sample output

There are multiple apps that can be used to perform object detection in DeepStream.

deepstream-yolo-app

The deepstream-yolo-app located at sources/apps/deepstream_yolo is a sample app similar to the Test-1 and Test-2 apps available in the DeepStream SDK. The yolo app builds a sample GStreamer pipeline from components such as the H.264 parser, decoder, video converter, OSD, and the yolo plugin to run inference on an elementary H.264 video stream.

$ cd sources/apps/deepstream-yolo
$ make && sudo make install
$ cd ../../../
$ deepstream-yolo-app /path/to/sample_video.h264

deepstream-app [TESLA platform only]

The following steps describe how to run the YOLO plugin in deepstream-app:

  1. The section below in the config file corresponds to the ds-example (yolo) plugin in DeepStream. The config file is located at config/deepstream-app_yolo_config.txt. Make any changes to this section if required.

    [ds-example]
    enable=1
    processing-width=1280
    processing-height=720
    full-frame=1
    unique-id=15
    gpu-id=0
    
  2. Update the path to the video file source in the uri field under the [source0] group of the config file: uri=file://relative/path/to/source/video

  3. Go to the root folder of this repo and run $ deepstream-app -c config/deepstream-app_yolo_config.txt
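Step 2 above can also be scripted. The sed one-liner below is only a sketch; the config contents and the video path are placeholders:

```shell
# Sketch: rewrite the [source0] uri in the config (paths are placeholders).
cfg=/tmp/deepstream-app_yolo_config.txt
printf '[source0]\nuri=file://relative/path/to/source/video\n' > "$cfg"  # stand-in config
sed -i 's|^uri=.*|uri=file:///data/sample_video.h264|' "$cfg"
grep '^uri=' "$cfg"
```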

nvgstiva-app [TEGRA platform only]

  1. The section below in the config file corresponds to the ds-example (yolo) plugin in DeepStream. The config file is located at config/nvgstiva-app_yolo_config.txt. Make any changes to this section if required.

    [ds-example]
    enable=1
    processing-width=640
    processing-height=480
    full-frame=1
    unique-id=15
    
  2. Update the path to the video file source in the uri field under the [source0] group of the config file: uri=file://path/to/source/video

  3. Go to the root folder of this repo and run $ nvgstiva-app -c config/nvgstiva-app_yolo_config.txt

Object Detection using trt-yolo-app

The trt-yolo-app located at sources/apps/trt-yolo is a standalone sample app which can be used to run inference on test images. This app has no DeepStream dependencies and can be built independently. Add a list of absolute paths of images to be used for inference to the test_images.txt file located in the data directory, and run trt-yolo-app from the root directory of this repo. Additionally, the detections on test images can be saved by setting the kSAVE_DETECTIONS config parameter to true in the network_config.cpp file. The images overlaid with detections will be saved in the data/detections/ directory.

This app has three optional command-line arguments:

1. batch_size - Integer value to be used as the batch size for TRT inference. Default value is 1.
2. decode - Boolean value indicating whether the detections should be decoded. Default value is true.
3. seed - Integer value to set the seed of the random number generators. Default value is `time(0)`.

$ cd sources/apps/trt-yolo
$ make && sudo make install
$ cd ../../../

To run the app with default arguments
$ trt-yolo-app

To change the batch_size of the TRT engine
$ trt-yolo-app --batch_size=4

Note

  1. If you are using the plugin with the deepstream-app (located at /usr/bin/deepstream-app), register the yolo plugin as dsexample. To do so, replace line 671 in gstyoloplugin.cpp with:

    return gst_element_register(plugin, "dsexample", GST_RANK_PRIMARY, GST_TYPE_YOLOPLUGIN);

    This registers the plugin under the name dsexample so that deepstream-app can pick it up and add it to its pipeline. Then go to sources/gst-yoloplugin/ and run $ make && sudo make install to build and install the plugin.

  2. Tegra users working with DeepStream 1.5 and JetPack 3.3 will have to regenerate the .cache files to use the standard Caffe models available in the SDK. This can be done by deleting all the .cache files in the /home/nvidia/Model directory and all its subdirectories, and then running nvgstiva-app with the default config file.

  3. Tesla users working with DeepStream 2.0 / TensorRT 4.x / CUDA 9.2 should check out the DS2 version of this repo to avoid any build conflicts.

Contributors

nvcjr, vat-nvidia
