objectdetector's Introduction

JdeRobot

THIS REPOSITORY HAS BEEN ARCHIVED AND IS DEPRECATED. Since 2020 we have adopted ROS as our main framework for developing robotics, AI and computer vision applications and tools.

Introduction

JdeRobot is a software development suite for robotics, home-automation and computer vision applications. These domains involve sensors (for instance, cameras), actuators, and intelligent software in between, and JdeRobot has been designed to help in programming such intelligent software. It is mainly written in C++ and provides a distributed, component-based programming environment in which an application is made up of a collection of concurrent, asynchronous components. Each component may run on a different computer, and they are connected using the Ice communication middleware. Components may be written in C++, Python, Java... and all of them interoperate through explicit Ice interfaces.

JdeRobot simplifies access to hardware devices from the control program. Getting sensor measurements is as simple as calling a local function, and ordering motor commands is as easy as calling another local function. The platform attaches those calls to remote invocations on the components connected to the sensor or actuator devices, which can be real or simulated, local or remote over the network. Those functions make up the API of the Hardware Abstraction Layer; the robotic application uses it to get sensor readings and order actuator commands as it unfolds its behavior (see the sketch after the device list below). Several driver components have been developed to support different physical sensors, actuators and simulators. The drivers are used as components, installed at will depending on your configuration, and are included in the official release. Currently supported robots and devices:

  • RGBD sensors: Kinect from Microsoft, Asus Xtion
  • Pioneer robot from MobileRobots Inc.
  • Kobuki robot (TurtleBot) from Yujin Robot
  • Nao humanoid from Aldebaran
  • ArDrone quadrotor from Parrot
  • Firewire cameras, USB cameras, video files (mpeg, avi...), IP cameras (like Axis)
  • Pantilt unit PTU-D46 from Directed Perception Inc.
  • Laser Scanners: LMS from SICK and URG from Hokuyo
  • EVI PTZ camera from Sony
  • Gazebo and Stage simulators
  • Wiimote
  • X10 home automation devices
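
As a rough illustration of the Hardware Abstraction Layer idea, a control component written in Python might look like the sketch below. The proxy method names (getImage, sendV, sendW) are only indicative; the actual interface names depend on the JdeRobot version and driver configuration in use.

# Minimal sketch of a JdeRobot-style control loop (interface names are
# illustrative, not the exact JdeRobot API).
import time

def compute_command(image):
    # Placeholder behavior: stop if there is no image, otherwise creep forward.
    if image is None:
        return 0.0, 0.0
    return 0.2, 0.0

def control_loop(camera, motors):
    # camera and motors are proxies to remote driver components; calling them
    # looks like calling a local function.
    for _ in range(100):
        image = camera.getImage()   # sensor reading through the HAL
        speed, turn = compute_command(image)
        motors.sendV(speed)         # actuator commands through the HAL
        motors.sendW(turn)
        time.sleep(0.05)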

JdeRobot includes several robot programming tools and libraries. First, viewers and teleoperators for several robots, their sensors and motors. Second, a camera calibration component and a tuning tool for color filters. Third, the VisualStates tool for programming robot behavior using hierarchical finite state machines. It includes many sample components using OpenCV, PCL, OpenGL, etc. In addition, it provides a library to develop fuzzy controllers, a library for projective geometry and some computer vision processing.

Each component may have its own independent Graphical User Interface or none at all. Currently, GTK and Qt libraries are supported, and several examples of OpenGL for 3D graphics with both libraries are included.

JdeRobot is open-source software, licensed under the GPL and LGPL. It also uses third-party software such as the Gazebo simulator, OpenGL, GTK, Qt, Player, Stage, GSL, OpenCV, PCL, Eigen and Ogre.

JdeRobot is a project developed by the Robotics Group of Universidad Rey Juan Carlos (Madrid, Spain).

Installation

  • Add the latest ROS sources:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
  • Add the latest Gazebo sources:
sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key 67170598AF249743
  • Add the latest zeroc-ice sources:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv B6391CB2CFBA643D
sudo apt-add-repository "deb http://zeroc.com/download/Ice/3.7/ubuntu18.04 stable main"
  • Add JdeRobot repository (using dedicated file /etc/apt/sources.list.d/jderobot.list):
sudo sh -c 'echo "deb [arch=amd64] http://wiki.jderobot.org/apt `lsb_release -cs` main" > /etc/apt/sources.list.d/jderobot.list''
  • Get and add the public key from the JdeRobot repository
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 24E521A4
  • Update the repositories
sudo apt update
  • Install JdeRobot:
sudo apt install jderobot
sudo apt install jderobot-assets
  • After installing the package, you can close the terminal and reopen it to source the environment variables, OR just type:
source ~/.bashrc
  • If you already have a previous version of the packages installed, you only have to do:
sudo apt update && sudo apt upgrade

If you want to run JdeRobot on MS Windows, macOS or other Linux distributions, you can use Docker containers. We have created a Docker image with the current JdeRobot release and all the components needed to use it with JdeRobot-Academy. To download it, use:

docker pull jderobot/jderobot

For more information follow this link

Downloading the source code from GitHub is strongly NOT RECOMMENDED for new users unless you know what you are doing.

You have two options here:

  1. Install all the dependencies from a binary package
  2. Install all the dependencies manually (NOT RECOMMENDED)

For the first one, you only have to type the following:

sudo apt install jderobot-deps-dev

and skip to the next section of this README.

For the second one... keep reading

JdeRobot has several external dependencies, of two types: required dependencies and optional dependencies. The first are needed to compile and install the basics of JdeRobot, that is, all the libraries and interfaces needed by the components or to develop components that use JdeRobot. The second are needed only by some components, so you do not need them unless you want to use those components. For instance, the gazeboserver component uses Gazebo, a 3D robot simulator, so to use that component you must install Gazebo first.

First of all, repeat the Getting environment ready step.

Some libraries are required to compile, link or run JdeRobot. Just type the following commands:

  • Basic libraries:

sudo apt install build-essential libtool cmake g++ gcc git make

  • OpenGL libraries:

sudo apt install freeglut3 freeglut3-dev libgl1-mesa-dev libglu1-mesa

  • GTK2 libraries:
sudo apt install libgtk2.0-0 libgtk2.0-bin libgtk2.0-cil libgtk2.0-common libgtk2.0-dev libgtkgl2.0-1
sudo apt install libgtkgl2.0-dev libgtkglext1 libgtkglext1-dev libglademm-2.4-dev libgtkmm-2.4-dev 
sudo apt install libgnomecanvas2-0 libgnomecanvas2-dev  libgtkglext1-doc libgnomecanvasmm-2.6-dev
sudo apt install libgnomecanvasmm-2.6-1v5 libgtkglextmm-x11-1.2-0v5 libgtkglextmm-x11-1.2-dev
  • Gtk3 libraries:

sudo apt install libgoocanvasmm-2.0-6 libgoocanvasmm-2.0-dev

  • GSL libraries:

sudo apt install libgsl23 gsl-bin libgsl-dev

  • LibXML:

sudo apt install libxml++2.6-2v5 libxml++2.6-dev libtinyxml-dev

  • Eigen:

sudo apt install libeigen3-dev

  • FireWire:

sudo apt install libdc1394-22 libdc1394-22-dev

  • USB:

sudo apt install libusb-1.0-0 libusb-1.0-0-dev

  • CWIID:

sudo apt install libcwiid-dev

  • Python components:

sudo apt install python-matplotlib python-pyqt5 python-pip python-numpy python-pyqt5.qtsvg

  • Qfi

It can be compiled and installed from source: https://github.com/JdeRobot/ThirdParty/tree/master/qflightinstruments

  • Qt 5

sudo apt install qtbase5-dev libqt5script5 libqt5svg5-dev

  • Boost

sudo apt install libboost-system-dev libboost-filesystem-dev

  • ROS
sudo apt install ros-melodic-roscpp ros-melodic-std-msgs ros-melodic-cv-bridge ros-melodic-image-transport ros-melodic-roscpp-core ros-melodic-rospy ros-melodic-nav-msgs ros-melodic-geometry-msgs ros-melodic-mavros ros-melodic-gazebo-plugins ros-melodic-kobuki-msgs

Once all ROS packages are installed, add the script that sets the ROS environment variables to your .bashrc configuration file, and source it in the current shell:

echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc 
source ~/.bashrc 
  • Google glog (logging)

sudo apt install libgoogle-glog-dev

  • GStreamer

sudo apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev

  • ICE
sudo apt install libdb5.3-dev libdb5.3++-dev libssl-dev libbz2-dev libmcpp-dev \
            libzeroc-ice3.7 libzeroc-icestorm3.7 zeroc-ice-slice libzeroc-ice-dev

Or compile Ice from source:

git clone -b 3.7 https://github.com/zeroc-ice/ice.git 
cd ice/cpp
make CPP11=yes OPTIMIZE=yes
make install

Configure ICE for Python with pip

sudo pip2 install --upgrade pip
sudo pip2 install zeroc-ice==3.7.2
  • OpenNI 2

sudo apt-get install libopenni2-dev libopenni-dev

  • Point Cloud Library

sudo apt-get install libpcl-dev

  • OpenCV

sudo apt-get install libopencv-dev

  • NodeJS

sudo apt-get install nodejs

  • Kobuki robot libraries

You can find the source code in our git repository (http://github.com/jderobot/thirdparty.git)

  • Parrot SDK for ArDrone

If you want to install it manually from our third party repository, you only have to:

  1. Create a folder to compile the code: mkdir ardronelib-build && cd ardronelib-build

  2. Download the installer files (a CMakeLists.txt and two pkg-config templates):

wget https://raw.githubusercontent.com/RoboticsURJC/JdeRobot-ThirdParty/master/ardronelib/CMakeLists.txt
wget https://raw.githubusercontent.com/RoboticsURJC/JdeRobot-ThirdParty/master/ardronelib/ffmpeg-0.8.pc.in
wget https://raw.githubusercontent.com/RoboticsURJC/JdeRobot-ThirdParty/master/ardronelib/ardronelib.pc.in
  3. Compile and install as usual:
cmake .
make
sudo make install

After installing all the dependencies you can compile the project: clone this repo and build it as follows:

  • Download the source code from git:
git clone http://github.com/RoboticsURJC/JdeRobot.git
cd JdeRobot/
  • Check system and dependencies
mkdir build && cd build
cmake ..
  • Compile
make
  • Install
sudo make install

How To Contribute

To see the collaboration workflow and coding style of the JdeRobot community, please refer to the wiki page.

Copyright and license

Copyright 2015 - JdeRobot Developers

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.

objectdetector's People

Contributors

ayushmankumar7, bikram99, dependabot[bot], jmplaza, maigar, naxvm, rkachach, vinay0410

objectdetector's Issues

Problem when using full_model.h5

Hi, I am having some problems when loading the provided keras model (full_model.h5). The load_model from Keras is not working and it does not create the model. I printed the exception from
line 53 in network.py and it shows bad marshal data (unknown type code). So I searched for the problem and it seems to be a version mismatch when saving and loading models with Keras...
My versions of the main libraries needed are the following:

  • Keras==2.2.4
  • Keras-Applications==1.0.6
  • Keras-Preprocessing==1.0.5
  • tf==1.11.9
  • Python 2.7.12

Thanks
(I tried the provided TensorFlow model ssd_mobilenet_v2_coco_2018_03_29.pb and it works perfectly! Great work!)

Get a more fluent video flow

The thread timers and the shared-memory locking they perform have to be revised in order to achieve smoother video and detection flows.
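
One possible direction, sketched below under the assumption that the capture and detection threads currently exchange frames through a lock-guarded buffer, is to hand frames over through a small bounded queue so that neither side blocks the other. The class and method names here are illustrative, not the component's current code.

# Sketch: pass frames between the capture and detection threads through a
# bounded queue instead of a shared buffer guarded by locks (illustrative only).
import queue
import threading

frames = queue.Queue(maxsize=1)  # keep only the freshest frame

def capture_loop(camera, stop_event):
    while not stop_event.is_set():
        frame = camera.read()
        if frames.full():
            try:
                frames.get_nowait()   # drop the stale frame
            except queue.Empty:
                pass
        frames.put(frame)

def detection_loop(network, stop_event):
    while not stop_event.is_set():
        try:
            frame = frames.get(timeout=0.1)
        except queue.Empty:
            continue
        network.predict(frame)        # runs without holding any shared lock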

Enhancements

  • Make inferences using YOLO architecture.

  • Save the resulting inferences to an output video file.

Tensorflow Version

The TensorFlow code used here is written for TensorFlow 1.x, but requirements.txt just mentions tensorflow, which automatically installs the latest version of TensorFlow (TF 2.1.0).
Now, there can be two things done:

  1. Pin the version of tensorflow to be installed in requirements.txt (see the example line below)
  2. Convert the code base to TF 2.x
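
For the first option, a pinned line in requirements.txt could look like the following; the exact bounds are an example, not a tested constraint:

tensorflow >= 1.4.0, < 2.0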

I would be happy to work on this Issue.

Automatic installation of dependencies

We can focus on automating the installation of the required Python packages for executing the network, thanks to python-pip, by creating a requirements.txt file with this format:

opencv_python >= 3.3.1
tensorflow >= 1.4.0
scipy >= 1.0.0
sklearn >= 0.0
progressbar2 >= 3.34.3
h5py >= 2.7.1
cprint >= 1.1

Although pinning the version is not mandatory, you can consult the installed one by executing pip list | grep yourpackage, and then add a new line to the requirements.txt file, which already contains the TensorFlow middleware dependencies.

After this, we'll have to update the README file, guiding the user to execute pip install -r requirements.txt (obviously, JdeRobot must be installed as well, via apt or from source; we can simply add a link to the installation page).

Model path specified in the YML file

The path to the model to embed into the network (among all the models included in the model zoo) can be specified in the YML file, so a user can download a model, place it in a directory and load it directly into the Detection Network. @vinay0410 is on it!
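
A possible shape for this, purely as a sketch (the key names below are assumptions, not the final format), is a Model entry in objectdetector.yml read by a small helper:

# Sketch: read a hypothetical Model.Path entry from the YML file and hand it to
# the detection network (key names are illustrative).
import yaml

def load_model_path(yml_file):
    with open(yml_file) as f:
        cfg = yaml.safe_load(f)
    # e.g. in objectdetector.yml:
    #   ObjectDetector:
    #     Model:
    #       Path: /home/user/models/ssd_mobilenet_v2_coco_2018_03_29.pb
    return cfg['ObjectDetector']['Model']['Path']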

Implement an asynchronous design

To avoid unnecessary locks and blocking calls, we have to implement an asynchronous design (as in dl-digitclassifier).

This will allow us to implement a GUI-less mode in the node.
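
A minimal sketch of the idea, assuming the network keeps its own worker thread and the GUI (if any) only polls for the latest result; class and method names are illustrative, not the component's current code:

# Sketch of an asynchronous detection node: the worker thread consumes frames on
# its own schedule and publishes the latest result, so callers never block on it.
import threading

class AsyncDetector:
    def __init__(self, network, camera):
        self.network = network
        self.camera = camera
        self._result = None
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def latest_result(self):
        # Non-blocking: returns whatever the worker produced last (or None).
        with self._lock:
            return self._result

    def _run(self):
        while not self._stop.is_set():
            frame = self.camera.read()
            detections = self.network.predict(frame)
            with self._lock:
                self._result = detections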

YML parsing on the root node

The YML parsing should be moved to the root node (objectdetector.py), since the file is passed as an argument to it. Once parsed, the parameters can be passed to the corresponding invoked nodes (GUI, Network...).
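
A rough sketch of that flow in objectdetector.py; the section names are hypothetical and the node constructors are left commented out because their real signatures are not shown here:

# Sketch: parse the YML once in the root node and pass plain dictionaries to the
# nodes it creates (section names are illustrative).
import sys
import yaml

def main():
    yml_file = sys.argv[1]
    with open(yml_file) as f:
        cfg = yaml.safe_load(f)

    camera_cfg = cfg.get('Camera', {})
    network_cfg = cfg.get('Network', {})
    gui_cfg = cfg.get('GUI', {})

    # The invoked nodes receive already-parsed parameters instead of the file path:
    # camera = Camera(camera_cfg)
    # network = DetectionNetwork(network_cfg)
    # gui = GUI(gui_cfg)

if __name__ == '__main__':
    main()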

Re-design interface

A re-design of the interface has to be done.

The resulting GUI should have a couple of buttons: one for on-demand classification of the last captured frame, and one for toggling the real-time detection on and off.

As an extra, it should include the JdeRobot logo (already available at resources/jderobot.png) somewhere in the component window.

Mistake on line 31 of Camera/local_video.py

Hi, I think you made a little mistake on line 31 by using the device_idx variable instead of video_path. There is no device_idx variable there. Maybe a forgotten copy-paste :)
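
If the intent of that line is to open the file given in the configuration, the fix presumably just passes the stored path instead; the names below are taken from the issue text and have not been verified against the file:

# Sketch of the presumed fix in Camera/local_video.py, around line 31.
import cv2

class LocalVideo:
    def __init__(self, video_path):
        # should open the configured video file, not an undefined device_idx
        self.cam = cv2.VideoCapture(video_path)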

Device agnostic version of the detector

Following the setup instructions, all dependencies are satisfied, but installed with pip3 in order to ease compatibility with PyQt5. The configuration file objectdetector.yml is set to "Local".

The problem appears when the software is executed. The command to do that is: python3 objectdetector.py objectdetector.yml. But several dynamic libraries cannot be loaded:

  • Could not load dynamic library 'libnvinfer.so.6'.
  • Could not load dynamic library 'libnvinfer_plugin.so.6'.

The following message, "If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.", seems to say that I necessarily need Nvidia hardware.

So, the question is: do I necessarily need an Nvidia GPU to execute the code?

Integrate the Keras framework

We want to develop a CNN based on Keras, to be embedded in this component and detect objects under the same conditions as a TensorFlow one.

Display FPS

Implement a framerate counter on the processed image
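
A minimal way to do this, sketched with OpenCV (the function and variable names are illustrative, not existing code in the component):

# Sketch: overlay a frames-per-second counter on each processed image.
import time
import cv2

def draw_fps(frame, last_time):
    now = time.time()
    fps = 1.0 / max(now - last_time, 1e-6)
    cv2.putText(frame, '%.1f fps' % fps, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    return frame, now

It would be called once per processed frame, keeping the returned timestamp for the next call.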

Darknet support

Thanks a lot for this, guys, it works great for quick tests of various models.

I wonder if you have considered extending the backend to also support Darknet. I know there are various conversion tools to Keras or TensorFlow, but I'd rather drop the native model files and use the darknet library instead of having to go through additional steps.

If you don't mind me doing it, I can take an initial look at how such an extension could work.

Congratulations anyway for your work!

Add support to generate labels (used for object detection) automatically

In order to retrain a model (using transfer learning) we normally need to have a dataset or generate one. This usually implies tagging objects in images manually using tools such as LabelImg. Unfortunately, this manual process requires a lot of effort and time, since we have to mark all the objects within an image by hand and specify their classes.

Object-detector is a powerful tool which already has good support for different models/frameworks and is able to detect objects automatically in videos and images. It would be very helpful to extend the tool so it can generate detection datasets automatically (in PASCAL VOC format, for example). For this, the tool can detect the objects within a frame automatically and then let the user select which ones are correct and discard the bad ones. This would greatly speed up the process of data labeling.
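
As a rough sketch of the export step, one annotation per confirmed frame could be written with the standard PASCAL VOC element layout; the helper and its argument names below are hypothetical:

# Sketch: write one PASCAL VOC-style XML annotation for the detections the user
# has confirmed (helper and argument names are illustrative).
import xml.etree.ElementTree as ET

def save_voc_annotation(filename, image_name, width, height, detections):
    # detections: list of (label, xmin, ymin, xmax, ymax) tuples kept by the user
    root = ET.Element('annotation')
    ET.SubElement(root, 'filename').text = image_name
    size = ET.SubElement(root, 'size')
    ET.SubElement(size, 'width').text = str(width)
    ET.SubElement(size, 'height').text = str(height)
    ET.SubElement(size, 'depth').text = '3'
    for label, xmin, ymin, xmax, ymax in detections:
        obj = ET.SubElement(root, 'object')
        ET.SubElement(obj, 'name').text = label
        box = ET.SubElement(obj, 'bndbox')
        ET.SubElement(box, 'xmin').text = str(xmin)
        ET.SubElement(box, 'ymin').text = str(ymin)
        ET.SubElement(box, 'xmax').text = str(xmax)
        ET.SubElement(box, 'ymax').text = str(ymax)
    ET.ElementTree(root).write(filename)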

[Pending] YAML config format

The comm library will receive an update soon that changes the way to indicate which framework is being used to serve the images (ICE/ROS). Until now, the YAML node Camera.Server has had numerical values (0=ICE, 1=ROS...), but from this update on it will be indicated via a string value ("ice", "ros", etc.). As this is not implemented yet, we shall not change our implementation until the comm library is updated.
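
Once it lands, a parser that wants to stay compatible with both conventions for a while could accept either form, roughly like this; the numeric-to-string mapping is an assumption about how the new values will look:

# Sketch: accept both the current numeric Camera.Server values and the upcoming
# string values (the exact mapping is an assumption until comm is updated).
def parse_server(value):
    names = {0: 'ice', 1: 'ros'}
    if isinstance(value, int):
        return names.get(value, 'unknown')
    return str(value).lower()   # e.g. "ICE" -> "ice"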

Allow loading new models easily

Load new Keras/TensorFlow models easily inside the generic DetectionNetwork class, specifying a couple of parameters in the YML file.
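
One way this could look, as a sketch only (the framework dispatch and parameter names are assumptions, not the final design, and the TensorFlow branch assumes a 1.x frozen graph):

# Sketch: pick the right loader from a couple of YML parameters.
def load_detection_model(framework, model_path):
    if framework.lower() == 'keras':
        from keras.models import load_model
        return load_model(model_path)                 # e.g. a .h5 file
    if framework.lower() == 'tensorflow':
        import tensorflow as tf
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(model_path, 'rb') as f:   # e.g. a frozen .pb file
            graph_def.ParseFromString(f.read())
        graph = tf.Graph()
        with graph.as_default():
            tf.import_graph_def(graph_def, name='')
        return graph
    raise ValueError('Unsupported framework: %s' % framework)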

Possible improvements

  1. Every person could use the object detector from their cellphone.
  2. The object detector could be used in a confined area to see how it works with crowds, for example in stores where the lines are long.
