
drewnf / tensorflow_object_tracking_video

501 stars · 53 watchers · 199 forks · 124.29 MB

Object tracking in TensorFlow (localization, detection, classification), developed to participate in the ImageNet VID competition

License: MIT License

Python 90.92% Makefile 0.03% C++ 9.05%
detection video yolo tensorflow inception imagenet object-detection classification tensorbox dataset

tensorflow_object_tracking_video's Introduction

Tensorflow_Object_Tracking_Video

(Version 0.3, Last Update 10-03-2017)


The project follows the index below:

  1. Introduction;
  2. Requirements & Installation;
  3. YOLO Script Usage
    1. Setting Parameters;
    2. Usage.
  4. VID TENSORBOX Script Usage
    1. Setting Parameters;
    2. Usage.
  5. TENSORBOX Test Files;
  6. Dataset Scripts;
  7. Copyright;
  8. State of the Project.
  9. DOWNLOADS.
  10. Acknowledgements.
  11. Bibliography.

1.Introduction

This repository is my Master's thesis project, "Develop a Video Object Tracking with Tensorflow Technology". It is still under development, so many updates will be made. In this work, I used the architecture and problem-solving strategy of the T-CNN paper (Arxiv), which won the ImageNet 2015 VID teaser challenge. The whole script architecture is therefore made of several components in cascade:

  1. Still-Image Detection (returns tracking results on a single frame);
  2. Temporal-Information Detection (introduces temporal information into the DET results);
  3. Context-Information Detection (introduces context information into the DET results).

Notice that the Still-Image Detection component can be a single module or be decomposed into two sub-components:

  1. First: determine "where" in the frame;
  2. Second: determine "what" in the frame.

My project uses many open-source TensorFlow projects, such as:

2.Requirements & Installation

To install the script you only need to download the repository. To run the script you must have installed:

  • Tensorflow;
  • OpenCV;
  • Python;

All the necessary Python libraries can be installed easily through pip install package-name. If you want to follow a guide to installing the requirements, here is the link to a tutorial I wrote for myself and for a Deep Learning course at UPC.
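A minimal install command might look like the following. The exact package names are my assumption (the repository does not pin versions), and this 2017-era code targets Python 2.7 and an early TensorFlow 1.x, so versions may need adjusting:

```shell
# Assumed package names -- adjust versions to match the era of the code.
pip install tensorflow opencv-python numpy pandas matplotlib progressbar
```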

3.YOLO Script Usage

You only look once (YOLO) is a state-of-the-art, real-time object detection system.

i.Setting Parameters

These are the command-line arguments taken by the script. Most of them are optional; only the video path must be specified when calling the script:

  parser = argparse.ArgumentParser()
  parser.add_argument('--det_frames_folder', default='det_frames/', type=str)
  parser.add_argument('--det_result_folder', default='det_results/', type=str)
  parser.add_argument('--result_folder', default='summary_result/', type=str)
  parser.add_argument('--summary_file', default='results.txt', type=str)
  parser.add_argument('--output_name', default='output.mp4', type=str)
  parser.add_argument('--perc', default=5, type=int)
  parser.add_argument('--path_video', required=True, type=str)

Now you have to download the weights for YOLO and put them into /YOLO_DET_Alg/weights/.

For background on YOLO, here you can find the original code (C implementation) & paper.

ii.Usage

After setting the parameters, we can proceed and run the script:

  python VID_yolo.py --path_video video.mp4

You will see some terminal output like:

[screenshot: terminal output]

You will see real-time frame output (like the one below), and then everything will be embedded into the output video. I uploaded the first two tests I made to the folder /video_result; you can download them and look at the final result. The first one has problems with frame ordering, which is why you will see so much flickering in the video; that problem was then solved, and the second shows no flickering:

[screenshot: frame output]
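The flickering mentioned above came from frames being assembled in lexicographic rather than numeric order. A minimal sketch of the fix, assuming frame files are named like frame_1.png, frame_2.png (the filename pattern is my assumption, not the repo's exact naming):

```python
import re

def natural_sort(filenames):
    """Sort frame filenames by their embedded numbers instead of lexicographically."""
    def key(name):
        # Split into text and integer chunks so 'frame_10' sorts after 'frame_2'.
        return [int(tok) if tok.isdigit() else tok for tok in re.split(r'(\d+)', name)]
    return sorted(filenames, key=key)

frames = ['frame_10.png', 'frame_2.png', 'frame_1.png']
print(natural_sort(frames))  # ['frame_1.png', 'frame_2.png', 'frame_10.png']
```

Sorting the frame list this way before writing the video keeps the playback order correct even past frame 9.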

4.VID TENSORBOX Script Usage

i.Setting Parameters

These are the command-line arguments taken by the script. Most of them are optional; as before, only the video path must be specified when calling the script:

  parser = argparse.ArgumentParser()
  parser.add_argument('--output_name', default='output.mp4', type=str)
  parser.add_argument('--hypes', default='./hypes/overfeat_rezoom.json', type=str)
  parser.add_argument('--weights', default='./output/save.ckpt-1090000', type=str)
  parser.add_argument('--perc', default=2, type=int)
  parser.add_argument('--path_video', required=True, type=str)

I will soon put up a weights file for download. The training code and specs for the multiclass implementation will be added after the end of my thesis project.

ii.Usage

Download the .zip files linked in the Downloads section and replace the folders.

Then, after setting the parameters, we can proceed and run the script:

  python VID_tensorbox_multi_class.py --path_video video.mp4

5.Tensorbox Tests

In the folder video_result_OVT you can find the result files from runs of the VID TENSORBOX scripts.

6.Dataset Scripts

All the scripts below are for the VID classes, so if you want to adapt them to other classes you simply have to change the Classes.py file, where the correspondences between codes and names are defined. All the image data are processed with respect to a specific image ratio: because TENSORBOX works only with 640x480 PNG images, you will have to change the code a little to adapt it to your needs. I provide four scripts:

  1. Process_Dataset_heavy.py: processes your dataset with a brute-force approach; you will obtain more bounding boxes and files for each class;
  2. Process_Dataset_lightweight.py: processes your dataset with a lightweight approach; you will obtain fewer bounding boxes and files for each class;
  3. Resize_Dataset.py: resizes your dataset to 640x480 PNG images;
  4. Test_Processed_Data.py: tests that the processing ended without errors.
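The 640x480 constraint means arbitrary images must be scaled and, to preserve aspect ratio, padded. A small sketch of that arithmetic, assuming aspect-preserving letterboxing (the repo's actual resize strategy may differ, and the helper name is my own):

```python
def letterbox_geometry(w, h, target_w=640, target_h=480):
    """Return (new_w, new_h, pad_x, pad_y): the scaled size that fits inside
    the target while preserving aspect ratio, plus the padding on each side."""
    scale = min(target_w / w, target_h / h)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    pad_x = (target_w - new_w) // 2   # horizontal border added on each side
    pad_y = (target_h - new_h) // 2   # vertical border added on each side
    return new_w, new_h, pad_x, pad_y

print(letterbox_geometry(1920, 1080))  # (640, 360, 0, 60)
```

Remember that any bounding-box annotations must be transformed with the same scale and padding, or the boxes will no longer line up with the resized frames.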

I've also added some scripts to pre-process and prepare the dataset for training the last component, the Inception model; you can find them in a subfolder of the dataset scripts folder.

7.Copyright

According to the LICENSE file of the original code,

  • Neither I nor the original author holds any liability for any damages;
  • Do not use this commercially.

8.State of the Project

  • Supports the YOLO (single-class) DET algorithm;
  • Supports training for TENSORBOX and Inception only;
  • Uses temporal information [retrieved through some post-processing algorithms I implemented in the Utils_Video.py file; NOT trainable];
  • Modular architecture composed in cascade of: TENSORBOX (as general object detector), a tracker and smoother, and Inception (as object classifier);
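The "smoother" stage above can be illustrated with a minimal sketch (this is an illustration of the idea, not the repo's Utils_Video.py implementation): an exponential moving average over per-frame box coordinates to suppress detection jitter:

```python
def smooth_boxes(boxes, alpha=0.6):
    """Exponentially smooth a sequence of (x, y, w, h) boxes across frames.
    alpha weights the current frame; (1 - alpha) carries over the past."""
    smoothed, prev = [], None
    for box in boxes:
        if prev is None:
            prev = box  # first frame: nothing to smooth against
        else:
            prev = tuple(alpha * c + (1 - alpha) * p for c, p in zip(box, prev))
        smoothed.append(prev)
    return smoothed

track = [(100, 100, 50, 50), (110, 98, 50, 50), (104, 102, 50, 50)]
print(smooth_boxes(track))
```

A higher alpha tracks fast motion more faithfully; a lower alpha gives steadier boxes at the cost of lag.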

9.Downloads

Below are the links to the weights files for Inception and TENSORBOX from my retraining experiments:

10.Acknowledgements

Thanks to Professors:

  • Elena Baralis from Politecnico di Torino, Dipartimento di Automatica e Informatica;
  • Jordi Torres from BSC, Department of Computer Science;
  • Xavier Giró-i-Nieto from UPC, Department of Image Processing.

11.Bibliography

i.Course

ii.Classification

iii.Detection

iv.Tracking

tensorflow_object_tracking_video's People

Contributors

drewnf · titan-pycompat · vishnuprasad1998


tensorflow_object_tracking_video's Issues

code style

hey.
Your code can't run on Python 3.
It has some code-style errors, e.g. Python 2 uses print but Python 3 uses print().
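The Python 2/3 print mismatch this issue describes has a standard forward-compatible pattern; a minimal sketch (the `report` helper is my own, purely for illustration):

```python
from __future__ import print_function  # makes print a function on Python 2 as well

def report(msg):
    # print(...) is valid in both Python 2 (with the __future__ import) and Python 3
    print(msg)
    return msg

report('works on both Python 2 and Python 3')
```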

about caffe

When I run the program, the terminal shows a message about a missing kaffe module. What should I do?

Why do I get NaN for df.class_name.max?

Dear Ferri:
Thank you for your code.
I am using TensorFlow with OpenCV 3.3.1, and I changed some of the code according to the demands of different OpenCV versions (for example, cv2.cv.CV_CAP_PROP_FRAME_COUNT is changed to cv2.CAP_PROP_FRAME_COUNT). After these minor changes, the code runs without errors on my computer.
However, the output seems a little strange when I run VID_yolo.py (the 3rd step in your README.md). The results are as follows:
Starting Loading Results
[==========================================================] 100% Time: 0:00:00
Finished Loading Results
Computing Final Mean Reasults..
Class:
nan
Max Value:
nan
Min Value:
nan
Elapsed Time:6 Seconds
Running Completed with Success!!!
It is a little confusing why it gives out NaNs, since I haven't changed any parameters in your code.

Also, although I saw the frames output, it is not quite the same as that in the folder /video_result, and the green bounding box is not shown.

Could you kindly tell me how to solve this problem? Many thanks.
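The OpenCV constant rename mentioned in this issue (cv2.cv.CV_CAP_PROP_FRAME_COUNT in OpenCV 2 vs cv2.CAP_PROP_FRAME_COUNT in OpenCV 3+) can be handled with a lookup that tries each candidate name in turn; a generic sketch (the helper name is my own):

```python
def resolve_attr(module, candidates):
    """Return the first attribute found among dotted candidate names, e.g.
    across OpenCV versions: 'CAP_PROP_FRAME_COUNT' then 'cv.CV_CAP_PROP_FRAME_COUNT'."""
    for name in candidates:
        obj = module
        try:
            for part in name.split('.'):  # walk dotted paths like 'cv.CV_...'
                obj = getattr(obj, part)
            return obj
        except AttributeError:
            continue
    raise AttributeError('none of %s found' % (candidates,))

# Usage with OpenCV (assumed installed):
#   import cv2
#   FRAME_COUNT = resolve_attr(cv2, ['CAP_PROP_FRAME_COUNT', 'cv.CV_CAP_PROP_FRAME_COUNT'])
```

This keeps a single code path working on both OpenCV 2.x and 3.x without scattering version checks through the scripts.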

Fine Tuning

How can I fine-tune your model?
I don't have sufficient data to retrain it from scratch.
I want to fine-tune your model on my data, which has only two classes.

UnicodeDecodeError: "ascii"

The first run I made with:

python VID_yolo.py --path_video video.mp4

I got this error:

Traceback (most recent call last):
File "VID_yolo.py", line 11, in
import Utils_Video
File "/home/roberto/Programación/TENSORFLOW/Tensorflow_Object_Tracking_Video-master/Utils_Video.py", line 5, in
import utils_image
ImportError: No module named utils_image

I changed this

import utils_image

To this

import Utils_Image

In Utils_Video.py

And now I get this error:

Traceback (most recent call last):
File "VID_yolo.py", line 11, in
import Utils_Video
File "/home/roberto/Programación/TENSORFLOW/Tensorflow_Object_Tracking_Video-master/Utils_Video.py", line 5, in
import Utils_Image
File "/home/roberto/Programación/TENSORFLOW/Tensorflow_Object_Tracking_Video-master/Utils_Image.py", line 10, in
import matplotlib.pyplot as plt
File "/home/roberto/.local/lib/python2.7/site-packages/matplotlib/pyplot.py", line 72, in
from matplotlib.backends import pylab_setup
File "/home/roberto/.local/lib/python2.7/site-packages/matplotlib/backends/init.py", line 14, in
line for line in traceback.format_stack()
File "/home/roberto/.local/lib/python2.7/site-packages/matplotlib/backends/init.py", line 16, in
if not line.startswith(' File "<frozen importlib._bootstrap'))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 32: ordinal not in range(128)

What's wrong? Thanks in advance!!!

can not open a mp4 video

Hello. Firstly, I find that your code has an import error. The reason should be that the names are not uniform between Utils_Video.py and utils_video.py, or Utils_Image.py and utils_image.py. Please check it.

Importantly, I run python VID_yolo.py --path_video xxx.mp4, but it shows this error:

Opening File Video:2.mp4 
could Not Open : 2.mp4
Traceback (most recent call last):
  File "VID_yolo.py", line 120, in <module>
    main()
  File "VID_yolo.py", line 108, in main
    frame_list, frames = utils_video.extract_frames(args.path_video, args.perc)
TypeError: 'NoneType' object is not iterable

Why can it not open an mp4 file? Can you give some advice?

Indentation for reregress option

It seems there is an indentation mistake when you construct the graph:
reregress is an option of rezoom.

    if H['use_rezoom']:
        pred_boxes, pred_logits, pred_confidences, pred_confs_deltas, pred_boxes_deltas = build_forward(H, tf.expand_dims(x_in, 0), googlenet, 'test', reuse=None)
        grid_area = H['grid_height'] * H['grid_width']
        pred_confidences = tf.reshape(tf.nn.softmax(tf.reshape(pred_confs_deltas, [grid_area * H['rnn_len'], H['num_classes']])), [grid_area, H['rnn_len'], H['num_classes']], name='pred_confidences')
        pred_logits = tf.reshape(tf.nn.softmax(tf.reshape(pred_logits, [grid_area * H['rnn_len'], H['num_classes']])), [grid_area, H['rnn_len'], H['num_classes']])
    if H['reregress']:
        pred_boxes = pred_boxes + pred_boxes_deltas
    else:
        pred_boxes, pred_logits, pred_confidences = build_forward(H, tf.expand_dims(x_in, 0), googlenet, 'test', reuse=None)

must be

    if H['use_rezoom']:
        pred_boxes, pred_logits, pred_confidences, pred_confs_deltas, pred_boxes_deltas = build_forward(H, tf.expand_dims(x_in, 0), googlenet, 'test', reuse=None)
        grid_area = H['grid_height'] * H['grid_width']
        pred_confidences = tf.reshape(tf.nn.softmax(tf.reshape(pred_confs_deltas, [grid_area * H['rnn_len'], H['num_classes']])), [grid_area, H['rnn_len'], H['num_classes']], name='pred_confidences')
        pred_logits = tf.reshape(tf.nn.softmax(tf.reshape(pred_logits, [grid_area * H['rnn_len'], H['num_classes']])), [grid_area, H['rnn_len'], H['num_classes']])
        if H['reregress']:
            pred_boxes = pred_boxes + pred_boxes_deltas
    else:
        pred_boxes, pred_logits, pred_confidences = build_forward(H, tf.expand_dims(x_in, 0), googlenet, 'test', reuse=None)

TENSORBOX for multiclass

Have you changed the basic TENSORBOX to run on multiclass? If you did, do you have some documentation on how to use your updated version of TENSORBOX with multiclass? If you didn't, on which network do you base your multiclass detection, is it YOLO? And what was the purpose of using TENSORBOX?
Thank you...

AttributeError: 'module' object has no attribute 'add_rectangles'

Hello,

I'm testing your code for videos and when I executed the script "train_multiclass.py" to obtain the model, it gave me the following error:

Traceback (most recent call last):
  File "TENSORBOX/train_multiclass.py", line 610, in <module>
    main()
  File "TENSORBOX/train_multiclass.py", line 607, in main
    train(H, test_images=[])
  File "TENSORBOX/train_multiclass.py", line 555, in train
    test_output_to_log = train_utils_multiclass.add_rectangles(H,
AttributeError: 'module' object has no attribute 'add_rectangles'

Is this expected?

Probably we should use "train_utils.add_rectangles" I guess?

Thanks in advance.

Broken import

In Utils_Video.py line 5:
import utils_image should be changed to: import Utils_Image.

Where is the progressbar?

Hello, DrewNF

  • When I run VID_yolo.py, the error was: ModuleNotFoundError: No module named 'progressbar'
  • Then I searched all the directories, and I can't find a file named progressbar.
  • So what is that, and where can I get it?
