smorodov / multitarget-tracker
Multiple Object Tracker, Based on Hungarian algorithm + Kalman filter.
License: Apache License 2.0
Thanks for your awesome project. With a better detector, the tracking system works well. Is the only way to achieve real-time performance to use a GPU?
Hello,
This is very interesting and great work! I am trying to understand the implementation, as I am not very familiar with the Kalman filter. I have read this article, but that's too generic.
The first question that comes to my mind is: how are the Kalman filter matrices determined?
For example, here in CreateLinear:
```cpp
m_linearKalman->transitionMatrix = (cv::Mat_<track_t>(4, 4) <<
    1, 0, m_deltaTime, 0,
    0, 1, 0, m_deltaTime,
    0, 0, 1, 0,
    0, 0, 0, 1);
m_linearKalman->processNoiseCov = (cv::Mat_<track_t>(4, 4) <<
    pow(m_deltaTime,4.0)/4.0, 0, pow(m_deltaTime,3.0)/2.0, 0,
    0, pow(m_deltaTime,4.0)/4.0, 0, pow(m_deltaTime,3.0)/2.0,
    pow(m_deltaTime,3.0)/2.0, 0, pow(m_deltaTime,2.0), 0,
    0, pow(m_deltaTime,3.0)/2.0, 0, pow(m_deltaTime,2.0));
```
Similarly, in CreateLinear for a Rect:
```cpp
m_linearKalman->transitionMatrix = (cv::Mat_<track_t>(6, 6) <<
    1, 0, 0, 0, m_deltaTime, 0,
    0, 1, 0, 0, 0, m_deltaTime,
    0, 0, 1, 0, 0, 0,
    0, 0, 0, 1, 0, 0,
    0, 0, 0, 0, 1, 0,
    0, 0, 0, 0, 0, 1);
m_linearKalman->processNoiseCov = (cv::Mat_<track_t>(6, 6) <<
    pow(m_deltaTime,4.)/4., 0, 0, 0, pow(m_deltaTime,3.)/2., 0,
    0, pow(m_deltaTime,4.)/4., 0, 0, pow(m_deltaTime,3.)/2., 0,
    0, 0, pow(m_deltaTime,4.)/4., 0, 0, 0,
    0, 0, 0, pow(m_deltaTime,4.)/4., 0, 0,
    pow(m_deltaTime,3.)/2., 0, 0, 0, pow(m_deltaTime,2.), 0,
    0, pow(m_deltaTime,3.)/2., 0, 0, 0, pow(m_deltaTime,2.));
```
Why is the process noise covariance matrix not symmetric in the second case? Is there any reference you can provide that explains the reasoning behind choosing these matrices? What assumptions are made when defining them?
What do pow(m_deltaTime,4.)/4., pow(m_deltaTime,2.), and pow(m_deltaTime,3.)/2. stand for?
Help is much appreciated.
Thanks!
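For what it's worth, these entries match the standard discrete white-noise-acceleration (constant-velocity) model: per axis the noise enters through G = (Δt²/2, Δt)ᵀ, so Q = σ_a² G Gᵀ produces exactly the Δt⁴/4, Δt³/2, Δt² pattern. Since G Gᵀ is symmetric by construction, the asymmetry in the 6×6 version looks like a copy-paste slip (in the second row, the Δt³/2 term sits in column 5, where symmetry would put it in column 6). A standalone sketch of this construction, not the repo's code:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Sketch (not the repo's code): process noise covariance for one axis of a
// constant-velocity Kalman model, assuming white-noise acceleration.
// State is (position, velocity); noise enters via G = (dt^2/2, dt)^T,
// so Q = sigma_a^2 * G * G^T, which yields the dt^4/4, dt^3/2, dt^2 entries.
std::array<std::array<double, 2>, 2> ProcessNoise(double dt, double sigmaA)
{
    const double g0 = 0.5 * dt * dt; // position component of G
    const double g1 = dt;            // velocity component of G
    const double s2 = sigmaA * sigmaA;
    return {{ { s2 * g0 * g0, s2 * g0 * g1 },
              { s2 * g1 * g0, s2 * g1 * g1 } }};
}
```

With sigmaA = 1, the four entries are exactly dt⁴/4, dt³/2, dt³/2, and dt², and the matrix is symmetric by construction.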
Hi Mr. Smorodov,
Thank you for sharing your code.
I want to compile this project on Ubuntu 14.04 with OpenCV 3.1.0, but I do not know how to compile it with gcc. How can I do that?
Hi Smorodov
I'm currently using the DNN detector and it works fine. I need to track people; I managed to use Kalman filtering for one person, but it failed drastically for multiple persons. That is not even the big issue, though. Correct me if I am wrong, but apparently the DNN detector (based on MobileNet SSD) reports the detected objects (persons, in my case) from the highest score to the lowest. I was wondering how you solved this problem (assuming you had it too), since that ordering could be problematic when you measure the spatial positions of the objects in the frames to feed the Kalman filter.
I saw the "MobileNet SSD and tracking for low resolution and low quality videos from car DVR" demo and wanted to get the same result. I downloaded your project and managed to run it fine, but I did not quite understand how you put the DNN and the tracking together. Do you have a paper about your project? That could be helpful.
Best regards
Kuntima K. M.
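On the ordering question above: if detections are matched to tracks by minimising a cost matrix (this repo advertises the Hungarian algorithm for that step), then the order in which the detector emits its boxes should not matter, because the optimal assignment depends only on the costs. A toy, self-contained sketch of the idea, using brute-force permutation search instead of the repo's actual solver, and assuming equal numbers of tracks and detections:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

struct Pt { float x, y; };

// Toy sketch (not the repo's solver): assign detections to tracks by
// minimising total Euclidean distance over all permutations. The Hungarian
// algorithm finds the same optimum in O(n^3); the point here is that the
// result depends only on the cost matrix, not on detection order.
std::vector<int> AssignDetections(const std::vector<Pt>& tracks,
                                  const std::vector<Pt>& dets)
{
    std::vector<int> perm(dets.size());
    std::iota(perm.begin(), perm.end(), 0);
    std::vector<int> best = perm;
    float bestCost = 1e30f;
    do {
        float cost = 0.f;
        for (size_t i = 0; i < tracks.size(); ++i)
            cost += std::hypot(tracks[i].x - dets[perm[i]].x,
                               tracks[i].y - dets[perm[i]].y);
        if (cost < bestCost) { bestCost = cost; best = perm; }
    } while (std::next_permutation(perm.begin(), perm.end()));
    return best; // best[i] = index of the detection assigned to track i
}
```

Feeding the same detections in a different (e.g. score-sorted) order only permutes the indices; the matched pairs come out the same.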
I have some test videos which I need to rotate or scale before testing. Can you help me with this? Thanks.
Hello Mr Smorodov,
First, I want to thank you for sharing your code.
I have a problem at the installation step:
[ 6%] Building CXX object CMakeFiles/MultitargetTracker.dir/Detector/BackgroundSubtract.cpp.o
/Multitarget-tracker-master/Detector/BackgroundSubtract.cpp: In constructor ‘BackgroundSubtract::BackgroundSubtract(BackgroundSubtract::BGFG_ALGS, int, int, int, int, int, int)’:
/Multitarget-tracker-master/Detector/BackgroundSubtract.cpp:40:26: error: ‘createBackgroundSubtractorCNT’ is not a member of ‘cv::bgsegm’
m_modelOCV = cv::bgsegm::createBackgroundSubtractorCNT(15, true, 15 * 60, true);
^
CMakeFiles/MultitargetTracker.dir/build.make:100: recipe for target 'CMakeFiles/MultitargetTracker.dir/Detector/BackgroundSubtract.cpp.o' failed
make[2]: *** [CMakeFiles/MultitargetTracker.dir/Detector/BackgroundSubtract.cpp.o] Error 1
CMakeFiles/Makefile2:60: recipe for target 'CMakeFiles/MultitargetTracker.dir/all' failed
make[1]: *** [CMakeFiles/MultitargetTracker.dir/all] Error 2
Makefile:76: recipe for target 'all' failed
Can you help me, please?
Thank you in advance.
hi, @Smorodov
This is great work! I wonder whether you have ever measured the speed of this implementation.
Best regards,
Joshua
Hi, I was wondering if there is a possibility to use this code when the input comes from a webcam instead of a file path and name. I explored the option, and I think that if I override the Capture and Detect functions of the VideoExample class in VideoExample.cpp I can achieve something like this. Is my understanding correct?
I would need to pass '0' as the value instead of the path and file name in Capture().
I might be oversimplifying it here. Your feedback would be great!
Hi.
Based on the latest dnn examples from OpenCV, I have created a new CDetector that gives classifier rectangles.
This is the code; maybe I should open a pull request for it.
```cpp
CClassificatorDetector::CClassificatorDetector(
    bool collectPoints,
    cv::UMat& gray
)
{
    m_fg = gray.clone();
    m_minObjectSize.width = std::max(5, gray.cols / 100);
    m_minObjectSize.height = m_minObjectSize.width;
    m_collectPoints = collectPoints;

    cv::String protoTxtName = "MobileNetSSD_deploy.prototxt.txt";
    cv::String caffeModelName = "MobileNetSSD_deploy.caffemodel";
    deepLearningModel = cv::dnn::readNetFromCaffe(protoTxtName, caffeModelName);
}

CClassificatorDetector::~CClassificatorDetector()
{
}

const std::vector<cv::Point_<float>>& CClassificatorDetector::Detect(cv::UMat& gray)
{
    m_fg = gray.clone();
    cv::UMat resizedMat;
    m_regions.clear();
    m_centers.clear();
    std::vector<cv::Rect> rects;

    cv::resize(m_fg, resizedMat, cv::Size(300, 300));
    cv::Mat blob = cv::dnn::blobFromImage(resizedMat.getMat(cv::ACCESS_READ), 0.007843, cv::Size(300, 300), cv::Scalar(127.5));
    deepLearningModel.setInput(blob);
    cv::Mat prob = deepLearningModel.forward();
    cv::Mat probMat(prob.size[2], prob.size[3], CV_32F, prob.ptr<float>());

    for (int i = 0; i < probMat.rows; i++)
    {
        // Each detection row: [image_id, class_id, confidence, x1, y1, x2, y2],
        // with coordinates normalized to [0, 1]. Class 15 is "person" in the
        // MobileNet SSD (VOC) label map.
        if (probMat.at<float>(i, 2) > 0.70f && probMat.at<float>(i, 1) == 15)
        {
            double startX = m_fg.cols * probMat.at<float>(i, 3);
            double startY = m_fg.rows * probMat.at<float>(i, 4);
            double endX = m_fg.cols * probMat.at<float>(i, 5);
            double endY = m_fg.rows * probMat.at<float>(i, 6);
            // Columns 5 and 6 hold the bottom-right corner, not the size:
            // convert to width/height before building the cv::Rect.
            rects.push_back(cv::Rect(cvRound(startX), cvRound(startY),
                                     cvRound(endX - startX), cvRound(endY - startY)));
        }
    }

    for (const cv::Rect& r : rects)
    {
        if (r.width >= m_minObjectSize.width &&
            r.height >= m_minObjectSize.height)
        {
            CRegion region(r);
            cv::Point2f center(r.x + 0.5f * r.width, r.y + 0.5f * r.height);

            if (m_collectPoints)
            {
                const int yStep = 5;
                const int xStep = 5;
                for (int y = r.y; y < r.y + r.height; y += yStep)
                {
                    cv::Point2f pt(0, static_cast<float>(y));
                    for (int x = r.x; x < r.x + r.width; x += xStep)
                    {
                        pt.x = static_cast<float>(x);
                        if (r.contains(pt))
                        {
                            region.m_points.push_back(pt);
                        }
                    }
                }
                if (region.m_points.empty())
                {
                    region.m_points.push_back(center);
                }
            }
            m_regions.push_back(region);
            m_centers.push_back(center);
        }
    }
    return m_centers;
}
```
If I want to use YOLO v3 with a GPU, do I need to tick the WITH_CUDA option when compiling OpenCV?
If I compile OpenCV without WITH_CUDA, can I still use the GPU to run the code?
My Goal is to have real-time MultiTracker with Learning.
I used a Kalman filter to track an object, but I found errors in the estimation while tracking: the object was not tracked continuously. I want to implement some learning mechanism along with the tracking.
One way I thought of doing this involves the object's color (Scalar or Vec3b). I tried KCF, MIL, etc.; they are not real-time. Can you recommend any real-time learning mechanism, or ways to improve the approach proposed above?
I just want to use the function VideoExample::Detection to detect objects in one frame with SSD.
I added this function inside a loop in which each frame of the video is read via VideoCapture.
I also added the following code:

```cpp
SSDMobileNetExample dnn_detector;
Mat frame_mat;
capture >> frame_mat;
cv::UMat grayFrame1;
regions_t regions;
dnn_detector.Detection(frame_mat, grayFrame1, regions);
```
But the error shows that m_detector inside the Detection function is empty.
Could you please tell me why?
Thank you.
I noticed that you are using an ROI to do KCF tracking here. If I'm not misunderstanding this scope, ROI cropping does not speed up the KCF algorithm: because KCF is a local tracking algorithm, the global image area does not affect its speed.
I'm trying to build Multitarget-tracker using CMake. Initially I got the warning below:
You should manually point CMake variable OpenCV_DIR to your build of OpenCV library.
Call Stack (most recent call first):
CMakeLists.txt:57 (find_package)
I manually pointed OpenCV_DIR to the directory of my OpenCV build and tried to build again.
Warning:
CMake Warning at CMakeLists.txt:23 (FIND_PACKAGE):
Found package configuration file:
C:/OpenCV/opencv/build/x86/vc12/lib/OpenCVConfig.cmake
but it set OpenCV_FOUND to FALSE so package "OpenCV" is considered to be
NOT FOUND.
I ticked the OpenCV_FOUND checkbox, but after I press 'Configure' again, it automatically unchecks itself.
I have installed opencv_contrib, but how can I make use of it in the code?
I'm using OpenCV 3.2.0 on Ubuntu 14.04
when I try to run "make" I get the following error:
```
In file included from /home/mina/Desktop/Hungarian/Multitarget-tracker/main.cpp:2:0:
/home/mina/Desktop/Hungarian/Multitarget-tracker/vibe_src/BackgroundSubtract.h:6:30: fatal error: opencv2/bgsegm.hpp: No such file or directory
 #include <opencv2/bgsegm.hpp>
compilation terminated.
make[2]: *** [CMakeFiles/MultitargetTracker.dir/main.cpp.o] Error 1
make[1]: *** [CMakeFiles/MultitargetTracker.dir/all] Error 2
make: *** [all] Error 2
```
Lately, I've learned that bgsegm might work on older OpenCV versions (e.g. 3.0.0), but I'm currently using 3.2.0 for my projects, so is there a way for the code to run on 3.2.0?
Thanks in advance.
Hi Smorodov
I want to use Yolo v3 tiny model.
So I downloaded .cfg and .weight files.
But it doesn't work. How can I fix it?
OpenCV(4.0.0-pre) Error: Unspecified error (Requested layer "detection_out" not found) in cv::dnn::experimental_dnn_v4::Net::Impl::getLayerData, file c:\opencv-master\modules\dnn\src\dnn.cpp, line 936
Hi, thanks very much for sharing the code. I ran your project using the SSD detector, and the result I got is very bad; there are many wrong detections. Do you have any ideas? Thank you.
Sir, I am trying to run your code, but I do not have a clear idea about which files need to be added, i.e. which header and source files are required. Could you please tell me how to run this project correctly?
Hi Smorodov,
Thank you for your share and source code for our learning.
I am trying to build and run the source code in CLion on macOS, using g++-7 for C++14 compatibility.
It went well until the message below:
```
[100%] Linking CXX executable ../build/MultitargetTracker
Undefined symbols for architecture x86_64:
  "cv::KeyPointsFilter::runByImageBorder(std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::Size_<int>, int)", referenced from:
      LBSP::computeImpl(cv::Mat const&, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::Mat&) const in LBSP.cpp.o
      LBSP::compute2(cv::Mat const&, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::Mat&) const in LBSP.cpp.o
      LBSP::validateKeyPoints(std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::Size_<int>) in LBSP.cpp.o
  "cv::KeyPointsFilter::runByKeypointSize(std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, float, float)", referenced from:
      LBSP::computeImpl(cv::Mat const&, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::Mat&) const in LBSP.cpp.o
      LBSP::compute2(cv::Mat const&, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::Mat&) const in LBSP.cpp.o
  "cv::CascadeClassifier::detectMultiScale(cv::_InputArray const&, std::vector<cv::Rect_<int>, std::allocator<cv::Rect_<int> > >&, double, int, int, cv::Size_<int>, cv::Size_<int>)", referenced from:
      HybridFaceDetectorExample::ProcessFrame(cv::Mat, cv::UMat) in main.cpp.o
      FaceDetector::Detect(cv::UMat&) in FaceDetector.cpp.o
  "cv::Feature2D::detectAndCompute(cv::_InputArray const&, cv::_InputArray const&, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::_OutputArray const&, bool)", referenced from:
      construction vtable for cv::Feature2D-in-LBSP in LBSP.cpp.o
      vtable for LBSP in LBSP.cpp.o
  "cv::Feature2D::detect(cv::_InputArray const&, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::_InputArray const&)", referenced from:
      construction vtable for cv::Feature2D-in-LBSP in LBSP.cpp.o
      vtable for LBSP in LBSP.cpp.o
  "cv::Feature2D::detect(cv::_InputArray const&, std::vector<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >, std::allocator<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> > > >&, cv::_InputArray const&)", referenced from:
      construction vtable for cv::Feature2D-in-LBSP in LBSP.cpp.o
      vtable for LBSP in LBSP.cpp.o
  "cv::Feature2D::compute(cv::_InputArray const&, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&, cv::_OutputArray const&)", referenced from:
      construction vtable for cv::Feature2D-in-LBSP in LBSP.cpp.o
      vtable for LBSP in LBSP.cpp.o
  "cv::Feature2D::compute(cv::_InputArray const&, std::vector<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >, std::allocator<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> > > >&, cv::_OutputArray const&)", referenced from:
      construction vtable for cv::Feature2D-in-LBSP in LBSP.cpp.o
      vtable for LBSP in LBSP.cpp.o
  "cv::HOGDescriptor::detectMultiScale(cv::_InputArray const&, std::vector<cv::Rect_<int>, std::allocator<cv::Rect_<int> > >&, double, cv::Size_<int>, cv::Size_<int>, double, double, bool) const", referenced from:
      PedestrianDetector::Detect(cv::UMat&) in PedestrianDetector.cpp.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make[3]: *** [../build/MultitargetTracker] Error 1
make[2]: *** [CMakeFiles/MultitargetTracker.dir/all] Error 2
make[1]: *** [CMakeFiles/MultitargetTracker.dir/rule] Error 2
make: *** [MultitargetTracker] Error 2
```
I have already installed OpenCV with contrib (I assume).
What could the steps be to solve this?
Best
Based on this paper:
https://medusa.fit.vutbr.cz/traffic/data/papers/2014-CESCG-TrafficAnalysis.pdf
and on the dataset you sent me, I tried comparing raw detections against track boxes.
Would it be possible to add another check of track accuracy based on the trajectory along a line?
For example, on a road: the orange lines are the director trajectories (added by the user). Then we have a green tracker trajectory that is plausible, and another red tracker trajectory that is not plausible or not desirable.
Because of the thermal imagery, the background subtractor sometimes produces random boxes that do not follow the director trajectory, so with this test it might be possible to reject false tracks.
I generated a solution file (.sln) with CMake and opened it in Visual Studio.
When I run the code, the console shows the following:
Examples of the Multitarget tracking algorithm
Usage:
./MultitargetTracker [--example]=<number of example 0..5> [--start_frame]= [--end_frame]= [--end_delay]= [--out]= [--show_logs]=
Press:
'm' key for change mode: play|pause. When video is paused you can press any key for get next frame.
Press Esc to exit from video
OpenCL not used
[ INFO:0] VIDEOIO: Enabled backends(6, sorted by priority): FFMPEG(1000); MSMF(990); DSHOW(980); VFW(970); CV_IMAGES(960); CV_MJPEG(950)
warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:856)
warning: ../data/atrium.avi (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:857)
Can't open ../data/atrium.avi
Because I run it on Windows, I am confused about how to pass these parameters on the command line.
Thank you for your help!
Hi, I am working with your code and I have two questions.
1. Why is KalmanUnscented not good at predicting when a track loses its detections, given that the motion of a car is mostly linear?
2. KCF is very good at dealing with the situation I mentioned above, and mostly the KCF part needs 1-2 ms, but sometimes KCF needs more than 100 ms. I think it has something to do with the size of the target, because this happens when the target is very big and the detection part fails to detect it, so KCF has to predict the location. I am not very sure; maybe I am wrong, so please help me.
I have tested TrackerKCF from opencv_contrib, and its performance is not satisfactory: the earlier implementation (I used this repo) is much more robust. Did you ever test and compare those two implementations? I think the code in opencv_contrib should be trusted with care. The UKF should also be tested further.
Hi Mr Smorodov
thanks for sharing your code
I use Visual Studio 2012.
When I run the code I get the error "namespace std has no member make_unique"; it occurs in Ctracker.cpp.
I searched the net and found a post saying that I should use a newer Visual Studio for make_unique (if I'm not mistaken).
Is there any way I can run the code with my Visual Studio 2012?
Thank you in advance.
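std::make_unique was added in C++14, which is why older compilers reject it. One common workaround, sketched below, is a small stand-in template (the name make_unique_compat is mine, not from this repo; note it still requires variadic-template support, which Visual Studio only gained after the 2012 release):

```cpp
#include <memory>
#include <utility>

// Minimal stand-in for std::make_unique (C++14) on older toolchains:
// forward the constructor arguments and wrap the raw pointer.
// The name make_unique_compat is hypothetical, not from this repo.
template <typename T, typename... Args>
std::unique_ptr<T> make_unique_compat(Args&&... args)
{
    return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}
```

With this in a shared header, the calls in Ctracker.cpp could be redirected to it on compilers that lack std::make_unique.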
I can see that the tracker works by assigning an ID to each track, but the ID is not attached to the points together as one object, so that we can use it further.
I am expecting something like this,
track.object.m_trackID
track.object.rect
track.object.pt <--- this point should get updated, as the object moves
Please help me with the same.
Thank you.
----update----
Sorry, I just saw it: all the info I wanted is in GetLastRect(). But is there any possibility of switching to a RotatedRect instead of a normal Rect, so that we can use minAreaRect instead of boundingRect?
A good reference for the state of the art in computer vision. We should consider adding new trackers and object detectors.
Hi, I don't know why it's working like this.
These captures show the MOG2 background subtractor's processed frame and the rectangles.
The green rectangles are the raw rectangles found by cv::findContours,
and the blue rectangles are the ones the tracker generates.
As you can see, the tracker rectangles are smaller and centered on the top-left corner of the raw rectangle.
Thanks
When objects move fast, the Kalman filter can't predict the next location well. Is there any proposed method to solve this problem?
Hi,
I would like to know which functionality implements the red-colored halo behind the pedestrians. Thanks again.
mkdir build && cd build && cmake .. && make
get the errors:
```
/home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.cpp: In member function ‘void CTrack::RectUpdate(const CRegion&, bool, cv::UMat, cv::UMat)’:
/home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.cpp:260:75: error: no matching function for call to ‘cv::Tracker::init(cv::UMat, cv::Rect2d&)’
     m_tracker->init(cv::UMat(prevFrame, roiRect), lastRect);
In file included from /usr/local/include/opencv2/tracking.hpp:304:0,
                 from /home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.h:9,
                 from /home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.cpp:1:
/usr/local/include/opencv2/tracking/tracker.hpp:542:16: note: candidate: bool cv::Tracker::init(const cv::Mat&, const Rect2d&)
     CV_WRAP bool init( const Mat& image, const Rect2d& boundingBox );
/usr/local/include/opencv2/tracking/tracker.hpp:542:16: note: no known conversion for argument 1 from ‘cv::UMat’ to ‘const cv::Mat&’
/home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.cpp:268:94: error: no matching function for call to ‘cv::Tracker::update(cv::UMat, cv::Rect2d&)’
     if (!m_tracker.empty() && m_tracker->update(cv::UMat(currFrame, roiRect), newRect))
In file included from /usr/local/include/opencv2/tracking.hpp:304:0,
                 from /home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.h:9,
                 from /home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.cpp:1:
/usr/local/include/opencv2/tracking/tracker.hpp:553:16: note: candidate: bool cv::Tracker::update(const cv::Mat&, cv::Rect2d&)
     CV_WRAP bool update( const Mat& image, CV_OUT Rect2d& boundingBox );
/usr/local/include/opencv2/tracking/tracker.hpp:553:16: note: no known conversion for argument 1 from ‘cv::UMat’ to ‘const cv::Mat&’
/home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.cpp: In member function ‘void CTrack::CreateExternalTracker()’:
/home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.cpp:349:20: error: ‘struct cv::TrackerKCF::Params’ has no member named ‘detect_thresh’
     params.detect_thresh = 0.3;
/home/dh/Workspace/Test/Multitarget-tracker/Tracker/track.cpp:411:29: error: ‘cv::TrackerMOSSE’ has not been declared
```
Hello,
Could you please provide the source videos of the demos? I cannot find the source videos for demo 1 (car DVR), demo 3 (motion detection and tracking), and demo 5 (simple abandoned-object detector) in the dataset.
Thank you for your help. I hope for your response.
I don't know if I am doing something wrong, but when I try to create the YOLO detector I get this error:
OpenCV(3.4.1) Error: Assertion failed (ngroups > 0 && inpCn % ngroups == 0 && outCn % ngroups == 0) in getMemoryShapes, file /home/ubuntu/Downloads/opencv/opencv-3.4.1/modules/dnn/src/layers/convolution_layer.cpp, line 234
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(3.4.1) /home/ubuntu/Downloads/opencv/opencv-3.4.1/modules/dnn/src/layers/convolution_layer.cpp:234: error: (-215) ngroups > 0 && inpCn % ngroups == 0 && outCn % ngroups == 0 in function getMemoryShapes
Hi. I'm testing with MOG2 background subtraction, and I don't know exactly the purpose of this variable and what it does to m_regions.
Sometimes with my video feed, a truck or a car gets multiple boxes instead of only one. How can I combine them into one? In the repo there is nms.h (non-maximum suppression), but why don't you use it?
Also, what about the m_fps variable? What do we control with it? I think it is the number of frames a saved track survives without a detection, but it is not the frame rate at which we update the tracker and detector, am I right?
Have you tested with feeds at different frame rates? I have a camera at 50 fps; I assume it isn't necessary to run the full process at that rate, so I could leave one frame out of two unprocessed.
Sometimes I have a car moving from left to right across the scene, but in the middle of the image there is a wall. The car is tracked reasonably well in the left and right areas, but with different IDs. How can I make sure the car is assigned the same track ID?
Sorry for the amount of questions... hehe.
Thanks.
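On merging the duplicate boxes: the usual answer is IoU-based non-maximum suppression, which is what a header named nms.h typically implements. A minimal standalone sketch of the idea, independent of the repo's types (the Box struct and thresholds here are mine):

```cpp
#include <algorithm>
#include <vector>

// Standalone sketch of IoU-based non-maximum suppression (not the repo's
// nms.h): keep the highest-scoring box and drop any box that overlaps an
// already-kept box by more than iouThresh.
struct Box { float x, y, w, h, score; };

static float IoU(const Box& a, const Box& b)
{
    float x1 = std::max(a.x, b.x);
    float y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
    float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

std::vector<Box> Nms(std::vector<Box> boxes, float iouThresh)
{
    // Process boxes from highest score to lowest.
    std::sort(boxes.begin(), boxes.end(),
              [](const Box& a, const Box& b) { return a.score > b.score; });
    std::vector<Box> kept;
    for (const Box& b : boxes)
    {
        bool suppressed = false;
        for (const Box& k : kept)
        {
            if (IoU(b, k) > iouThresh) { suppressed = true; break; }
        }
        if (!suppressed) kept.push_back(b);
    }
    return kept;
}
```

Running this over the detector output before feeding the tracker would collapse the multiple truck boxes into the single strongest one, at the cost of one extra pass over the detections.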
I see that in the new release of OpenCV 3.4.1 they added the new CSR-DCF tracker.
Has anybody tested it?
Also, I read about this one:
https://github.com/mcamplan/DSKCF_CPP
Maybe we can add it to this project.
Hi, I'm just trying to build this project and run some tests. I ran the build commands:
cd build
cmake ..
-- The C compiler identification is GNU 4.9.3
-- The CXX compiler identification is GNU 4.9.3
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda-8.0 (found suitable exact version "8.0")
-- Found OpenCV: /usr/local (found version "3.2.0")
-- Configuring done
-- Generating done
-- Build files have been written to: /home/yangxw/xiaoren/Multitarget-tracker/build
But after I run 'make', there is an error:
Scanning dependencies of target MultitargetTracker
[ 2%] Building CXX object CMakeFiles/MultitargetTracker.dir/main.cpp.o
In file included from /home/yangxw/xiaoren/Multitarget-tracker/main.cpp:2:0:
/home/yangxw/xiaoren/Multitarget-tracker/Detector/BackgroundSubtract.h:8:30: fatal error: opencv2/bgsegm.hpp: No such file or directory
#include <opencv2/bgsegm.hpp>
^
And when I then try "cmake -DUSE_OCV_BGFG=OFF ..", another error appears:
[ 2%] Building CXX object CMakeFiles/MultitargetTracker.dir/main.cpp.o
In file included from /home/yangxw/xiaoren/Multitarget-tracker/Tracker/Ctracker.h:8:0,
from /home/yangxw/xiaoren/Multitarget-tracker/main.cpp:6:
/home/yangxw/xiaoren/Multitarget-tracker/Tracker/track.h:8:32: fatal error: opencv2/tracking.hpp: No such file or directory
#include <opencv2/tracking.hpp>
^
Could you please give me a clue? BTW, I checked the CMakeCache.txt file in my OpenCV build directory, and there is no 'BUILD_opencv_bgsegm:BOOL=ON' item, but there is another one: 'BUILD_opencv_cudabgsegm:BOOL=ON'.
Thank you for your help!
Hi,
I have integrated a custom face detector in your code, but I have noticed that the video is flickering.
Tracker settings:

```cpp
TrackerSettings settings;
settings.m_useLocalTracking = m_useLocalTracking;
settings.m_distType = tracking::DistJaccard;
settings.m_kalmanType = tracking::KalmanUnscented;
settings.m_filterGoal = tracking::FilterRect;
settings.m_lostTrackType = tracking::TrackKCF;  // Use KCF tracker for collisions resolving
settings.m_matchType = tracking::MatchHungrian;
settings.m_dt = 0.3f;                           // Delta time for Kalman filter
settings.m_accelNoiseMag = 0.1f;                // Accel noise magnitude for Kalman filter
settings.m_distThres = 0.8f;                    // Distance threshold between region and object on two frames
settings.m_maximumAllowedSkippedFrames = m_fps; // Maximum allowed skipped frames
settings.m_maxTraceLength = 5 * m_fps;          // Maximum trace length
```
The flickering also increases if I change m_lostTrackType to TrackNone.
The response time is below 30 ms:
Frame 189: tracks = 16, time = 27
Frame 190: tracks = 18, time = 28
Frame 191: tracks = 19, time = 26
Frame 192: tracks = 19, time = 27
Frame 193: tracks = 19, time = 28
Frame 194: tracks = 17, time = 28
Frame 195: tracks = 17, time = 28
Frame 196: tracks = 17, time = 28
Frame 197: tracks = 18, time = 27
Frame 198: tracks = 18, time = 28
Frame 199: tracks = 18, time = 28
Frame 200: tracks = 17, time = 27
Frame 201: tracks = 16, time = 27
Frame 202: tracks = 13, time = 28
Frame 203: tracks = 12, time = 28
Frame 204: tracks = 12, time = 27
Frame 205: tracks = 12, time = 28
Frame 206: tracks = 12, time = 28
Frame 207: tracks = 12, time = 28
Frame 208: tracks = 12, time = 28
Frame 209: tracks = 12, time = 28
Frame 210: tracks = 15, time = 28
Frame 211: tracks = 15, time = 28
Frame 212: tracks = 15, time = 28
Frame 213: tracks = 15, time = 27
Frame 214: tracks = 17, time = 28
Frame 215: tracks = 17, time = 28
Frame 216: tracks = 17, time = 28
Frame 217: tracks = 17, time = 28
Frame 218: tracks = 16, time = 28
Frame 219: tracks = 15, time = 28
Frame 220: tracks = 15, time = 28
Frame 221: tracks = 13, time = 28
Frame 222: tracks = 14, time = 28
Frame 223: tracks = 15, time = 28
Frame 224: tracks = 16, time = 28
Frame 225: tracks = 15, time = 28
Frame 226: tracks = 15, time = 28
Frame 227: tracks = 16, time = 29
Frame 228: tracks = 17, time = 28
Frame 229: tracks = 17, time = 28
Frame 230: tracks = 17, time = 29
Frame 231: tracks = 17, time = 28
Frame 232: tracks = 17, time = 30
Frame 233: tracks = 17, time = 29
Frame 234: tracks = 18, time = 29
Frame 235: tracks = 18, time = 29
Frame 236: tracks = 18, time = 29
Frame 237: tracks = 19, time = 29
Frame 238: tracks = 19, time = 29
Frame 239: tracks = 20, time = 29
Frame 240: tracks = 20, time = 29
Frame 241: tracks = 21, time = 28
Frame 242: tracks = 21, time = 29
Frame 243: tracks = 21, time = 28
Frame 244: tracks = 22, time = 29
Frame 245: tracks = 22, time = 28
Frame 246: tracks = 25, time = 29
Frame 247: tracks = 27, time = 28
Frame 248: tracks = 27, time = 28
Frame 249: tracks = 27, time = 27
Frame 250: tracks = 26, time = 28
Frame 251: tracks = 25, time = 28
Frame 252: tracks = 24, time = 27
Frame 253: tracks = 26, time = 28
Frame 254: tracks = 26, time = 28
Any idea why it is flickering ?
Hi,
There is one issue with Kalman filter prediction.
I wonder how the Kalman filter works during the first few frames, since the first tracking prediction always starts somewhere around the tenth frame, even though the object contours are present in the background model from the very first frame. Can you explain the reason to me?
Thank you
Hi Smorodov,
I want to utilise object tracking for face detection/recognition.
When we detect a face for the first time, we calculate its features for detection. But when the same person changes their roll/pitch angle, the features change, even though it is the same object and the same face.
Can we use the tracker to tell the algorithm that this is the SAME person who has just moved or changed pose, so that we can add the newly calculated face features to the same person?
What would be your suggestion for tracking the same face object (non-rigid object tracking)?
thx
Hello, Mr. Smorodov! When running make on a Raspbian Stretch, there is an error when building SSDMobileNetDetector.cpp, which I have attached as a screenshot below --
I installed OpenCV following the instructions set here and am a little lost as to how to proceed. What do you recommend?
Sir,
May I know the OpenCV version and the libraries required to be built for successfully building this code? Thank you in advance.
Hello.
I have a problem with OpenCL.
I use an Nvidia GeForce GTX 1050.
I installed CUDA, but OpenCL doesn't work.
How can I fix it?
[ INFO:1] Successfully initialized OpenCL cache directory: C:\Users\sjRim\AppData\Local\Temp\opencv\4.0.0-pre\opencl_cache
[ INFO:1] Preparing OpenCL cache configuration for context: NVIDIA_Corporation--GeForce_GTX_1050--388_19
[ WARN:1] DNN: OpenCL target is not supported with current OpenCL device (tested with Intel GPUs only), switching to CPU.
Hi.
Can you give more info about the CTracker parameters? I mean:

```cpp
track_t dt_,
track_t accelNoiseMag_,
track_t dist_thres_,
size_t maximum_allowed_skipped_frames_,
size_t max_trace_length_
```
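Judging from the comments in the TrackerSettings snippet quoted elsewhere in this thread, the parameters appear to mean roughly the following. This is an annotated sketch, not authoritative documentation, and the values are placeholders rather than recommendations:

```cpp
// Sketch of a CTracker construction with each parameter annotated.
// Interpretations are inferred from the TrackerSettings comments in this
// thread; the numeric values are illustrative placeholders only.
CTracker tracker(
    0.2f,   // dt_: Kalman filter time step (delta time between frames)
    0.5f,   // accelNoiseMag_: acceleration (process) noise magnitude for the Kalman filter
    60.0f,  // dist_thres_: distance threshold for matching a region to a track across frames
    10,     // maximum_allowed_skipped_frames_: frames a track may go undetected before it is dropped
    25      // max_trace_length_: number of past positions kept in a track's trace
);
```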
Hi! I tried example 1, motion detection + tracking, but it seems I cannot get a reliable, deterministic result: I almost always end up with "CaptureAndDetect: Tracking timeout!" at different frames in the video. Any idea what's going wrong? Thanks.
./MultitargetTracker -e=1
OpenCL not used
Frame 1: tracks = 1, time = 5
...
Frame 109: tracks = 31, time = 23
CaptureAndDetect: Tracking timeout!Process: Frame capture timeout
work time =
0.930499
Frame 89: tracks = 28, time = 21
CaptureAndDetect: Tracking timeout!Process: Frame capture timeout
work time =
0.540753
Frame 141: tracks = 45, time = 27
CaptureAndDetect: Tracking timeout!Process: Frame capture timeout
work time =
1.85506
Hi, I'm trying to implement abandoned-object detection.
With the background subtraction detector I can see the beginning of a track when someone leaves a piece of luggage. The track remains there until the background method learns that the luggage is part of the background. So my idea is to start a KCF tracker automatically when the background method begins to fail to follow the luggage. I don't know what happens, but in the end the track is always lost.
Do you know which parameters I need to tune? Will the KCF tracker stay there forever (or until the luggage is taken away)?