OpenCV (C++ Implementation) for Multi-Target Tracking and Counting

Today, Xiaobo is sharing code for multi-target detection, tracking and counting for everyone to experiment with. If you have any questions, you can send an email to 1039463596@qq.com. Comments and corrections are welcome.
Let's first look at the tracking and counting results.

1. Algorithm purpose:

The purpose of a moving-object tracking algorithm is to analyze an image sequence from video and compute the position of each target in every frame. This article computes each target's displacement from the centroid position produced during region segmentation, judges the target's direction of motion from the change in that centroid position, and checks whether the target is still inside the observation window, thereby producing passenger-flow statistics. Because this is multi-target tracking, the region corresponding to each moving target must be found in adjacent frames.

The system has inherent noise, and background interference around a target can cause errors, but most of this noise has already been removed in the earlier processing stages and can be adjusted and corrected if necessary.
2. Algorithm difficulties:

(1) To track multiple targets, the moving-target regions must be matched between adjacent frames without confusing the targets' identities.

(2) How to judge whether a moving-target region is a new target that has just entered the observation window.

(3) How to determine whether a moving target has left the observation window, and in which direction — that is, when (and whether) the counter should be incremented.

(4) Deviations and errors that arise during tracking should be corrected.
3. Algorithm description:

(1) The first tracking decision is how to match moving objects between frames. The feature used for tracking is the object's center of mass (given during segmentation of the moving region). Corresponding targets can be matched in either of two ways: a. use only the shortest distance between centroids as the feature; b. combine the shortest distance with the length, width, aspect ratio, and area of the moving region, weighted by coefficients.

(2) Target chains are built from these matching features, and the latest centroid position of each target is recorded, which provides the basis for the next frame's matching. In addition, by storing every centroid position of each target, the target's motion can be queried at any time, which later makes it possible to output the target's motion curve.

(3) Whenever a new moving target enters the observation window, its latest centroid position is added to the target chain. A moving target is judged to be new by setting threshold values ymin and ymax (when ymin…)

#include <iostream>  
#include <opencv2/opencv.hpp>  
//#include <opencv2/video/background_segm.hpp>  
using namespace std;  
using namespace cv;  

#define drawCross( img, center, color, d )                                  \
    line(img, Point(center.x - d, center.y - d),                            \
         Point(center.x + d, center.y + d), color, 2, CV_AA, 0);            \
    line(img, Point(center.x + d, center.y - d),                            \
         Point(center.x - d, center.y + d), color, 2, CV_AA, 0)

vector<Point> mousev,kalmanv;  


cv::KalmanFilter KF;  
cv::Mat_<float> measurement(2,1);   
Mat_<float> state(4, 1); // (x, y, Vx, Vy)  
int incr=0;  


void initKalman(float x, float y)  
{  
    // Instantiate Kalman Filter with  
    // 4 dynamic parameters and 2 measurement parameters,  
    // where my measurement is: 2D location of object,  
    // and dynamic is: 2D location and 2D velocity.  
    KF.init(4, 2, 0);  

    measurement = Mat_<float>::zeros(2,1);  
    measurement.at<float>(0, 0) = x;  
    measurement.at<float>(1, 0) = y;  // was (0, 0): the y measurement belongs in row 1  


    KF.statePre.setTo(0);  
    KF.statePre.at<float>(0, 0) = x;  
    KF.statePre.at<float>(1, 0) = y;  

    KF.statePost.setTo(0);  
    KF.statePost.at<float>(0, 0) = x;  
    KF.statePost.at<float>(1, 0) = y;   

    setIdentity(KF.transitionMatrix);  
    // Constant-velocity model (dt = 1 frame): x += Vx, y += Vy  
    KF.transitionMatrix.at<float>(0, 2) = 1.0f;  
    KF.transitionMatrix.at<float>(1, 3) = 1.0f;  
    setIdentity(KF.measurementMatrix);  
    setIdentity(KF.processNoiseCov, Scalar::all(.005)); //adjust this for faster convergence - but higher noise  
    setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));  
    setIdentity(KF.errorCovPost, Scalar::all(.1));  
}  

Point kalmanPredict()   
{  
    Mat prediction = KF.predict();  
    Point predictPt(prediction.at<float>(0),prediction.at<float>(1));  

    // Workaround: copy the predicted state/covariance into the posterior so  
    // that repeated predict() calls without a correct() do not diverge.  
    KF.statePre.copyTo(KF.statePost);  
    KF.errorCovPre.copyTo(KF.errorCovPost);  

    return predictPt;  
}  

Point kalmanCorrect(float x, float y)  
{  
    measurement(0) = x;  
    measurement(1) = y;  
    Mat estimated = KF.correct(measurement);  
    Point statePt(estimated.at<float>(0),estimated.at<float>(1));  
    return statePt;  
}  

int main()  
{  
  Mat frame, thresh_frame;  
  vector<Mat> channels;  
  VideoCapture capture;  
  vector<Vec4i> hierarchy;  
  vector<vector<Point> > contours;  

   // cv::Mat frame;  
    cv::Mat back;  
    cv::Mat fore;  
    cv::BackgroundSubtractorMOG2 bg;  

    //bg.nmixtures = 3;//nmixtures  
    //bg.bShadowDetection = false;  
    int track=0;  // (the counter 'incr' uses the global declared above)  

    capture.open("4.avi");  

  if(!capture.isOpened())  
  {  
    cerr << "Problem opening video source" << endl;  
    return -1;  // nothing to process  
  }  


  mousev.clear();  
  kalmanv.clear();  

initKalman(0, 0);  

  while((char)waitKey(1) != 'q' && capture.grab())  
    {  

   Point s, p;  

  capture.retrieve(frame);  

        bg.operator ()(frame,fore);  
        bg.getBackgroundImage(back);  
        // Morphological cleanup: erode twice to remove speckle noise,  
        // then dilate repeatedly to merge fragments of the same target.  
        erode(fore,fore,Mat());  
        erode(fore,fore,Mat());  
        dilate(fore,fore,Mat());  
        dilate(fore,fore,Mat());  
        dilate(fore,fore,Mat());  
        dilate(fore,fore,Mat());  
        dilate(fore,fore,Mat());  
        dilate(fore,fore,Mat());  
        dilate(fore,fore,Mat());  

        cv::normalize(fore, fore, 0, 1., cv::NORM_MINMAX);  
        cv::threshold(fore, fore, .5, 1., CV_THRESH_BINARY);  


      split(frame, channels);  
      add(channels[0], channels[1], channels[1]);  
      subtract(channels[2], channels[1], channels[2]);  
      threshold(channels[2], thresh_frame, 50, 255, CV_THRESH_BINARY);  
      medianBlur(thresh_frame, thresh_frame, 5);  

//       imshow("Red", channels[1]);  
      findContours(fore, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));  
      vector<vector<Point> > contours_poly( contours.size() );  
      vector<Rect> boundRect( contours.size() );  

      Mat drawing = Mat::zeros(thresh_frame.size(), CV_8UC1);  
      for(size_t i = 0; i < contours.size(); i++)  
        {  
//          cout << contourArea(contours[i]) << endl;  
          if(contourArea(contours[i]) > 500)  
            drawContours(drawing, contours, i, Scalar::all(255), CV_FILLED, 8, vector<Vec4i>(), 0, Point());  
        }  
      thresh_frame = drawing;  

      findContours(fore, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));  

      drawing = Mat::zeros(thresh_frame.size(), CV_8UC1);  
      for(size_t i = 0; i < contours.size(); i++)  
        {  
//          cout << contourArea(contours[i]) << endl;  
          if(contourArea(contours[i]) > 3000)  
            drawContours(drawing, contours, i, Scalar::all(255), CV_FILLED, 8, vector<Vec4i>(), 0, Point());  
      }  
      thresh_frame = drawing;  

// Get the moments  
      vector<Moments> mu( contours.size() );  
      for( size_t i = 0; i < contours.size(); i++ )  
          mu[i] = moments( contours[i], false );  

//  Get the mass centers:  
      vector<Point2f> mc( contours.size() );  
      for( size_t i = 0; i < contours.size(); i++ )  
          mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 );  


      for( size_t i = 0; i < contours.size(); i++ )  
      {  
          approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );  
          boundRect[i] = boundingRect( Mat(contours_poly[i]) );  
      }  

              p = kalmanPredict();  
//        cout << "kalman prediction: " << p.x << " " << p.y << endl;  
          mousev.push_back(p);  

      for( size_t i = 0; i < contours.size(); i++ )  
       {  
           if(contourArea(contours[i]) > 1000){  
         rectangle( frame, boundRect[i].tl(), boundRect[i].br(), Scalar(0, 255, 0), 2, 8, 0 );  
        Point center = Point(boundRect[i].x + (boundRect[i].width /2), boundRect[i].y + (boundRect[i].height/2));  
        cv::circle(frame,center, 8, Scalar(0, 0, 255), -1, 1,0);  



         s = kalmanCorrect(center.x, center.y);  
        drawCross(frame, s, Scalar(255, 255, 255), 5);  

        if (s.y <= 130 && p.y > 130 && s.x > 15) {  
            incr++;  
             cout << "Counter " << incr << endl;  
           }  


             // Draw the recent prediction trail; guard against histories  
             // shorter than 20 points (size()-20 underflows for a size_t).  
             int start = (int)mousev.size() > 20 ? (int)mousev.size() - 20 : 0;  
             for (int j = start; j + 1 < (int)mousev.size(); j++) {  
                 line(frame, mousev[j], mousev[j+1], Scalar(0,255,0), 1);  
                 }  

              }  
       }  


      imshow("Video", frame);  
      imshow("Red", channels[2]);  
      imshow("Binary", thresh_frame);  
    }  
  return 0;  
} 

The tracking method that combines moving-target detection with Kalman filter tracking achieves the process described above.

Keywords: OpenCV

Added by philicious on Fri, 07 Jun 2019 01:19:57 +0300