Vehicle tracking and classification in challenging scenarios via slice sampling
- Marcos Nieto^{1},
- Luis Unzueta^{1},
- Javier Barandiaran^{1},
- Andoni Cortés^{1},
- Oihana Otaegui^{1} and
- Pedro Sánchez^{2}
DOI: 10.1186/1687-6180-2011-95
© Nieto et al; licensee Springer. 2011
Received: 10 May 2011
Accepted: 27 October 2011
Published: 27 October 2011
Abstract
This article introduces a 3D vehicle tracking system in a traffic surveillance environment devised for shadow tolling applications. It has been specially designed to operate in real time with high correct detection and classification rates. The system is capable of providing accurate and robust results in challenging road scenarios, with rain, traffic jams, cast shadows on sunny days at sunrise and sunset, etc. A Bayesian inference method has been designed to generate estimates of a variable number of objects entering and exiting the scene. This framework makes it easy to combine information of different natures, gathering observation models, calibration, motion priors, and interaction models in a single step. The inference of results is carried out with a novel optimization procedure that generates estimates of the maxima of the posterior distribution, combining concepts from Gibbs and slice sampling. Experimental tests have shown excellent results for traffic-flow video surveillance applications, which can be used to classify vehicles according to their length, width, and height. Therefore, this vision-based system can be seen as a good substitute for existing inductive loop detectors.
Keywords
vehicle tracking Bayesian inference MRF particle filter shadow tolling ILD slice sampling real time
1 Introduction
Advances in technology, together with the falling cost of processing and communications equipment, are promoting the use of novel counting systems by road operators. A key target is to enable free flow tolling or shadow tolling services to reduce traffic congestion on toll roads.
This type of system must meet a set of requirements for its implementation. On the one hand, it must operate in real time, i.e. it must acquire the information (through its corresponding sensing platform), process it, and send it to a control center in time to acquire, process, and submit new events. On the other hand, it must be highly reliable in all situations (day, night, adverse weather conditions). Finally, for shadow tolling, a system is only considered to be working if it is capable not only of counting vehicles, but also of classifying them according to their dimensions or weight.
There are several existing technologies capable of addressing some of these requirements, such as radar and laser systems, sonar volumetric estimation, or counting and mass measurement by inductive loop detectors (ILDs). The latter, being the most mature technology, has been used extensively, providing good detection and classification results. However, ILDs present three significant drawbacks: (i) these systems involve excavating the road to place the sensing devices, which is an expensive task and requires disabling the lanes in which the ILDs are going to operate; (ii) typically, one ILD sensor is installed per lane, so that there are missed detections and/or false positives when vehicles travel between lanes; and (iii) ILDs cannot correctly manage the count in situations of traffic congestion, e.g. this technology cannot distinguish two small vehicles circulating slowly or standing over an ILD sensor from a single large vehicle.
Technologies based on time-of-flight sensors represent an alternative to ILDs, since they can be installed at a much lower cost and can deliver similar counting and classification results. There are, however, two main aspects that make operators reluctant to use them: (i) although the technology has existed for decades, its application to counting and classification in traffic surveillance is relatively new, and there are no solutions that offer real competition to ILDs in terms of counting and classification results; and (ii) these systems can be called intrusive with respect to the electromagnetic spectrum, because they emit a certain amount of radiation that is reflected on objects and returns to the sensor. The emission of radiation is a contentious point, since it requires compliance with the local regulations in force, as well as overcoming the reluctance of public opinion regarding radiation emission.
Recently, a new trend is emerging based on video processing, and vision systems are becoming an alternative to the aforementioned technologies. Their main advantage, shared with radar and laser systems, is that their cost is much lower than that of ILDs, while their ability to count and classify is potentially the same. Moreover, since they only involve image processing, no radiation is emitted onto the road, so they can be considered completely non-intrusive. Nevertheless, vision-based systems should still be considered to be at a prototype stage until they achieve correct detection and classification rates high enough for real deployment in free flow or shadow tolling systems. In this article, a new vision-based system is introduced that represents a real alternative to traditional intrusive sensing systems for shadow tolling applications, since it provides the required levels of accuracy and robustness in the detection and classification tasks. It uses a single camera and a processor that captures images and processes them to generate estimates of the vehicles circulating on a road stretch.
In summary, the proposed method is based on Bayesian inference theory, which provides a powerful framework for combining information of different natures. Hence, the method is able to track a variable number of vehicles and classify them according to their estimated dimensions. The proposed solution has been tested on a set of long video sequences, captured under different illumination conditions, traffic loads, adverse weather conditions, etc., where it has been proven to yield excellent results.
2 Related work
Typically, the literature associated with traffic video surveillance is focused on counting vehicles using basic image processing techniques to obtain statistics about lane usage. Nevertheless, there are many works that aim to provide more complex estimates of vehicle dynamics and dimensions to classify them as light or heavy. In urban scenarios, typically at intersections, the relative rotation of the vehicles is also of interest [1].
Among the difficulties these methods face, shadows cast by vehicles are the hardest to tackle robustly. Perceptually, shadows are moving objects that differ from the background, which makes them a particularly critical problem for single-camera setups. Many works do not pay special attention to this issue, which dramatically limits the applicability of the proposed solutions in real situations [2–4].
Regarding the camera viewpoint, it is quite typical to face the problem of tracking and counting vehicles with a camera looking down on the road from a pole, at a high angle [5]. In this situation the problem is simplified, since the perspective effect is less pronounced, vehicle dimensions do not vary significantly, and the problem of occlusion can be safely ignored. Nevertheless, real solutions must also consider low-angle views of the road, since it is not always possible to install the camera so high. Indeed, this issue has not been explicitly tackled by many researchers, the work by [3], based on a feature tracking strategy, being of particular relevance.
There are many methods that claim to track vehicles for a traffic counting solution but without explicitly using a model whose dimensions or dynamics are fitted to the observations. In these works, the vehicle is simply treated as a set of foreground pixels [4], or as a set of feature points [2, 3].
Works more focused on the tracking stage, typically define a 3D model of the vehicles, which are somehow parameterized and fitted using optimization procedures. For instance, in [1], a detailed wireframe vehicle model that is fitted to the observations is proposed. Improvements on this line [6, 7] comprise a variety of vehicle models, including detailed wireframe corresponding to trucks, cars, and other vehicle types, which provide accurate representations of the shape, volume, and orientation of vehicles. An intermediate approach is based on the definition of a cuboid model of variable size [8, 9].
Regarding the tracking method, some works have simply used data association between detections at different time instants [2]. Nevertheless, it is much more efficient and robust to use Bayesian approaches such as the Kalman filter [10], the extended Kalman filter [11], and, as a generalization, particle filter methods [8, 12]. The work by [8] is particularly significant in this field, since they are able to efficiently handle entering and exiting vehicles in a single filter, while also tracking multiple objects in real time. For that purpose, they use an MCMC-based particle filter. This type of filter has been widely used since it was proven to yield stable and reliable results for multiple object tracking [13]. One of its main advantages is that the required number of particles is a linear function of the number of objects, in contrast to the exponentially growing demand of traditional particle filters (like the sequential importance resampling algorithm [14]).
As described by [13], the MCMC-based particle filter uses the Metropolis-Hastings algorithm to directly sample from the joint posterior distribution of the complete state vector (containing the information of the objects in the scene). Nevertheless, as with many other sampling strategies, this algorithm guarantees convergence only with an infinite number of samples. In real conditions, the number of particles must be determined experimentally. In traffic-flow surveillance applications, the scene will typically contain from zero to 4 or 5 vehicles, and the required number of particles is around 1,000 (as few as 200 particles were reported in [8]).
In the authors' opinion, this load is still excessive, which has motivated the proposal of a novel sampling procedure devised as a combination of Gibbs and slice sampling [15]. This method is better adapted to the scene, proposing moves in those dimensions that require more change between consecutive time instants. As will be shown in the following sections, this approach requires on average between 10 and 70 samples to provide accurate estimates of several objects in the scene.
Besides, as a general criticism, almost all of the above-mentioned works have not been tested with large enough datasets to provide realistic evaluations of their performance. For that reason, we have focused on providing a large set of tests that demonstrate how the proposed system works in many different situations.
3 System overview
The first processing step extracts the background of the scene, and thus generates a segmentation of the moving objects. This procedure is based on the well-known codewords approach, which generates an updated background model through time according to the observations [16].
The foreground image is used to generate blobs or groups of connected pixels, which are described by their bounding boxes (shown in Figure 1 as red rectangles). At this point, the rest of the processing is carried out only on the data structures that describe these bounding boxes, so that no other image processing stage is required. Therefore, the computational cost of the following steps is significantly reduced.
As the core of the system, the Bayesian inference step takes as input the detected boxes and generates estimates of the position and dimensions of the vehicles in the scene. As described in the following sections, this module is a recursive scheme that takes into account previous estimates and current observations to generate accurate and coherent results. The appearance and disappearance of objects is controlled by an external module, since, in this type of scene, vehicles are assumed to appear and disappear in pre-defined regions.
4 Camera calibration
In any case, the perspective of the input images must be described, which can be done by obtaining the calibration of the camera. Although there are methods that can retrieve rectified views of the road without knowing the camera calibration [5], we require it for the tracking stage. Hence, we have used a simple calibration method that only requires the selection of four image points that form a rectangle on the road plane, and two metric references.
where the value of the parameter K can be obtained using five correspondences and applying the Levenberg-Marquardt algorithm.
The homography between the road plane and the image can be decomposed as H = K[r_{1} r_{2} t] (Equation 2), where r_{1} and r_{2} are the two rotation vectors that define the rotation of the camera (the third rotation vector can be obtained as the cross product r_{3} = r_{1} × r_{2}), and t is the translation vector. If we left-multiply Equation 2 by K^{-1}, we obtain the rotation and translation directly from the columns of H.
The calibration matrix K can be then found by applying a non-linear optimization procedure that minimizes the reprojection error.
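The decomposition described above can be sketched as follows. This is a minimal illustration of the standard planar decomposition H ~ K[r_{1} r_{2} t]; the scale normalization and the SVD re-orthogonalization step are common practice rather than details given in the text:

```python
import numpy as np

def decompose_homography(H, K):
    """Recover the camera rotation R and translation t from the
    road-plane-to-image homography H, given the calibration matrix K,
    via the planar decomposition H ~ K [r1 r2 t]."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])   # fix the homography scale
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    r3 = np.cross(r1, r2)                 # third axis as the cross product
    R = np.column_stack((r1, r2, r3))
    U, _, Vt = np.linalg.svd(R)           # re-orthogonalize against noise
    return U @ Vt, t
```

In a noise-free setting, rebuilding H from a known pose and feeding it back recovers that pose exactly.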
5 Background segmentation and blob extraction
The background segmentation stage extracts those regions of the image that most likely correspond to moving objects. The proposed approach is based on the codewords approach [16] at the pixel level.
Given the segmentation, the bounding boxes of blobs with at least a certain area are detected using the approach described in [18]. Then, a recursive process is undertaken to join boxes into larger bounding boxes satisfying d_{x} < t_{X} and d_{y} < t_{Y}, where d_{x} and d_{y} are the minimal box-to-box distances in X and Y, and t_{X} and t_{Y} are the corresponding distance thresholds. The recursive process stops when no larger rectangles can be obtained that meet the conditions.
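A minimal sketch of this joining procedure, assuming boxes are represented as (x, y, w, h) tuples; the threshold values used in the example are illustrative, not the paper's:

```python
def merge_boxes(boxes, tx, ty):
    """Iteratively join bounding boxes (x, y, w, h) whose gaps in X and Y
    are both below the thresholds tx, ty, until no merge is possible."""
    def gap(a, b, i, s):
        # distance between intervals [a[i], a[i]+a[s]] and [b[i], b[i]+b[s]]
        lo, hi = max(a[i], b[i]), min(a[i] + a[s], b[i] + b[s])
        return max(0.0, lo - hi)

    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                if gap(a, b, 0, 2) < tx and gap(a, b, 1, 3) < ty:
                    x = min(a[0], b[0]); y = min(a[1], b[1])
                    w = max(a[0] + a[2], b[0] + b[2]) - x
                    h = max(a[1] + a[3], b[1] + b[3]) - y
                    boxes[i] = (x, y, w, h)   # replace pair by union box
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```

For example, two boxes separated by a 2-pixel gap merge under 5-pixel thresholds, while a distant box is left untouched.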
6 3D tracking
The 3D tracking stage is fed with the set of observed 2D boxes in the current instant, which we will denote as z_{t} = {z_{t, m}}, with m = 1 ... M. Each box is parameterized as z_{t, m} = (z_{t, m, x}, z_{t, m, y}, z_{t, m, w}, z_{t, m, h}) in this domain, i.e. a reference point and a width and height.
The result of the tracking process is the estimate of x_{t}, which is a vector containing the 3D information of all the vehicles in the scene, i.e. x_{t} = {x_{t, n}}, with n = 1 ... N_{t}, where N_{t} is the number of vehicles in the scene at time t, and x_{t, n} is a vector containing the position, width, height, and length of the 3D box fitting vehicle n.
Using these observations and the predictions of the existing vehicles at the previous time instant, an association data matrix is generated, and used within the observation model and for the detection of entering and exiting vehicles.
6.1 Bayesian inference
Bayesian inference methods provide an estimation of p(x _{t}|Z^{ t } ), the posterior density distribution of state x _{t}, which is the parameterization of the existing vehicles in the scene, given all the estimations up to current time, Z^{ t } .
Applying Bayes' rule, the posterior can be expressed as p(x_{t}|Z^{t}) = k p(z_{t}|x_{t}) p(x_{t}|Z^{t-1}), where p(z_{t}|x_{t}) is the likelihood function that models how likely the measurement z_{t} would be observed given the system state vector x_{t}, and p(x_{t}|Z^{t-1}) is the prediction information, since it provides all the information we know about the current state before the new observation is available. The constant k is a scale factor that ensures that the density integrates to one.
where Φ(·) is a function that governs the interaction between two elements n and n' of the state vector.
Particle filters are tools that generate this set of samples and the corresponding estimation of the posterior distribution. Although there are many different alternatives, MCMC-based particle filters have been shown to obtain the most efficient estimates of the posterior for high-dimensional problems [13], using the Metropolis-Hastings sampling algorithm. Nevertheless, these methods rely on the definition of a Markov chain over the state space such that the stationary distribution of the chain equals the target posterior distribution. In general, a long chain must be used to reach the stationary distribution, which implies the computation of hundreds or thousands of samples.
In this article, we will see that a much more efficient approach can be used by substituting the Metropolis-Hastings sampling strategy with a line search approach inspired by the slice sampling technique [15].
6.2 Data association
The association between 2D boxes and 3D vehicles is carried out by projecting the 3D box onto the rectified road domain and then computing its rectangular hull, which we will denote as ${\mathbf{x}}_{n}^{\prime}$ (we drop the time index t from here on for the sake of clarity), i.e. the projected version of vehicle x_{n}. As a rectangular element, this hull is characterized by a reference point, a width, and a length: ${\mathbf{x}}_{n}^{\prime}=\left({x}_{x}^{\prime},{x}_{y}^{\prime},{x}_{w}^{\prime},{x}_{h}^{\prime}\right)$, analogously to the observations z_{m}. An element D_{m, n} of matrix D is set to one if observation z_{m} intersects ${\mathbf{x}}_{n}^{\prime}$.
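The construction of D reduces to pairwise rectangle-intersection tests, which can be sketched as follows (boxes again assumed to be (x, y, w, h) tuples in the rectified road domain):

```python
import numpy as np

def association_matrix(obs, hulls):
    """Binary association matrix D: D[m, n] = 1 if observation box m
    intersects the projected hull of vehicle n."""
    def intersects(a, b):
        # open-interval overlap test on both axes
        return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
                a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

    D = np.zeros((len(obs), len(hulls)), dtype=int)
    for m, z in enumerate(obs):
        for n, x in enumerate(hulls):
            D[m, n] = int(intersects(z, x))
    return D
```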
6.3 Observation model
such that ω_{m, n} ranges between 0 and 1 (it is 0 if object n does not actually intersect observation m, and 1 if object n is the only object associated with observation m).
The first ratio of Equation 11 represents how much of the area of observation m intersects with its associated objects. The second ratio expresses how much of the area of the associated objects intersects with the given observation. Since objects might also be associated with other observations, the sum of their areas is weighted according to the amount of intersection they have with other observations. After applying the exponential, this factor tends to return low values if the match between the observation and its objects is not accurate, and high values if the fit is correct. Some examples of the behavior of these ratios are depicted in Figure 7. For instance, the first case (two upper rows) represents a single observation and two different hypothesized ${\mathbf{x}}_{n}^{\prime}$. It is clear from the figure that the upper-most case is a better hypothesis, and that the area of the observation covered by the hypothesis is larger. Accordingly, the first ratio of Equation 11 is 0.86 for the first hypothesis and 0.72 for the second. Analogously, it can be observed that the second ratio indeed represents how much of the area of the hypothesis is covered by the observation: here the first hypothesis gets 0.77 and the second 0.48. As a result, the value of p_{a}(·) represents well how the 2D boxes z_{m} and ${\mathbf{x}}_{n}^{\prime}$ coincide. The other examples of Figure 7 show the same behavior for this factor in different configurations.
Figure 7 also depicts the values returned by function p_{d}(·) in some illustrative examples. For instance, consider again the first example (two upper rows): the alignment in x of the first hypothesis is much better, since the centers of the boxes are very close, while the second hypothesis is not well aligned in this dimension. As a consequence, the values of d_{x} are, respectively, 0.04 and 1.12, which implies that the first hypothesis obtains a higher value of p_{d}(·). The other examples show further cases in which the alignment makes the difference between the hypotheses.
The combined effect of these two factors is that the hypotheses whose 2D projections best fit to the existing observations obtain higher likelihood values, taking into account both that the area of the intersection is large, and that the boxes are aligned in the two dimensions of the plane.
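The two coverage ratios can be illustrated for the one-observation, one-hypothesis case as follows. Note that the exponential mapping `p_area` below is an assumed illustrative form, not the paper's exact Equation 11, which additionally weights multiple associated boxes:

```python
import numpy as np

def overlap_ratios(z, x):
    """Coverage ratios for observation z and hypothesized hull x,
    both (x, y, w, h): r1 = intersection / observation area,
    r2 = intersection / hypothesis area."""
    iw = max(0.0, min(z[0] + z[2], x[0] + x[2]) - max(z[0], x[0]))
    ih = max(0.0, min(z[1] + z[3], x[1] + x[3]) - max(z[1], x[1]))
    inter = iw * ih
    return inter / (z[2] * z[3]), inter / (x[2] * x[3])

def p_area(z, x, lam=2.0):
    """Map the two ratios to a (0, 1] score; the exponential form and
    lam are illustrative assumptions."""
    r1, r2 = overlap_ratios(z, x)
    return np.exp(-lam * (2.0 - r1 - r2))
```

A perfectly matching hypothesis scores 1, and the score decays as the boxes drift apart, mirroring the behavior described for Figure 7.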
6.4 Prior model
The information that we have at time t prior to the arrival of a new observation is related to two different issues: on the one hand, there are some physical restrictions on the speed and trajectory of the vehicles, and, on the other hand, there are some width-length-height configurations more probable than others.
6.4.1 Motion prior
For the motion prior model, we will use a linear constant-velocity model [19], such that we can predict the position of the vehicles from t-1 to t according to their estimated velocities (in each spatial dimension, x and y).
Specifically, $p\left({\mathbf{x}}_{t}\mid {\mathbf{x}}_{t-1}\right)=\mathcal{N}\left({\mathbf{x}}_{t}\mid A{\mathbf{x}}_{t-1},\Sigma \right)$, where A is a linear matrix that propagates state x_{t-1} to x_{t} with a constant-velocity model [19], and $\mathcal{N}\left(\cdot \right)$ denotes a multivariate normal distribution.
In general terms, we have observed that within this type of scenarios, this model predicts correctly the movement of vehicles observed from the camera's view point, and is as well able to absorb small to medium instantaneous variations of speed.
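A minimal sketch of this prior for a planar state (x, y, v_x, v_y), evaluating the Gaussian up to its normalization constant (the state layout and unit time step are assumptions for illustration):

```python
import numpy as np

def cv_transition(dt=1.0):
    """Constant-velocity transition matrix A for a state (x, y, vx, vy);
    size dimensions (w, h, l) would propagate as identity."""
    A = np.eye(4)
    A[0, 2] = dt
    A[1, 3] = dt
    return A

def motion_prior(x_t, x_prev, Sigma):
    """Gaussian motion prior p(x_t | x_{t-1}) = N(x_t; A x_{t-1}, Sigma),
    evaluated up to the normalization constant."""
    d = x_t - cv_transition() @ x_prev
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)))
```

A hypothesis lying exactly on the constant-velocity prediction scores 1, and the score falls off with the prediction error.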
6.4.2 Model prior
Since what we want to model are vehicles, the possible values of the tuple WHL (width, height, and length) must satisfy some restrictions imposed by typical vehicle designs. For instance, it is very unlikely for a vehicle to have a width and length of 0.5 m and a height of 3 m.
Nevertheless, there is a wide enough variety of possible WHL configurations that it is not reasonable to fit the observations to a discrete number of fixed configurations. For that reason, we have defined a flexible procedure that uses a discrete set of models as a reference to evaluate how realistic a hypothesis is. Specifically, we test how close a hypothesis is to the nearest model in the WHL space: if it is close, the model prior is high, and low otherwise.
Provided the set of models $\mathcal{X}=\left\{{\mathbf{x}}_{c}\right\}$, with c = 1 ... C, the expression of the prior is $p\left({\mathbf{x}}_{t}\mid \mathcal{X}\right)=p\left({\mathbf{x}}_{t}\mid {\mathbf{x}}_{{c}^{\prime}}\right)$, where ${\mathbf{x}}_{{c}^{\prime}}$ is the model closest to x_{t}. Hence, $p\left({\mathbf{x}}_{t}\mid {\mathbf{x}}_{{c}^{\prime}}\right)=\mathcal{N}\left({\mathbf{x}}_{t}\mid {\mathbf{x}}_{{c}^{\prime}},\Sigma \right)$ describes the probability of a hypothesis corresponding to model ${\mathbf{x}}_{{c}^{\prime}}$. The covariance Σ can be chosen to define how restrictive the prior term is. If it is set too high, the impact of $p\left({\mathbf{x}}_{t}\mid \mathcal{X}\right)$ on p(x_{t}|z_{t}) could be negligible, while a too low value could make p(x_{t}|z_{t}) excessively peaked, so that sampling could be biased.
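The nearest-model evaluation can be sketched as follows; the reference WHL tuples and the covariance in the example are hypothetical values, not taken from the paper:

```python
import numpy as np

def model_prior(whl, models, Sigma):
    """Score a WHL hypothesis against the closest reference model:
    p = N(whl; closest model, Sigma), up to normalization.
    `models` is a list of (W, H, L) tuples."""
    whl = np.asarray(whl, float)
    models = np.asarray(models, float)
    c = np.argmin(np.linalg.norm(models - whl, axis=1))  # nearest model
    d = whl - models[c]
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)))
```

A hypothesis matching a reference model exactly scores 1, while an implausible shape (narrow, tall, and short, as in the example above) scores near zero.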
6.5 MRF interaction model
Provided our method considers multiple vehicles within the state vector x_{t}, we can introduce models that govern the interaction between vehicles in the same scene. Using such information makes the system estimates more reliable and robust, since it better models reality.
Specifically, we use a simple model that prevents estimated vehicles from overlapping in space. For that purpose we define an MRF factor, as in Equation 8, whose function Φ(·) penalizes hypotheses in which there is a 3D overlap between two or more vehicles.
This penalty is applied between any pair of vehicles characterized by x_{n} and ${\mathbf{x}}_{{n}^{\prime}}$, where $\cap \left(\cdot \right)$ is a function that returns the volume of intersection between two 3D boxes.
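A sketch of the pairwise factor for axis-aligned 3D boxes; the exponential penalty and its weight are illustrative choices, since the paper's exact expression for Φ(·) is not reproduced here:

```python
import numpy as np

def box3d_intersection(a, b):
    """Volume of intersection between two axis-aligned 3D boxes
    parameterized as (x, y, z, w, l, h)."""
    v = 1.0
    for i in range(3):
        lo = max(a[i], b[i])
        hi = min(a[i] + a[i + 3], b[i] + b[i + 3])
        v *= max(0.0, hi - lo)   # zero extent in any axis => no overlap
    return v

def phi(a, b, lam=1.0):
    """Pairwise MRF factor penalizing overlapping vehicle hypotheses;
    equals 1 for disjoint boxes and decays with the overlap volume."""
    return np.exp(-lam * box3d_intersection(a, b))
```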
6.6 Input/output control
Appearing and disappearing vehicles are controlled through the analysis of the data association matrix, D. If an observed 2D box, z_{m}, is not associated with any existing object x_{n}, a new object event is triggered. If this event is repeated over a predetermined number of consecutive instants, the state vector is augmented with the parameters of a new vehicle.
Analogously, if an existing object is not associated with any observation according to D, a delete object event is triggered. If this event is likewise repeated over a number of instants, the corresponding component x_{n} is removed from the state vector.
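The event-persistence logic of this section can be sketched as a small bookkeeping class; the identifier-based interface and the persistence value are assumptions for illustration:

```python
class IOControl:
    """Confirm 'new object' / 'delete object' events only after they
    repeat over `persist` consecutive frames, as in Section 6.6.
    Object and observation identifiers are assumed hashable."""

    def __init__(self, persist=3):
        self.persist = persist
        self.new_count = {}   # candidate observation -> consecutive hits
        self.del_count = {}   # existing object -> consecutive misses

    def update(self, unmatched_obs, unmatched_objs):
        confirmed_new, confirmed_del = [], []
        # drop counters for ids that are no longer unmatched this frame
        self.new_count = {k: v for k, v in self.new_count.items() if k in unmatched_obs}
        self.del_count = {k: v for k, v in self.del_count.items() if k in unmatched_objs}
        for z in unmatched_obs:
            self.new_count[z] = self.new_count.get(z, 0) + 1
            if self.new_count[z] >= self.persist:
                confirmed_new.append(z)   # augment state vector here
                del self.new_count[z]
        for x in unmatched_objs:
            self.del_count[x] = self.del_count.get(x, 0) + 1
            if self.del_count[x] >= self.persist:
                confirmed_del.append(x)   # remove component here
                del self.del_count[x]
        return confirmed_new, confirmed_del
```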
7 Optimization procedure
This technique performs as many iterations as necessary to find a stationary point such that its slice is of size zero. As expected, the choice of the step size is critical: too small a value would require evaluating the target function too many times to generate the slices, while too high a value could lead the search far away from the targeted maximum.
We have designed this method because it provides fast results, typically stopping at the second iteration. Other well-known methods, such as gradient descent or second-order optimization procedures, have been tested in this context and proved much more unstable. The reason is that they depend greatly on the quality of the Jacobian approximation, which, in our problem, introduces too much error and makes the system tend to lose the track.
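The idea of the line search can be sketched as follows: for each dimension, step out and bisect to find the slice where the target exceeds the current level, then jump to the slice midpoint; the search stops once every slice has shrunk to (numerically) zero size. This is an illustrative reading of the procedure under stated assumptions (a positive target function, coordinate-wise moves), not the paper's exact algorithm:

```python
import numpy as np

def slice_maximize(f, x0, step=0.5, tol=1e-6, max_iter=100):
    """Slice-sampling-inspired maximization of a positive function f."""
    def f_at(x, d, v):
        p = x.copy(); p[d] = v
        return f(p)

    def boundary(x, d, level, inside, outside, iters=45):
        # bisect along dimension d for the point where f crosses `level`
        for _ in range(iters):
            mid = 0.5 * (inside + outside)
            if f_at(x, d, mid) > level:
                inside = mid
            else:
                outside = mid
        return 0.5 * (inside + outside)

    x = np.asarray(x0, float).copy()
    for _ in range(max_iter):
        moved = False
        for d in range(x.size):
            level = f(x) * (1.0 - 1e-9)   # slice just below current value
            lo, hi = x[d] - step, x[d] + step
            while f_at(x, d, lo) > level:  # step out on each side
                lo -= step
            while f_at(x, d, hi) > level:
                hi += step
            lo = boundary(x, d, level, x[d], lo)
            hi = boundary(x, d, level, x[d], hi)
            mid = 0.5 * (lo + hi)
            if abs(mid - x[d]) > tol:     # slice of nonzero size: move
                x[d] = mid
                moved = True
        if not moved:                     # all slices collapsed: done
            break
    return x
```

On a separable unimodal target (e.g. a Gaussian-shaped posterior), each dimension reaches the mode after essentially one midpoint jump, which matches the "stops at the second iteration" behavior reported above.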
8 Tests and discussion
There are two different types of tests that characterize the performance of the proposed system: on the one hand, detection and classification rates, which illustrate how many missed detections and false alarms the system suffers; on the other hand, efficiency tests of the proposed sampling algorithm, which measure how many evaluations of the posterior distribution p(x_{t}|z_{t}) are required to reach the target detection and classification rates.
8.1 Detection and classification results
Tests have been carried out using six long sequences (1 h each on average, over 10,000 vehicles in total), four of them obtained from a low-height camera, and the other two from two different perspectives with higher cameras. These sequences have been selected to evaluate the performance of the proposed method in challenging situations, including illumination variation, heavy traffic, shadows, rain, etc.
Considering the detection rates, we have counted the number of vehicles that drive through the scene undetected by the system (missed detections or false negatives, F_{N}), the number of non-existing detections (false alarms or false positives, F_{P}), and the ground truth number of vehicles (N). Moreover, we consider two vehicle categories: light and heavy. Although images cannot be used to obtain weight information, we deduce it from the length of the vehicles, i.e. a vehicle is considered light if its length is below 6 m, and heavy otherwise. This approximation is motivated by the fact that road operators typically require vehicles to be classified according to their weight. Hence, we define pairs of statistics for each type of vehicle, i.e. false positive and false negative counts and the total number of light vehicles (F_{PL}, F_{NL}, N_{L}), and analogous variables for heavy vehicles (F_{PH}, F_{NH}, N_{H}).
Detection and classification results
Sequence | E _{ CL } | E _{ CH } | F _{ NL } | F _{ PL } | F _{ NH } | F _{ PH } | N _{ L } | N _{ H } | R _{ L } | P _{ L } | R _{ H } | P _{ H } |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Dusk | 0 | 5 | 24 | 9 | 1 | 0 | 1662 | 118 | 0.9801 | 0.9861 | 0.9492 | 0.9573 |
Rain and shadow | 33 | 26 | 73 | 88 | 7 | 11 | 4516 | 627 | 0.9570 | 0.9484 | 0.9298 | 0.8780 |
Traffic jam | 10 | 28 | 63 | 7 | 16 | 4 | 4796 | 563 | 0.9833 | 0.9891 | 0.9147 | 0.9180 |
Dusk and rain | 8 | 13 | 19 | 48 | 0 | 1 | 968 | 115 | 0.9225 | 0.8842 | 0.8783 | 0.8145 |
Perspective | 2 | 10 | 30 | 18 | 2 | 2 | 614 | 101 | 0.9186 | 0.9216 | 0.8614 | 0.8447 |
Color noise | 0 | 3 | 0 | 1 | 0 | 0 | 561 | 23 | 0.9982 | 0.9912 | 0.8696 | 0.8696 |
and an analogous expression holds for heavy vehicles. Recall is related to the number of missed detections, while precision is related to the number of false alarms.
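The standard definitions behind these figures can be sketched as follows; note that the paper's exact expressions may also fold in the classification errors E_{C}, so this is the plain recall/precision computation, not a reproduction of the table:

```python
def recall_precision(N, FN, FP):
    """Recall and precision from the ground-truth count N, the missed
    detections FN, and the false alarms FP, with TP = N - FN."""
    TP = N - FN
    return TP / N, TP / (TP + FP)
```

For example, with 100 ground-truth vehicles, 5 missed detections and 10 false alarms, recall is 0.95 and precision is 95/105.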
Nevertheless, we have obtained good detection and classification results in all these challenging situations, of special interest being the ability of the system to reliably count vehicles in heavy traffic (as in the third sequence). The system is also able to work with different types of perspectives, since it computes the calibration of the camera and thus considers the 3D volume of vehicles instead of just 2D silhouettes. The last sequence (Color noise) has been selected because it was captured with a low-cost camera, which indeed shows significant color noise in some regions of the image. The segmentation and blob generation stages absorb this type of distortion, so that the detection and classification results remain excellent.
8.2 Sampling results
This subsection shows some experimental results that illustrate the benefits of using the proposed sampling strategy within the Bayesian framework. First, we show with a real example that the proposed method can be used to reach high values of posterior probability with few iterations. Second, we compare the performance of this sampling strategy with that of well known sampling methods typically used in the context of particle filtering and Bayesian inference.
8.2.1 Real data example
The performance of the proposed method has also been evaluated according to the number of evaluations of the posterior distribution required to reach the above-mentioned detection and classification rates.
As explained throughout the article, the proposed sampling strategy adapts the number of evaluations to the movement of the vehicle. Hence, typically only movements in the y-direction need to be carried out, while movements in width, height, and length are only necessary in entering and leaving situations.
The system generates a number of samples adapted to the number of vehicles in the scene at each instant: the greater the number of vehicles, the greater the dimension of the state vector and the number of posterior evaluations.
In the other two situations, normal traffic and heavy traffic, the number of vehicles increases, and there are some instants with 4 or 5 vehicles in the scene, which imposes a higher computational load on the system. The histograms of the number of evaluations show that, in these situations, the number of evaluations ranges between 0 and 100, and between 0 and 200, respectively.
8.2.2 Synthetic data experiments
The following experiments aim to show that the slice sampling-based strategy generates better estimates of a target posterior distribution than the importance re-sampling algorithm [14] and the Metropolis-Hastings algorithm.
The tests are carried out as follows. For the sake of simplicity, the target distribution is defined as a multivariate normal distribution, N(μ, Σ), of dimension D, where μ ∈ R^{D} and Σ ∈ R^{D×D}. The three algorithms mentioned are executed to generate a number of samples of this target distribution. The error is computed as the norm of the difference between the average value of the samples and the mode of the multivariate normal distribution, $\epsilon =\parallel \mu -\frac{1}{N}{\sum}_{n=1}^{N}{\mathbf{x}}_{n}\parallel$, where N is the number of samples and x_{n} ∈ R^{D} is the nth sample.
Each algorithm is executed 100 times, and the error is averaged to avoid numerical instability. The test is executed for example instances of the multivariate distribution with D = 1, 2, 4, 10, asking the algorithms to generate from 10 to 1,000 samples.
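The error metric and the averaging over runs can be sketched as below, using direct i.i.d. sampling from the target as a reference baseline (the compared MCMC-based samplers would plug into the same `mode_error` evaluation); the seed and sizes are illustrative:

```python
import numpy as np

def mode_error(samples, mu):
    """Error metric of Section 8.2.2: norm of the difference between the
    sample mean and the mode of the target normal distribution."""
    return float(np.linalg.norm(mu - np.mean(samples, axis=0)))

# Baseline: direct sampling from the target, averaged over 100 runs.
rng = np.random.default_rng(0)
D, N, runs = 4, 1000, 100
mu, Sigma = np.zeros(D), np.eye(D)
errs = [mode_error(rng.multivariate_normal(mu, Sigma, size=N), mu)
        for _ in range(runs)]
avg_err = float(np.mean(errs))
```

With N = 1000 i.i.d. samples in D = 4, the averaged error settles around 1/√N per dimension, i.e. well below 0.1.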
Therefore, we can say that, compared to the other methods, the SS algorithm (i) generates better estimates with fewer samples; (ii) provides more accurate results; and (iii) is less sensitive to parameter tuning. In summary, the proposed scheme can be used in real applications such as the one described in this article, which require accurate results and real-time processing, since it can generate good estimates using a reduced number of samples.
8.3 Computation requirements
Finally, regarding the computation time of the whole system, the implementation runs at around 30 fps, using images downsampled to 320 × 240 pixels for processing, on an Intel Core2 Quad CPU Q8400 at 2.66 GHz with 3 GB RAM and an NVIDIA 9600 GT. This is an industrial PC that satisfies the installation requirements and allows us to process the images in real time.
The program has been implemented in C/C++, using OpenCV primitives for data structures and basic image processing operations, OpenGL for visualization of results, and OpenMP and CUDA for multi-core and GPU programming, respectively.
9 Conclusions
In this article, we have presented the results of the design, implementation, and evaluation of a vision system conceived as a serious, cheap, and effective alternative to systems based on other types of sensors for vehicle counting and classification in free flow and shadow tolling applications.
For this purpose, we have presented a method that exploits different information sources and combines them into a powerful probabilistic framework inspired by MCMC-based particle filters. Our main contribution is a novel sampling scheme that adapts to the needs of each situation, allowing very robust and precise estimates with a much smaller number of point estimates than other sampling methods such as importance sampling or Metropolis-Hastings.
An extensive testing and evaluation phase has allowed us to collect data on system performance in many situations. We have shown that the system can detect, track, and classify vehicles with very high levels of accuracy, even in challenging situations, including heavy traffic conditions, presence of shadows, rain, and variable illumination conditions.
Declarations
Acknowledgements
This work was partially supported by the Basque Government under the ETORGAI strategic project iToll.
References
- Haag M, Nagel HH: Incremental recognition of traffic situations from video image sequences. Image and Vision Computing 2000, 18:137-153. doi:10.1016/S0262-8856(99)00021-9
- Coifman B, Beymer D, McLauchlan P: A real-time computer vision system for vehicle tracking and traffic surveillance. Transportation Research Part C: Emerging Technologies 1998, 6:271-288. doi:10.1016/S0968-090X(98)00019-9
- Kanhere NK, Pundlik SJ, Birchfield ST: Vehicle segmentation and tracking from a low-angle off-axis camera. IEEE Proc Conf on Computer Vision and Pattern Recognition (CVPR) 2005, 1152-1157.
- Vibha L, Venkatesha M, Prasanth GR, Suhas N, Shenoy PD, Venugopal KR, Patnaik LM: Moving vehicle identification using background registration technique for traffic surveillance. Proc of the Int. MultiConference of Engineers and Computer Scientists 2008.
- Maduro C, Batista K, Peixoto P, Batista J: Estimation of vehicle velocity and traffic intensity using rectified images. IEEE International Conference on Image Processing 2008, 777-780.
- Buch N, Orwell J, Velastin SA: Urban road user detection and classification using 3D wire frame models. IET Computer Vision Journal 2010, 4(2):105-116. doi:10.1049/iet-cvi.2008.0089
- Pang C, Lam W, Yung N: A method for vehicle count in the presence of multiple occlusions in traffic images. IEEE Transactions on Intelligent Transportation Systems 2007, 8(3):441-459.
- Bardet F, Chateau T: MCMC particle filter for real-time visual tracking. IEEE International Conference on Intelligent Transportation Systems 2008, 539-544.
- Johansson B, Wiklund J, Forssén P, Granlund G: Combining shadow detection and simulation for estimation of vehicle size and position. Pattern Recognition Letters 2009, 30:751-759. doi:10.1016/j.patrec.2009.03.005
- Zou X, Li D, Liu J: Real-time vehicles tracking based on Kalman filter in an ITS. International Symposium on Photoelectronic Detection and Imaging 2008, SPIE 6623:662306.
- Bouttefroy PLM, Bouzerdoum A, Phung SL, Beghdadi A: Vehicle tracking by non-drifting Mean-Shift using projective Kalman filter. IEEE Proc Intelligent Transportation Systems 2008, 61-66.
- Song X, Nevatia R: Detection and tracking of moving vehicles in crowded scenes. IEEE Workshop on Motion and Video Computing 2007, 4-8.
- Khan Z, Balch T, Dellaert F: MCMC-based particle filtering for tracking a variable number of interacting targets. IEEE Trans on Pattern Analysis and Machine Intelligence 2005, 27(11):1805-1819.
- Arulampalam MS, Maskell S, Gordon N, Clapp T: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans on Signal Processing 2002, 50(2):174-188. doi:10.1109/78.978374
- Bishop CM: Pattern Recognition and Machine Learning (Information Science and Statistics). Springer; 2006.
- Kim K, Chalidabhongse TH, Harwood D, Davis L: Real-time foreground-background segmentation using codebook model. Real-Time Imaging 2005, 11(3):167-256.
- Hartley RI, Zisserman A: Multiple View Geometry in Computer Vision. Cambridge University Press; 2004.
- Suzuki S, Abe K: Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics and Image Processing 1985, 30(1):32-46.
- Maybeck PS: Stochastic Models, Estimation, and Control, Mathematics in Science and Engineering vol 141. Academic Press, New York, San Francisco, London; 1979.
- Neal RM: Slice sampling. Annals of Statistics 2003, 31(3):705-767.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.