Video-based vehicle detection technology and shadow elimination method

0 Introduction
Video-based vehicle detection technology developed from traditional television surveillance systems. Combining vehicle detection techniques, cameras, and computer image processing, it is an emerging technology for detecting and identifying vehicles over a large area. Compared with traditional detection technologies, it offers fast processing, convenient installation and maintenance, low cost, a wide monitoring range, and a rich set of extractable traffic parameters. With the development of image processing and microelectronics, video detection technology shows great potential in transportation systems.
In a typical automatic traffic monitoring system, a static camera monitors a fixed area in real time, and traffic parameters are obtained by extracting, classifying, and tracking vehicle targets. Real-time segmentation of vehicle targets from the video stream is therefore a fundamental component of such a system. Extracting a vehicle target involves two main steps: extracting the foreground vehicle from the captured image with a suitable algorithm, and then detecting and removing its shadow. Following this process, this paper selects the background difference method as the foreground extraction algorithm, updates the background in real time, analyzes the shadows that appear in the foreground, and proposes a reasonable and effective shadow removal method.


1 Video-based vehicle detection
The background difference method is a commonly used algorithm in vehicle motion detection systems. It maintains a background model in real time and subtracts it from the current frame to obtain the foreground image. The road surface, trees, buildings, and other stationary objects are removed as background by the difference. The processed image should, in theory, contain only the moving targets, which can then be extracted by direct binarization. In practice, however, the detection result is strongly affected by camera shake, changes in road illumination, swaying vegetation, the shadow cast by the vehicle itself, and other factors, which often cause large errors or outright misdetections. After the foreground is extracted, the various shadows must therefore be removed to obtain the true vehicle target.
1.1 Background Modeling
The first n frames (n = 200 is used here) are taken for background modeling. A difference image A is obtained by subtracting two adjacent frames at a fixed interval, and binarizing it with a threshold T yields the frame difference mask image N:

A_k(x, y) = |I_k(x, y) - I_{k-1}(x, y)|,
N_k(x, y) = 1 if A_k(x, y) > T, and 0 otherwise,    (1)

where "1" marks a pixel that has changed and "0" a pixel that has not.
In the frame difference mask sequence, a pixel that does not change for a long time, i.e. whose mask value remains "0" over a period of frames, is considered to correspond to a background pixel. The value of that pixel in the original image is copied into the ideal background image, and the state of the ideal background pixel is set to "background pixel". After this pass, some pixels of the ideal background may still not have been converted to "background pixel", i.e. they have not been reconstructed; the above steps then continue on subsequent frames, while the pixels already reconstructed as background enter the background update stage.
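As a concrete illustration, a minimal Python/NumPy sketch of this reconstruction step follows. The threshold T and the stability window length are assumed values for illustration, not parameters taken from the paper:

```python
import numpy as np

def reconstruct_background(frames, T=15, stable_len=20):
    """Build an ideal background from the first n frames (sketch).

    frames: list of grayscale images (2-D uint8 arrays) of equal shape.
    T: frame-difference threshold for binarizing the mask (assumed value).
    stable_len: how many consecutive frames a pixel must stay unchanged
                ("0" in the mask) before it is accepted as background (assumed).
    """
    h, w = frames[0].shape
    background = np.zeros((h, w), dtype=np.uint8)
    is_background = np.zeros((h, w), dtype=bool)   # state: "background pixel"
    stable_count = np.zeros((h, w), dtype=np.int32)

    for prev, curr in zip(frames, frames[1:]):
        # Equation (1): binarized frame-difference mask N.
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        mask = diff > T                       # 1 = changed, 0 = unchanged

        # Count how long each pixel has remained unchanged.
        stable_count = np.where(mask, 0, stable_count + 1)

        # Pixels unchanged long enough are copied into the ideal background.
        newly_stable = (stable_count >= stable_len) & ~is_background
        background[newly_stable] = curr[newly_stable]
        is_background |= newly_stable

    return background, is_background
```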
1.2 Background Update
After the background image is obtained, the scene changes over time; changes in illumination and the movement of background objects are the most prominent, so the background image must be updated continuously. This paper updates the background as a weighted combination of the current image and the background image. Let I(x, y) be the pixel value in the current image and I*(x, y) the pixel value in the background image. If the corresponding frame difference mask N(x, y) = 0, then I(x, y) is a background pixel, and I(x, y) and I*(x, y) are weighted according to equation (2):

I*(x, y) = α I(x, y) + (1 - α) I*(x, y),    (2)

where α is the update coefficient, which determines the update speed: the background should follow changes in brightness, yet instantaneous changes must not persist for a long time. We take α = 0.1. When the brightness changes over a large area, the mean of the entire background image changes greatly, so when the change of the mean exceeds a certain range, α = 0.2 is used to update the background faster.
If |I(x, y) - I*(x, y)| is greater than the threshold or N(x, y) = 1, then I(x, y) is a foreground pixel. If I(x, y) remains a foreground pixel continuously for a long time, the background at that pixel must be rebuilt according to the background reconstruction steps above.
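The update rule of equation (2), together with the adaptive coefficient described above, might be implemented as in the following sketch; the foreground threshold and the mean-change trigger of 10 gray levels are assumptions:

```python
import numpy as np

def update_background(curr, background, mask, fg_thresh=30, mean_jump=10):
    """One background-update step (sketch of equation (2)).

    curr:       current grayscale frame, uint8.
    background: current background image I*, float32 for precision.
    mask:       frame-difference mask N for this frame (bool array).
    fg_thresh:  |I - I*| threshold for declaring a foreground pixel (assumed).
    mean_jump:  mean-brightness change that triggers the faster rate (assumed).
    """
    curr_f = curr.astype(np.float32)

    # Faster update when the whole scene brightness shifts, as in the paper.
    alpha = 0.2 if abs(curr_f.mean() - background.mean()) > mean_jump else 0.1

    # A pixel is background when N(x, y) = 0; only those pixels are blended.
    is_bg = ~mask & (np.abs(curr_f - background) <= fg_thresh)

    # Equation (2): I* <- alpha * I + (1 - alpha) * I*.
    background[is_bg] = alpha * curr_f[is_bg] + (1 - alpha) * background[is_bg]
    return background
```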

1.3 Moving target extraction
After the reconstructed background is obtained, moving targets can be extracted from the difference between the current image and the background image. To reduce computation and interference, a region of interest can be set in advance so that subsequent processing is performed only inside it. Let the video sequence image be I(x, y) and the current background image be I*(x, y); the background difference image is D(x, y) = |I(x, y) - I*(x, y)|. The vehicle pixel template image is then computed with a threshold:

M(x, y) = 1 if D(x, y) > δ, and 0 otherwise,    (3)

where δ is a small threshold; points with value 1 in the template image M(x, y) represent the vehicle image area, and points with value 0 represent the background. Vehicle information obtained purely from the gray-level difference is incomplete. If the threshold is chosen too large, parts of the vehicle are classified as background, so the vehicle image breaks apart and the vehicle information becomes inaccurate; the shadow may still not be eliminated, while much of the vehicle information is lost (see Figure 3). If the threshold is chosen too small, the shadow formed by the light connects to the vehicle and becomes part of it (see Figure 4); on the other hand, this choice preserves the complete information of the vehicle, so a good result can be achieved as long as the interference of the shadow is eliminated. Accurately removing the shadow therefore plays a key role in extracting vehicle information.
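A minimal sketch of equation (3) restricted to a region of interest is given below; the ROI layout and the value of δ are illustrative assumptions:

```python
import numpy as np

def extract_vehicle_mask(frame, background, roi=(0, 0, None, None), delta=12):
    """Threshold the background difference into a vehicle template (eq. (3)).

    frame, background: grayscale images of equal shape.
    roi:   (top, left, bottom, right) region of interest; None = full extent.
    delta: the small threshold from equation (3) (assumed value).
    """
    top, left, bottom, right = roi
    f = frame[top:bottom, left:right].astype(np.int16)
    b = background[top:bottom, left:right].astype(np.int16)

    d = np.abs(f - b)                 # background difference image D(x, y)
    m = (d > delta).astype(np.uint8)  # 1 = vehicle area, 0 = background
    return m
```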

2 Shadow detection and elimination
Under illumination, moving vehicles and objects on the road inevitably cast shadows. The shadows of stationary objects can be removed by the background difference, but because of lens shake, renewed motion of previously stationary objects, and similar effects, this shadow portion cannot be removed completely and produces noise (hereinafter the outer shadow); the target vehicle's own shadow and the projections between targets form the inner shadow. This paper detects shadow edges with a method based on rough-set classification of shadow edge points.
2.1 Classification of shadow edge points
It is assumed that there is a transition zone between the shadow area and the non-shadow area, i.e. the edge is considered to have a width.
(1) When the difference between the outer shadow and the inner shadow is large. Examination of a large number of images shows that the average gray level of the outer shadow is lower than that of the inner shadow, and its contrast is larger under the same light background. Let K(x, y) denote the gradient of a pixel and H(x, y) its maximum neighborhood gray-level difference. Two sets A1 and A2 are then obtained, where A1 is the set of pixels with a large gradient among all pixels after noise removal, and A2 is the set of pixels with a large maximum neighborhood gray-level difference among all pixels after noise removal; A1 and A2 are the sets of required pixels.
(2) When the difference between the outer shadow and the inner shadow is so small that neither K(x, y) nor H(x, y) can be divided into two distinct intervals, it is sufficient to compute H(x, y) alone.
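To make the two measures concrete, the following sketch computes the gradient K(x, y), the maximum neighborhood gray-level difference H(x, y), and the candidate sets A1 and A2. The Sobel operator, the 3x3 neighborhood, and both thresholds are assumptions, since the paper does not specify them:

```python
import numpy as np
from scipy import ndimage

def edge_measures(gray, grad_thresh=40.0, nbr_thresh=25):
    """Compute K(x, y), H(x, y) and the candidate sets A1, A2 (sketch)."""
    g = gray.astype(np.float32)

    # K(x, y): gradient magnitude (Sobel is an assumed choice of operator).
    gx = ndimage.sobel(g, axis=1)
    gy = ndimage.sobel(g, axis=0)
    K = np.hypot(gx, gy)

    # H(x, y): maximum gray-level difference within a 3x3 neighborhood.
    H = ndimage.maximum_filter(g, size=3) - ndimage.minimum_filter(g, size=3)

    A1 = K > grad_thresh   # pixels with a large gradient
    A2 = H > nbr_thresh    # pixels with a large max-neighborhood difference
    return K, H, A1, A2
```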
2.2 Treatment of partial false edge points
If the outer shadow and the inner shadow are unevenly distributed, i.e. the K(x, y) and H(x, y) of some points inside the outer shadow happen to match those of inner-shadow edge points, misjudgments occur. Such points must be removed from the inner shadow edge point set M(R). The method is as follows: traverse the inner shadow edge point set M(R) point by point and search around each point; if the search reaches a high-gradient point of the outer shadow edge (a point whose gradient is greater than or equal to B) or a point with a high maximum neighborhood gray-level difference (greater than or equal to D), the point is an interior point of the outer shadow and is removed from M(R). The remaining points are the true edge points.
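A hedged sketch of this filtering step follows; the paper's "search around each point" is approximated here by scanning a square window, and the radius and the thresholds B and D are assumed values:

```python
import numpy as np

def remove_false_edges(edge_points, K, H, B=60.0, D=40, radius=3):
    """Drop inner-shadow edge points that lie inside the outer shadow (sketch).

    edge_points: list of (y, x) candidate inner-shadow edge points, i.e. M(R).
    K, H: gradient and max-neighborhood-difference images (see edge_measures).
    B, D: outer-shadow thresholds for K and H (assumed values).
    radius: how far to search around each point (assumed).
    """
    h, w = K.shape
    true_edges = []
    for y, x in edge_points:
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        # If the window reaches an outer-shadow point (high gradient or high
        # neighborhood difference), treat (y, x) as an outer-shadow interior
        # point and discard it; otherwise keep it as a true edge point.
        if not ((K[y0:y1, x0:x1] >= B) | (H[y0:y1, x0:x1] >= D)).any():
            true_edges.append((y, x))
    return true_edges
```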
2.3 Edge refinement and continuity
Because the edges are assumed to have a width, they must be thinned to locate them more accurately. The specific method is: when two edge points have the same gradient direction and lie on the same normal line, keep only the one with the larger maximum neighborhood difference. This yields refined, discrete edge points with higher positioning accuracy.
To connect the discrete points into lines, a trigeminal (three-branch) tree is built: each pixel in M and N is checked in turn, and its predecessor and successor points are determined according to the edge direction. Once the predecessor and successor of every edge point are known, the pixels on each line can be traversed in order from the starting point of the edge line, producing continuous shadow edges from which the shadow is then eliminated.
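The paper gives few details of the trigeminal tree, so the sketch below only illustrates the thinning step of Section 2.3, read as non-maximum suppression of H along the gradient normal; this reading, and all parameters, are assumptions:

```python
import numpy as np

def thin_edges(edge_mask, H, gx, gy):
    """Keep, among edge points lying on the same normal line, only the one
    with the largest H (a non-maximum-suppression reading of Section 2.3)."""
    h, w = edge_mask.shape
    thinned = np.zeros_like(edge_mask)
    ys, xs = np.nonzero(edge_mask)
    for y, x in zip(ys, xs):
        # Step one pixel along the gradient direction (the edge normal).
        norm = np.hypot(gx[y, x], gy[y, x]) or 1.0
        dy = int(round(gy[y, x] / norm))
        dx = int(round(gx[y, x] / norm))
        keep = True
        for ny, nx in ((y + dy, x + dx), (y - dy, x - dx)):
            if 0 <= ny < h and 0 <= nx < w and edge_mask[ny, nx]:
                if H[ny, nx] > H[y, x]:
                    keep = False     # a neighbor on the normal dominates
        thinned[y, x] = keep
    return thinned
```

Linking the surviving points into continuous edges can then be done by walking from each unvisited edge point to its neighbors, which plays the role of the predecessor/successor bookkeeping described above.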


3 Conclusion
In recent years, with the continuous development of computers, image processing, artificial intelligence, pattern recognition, video transmission, and related technologies, video-based vehicle detection technology has been applied more and more widely.
Based on the most common background difference method, this paper obtains the foreground target together with its shadow by selecting a small threshold, and uses a rough-set classification method to obtain the shadow edge points, thereby eliminating the shadow. In this process, false edge points are removed using the maximum neighborhood gray-level difference and the edge gradient, separating the true edge points from the noise; the edge points are then refined and connected, yielding a single-pixel-wide shadow edge. The algorithm is simple, easy to implement, and accurate, and is suitable for practical applications.
