Appendix I: Foreign Literature Translation

(1) Original: A Robust Vision-based Moving Target Detection and Tracking System

Abstract

In this paper we present a new algorithm for real-time detection and tracking of moving targets in terrestrial scenes using a mobile camera. Our algorithm consists of two modes: detection and tracking. In the detection mode, background motion is estimated and compensated using an affine transformation. The resulting motion-rectified image is used to detect the target location with a split-and-merge algorithm. We also check other features for precise detection of the target location. When the target is identified, the algorithm switches to the tracking mode. A modified Moravec operator is applied to the target to identify feature points. The feature points are matched with points in the region of interest in the current frame. The corresponding points are further refined using disparity vectors. The tracking system is capable of target shape recovery, so it can successfully track targets whose distance from the camera varies, or while the camera is zooming. Local and regional computations make the algorithm suitable for real-time applications. The refined points define the new position of the target in the current frame. Experimental results have shown that the algorithm is reliable and can successfully detect and track targets in most cases.

Keywords: real-time moving target tracking and detection, feature matching, affine transformation, vehicle tracking, mobile camera images.

1 Introduction

Visual detection and tracking is one of the most challenging issues in computer vision. Applications of visual detection and tracking are numerous, spanning a wide range that includes surveillance systems, vehicle tracking, and aerospace applications, to name a few. Detection and tracking of abstract targets (e.g.
vehicles in general) is a very complex problem and demands sophisticated solutions using conventional pattern recognition and motion estimation methods. Motion-based segmentation is one of the most powerful tools for detection and tracking of moving targets. While it is simple to detect moving objects in image sequences obtained by a stationary camera [1, 2], conventional difference-based methods fail to detect moving targets when the camera is also moving. In the case of a mobile camera, all of the objects in the image sequence have an apparent motion, which is related to the camera motion. A number of methods have been proposed for detection of moving targets with a mobile camera, including direct camera motion parameter estimation [3], optical flow [4, 5], and geometric transformation [6, 7]. Direct measurement of camera motion parameters is the best method for cancellation of the apparent background motion, but in some applications it is not possible to measure these parameters directly. Geometric transformation methods have low computational cost and are suitable for real-time purposes. In these methods, a uniform background motion is assumed, and an affine motion model can be used to model it. Once the apparent motion of the background is estimated, it can be exploited to locate moving objects. In this paper we propose a new method for detection and tracking of moving targets using a mobile monocular camera. Our algorithm has two modes: detection and tracking. This paper is organized as follows. In Section 2, the detection procedure is discussed. Section 3 describes the tracking method. Experimental results are shown in Section 4 and conclusions appear in Section 5.

2 Target detection

In the detection mode we use an affine transformation and the LMedS (least median of squares) method for robust estimation of the apparent background motion.
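As a concrete illustration of the affine motion model used in this section, the following sketch applies a 2D affine transformation to a set of points. The NumPy representation, the parameter ordering a1 to a6, and the example values are this sketch's own choices, not code from the paper:

```python
import numpy as np

def affine_warp_points(points, a):
    """Apply the 2D affine motion model X = a1*x + a2*y + a5,
    Y = a3*x + a4*y + a6 to an (N, 2) array of (x, y) points."""
    a1, a2, a3, a4, a5, a6 = a
    A = np.array([[a1, a2], [a3, a4]])  # rotation/scaling part
    t = np.array([a5, a6])              # translation part
    return points @ A.T + t

# Pure translation: a1 = a4 = 1, a2 = a3 = 0, shift by (1.23, -2.51)
pts = np.array([[0.0, 0.0], [10.0, 5.0]])
warped = affine_warp_points(pts, (1.0, 0.0, 0.0, 1.0, 1.23, -2.51))
```

In the detection mode, the same transformation is applied to the whole previous frame so that its background aligns with the current frame before differencing.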
After compensation of the background motion, we apply a split-and-merge algorithm to the difference of the current frame and the transformed previous frame to obtain an estimate of the target positions. If no target is found, it means either there is no moving target in the scene or the relative motion of the target is too small to be detected. In the latter case, it is possible to detect the target by adjusting the frame rate of the camera. The algorithm accomplishes this automatically by analyzing subsequent frames until a major difference is detected. We designed a voting method to verify the targets based on a priori knowledge of them. For the case of vehicle detection we used vertical and horizontal gradients to locate interesting features, as well as a constraint on the area of the target, as discussed in this section.

2.1 Background motion estimation

An affine transformation [8] has been used to model the motion of the camera. This model includes rotation, scaling, and translation. The 2D affine transformation is described as follows:

X_i = a_1 x_i + a_2 y_i + a_5
Y_i = a_3 x_i + a_4 y_i + a_6    (1)

where (x_i, y_i) are the locations of points in the previous frame, (X_i, Y_i) are the locations of points in the current frame, and a_1 to a_6 are the motion parameters. This transformation has six parameters; therefore, three matching pairs are required to fully recover the motion. It is necessary to select the three points from the stationary background to assure an accurate model of the camera motion. We used the Moravec operator [9] to find distinguished feature points to ensure a precise match. The Moravec operator selects pixels with the maximum directional gradient in the min-max sense. If the moving targets constitute a small area (i.e. less than 50%) of the image, then the LMedS algorithm can be applied to determine the affine transformation parameters of the apparent background motion between two consecutive frames according to the following procedure.

1.
Select N random feature points from the previous frame, and use the standard normalized cross-correlation method to locate the corresponding points in the current frame. The normalized correlation equation is given by:

r = Sum[(I_1 - I̅_1)(I_2 - I̅_2)] / sqrt( Sum[(I_1 - I̅_1)^2] * Sum[(I_2 - I̅_2)^2] )    (2)

where I̅_1 and I̅_2 are the average intensities of the pixels in the two regions being compared, and the summations are carried out over all pixels within small windows centered on the feature points. The value r in the above equation measures the similarity between the two regions and lies between -1 and 1. Since it is assumed that moving objects cover less than 50% of the whole image, most of the N points will belong to the stationary background.

2. Select M random sets of three feature points (x_i, y_i, X_i, Y_i) for i = 1, 2, 3 from the N feature points obtained in step 1. (x_i, y_i) are the coordinates of the feature points in the previous frame, and (X_i, Y_i) are their correspondents in the current frame.

3. For each set, calculate the affine transformation parameters.

4. Transform the N feature points from step 1 using the M affine transformations obtained in step 3, and calculate the M medians of the squared differences between the corresponding points and the transformed points. Then select the affine parameters for which the median of squared differences is minimum.

According to the above procedure, the probability p that at least one data set lies entirely in the background with correct corresponding points is derived from the following equation [7]:

p = 1 - (1 - ((1 - e) q)^3)^M    (3)

where e (here 0.5) is the ratio of the moving object regions to the whole image and q is the probability that corresponding points are correctly found. In [7] it has been shown that the above method gives an accurate and reliable model.

2.2 Moving target detection using background motion compensated frames

Once the affine parameters are estimated, they can be used for cancellation of the apparent background motion by transformation of the previous frame. The difference of the current frame and the transformed previous frame then reveals the true moving targets.
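A minimal sketch of the LMedS procedure above (steps 1 to 4), using synthetic point correspondences in place of real matched features. Solving each three-point set with a linear least-squares system, the trial count, and the synthetic data are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def fit_affine(src, dst):
    """Solve X = a1*x + a2*y + a5, Y = a3*x + a4*y + a6 from three
    (or more) point correspondences; returns (a1,a2,a5) and (a3,a4,a6)."""
    A = np.column_stack([src, np.ones(len(src))])  # rows: [x, y, 1]
    px, _, _, _ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    py, _, _, _ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return px, py

def lmeds_affine(src, dst, m_trials=100, rng=None):
    """Steps 2-4: try M random three-point sets and keep the affine
    parameters whose median squared residual over all N points is smallest."""
    rng = rng or np.random.default_rng(0)
    best_med, best_params = np.inf, None
    ones = np.column_stack([src, np.ones(len(src))])
    for _ in range(m_trials):
        idx = rng.choice(len(src), size=3, replace=False)
        px, py = fit_affine(src[idx], dst[idx])
        pred = np.column_stack([ones @ px, ones @ py])
        med = np.median(np.sum((pred - dst) ** 2, axis=1))
        if med < best_med:
            best_med, best_params = med, (px, py)
    return best_params

# Synthetic background points moved by a pure translation, plus two
# "moving target" points that violate the background motion.
data_rng = np.random.default_rng(1)
src = data_rng.uniform(0, 100, size=(20, 2))
dst = src + np.array([1.23, -2.51])
dst[:2] += 30.0
px, py = lmeds_affine(src, dst)
```

Because fewer than half the points are outliers, some random triple falls entirely on the "background", and its exact fit yields a near-zero median residual, so the recovered translation matches the true one.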
Then we apply a threshold to produce a binary image. The results of the transformation and segmentation are shown in figures 1a and 1b. Some parts are segmented as moving targets due to noise; the connected-component property can be applied to reduce such errors. We use a split-and-merge algorithm to find target bounding boxes. If no target is found, it means either there is no moving target in the scene or the relative motion of the target is too small to be detected. In the latter case, it is possible to detect the target by adjusting the frame rate of the camera. The algorithm accomplishes this automatically by analyzing subsequent frames until a target is detected. Our special interest is detection and tracking of moving vehicles, so we used the aspect ratio and horizontal and vertical lines as constraints to verify vehicles. Our experiments show that comparing the total length of horizontal and vertical lines in the target area with the perimeter of the target gives a good clue about the nature of the target.

3 Target tracking

After a target is verified, the algorithm switches into the tracking mode. A modified Moravec operator is applied to the target to identify feature points. These feature points are matched with points in the region of interest in the current frame. Disparity vectors are computed for the matched pairs of points and used to refine the matched points. The refined points define the new position of the target in the current frame. The algorithm switches back to the detection mode whenever the target is lost. Although the detection algorithm described above could be used for tracking as well, the tracking algorithm described in this section has a much lower computational cost. Moreover, once the target is detected, it is not restricted to keep moving in the tracking mode.
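Feature matching by normalized cross-correlation, as in equation (2), is used in both modes. A sketch of matching one feature point into a region of interest follows; the exhaustive search over the ROI, the window size, and the synthetic frames are this sketch's assumptions:

```python
import numpy as np

def ncc(win1, win2):
    """Normalized cross-correlation between two equal-size windows,
    in [-1, 1] (equation (2))."""
    d1 = win1 - win1.mean()
    d2 = win2 - win2.mean()
    denom = np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum())
    if denom == 0:
        return 0.0  # flat window: correlation undefined, treat as no match
    return float((d1 * d2).sum() / denom)

def match_in_roi(prev_img, pt, curr_img, roi, win=5):
    """Find the pixel inside roi = (top, left, bottom, right) of the
    current frame whose window best correlates with the window
    centered on pt = (row, col) in the previous frame."""
    r = win // 2
    y0, x0 = pt
    ref = prev_img[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(float)
    best_score, best_pt = -2.0, None
    for y in range(roi[0] + r, roi[2] - r):
        for x in range(roi[1] + r, roi[3] - r):
            cand = curr_img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            score = ncc(ref, cand)
            if score > best_score:
                best_score, best_pt = score, (y, x)
    return best_pt, best_score

# A textured frame shifted by (2, 3) between "previous" and "current"
rng = np.random.default_rng(0)
prev_img = rng.uniform(0, 255, size=(40, 40))
curr_img = np.zeros_like(prev_img)
curr_img[2:, 3:] = prev_img[:-2, :-3]
pt, score = match_in_roi(prev_img, (15, 15), curr_img, (10, 10, 30, 30))
```

The best match lands at the feature's shifted location with a correlation of essentially 1, since the windows are identical.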
The target can also be larger than 50% of the scene in the tracking mode, which means the camera can zoom in for a larger view of the target while tracking.

Figure 1: Two consecutive frames and their difference after background motion compensation. The calculated affine parameters are a_1 = 0.9973, a_2 = -0.004, a_3 = 0.008, a_4 = 1.0022, a_5 = 1.23, a_6 = -2.51.

When the size of the target is fixed, the normalized cross-correlation or SSD (sum of squared differences) method can be applied directly for target tracking. But we don't restrict the target to a fixed size: our tracking algorithm is capable of updating the target shape and size. To achieve this goal, the algorithm is based on dynamic feature point tracking. We select feature points from the target area and track them in the next frame. Horizontal and vertical lines are important features for vehicle tracking, so we used an optimized Moravec operator that selects feature points considering only horizontal and vertical gradients. This improves the selection of interesting feature points located on geometrically well-defined structures such as vehicles, and is very useful when dealing with occlusion. Our tracking algorithm consists of four steps, described as follows.

1. Apply the modified Moravec algorithm to select feature points in the target area in the previous frame.

2. Find correspondents of the feature points in the ROI (region of interest) of the current frame using normalized cross-correlation.

3. Calculate the disparity vectors and, based on these vectors, refine the feature points. Refining is defined as omitting features with inconsistent vectors; this helps remove non-target feature points.

4. Based on the location of the refined corresponding points and the previous size of the target, determine the location and size of the target in the current frame.
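Step 1's modified Moravec operator, restricted to horizontal and vertical shifts as described above, might be sketched as follows; the window size and the top-k selection rule are assumptions of this sketch:

```python
import numpy as np

def modified_moravec(img, window=3, k=10):
    """Interest measure per pixel: the minimum, over horizontal and
    vertical unit shifts, of the windowed sum of squared differences.
    Returns the (row, col) coordinates of the k strongest points."""
    img = img.astype(np.float64)
    h, w = img.shape
    r = window // 2
    interest = np.zeros((h, w))
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            ssd = []
            # Only left/right and up/down shifts: favors points on
            # horizontal/vertical structure such as vehicle outlines.
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                shifted = img[y - r + dy:y + r + 1 + dy,
                              x - r + dx:x + r + 1 + dx]
                ssd.append(np.sum((shifted - win) ** 2))
            interest[y, x] = min(ssd)
    flat = np.argsort(interest, axis=None)[::-1][:k]
    return np.column_stack(np.unravel_index(flat, interest.shape))

# Corners of a bright rectangle should be the strongest points:
# edge pixels have one shift direction with zero SSD, corners do not.
img = np.zeros((40, 40))
img[10:30, 10:30] = 100.0
pts = modified_moravec(img, k=4)
```

On this synthetic frame the four strongest responses are exactly the rectangle's corners, since sliding a window along an edge leaves it unchanged in one of the tested directions.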
To refine the feature points in step 3, we calculate the mean and variance of the disparity vectors and, based on these values, remove points whose disparity vectors are far from the mean disparity. In figure 2 the refined disparity vectors are shown for a target. When there is no feature point left for tracking, or corresponding points are not found properly, we assume the target is lost and the algorithm switches to detection mode to find a target.

4 Results

The algorithm has been implemented on a 500 MHz Pentium III using a Visual C++ program. We have tested the algorithm with both simulated and actual image sequences of vehicles in different landscapes. The system can detect and track targets in real time. We achieved a frame rate of 4 frames/second for detection and 15 frames/second for tracking of a 100x90-pixel target in 352x288-pixel video frames. We tested the proposed algorithm with a wide variety of image sequences. Figure 3 shows some results for detection of various objects against arbitrary backgrounds. As shown, the algorithm has successfully detected the targets. In figure 4 the tracking results for a vehicle are shown. In this example, the tracked vehicle turns on the road and its shape and size change. As shown in the pictures, both the size and the shape of the vehicle vary, but our tracking algorithm successfully tracks it. The results demonstrate the accuracy of the method in detecting and tracking moving objects. Comparison of results generated by the proposed method with those of other methods showed that more reliable results can be obtained using the proposed method in real time.

Figure 2: The refined disparity vectors used for tracking.
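The mean-and-variance refinement described above can be sketched as follows; expressing "far from the mean" as a fixed number of per-axis standard deviations, and the synthetic matches, are this sketch's assumptions:

```python
import numpy as np

def refine_by_disparity(prev_pts, curr_pts, n_std=2.0):
    """Keep only matched pairs whose disparity vector (curr - prev)
    lies within n_std standard deviations of the mean disparity."""
    disp = curr_pts - prev_pts
    mean = disp.mean(axis=0)
    std = disp.std(axis=0) + 1e-9  # avoid division by zero
    keep = np.all(np.abs(disp - mean) <= n_std * std, axis=1)
    return prev_pts[keep], curr_pts[keep], keep

prev_pts = np.arange(20, dtype=float).reshape(10, 2)
# Nine points share the target's motion (+3, +1); one is a bad match.
disp = np.tile([3.0, 1.0], (10, 1))
disp[-1] = [40.0, -20.0]
curr_pts = prev_pts + disp
_, _, keep = refine_by_disparity(prev_pts, curr_pts)
```

The single inconsistent vector is discarded while the nine consistent ones survive; the surviving points then define the target's new position and size.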
(a) Previous frame, (b) current frame, and (c) current frame with disparity vectors.

Figure 3: Various targets detected using the detection algorithm.

Figure 4: Tracking a vehicle while it turns and changes its size and shape.

5 Conclusion

In this paper we proposed a new method for detection and tracking of moving objects using a mobile camera. We used two different methods for detection and tracking. For detection, we used an affine transform and the LMedS method for estimation of the apparent background motion. When the apparent motion of the background has been canceled, the difference of two consecutive frames is used for detection of the moving target. We also check the target for certain features. For tracking, we used a modified Moravec operator and feature matching with disparity-vector refinement. We tested our algorithms with a wide variety of image sequences. The proposed methods successfully detect and track moving vehicles and objects in arbitrary scenes obtained from a mobile video camera. The tracking system is capable of target shape recovery and therefore can successfully track targets whose distance from the camera varies or while the camera is zooming. Local and regional computations make the algorithm suitable for real-time applications. Moreover, it is implemented in an understandable and structured way. Experimental results have shown that the algorithm is reliable and can successfully detect and track targets in most cases. To add more robustness to the tracking system, it is possible to combine edge/texture-based tracking methods with the current approach.
