Package contents (mechanical design: CAD drawings + Word design report):
中英文翻译.doc (Chinese-English translation)
主动齿轮.dwg (drive gear)
从动齿轮.dwg (driven gear)
半轴.dwg (half shaft)
半轴齿轮.dwg (side gear)
行星齿轮.dwg (planet gear)
装配图.dwg (assembly drawing)
设计说明书.doc (design report)
Contents

Abstract (Chinese)
Abstract (English)
1 Overview
1.1 Overview
1.2 Structural classification of drive axles
1.3 Main contents of the design
2 Determination of the overall scheme
2.1 Main technical parameters
2.2 Structural scheme of the final drive
2.3 Structural scheme of the differential
2.4 Selection of the half-shaft type
2.5 Selection of the axle-housing type
2.6 Chapter summary
3 Final drive design
3.1 Design of the final drive bevel gears
3.2 Strength calculation of the final drive bevel gears
3.3 Design calculation of the final drive bevel gear bearings
3.4 Lubrication of the final drive
3.5 Chapter summary
4 Differential design
4.1 Structural form of the differential
4.2 Material selection for the differential gears
4.3 Gear design for the bevel-gear differential
4.4 Strength calculation of the bevel-gear differential gears
4.5 Chapter summary
5 Half-shaft design
5.1 Half-shaft type
5.2 Half-shaft material and heat treatment
5.3 Half-shaft design and calculation
5.4 Chapter summary
6 Drive axle housing design
6.1 Axle-housing structural form
6.2 Force analysis and strength calculation of the axle housing
6.3 Chapter summary
Conclusion
References
Acknowledgements
Design of the Drive Axle Assembly for a 3-Ton Yuejin Truck
Abstract

Light trucks account for a large share of today's commercial vehicles, and the drive axle occupies an especially important position in the complete vehicle. As one of the four major automotive assemblies, the drive axle's performance directly affects that of the whole vehicle, which is particularly critical for trucks. The requirements placed on drive axles are now very demanding, since only then can they satisfy the conditions trucks must meet in service. If this project can produce a drive axle that meets the various requirements of truck operation with a simple structure and at low cost, the light-truck drive axle designed here will have real practical significance.

This thesis first determines the structural types and main design parameters of the principal components. After analyzing the structural forms of each part of the drive axle, their development, and the advantages and disadvantages of earlier designs, an overall design scheme is established, and conventional design methods are used to design, calculate, and check each component of the drive axle: the final drive, the differential, the half shafts, and the axle housing. Finally, the assembly drawing and the main part drawings are completed in AutoCAD.

Keywords: light truck; drive axle; single-stage final drive; differential; half shaft; axle housing










English original

A New Vehicle Detection Method

Zebbara Khalid, Abdenbi Mazoul, Mohamed El Ansari
LabSIV, Department of Computer Science, Faculty of Science, Ibn Zohr University, Agadir, Morocco

Abstract
This paper presents a new vehicle detection method for images acquired by cameras embedded in a moving vehicle. Given the sequence of images, the proposed algorithm should detect all cars in real time. With respect to the driving direction, cars can be classified into two types: cars driving in the same direction as the intelligent vehicle (IV) and cars driving in the opposite direction. Owing to the distinct features of these two types, we propose to carry out the method in two main steps. The first detects all obstacles in the images using the so-called association approach combined with a corner detector. The second validates each candidate as a vehicle using an AdaBoost classifier. The new method has been applied to different image data, and the experimental results validate its efficacy.

Keywords: intelligent vehicle; vehicle detection; association; optical flow; AdaBoost; Haar filter.

I. INTRODUCTION

Detection of road obstacles [1]-[6] is an important task in intelligent transportation. A number of sensors are embedded in the IV to perform the vehicle detection task. These sensors can be classified into passive and active sensors. Since active sensors are expensive and pollute the environment, we propose to use passive sensors in our vehicle detection approach.
The data we process to achieve vehicle detection are images taken by a camera embedded in a moving car. In vision-based obstacle detection, two approaches exist. The first is the monocular approach, which uses a single camera and interprets the image using prior knowledge about the obstacles; this information can be texture [7] or color [8], [9]. The second is the stereo or multi-camera approach, which is based on the disparity map obtained after matching primitives between the different views of the sensor [10]-[12].

Vehicle detection algorithms have two basic steps: Hypothesis Generation (HG) and Hypothesis Verification (HV) [13]. In the HG step, the algorithm hypothesizes the locations of vehicles in an image. In the HV step, the algorithm verifies the presence of a vehicle in the image. HG methods can be categorized into three groups: knowledge-based methods, which use object symmetry, color, corners, and edges; stereo-vision-based methods, which use two cameras; and motion-based methods, which track the motion of pixels between consecutive frames [14]. HV methods are template-based or appearance-based. Template-based methods use predefined patterns of the vehicle class; appearance-based methods include a pattern-classification system that distinguishes vehicles from non-vehicles.

Many works [15]-[17] tackle the real-time on-road vehicle detection problem. All of them use monocular cameras under real-time constraints. [15] used horizontal and vertical edges (knowledge-based methods) in the HG step; the regions selected in the HG step are matched against a predefined template in the HV step. [16] also used horizontal and vertical edges in the HG step, but Haar wavelet transforms and SVMs (appearance-based methods) in the HV step. [17] detected long-distance stationary obstacles, including vehicles.
They used an efficient optical flow algorithm [18] in the HG step, and the sum of squared differences (SSD) with a threshold value to verify their hypotheses.

This paper presents a new approach to vehicle detection. At each time step, the decision on the presence of vehicles in the road scene is made from the current frame and the preceding one. We use the association approach [20], which consists in finding the relationship between consecutive frames by exploiting the displacement of edges. For each edge point in one frame we look for its associate in the preceding frame, if any. Obstacles can be detected by analyzing the association results. An AdaBoost classifier is then used to verify whether an obstacle is a vehicle.

II. VEHICLE DETECTION METHOD

This section details the main steps of the proposed method. We extract the edge points and corners of consecutive images and keep only the edge points belonging to curves that contain corners. The association is performed between consecutive images, and its results are analyzed to detect obstacles (objects). Finally, AdaBoost decides whether a detected object is a vehicle.

A. Corner detection

We use the Shi and Tomasi [19] corner detector, a modified version of the Harris corner detector in which an affine transformation is used instead of a simple translation. Given the image patch over the area (u, v), the Shi-Tomasi detector finds corners by applying an affine transformation A and shifting by (x, y):

S = Σ_{u,v} ( I(u, v) − I(A(u, v) − (x, y)) )²   (1)

After computing the corner points, a threshold is applied to remove small clusters of nearby corners; corner points on a vehicle are much more numerous than on trees or road features.

B. Edge detection and filtering

The Canny operator is used to detect the edge points of the consecutive images. The edge curves are formed by grouping edge points using morphological operations.
Among the resulting curves, we keep only those crossing at least one of the corners computed in subsection A.

C. Association

The rest of this subsection describes the method we use to find the association between edges of successive frames. Let C_{k-1} be a curve in image I_{k-1} and C_k its corresponding curve in image I_k. Consider two edge points P_{k-1} and Q_{k-1} belonging to the curve C_{k-1}, and their correspondents P_k and Q_k belonging to the curve C_k (see Fig. 1). We define the associate of the point P_{k-1} as the point of the curve C_k that has the same y-coordinate as P_{k-1}. Note that association is neither correspondence nor motion: two associate points are two points belonging to two corresponding curves of two successive images of the same sequence and having the same y-coordinate. From Fig. 1, the point Q_k meets these constraints; consequently, Q_k constitutes the associate of the point P_{k-1}.

In practice, we assume the movement of objects from one frame to the next is small. So, if x1 and x2 denote the x-coordinates of P_{k-1} and Q_k respectively, x2 should belong to the interval [x1 − Dx, x1 + Dx], where Dx is a threshold to be selected. This constraint reduces the number of associate candidates. The gradient magnitude is used to choose the best associate, with the absolute difference between the gradient magnitudes of the edges as the similarity criterion. As seen in Fig. 1, the point P_k is the match of the point P_{k-1}, whereas the point Q_k is its associate; P_k and Q_k differ because of the movement of the point P_k in image I_k. We cannot find associates for all edges, because of the different viewpoints and the resulting object movement.
This is the same as in matching algorithms, where some parts are visible in one image but occluded in the other.

Figure 1. I_{k-1} and I_k represent successive images of the same sequence, e.g. the left sequence. The point Q_k in image I_k constitutes the associate of the point P_{k-1} in image I_{k-1}. The points P_k and P_{k-1} are shown in red; the points Q_k and Q_{k-1} in green.

The association approach is a technique for finding the relationship between successive frames by exploiting the displacement of edges. Let Q_k be an edge point belonging to the curve C_k in image I_k. The associate of Q_k can be found as a corresponding point P_{k-1} belonging to the curve C_{k-1} in the horizontal neighborhood of Q_k in the previous image I_{k-1} (more details on the association method are given in [20]). Associated points should belong to the same object contour and should have similar or close gradient magnitudes and orientations. In this work, we use the cost function of Eq. (2) below, which computes the distance between two candidate associate points using gradient magnitudes; the edge pair with the smaller cost is taken as the associated pair of features. Because of vertical movement in the scene, the association approach does not guarantee that every feature in the image has an associated point, but a few good associates are enough to reconstruct the vehicle objects.

F(d_x) = Σ_{x=u−w}^{u+w} | I_t(x, y) − I_{t+1}(x + d_x, y) |   (2)

where d_x is the distance a contour moves between instants t0 and t1. Given a point (u, v) in image I_t, the algorithm should find the point (u + d_x, v) in image I_{t+1} (if it exists) that minimizes the cost function F (Fig. 2); w is the neighborhood window around (x, y).

D. Object detection

Let Ass be the association image, and let M and N be the image width and height, respectively.
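The Eq. (2) search can be sketched in a few lines of NumPy. This is a minimal illustration assuming grayscale frames and a single edge point (u, v); the window size w and the search range Dx are illustrative parameters, not the paper's values:

```python
import numpy as np

def best_association(prev, curr, u, v, w=3, Dx=10):
    """For the edge point (u, v) in frame `prev`, scan horizontal shifts
    dx in [-Dx, Dx] and return the shift minimizing the windowed
    absolute-difference cost of Eq. (2). Returns (dx, cost)."""
    prev = prev.astype(np.int32)
    curr = curr.astype(np.int32)
    width = prev.shape[1]
    best = None  # (cost, dx)
    for dx in range(-Dx, Dx + 1):
        xs = range(u - w, u + w + 1)
        if any(not (0 <= x < width and 0 <= x + dx < width) for x in xs):
            continue  # window would leave the image
        cost = sum(abs(prev[v, x] - curr[v, x + dx]) for x in xs)
        if best is None or cost < best[0]:
            best = (cost, dx)
    return (best[1], best[0]) if best else (None, None)
```

For a vertical edge that moves three pixels to the right between frames, the search returns d_x = 3 with zero cost.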
At each pixel (x, y) of the current image, Ass(x, y) is the distance between the pixel (x, y) and its associate in the preceding image (frame). Obstacles can be detected using the following functions:

F1(i) = Σ_{j=1}^{N} Ass(i, j)   (3)

F2(j) = Σ_{i=1}^{M} Ass(i, j)   (4)

where i = 1, …, M and j = 1, …, N.

Figure 2. (a) Edge detection at instant t0. (b) Edge detection at instant t1. (c) Association vectors.

The values of F1 and F2 should be maximal in areas containing obstacles. F1 determines the horizontal bounds of obstacles, and F2 their vertical bounds; segmenting the two functions therefore yields the horizontal and vertical bounds of obstacles. Fig. 3 illustrates the computation of Eqs. (3) and (4): Fig. 3(a) depicts the association image, Fig. 3(b) the computed function F1, and Fig. 3(c) the computed function F2.

Figure 3. (a) Association image. (b) The function F1 computed by Eq. (3). (c) The function F2 computed by Eq. (4).

E. Validation using AdaBoost

As in face detection and localization, we propose a robust and fast approach based on image intensities, AdaBoost, which combines simple descriptors (Haar features) into a strong classifier. The concept of boosting was proposed in 1995 by Freund [22]: the boosting algorithm uses weak hypotheses (error rate below 0.5) and a priori knowledge to build a strong hypothesis. In 1996, Freund and Schapire proposed the AdaBoost algorithm, which automatically chooses weak hypotheses with adjusted weights; AdaBoost does not depend on a priori knowledge [21]. In 2001, Viola and Jones applied the AdaBoost algorithm to face detection for the first time.
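The projections of Eqs. (3) and (4), and a simple threshold segmentation of them, could be sketched as follows. The threshold rule is an illustrative assumption; the paper does not specify its segmentation method:

```python
import numpy as np

def projections(ass):
    """Eq. (3): F1(i) sums the association image over rows j, giving one
    value per column i (horizontal bounds of obstacles). Eq. (4): F2(j)
    sums over columns i, one value per row j (vertical bounds).
    `ass` is an N x M array (rows j, columns i)."""
    F1 = ass.sum(axis=0)  # length M: one value per column
    F2 = ass.sum(axis=1)  # length N: one value per row
    return F1, F2

def segment(F, thresh=0):
    """First and last index where the projection exceeds the threshold."""
    idx = np.nonzero(F > thresh)[0]
    return (int(idx[0]), int(idx[-1])) if idx.size else None
```

For an association image with a single non-zero block, the two segmented projections recover the block's column and row extents, i.e. the obstacle's bounding box.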
With simple descriptors (Haar features), an efficient way of computing descriptor values (the integral image), and a cascade of classifiers, their method became the reference in face detection for its speed and robustness. In 2002, Lienhart et al. extended the Haar descriptors and experimented with several AdaBoost variants: Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost, and LogitBoost. The learning code and the AdaBoost detection algorithm are published in the OpenCV (Open Source Computer Vision) function library [23], [24]. [25] uses histogram-of-oriented-gradients descriptors for detecting humans and bicycles.

In our work, we applied the Gentle AdaBoost algorithm using the OpenCV library functions, with two cascades, haarcascade_car_1 and haarcascade_car_2, to detect and locate most vehicles in the image sequences.

III. RESULTS

We performed a number of experiments and comparisons to demonstrate the proposed association approach in the context of vehicle detection. The system was implemented on an Intel Core CPU at 2.99 GHz and tested on different image frames. It is able to detect most vehicles in different images in 20 milliseconds, faster than the optical flow algorithm of [26].

Figure 4. (a) Images at instants t0 and t1. (b) Corner points of the images in (a). (c) Edges crossing the corner points detected in (b). (d) Association vectors of the edges from (c), computed between instants t0 and t1. (e) Vertical (right) and horizontal (left) projections of the associated edge points.

Fig. 4 shows each step of our approach for vehicle detection using association and AdaBoost; the results illustrate several strong points of the proposed method. Fig. 4(a) shows an image at instants t0 and t1. In Fig. 4(b), the corner points are computed; after thresholding to eliminate other obstacles (trees, etc.), only the corner points of vehicles remain. Fig. 4(c) shows the edges crossing the corner points; only the vehicle edges are kept. Fig. 4(d) shows the association vectors for each edge in frame t0. Fig. 4(e) shows the results computed in the object detection step by formulas (3) and (4) to determine the abscissas and ordinates of the obstacles.

The proposed method was also tested on other real road images, depicted in Fig. 6. The HG and HV results are shown in Fig. 6(b) and Fig. 6(c), respectively. It is clear that the results computed by our approach are very satisfactory.

Figure 5. (a) Bounding boxes. (b) Validation of objects using AdaBoost.

Figure 6. (a) Original image. (b) Hypothesis Generation (HG). (c) Hypothesis Verification (HV).

IV. CONCLUSION

This paper has presented a new vehicle detection method based on the association notion described above. To select more reliable features, a corner detector is used. Based on the horizontal and vertical projections of the associated edge points, the focused sub-region is selected as the region of interest. The experimental results have validated the efficacy of our method and show that it is capable of working in real time. In the future, we plan to improve our vehicle detection method and test it on much more complex obstacles (pedestrians, traffic lights, etc.) under different weather conditions.

REFERENCES

[1] R. Manduchi, A. Castano, A. Talukder, and L. Matthies, "Obstacle detection and terrain classification for autonomous off-road navigation," 2005.
[2] R. Labayrade, D. Aubert, and J. P. Tarel, "Real time obstacle detection on non flat road geometry through V-disparity representation," IEEE Intelligent Vehicles Symposium, Versailles, June 2002.
[3] M. Bertozzi and A. Broggi, "GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection," IEEE Transactions on Image Processing, vol. 7, no. 1, January 1998.
[4] T. A. Williamson, "A high-performance stereo vision system for obstacle detection," PhD thesis, Carnegie Mellon University, September 1998.
[5] G. Toulminet, A. Bensrhair, S. Mousset, A. Broggi, and P. Miché, "Système de stéréovision pour la détection d'obstacles et de véhicules en temps réel," in Proc. 18th Symposium GRETSI'01 on Signal and Image Processing, Toulouse, France, September 2001.
[6] Tuo-Zhong Yao, Zhi-Yu Xiang, and Ji-Lin Liu, "Robust water hazard detection for autonomous off-road navigation," Journal of Zhejiang University Science, 2009.
[7] T. Kalinke, C. Tzomakas, and W. von Seelen, "A texture-based object detection and an adaptive model-based classification," 1998.
[8] R. Aufrère, F. Marmoiton, R. Chapuis, F. Collange, and J. Derutin, "Road detection and vehicles tracking by vision for ACC," 2000.
[9] R. Chapuis, F. Marmoiton, and R. Aufrère, "Road detection and vehicles tracking by vision for the ACC system in the VELAC vehicle," 2000.
[10] U. Franke and A. Joos, "Real-time stereo vision for urban traffic scene understanding," 2000.
[11] D. Koller, T. Luong, and J. Malik, "Binocular stereopsis and lane marker flow for vehicle navigation: lateral and longitudinal control," 1994.
[12] R. Labayrade, D. Aubert, and J. P. Tarel, "Real time obstacle detection in stereo vision on non flat road geometry through V-disparity representation," 2002.
[13] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 694-711, 2006.
[14] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. 7th IJCAI, pp. 674-679, Vancouver, Canada, 1981.
[15] M. Betke, E. Haritaoglu, and L. Davis, "Real-time multiple vehicle detection and tracking from a moving vehicle," 12(2):69-83, 2000.
[16] Z. Sun, R. Miller, G. Bebis, and D. DiMeo, "A real-time precrash vehicle detection system," in Proc. 2002 IEEE Workshop on Applications of Computer Vision, Orlando, FL, Dec. 2002.
[17] A. Wedel, U. Franke, J. Klappstein, T. Brox, and D. Cremers, "Realtime depth estimation and obstacle detection from monocular video," in Pattern Recognition (Proc. DAGM), LNCS vol. 4174, pp. 475-484, Berlin, Germany, September 2006, Springer.
[18] G. D. Hager and P. N. Belhumeur, "Efficient region tracking with parametric models of geometry and illumination," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 10, pp. 1025-1039, 1998.
[19] J. Shi and C. Tomasi, "Good features to track," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'94), Seattle, June 1994.
[20] M. El Ansari, S. Mousset, and A. Bensrhair, "Temporal consistent real-time stereo for intelligent vehicles," Pattern Recognition Letters, vol. 31, no. 11, pp. 1226-1238, August 2010.
[21] Y. Freund and R. E. Schapire, "Experiments with a new boosting algorithm," in Proc. 13th Int. Conf. on Machine Learning, pp. 148-156, 1996.
[22] Y. Freund, "Boosting a weak learning algorithm by majority," Information and Computation, vol. 121, no. 2, pp. 256-285, 1995.
[23] R. Lienhart and J. Maydt, "An extended set of Haar-like features for rapid object detection," IEEE ICIP 2002, vol. 1, pp. 900-903, September 2002.
[24] R. Lienhart, A. Kuranov, and V. Pisarevsky, "Empirical analysis of detection cascades of boosted classifiers for rapid object detection," MRL Technical Report, May 2002.
[25] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 694-711, 2006.
[26] J. Choi, "Realtime on-road vehicle detection with optical flows and Haar-like feature detector."
