预制板生产线-自动划线装置的设计(含NX三维及14张CAD图)
河海大学文天学院毕业设计(论文)毕 业 设 计(论 文)毕 业 设 计(论 文)预制板生产线-自动划线装置预制板生产线-自动划线装置专业年级专业年级机械工程 2011 级学号学号110330327姓名姓名汤镇指导教师指导教师汪丽芳评 阅 人评 阅 人李玲云2015 年 6 月中国马鞍山2015 年 6 月中国马鞍山河海大学文天学院毕业设计(论文)河 海 大 学 文 天 学 院本科毕业设计(论文)任务书、毕业设计(论文)题目:预制板生产线-自动划线装置、毕业设计(论文)工作内容(从专业知识的综合运用、论文框架的设计、文献资料的收集和应用、观点创新等方面详细说明):(1) 查阅有关资料,提出可行方案;(2) 进行预制板自动划线装置的总体设计(3) 进行预制板生产线自动划线装置的结构设计(4) 进行必要的理论计算与校核;、进度安排:2014 年 10 月 20 日2014 年 11 月 9 日(3 周):选择题目,收集材料;2014 年 11 月 10 日2014 年 12 月 7 日(4 周):布置任务,明确目标、计划;2014 年 12 月 8 日2015 年 1 月 4 日(4 周):试验环境搭建,关键技术试验,应用原型构造;2015 年 1 月 5 日2015 年 2 月 5 日(4 周):方案研究,系统分析、设计,编码实现;2015 年 2 月 6 日2015 年 3 月 11 日(5 周):继续前期工作,准备毕业设计中期院内检查;2015 年 3 月 12 日2015 年 4 月 1 日(3 周):后期完善调整,系统完整实现;2015 年 4 月 2 日2015 年 4 月 15 日(2 周):软件测试,指导老师验收成果,毕业论文写作;2015 年 4 月 16 日2015 年 5 月 31 日(2 周):毕业论文预提交、修改、评阅、答辩。、主要参考资料:1 公国英.现浇板与预制板的比较J.油气田地面工程.2003,22(6):9.河海大学文天学院毕业设计(论文)2 王彤.机电领域中伺服电机的选择原则N.应用科技,2001,21(8):6-8.3 吴宗泽,罗圣国.机械设计课程设计手册M.第 3 版.北京:高等教育出版社,2006.4 濮良贵,纪名刚等.机械手册M.第 8 版.北京:高等教育出版社,2006.5 王跃进.机械原理M.第 1 版.北京:北京大学出版社,2009.6 闻邦椿.机械设计手册:第 1-6 卷M.第 5 版.北京:机械工业出版社,2010.7 熊腊森,刘松等.电弧喷涂枪的研究与设计J.电焊械,2003,33(10):25-38.8 周传宏, 孙健利.滚动直线导轨副的运动精度实验研究J.机械设计, 2001, (2) :20-21.9 赵霞, 陈纬.横移车齿轮齿条有限元计算分析J.机械工程与自动化, 2011, (1) :60-62.10 戴俊平, 关文魁等.齿轮齿条进给伺服系统综合模型的研究J.机械工程及自动化,2011,(4):147-148.11 王宏杰, 颜国正等.基于补偿算法的机器人型材自动划线和切割系统J.上海交通大学学报,2002,(36):991-994.指导教师:(签名:汪丽芳),2014年11 月16日学生姓名:(签名:汤镇),专业年级:机械工程 2011 级系负责人审核意见(从选题是否符合专业培养目标、是否结合科研或工程实际、综合训练程度、内容难度及工作量等方面加以审核):专业负责人签字:,2014年12月1日河海大学文天学院毕业设计(论文)郑 重 声 明郑 重 声 明本人呈交的毕业设计(论文),是在指导老师的指导下,独立进行研究工作所取得的成果,所有数据、图片资料真实可靠。尽我所知,除文中已经注明引用的内容外,本设计(论文)的研究成果不包含他人享有著作权的内容。对本设计(论文)所涉及的研究工作做出贡献的其他个人和集体,均已在文中以明确的方式标明。本设计(论文)的知识产权归属于培养单位。本人签名:日期:河海大学文天学院毕业设计(论文)摘要本文将预制板生产线的划线部分设计为机器自动划线, 省去了传统人工划线的复杂与麻烦,同时节省成本,节约时间。该划线装置还可以用来辅助其它的工作。本课题主要对预制板生产线的划线装置进行结构设计,通过 UG 将划线装置的主体结构设计出来,实现划线部分在横向、纵向及上下方向的移动,通过对装配模型进行干涉检查,对所设计的装置加以修正。通过弯矩计算,校核所设计的结构的强度。根据生产要求,并在满足强度的要求下,查阅相关资料,对其功能和结构进行优化改进。最后,完善该装置的各个零部件。以满足实际的装配要求。通过设计计算,采用齿轮齿条传动能够达到生产精度要求。通过强度校核,得知所设计的该自动划线装置能够满足生产中的强度要求。与传统的预制板划线装置相比,该装置提高了生产效率。关键词关键词:预制板;预制板生产线;自动划线装置河海大学文天学院毕业设计(论文)AbstractThis article describes the use of the auto-marking machine to replace themanual-markingdeviceofprefabricatedplateproductionline.Eliminatingthetraditional artificial marking is complex and cumbersome, and save cost,save time .Thescribing apparatus can also be used to assist other work.This topic mainly conducts the design to the auto making device of prefabricatedpanel production line. According to UG software, making the main body structure of automaking device out. Realizing the part in transverse , longitudinal and vertical movement.Correcting the designed device through the interference check of the assemblymodel.Throughthebendingmomentcalculation,checkingthestructurestrength.Accordingtotheproductionrequirements,andtomeetthestrengthrequirements,andaccesstorelevantinformation,tooptimizeitsfunctionandstructure .Finally, consummate the unit parts ,to meet the practical requirements ofassembly.Through design calculation ,we find that the gear rack transmission can reach theproduction precision . 
Through the strength check ,we know the design of the automaticmarking device can meet the strength requirements of production .Compared with thetraditional prefabricated plate marking device, the device improves the productionefficiency.keywords :Prefabricated;Pre-cast plate production line;Automatically crossed device河海大学文天学院毕业设计(论文)目 录目 录摘要摘要.III第 1 章 总论III第 1 章 总论.1 11.1 概述.11.2 现浇板与预制板的比较.21.3 发展前景.31.4 预制板生产线-自动划线装置研究现状分析.41.5 需要解决的问题及其解决办法.8第 2 章预制板生产线-自动划线装置的设计第 2 章预制板生产线-自动划线装置的设计. 9 92.1 机架的设计.92.2 机架上滚动导轨的设计.122.3 横梁的设计.152.4 横向移动的设计方案.152.5 气缸的选择.192.6 伺服电机的选择.212.7 预制板支板的设计.222.8 支架的设计.242.9 夹板的设计.262.10 总体装配图.26第 3 章预制板生产线-自动划线装置的校核第 3 章预制板生产线-自动划线装置的校核. 27273.1 支架的校核.283.2. 轴径校核.283.3 轴承的选择与校核.283.4 校核键连接的强度.293.5 齿轮的选择.313.6 电机轴上的键的校核.323.7 滚动导轨副的校核.323.8 横梁的校核.33总结总结.34致谢34致谢.35参考文献35参考文献.3636河海大学文天学院毕业设计(论文)附录 1附录 1.49附录 249附录 2.6767河海大学文天学院毕业设计(论文)1第 1 章 总论第 1 章 总论1.1 概述本课题主要是设计预制板生产线中的自动划线装置,该装置由支架,直线导轨,伺服电机,喷嘴等构成。此套装置实现了横向,纵向和竖直方向的移动与定位,可以在预制板上的各个位置进行划线,保障了定位精度,简化了操作,提高了工作效率。此装置的设计还可以用于夹持其它物品(需要安装机械手),可以使其达到多种用途的目的。市面上的预制板生产线很混杂,生产出的预制板很难用于高强度作业,而这套装置的产生,可以将预制板做到大而坚固。图 1.1 预制板生产线-自动划线装置生产线照片1.2 现浇板与预制板的比较预制混凝土空心板在建筑行业的应用十分广泛,它可以有效的提高施工效率、节约生产成本。但是顺着经济的发展以及科学的进步,建筑产品的生产工艺也在不断的创新,施工技术也有了质的飞越。所以空心板建筑的优势也不在那么明显了,反而其在性能方面的劣势日益突出了起来。当然,在一些简易住宅建筑中,预制混凝土空心板以起低廉的价格依然得到广泛使用。但是现浇混凝土板的使用更为普遍,主要原因在于使用现浇板后没有板缝,既符合人们的审美又避免了墙面的渗水,使得居住环境更为舒适;同时使用现浇板可以提高房屋的整体刚性, 增强房屋的安全性; 在施工过程中, 采用现浇板就不用设置圈梁,还可以提高工程效率;最重要的是现浇板的使用对于成本的增加并不大,对于购房者而言很容易接受,对开发商也有利可图。所以,对建筑行业来说,使用现浇板比使用空心板更能达到令人满意的结果。因此,应当鼓励现浇板的发展。1.3 发展前景河海大学文天学院毕业设计(论文)2在现代企业生产中,越来越注重生产自动化的程度,主要原因是自动化程度代表一个国家的生产力,对综合国力有一定的影响。自动化程度越高,所需劳动力越少,生产效率越高, 生产成本大幅度降低, 生产模式由原来的劳动密集型转向技术密集型,使得企业的经济效益明显改善。这就迫使各生产厂家开始努力研发新技术,争取在本行业能够独占鳌头,从而获取更大利益,同时也能更好的服务群众。随着中国经济的发展,我国在各方面均取得骄人成绩,特别在航天航宇方面尤为突出。机械行业很早就已经就在历史上扎下根,现在我们之所以继续研究它,主要是希望它能够给我们带来更多的便利和实惠。 预制板生产线也从原来的纯体力劳动逐渐被机器所取代,但目前并不能够实现完全的自动化,而是掺杂了许多人工劳动,这与世界先进国家相比还存在一定的差距。预制板的生产具有广阔的前景,特别是对于新农村建设尤为重要。因为,第一,广大农村所盖的房屋很多都是一层到两层,没有很高的建筑, 使用浇筑的不划算。 第二, 随着新农村的开发, 会有一批人们到农村定居,享受田园风光,预制板的产生会让建筑业更加高效,生产成本会大大降低。第三,未来所设计的预制板具有轻巧, 隔音隔热, 承载力高等特点, 是很好的建筑首选。 因此,预制板的生产具有广阔的前景.预制板生产线的自动化实现,可以很好的解决不同预制板生产的需求。现在的预制板生产绝非传统的楼板,而是用于建造高强度的房屋设施所需的墙体等。图 1.2 预制板的实际应用1.4 预制板生产线-自动划线装置研究现状分析现在我们所研究的课题主要是以瑞士等国所研制的预制板生产线装置为依据,如图 1.3 所示, 而对这方面的研究, 国内则显得相对空白, 因此有必要进行本次研究,以便弥补国内在此方面的空白,进而满足国内市场的需要。世界上一些发达国家在预河海大学文天学院毕业设计(论文)3制板生产方面自动化程度比国内企业高, 因此我们可以借鉴他们的经验来辅助我们进行设计。如果我们的研究能够投入市场使用,那么国内将在预制板生产方面产生一次革新,为推进国家的自动化之路起着重要作用。随着中国经济的发展,我国在各方面均取得骄人成绩,特别在航天航宇方面尤为突出。机械行业很早就已经就在历史上扎下根, 现在我们之所以继续研究它, 主要是希望它能够给我们带来更多的便利和实惠。预制板生产线也从原来的纯体力劳动逐渐被机器所取代, 但目前并不能够实现完全的自动化,而是掺杂了许多人工劳动,这与世界先进国家相比还存在一定的差距。图 1.3 预制生产线本课题为预制板生产线-自动划线装置中的自动划线部分。预制板来源于对建筑中现浇板的改进和自动化生产的实现。对于目前来说,市面上还尚未将此技术公开,只有少数国家拥有此套装置的技术,而预制板的作用的日益突出,使得设计出此套装置显得尤为重要。在实地考察后发现,有些公司对于该装置的移动,是采用伺服电机卷动钢丝,使得钢丝拉动直线导轨的移动。经过计算和考量发现,钢丝在卷动的过程中,其直径随着卷动的匝数的增加而增加,从而为直线导轨的精确定位带来不便,这样一来,划线装置就显得不那么精确。而我此次设计的是采用以直线导轨作为滑动支撑,以齿轮齿条的传动作为驱动力。采用伺服电机,达到精确定位的目的。通过考察,我们发现,此套装置基本上实现了全自动化,但对于一些要求精度来说,出现了一些问题。首先,市面上对于自动划线装置的定位有不同的设计方法,有的使用滚轮与金属导轨的摩擦力来达到刹车定位,有的是通过电机带动滚轮,卷动钢丝,钢丝拉动直线导轨的移动来产生驱动力,电机再停转达到定位。这两种方法都很难做到精确定位。因为对于通过摩擦力来达到定位的,速度小的时候精度较高,如果导轨运行的速度大时,因其惯性作用,就会产生很大误差。其次,对于采用钢丝拉动的驱动力,其误差也是显而易见的。钢丝在滚筒上卷动的时候会使得卷动钢丝的滚动的直径加大,从而为定位带来不便。河海大学文天学院毕业设计(论文)41.5 需要解决的问题及其解决办法对于此套装置,首先需要解决纵向滚动导轨的传动与定位。伺服系统是一种自动调节系统, 广泛应用于测量仪器与数控装置中。 通常, 机械伺服系统由伺服电机经齿轮副、滚珠螺旋副驱动导轨部件移动。滚珠螺旋传动的优点是经预紧后可以消除传动中的空回, 传动精度高, 导轨部件的重量对系统动态响应的影响较小。但是滚珠螺旋副(尤其是小直径滚珠螺旋副) 不易生产制造,有其复杂的结构, 成本较高。对于齿轮齿条传动来说,结构更为简单,制造上更为方便,传动的效率也要高出很多,成本远比滚珠螺旋副低。虽然导轨部件重量对动态响应的影响比滚珠螺旋传动稍大一些, 但当导轨部件重量较轻和工作行程较小时, 这种影响并不大, 在这种情况下. 
可用齿轮齿条传动代替滚珠螺旋传动。 在机械伺服系统中, 要求传动链中的各齿轮在正反向传动中运转灵活, 即要求各齿轮转动时的角加速度要大, 因此, 应使整个传动链转化到伺服电机轴上的转化转动惯量为最小最小转化转动惯量不但与传动比和传动级数有关。也与各构件的转动惯量和导轨部件重量有关。对于传动方式的选择,齿条的传动距离长,运转精度高,适合长直线传动;丝杆在长直线传动过程中很容易变形,然而在短直线传动过程中,丝杠的传动精度要明显高于齿条。故应根据实际请况做出合理选择。在这里,采用齿轮齿条传动,用伺服电机进行驱动定位。如下图 1.4:图 1.4 齿轮齿条传动装置所以,其次要解决伺服电机的选择。电动机的选择不仅要确定合适的负载条件,同时还要满足工作环境的需求,最后根据经济条件选择合适的电动机。目前,顺着动力技术的不断发展和现代机械行业对电机性能的各种需求, 步进电机的性能已经逐步被交流伺服电机所取代,究其原因主要有以下 6 点:1)控制精度高,并且其轴后端配有编码器,可以根据工作条件选择适合的控制精度,极大的满足了各种工作环境下的需求,2)低频特性非常稳定,可以避免低频振动现象,不会影响生产,并且拥有共振河海大学文天学院毕业设计(论文)5抑制的特性,有助于调整系统;3)力矩恒定,避免了电机启动时力矩不足的现象,有利于生产;4)瞬间过载能力强,达%150,持续时间长,可以增强机器的适用性能;5)闭环控制,保证了控制性能的稳定;6)极快的速度响应能力,满足了用于快速启停的工作环境。当然,对于一些性能要求不高的机器,步进电机也是不错的选择。故在选用电机时要考虑全面,选择适合自身发展的电机类型。再次,需要解决横向移动问题,这里仍然采用伺服电机和齿轮齿条进行传动与定位,见下图 1.5:图 1.5 横向移动示意图最后,需要解决上下移动的问题。上下移动是为了在需要划线的时候,将喷嘴伸到离预制板一定的距离处,让喷墨能够很好的喷在预制板上,当不需要划线时,喷嘴可以收起来,防止其妨碍其他的工作。此处,我们采用气缸来实现上下的移动。如下图 1-6:图 1.6 气缸装置河海大学文天学院毕业设计(论文)6此处的气缸采用的是 SMC 的 CY 系列磁欧式无杆气缸。CY 系列如下图 1.7 所示:图 1.7 CY 系列气缸河海大学文天学院毕业设计(论文)7第 2 章预制板生产线-自动划线装置的设计2.12.1 机架的设计自动划线装置的机架的设计是采用方形型钢和钢板的焊接来构成的, 对于方形型钢,我们选用方形冷弯空心型钢,其基本尺寸如下图 2.1:图 2.1 立柱尺寸图作为整个装置的整成部分,其三维效果图如下图 2.2 所示:图 2.2 立柱安装三维图2.22.2 机架上滚动导轨的设计滚动直线滑轨是一种滚动导引,它由钢珠在滑块与滑轨之间作无限滚动循环,使得负载平台能沿着滑轨轻易的以高精度作线性运动, 其摩擦系数可降至传统滑动导引河海大学文天学院毕业设计(论文)8的 1/50,使之能轻易地达到m 级的定位精度。现在滑块与滑轨间的末制单元设计,使得线形滑轨可同时承受上下左右等各方向的负荷, 专利的回流系统及精简化的结构设计使线性滑轨有更平顺且低噪音的运动。15(1)直线滚动导轨的特点直线滚动导轨的应用十分广泛,尤其是在数控机床行业,而滑动导轨更适用于普通机床,主要原因在于直线滚动导轨的:a)定位精度高,可实现自动化作业的精度要求;b)精简传动机构,降低机床造价并大幅度节约电力;c)摩擦阻力小,可提高机床的运动速度;d)摩擦阻力小,可避免运动误差的产生,使机床长期维持高精度性能,维护更加方便。此处采用 20mm 厚的钢板加上矩形冷弯空心型钢作为滚动导轨与机架之间的支撑,滚动导轨采用南京工艺装备制造有限公司生产的 GZB100BAL 型号的滚动导轨,其相关参数如下图 2.3.1 和图 2.3.3 所示:图 2.3.1 滚动导轨尺寸图 a图 2.3.2 滚动导轨尺寸图 b河海大学文天学院毕业设计(论文)9表 2.1a 滚动导轨参数型号导轨副尺寸滑块尺寸导轨尺寸HWKB1L1B3L3ML0TL2B2H1GZB100BAL12050105200394130200M20273328610080表 2.2b 滚动导轨参数导轨尺寸油杯尺寸额定动载额定静载DDhFLmaxGPNC(KN)C0(KN)2639321056000M10116235471330表 2.2c 滚动导轨参数额定力矩滑块重量导轨重量Ma(N.M)Mb(N.M)Mc(N.M)KgKg/m61200612007314024.546.8图 2.3.3 滚动导轨尺寸图 c河海大学文天学院毕业设计(论文)10矩形型钢尺寸如下图 2.4:图 2.4 型钢尺寸图(其长度为 9550mm.)其相关参数如下表 2.3:表 2.3a 型钢参数表边长/mm允许偏差/mm壁厚t/mm理论重量1/Mkg m截面面积A/2cm惯性矩/4cmHBXIYI2001001.308.034.37643.7912145.993719.014表 2.3b 型钢参数表惯性半径/cm截面系数/3cm扭转常数XrYrXWYW4/tIcm3/tCcm7.0004.052214.599143.8021798.551249.6其三维效果图如下图 2.5 所示:图 2.5 滑动导轨安装效果图2.3 横梁的设计2.3 横梁的设计横梁的作用是提供横向导轨的支撑与移动,考虑到总体方案中导轨的传动是采用齿轮齿条的传动实现的,为了更好的是传动得以实现,同时能更好的安装电机,我河海大学文天学院毕业设计(论文)11们采用了如下的设计方案,其设计尺寸如下图 2.6 所示:图 2.6 横梁的尺寸参数其顶部采用长宽高=400023220 的钢板作为上端盖,其尺寸图如下 2.7:图 2.7 横梁盖的尺寸参数图其三维效果图如下图 2.8 所示:图 2.8 横梁的安装效果图河海大学文天学院毕业设计(论文)122.4 横向移动的设计方案纵向的移动,我们采用滑动导轨作为支撑移动,以齿轮齿条作为传动,同样,对于横向移动,我们仍然采用滚动导轨作为支撑,以齿轮齿条作为传动部件,在横梁的上部和下部均装有滚动导轨,其效果图如下图 2.9 所示:图 2.9 横梁的移动方案三维图2.5 气缸的选择无杆气缸和普通气缸的的工作原理一样,只是外部连接、密封形式不同。无杆气缸里有活塞,而没有活塞杆的,活塞装置在导轨里,外部负载给活塞相连,作动靠进气。工作过程:在气缸缸管轴向开有一条槽,活塞与尚志在槽上部移动。为了防止泄漏及防尘需要,在开口部采用不锈钢封带和防尘不锈钢带固定在两端缸盖上,活塞架穿过槽地,把活塞与尚志连成一体。活塞与尚志连接在一起,带动固定在尚志上的执行机构实现往复运动磁欧式无杆气缸 CY3B/CY3R 系列轴承的承受能力大大得到了提高。 与 CY1B 相比,耐磨环的长度增加 70%,从而使轴的承受力更加提高。由于使用了耐磨环,润滑性也大大得到了改善。特殊树脂的耐磨环安装在防尘圈上,缸筒外周形成良好的润滑膜,耐久性更加提高。1) CY3B/CY3R 机种的选定方法(如图 2.10):图 2-10 CY3B/CY3R 机种的选定方法其受力如图 2.10河海大学文天学院毕业设计(论文)13图 2.11 气缸的受力示意图2) 气缸的定位(1) 用外部限位器使负载中间停止压力勿超过规定值(2) 用气动回路使负载中间停止动能勿超过规定值(3)行程末端停止方法(见图 2.12 左图)如图 2.12 右图所示,同时使用限位器与液压缓冲器,且从缸体中部的传递推力,便不会发生缸体的倾斜。图 2.12 行程末端停止方法示意图此套装置采用的是 CY3R63 型号的气缸,其设计尺寸如下图 2.13河海大学文天学院毕业设计(论文)14图 2.13a CY3R63 型气缸设计尺寸图 2.13b CY3R63 型气缸设计尺寸图 2.13c CY3R63 型气缸设计尺寸河海大学文天学院毕业设计(论文)153)磁性开关的安装方法(如图 2.14):将开关安装件沿气缸的开关安装沟槽移动到大体的设定位置。将磁性开关插入开关安装件的安装槽。确认检测位置后,拧入磁性开关的附件,固定螺钉(M2.5),固定磁性开关。检测位置变更时,进行步骤。注)拧紧固定螺钉(M2.5)时,请使用手柄直径 56mm 的钟表螺丝刀。另外,拧紧力矩大约为 0.10.15N m。也可以在感觉到拧紧时,再转 90 。图 2.14 磁性开关的安装示意图其尺寸如下表 2.4:表 2.4 
气缸的设计参数型号ABCCBCRDFGGPCY3R6315148.25365.4188.595型号GWHHAHBHCHPHRHSHTCY3R6393.59792519651909.551型号JEKLLDMMMNPWQCY3R63M101.515241188.610M81.251694171型号QWTTCWWPWSXYZCY3R63603248704710601211882.6 伺服电机的选择(1) 本装置伺服电机的选择计算工作台和工件的机械规格:横梁重量 G=15.2KN;滚动导轨摩擦系数选=0.05;滑块快速移动速度设为maxV=1000mm/s,低速移动时速度为minV=50mm/s;加速时间为 t=0.5s;所选直齿齿轮的分度圆直径1d=120mm则摩擦力:fG0.0515.2=0.76KN(2.1)河海大学文天学院毕业设计(论文)16由:Ffma(2.2)maxVat(2.3)可得:22000/amm s3.8544FKN由此可知,需要加在齿轮分度圆上的径向力tF=f=4KN由:P=tFmaxV(2.4)得:P=4KN1000mm/s=4000W=4KW为了达到可靠运行,我们选取欧姆龙 R88M-G6K010H-B(S2)-Z 型伺服电机。其参数如下:功率 P=6KW,转速 n=1000r/min,其设计尺寸如下图 2.15 所示:图 2.15 伺服电机参数图(其中 LL=380.5mm)2.7 预制板支板的设计预制板支板的作用是支撑预制板的,在预制板浇筑的过程中,其作为支板,起到平整预制板的效果。为了使预制板能够平整,支板的表面要具有一定的粗糙度,由于有的预制板比较重,所以,支板就需要有一定的强度来支撑。对于支撑预制板的设计,我们使用钢板焊制,用工字梁增加其强度,其三维模型如下图 2-16 和 2-17 所示:图 2.16 预制板支板三维效果图河海大学文天学院毕业设计(论文)17图 2.17 预制板支板三维效果图其尺寸线如下图 2.18 所示:图 2.18 预制板支板参数图2.8 支架的设计支架的作用是支撑预制板支板并为其传递提供导向作用,其三维效果图如下2.19 所示:图 2.19 支架三维效果图河海大学文天学院毕业设计(论文)18该装置由支架体,滚筒,轴,摩擦轮,轴承组成,对于其各个零件的设计尺寸,如下图 2.20 所示:1)支架体支架的作用是支撑轴和摩擦轮的, 其强度要能够支撑住预制板支板和预制板的重量,再整个支撑作用中,强度要求最高。图 2.20 支架参数图2)摩擦轮摩擦轮的作用是为了支撑预制板,并为其向前传递提供导向作用,摩擦轮的表面粗糙度要足够大,要防止在传递的过程中出现打滑现象。设计尺寸如图 2.21 所示:图 2.21 摩擦轮参数图河海大学文天学院毕业设计(论文)193)轴轴的作用是为了支撑摩擦轮,对于其强度要求要高,其需要足够的支撑力来支撑受压的摩擦轮。轴要有足够的强度,能够承载相应的载荷,对于其强度的校核,我们将在第三章进行校核说明,来检验所设计的机构是否满足生产和强度要求轴的设计尺寸如下图 2.22:图 2.22 轴参数图2.9 夹板的设计夹板在本装置中的作用是为了更好地安装气缸, 使其能够更好地满足气缸上下移动的要求,同时要满足气缸在横向上的移动要求,其三维效果图如下图 2.23 所示:图 2.23 夹板三维设计模型河海大学文天学院毕业设计(论文)20其设计尺寸如下图 2.24 和图 2.25:图 2.24 小夹板图 2.25 大夹板大夹板和小夹板的作用是为了固定气缸和伺服电机的传动。使用两个的组合,可以根据装配的要求来调整大小夹板上下的距离,以便满足装配要求2.10 总体装配图其三维效果图如下图 2.26:河海大学文天学院毕业设计(论文)21图 2.26 自动划线装置三维设计模型该总装配图所设计的是自动划线装置的运动实现, 他所解决的问题是使划线装置能够实现前后左右以及上下移动,同时满足横梁移动时的载荷问题。对于该装置,其所采用的核心部件就是用滚动导轨作为横梁的支撑,采用伺服电机作为动力驱动,以齿轮齿条的啮合作为传动部件。竖直方向上采用气缸来实现上下移动。河海大学文天学院毕业设计(论文)22第 3 章预制板生产线-自动划线装置的校核3.1 支架的校核根据实际情况,估计预制板梁与所制成的预制板最大载重 30t,负重均布在 9 对支撑支座上,其受力示意图如下图 3.1 所示:图 3.1 传输装置受力示意图则每个支座受力(传输轮):KNnPF34.16188 . 91030/31(3.1)式中:P预制板与预制梁的重力,N;N支架的个数。3.2. 
轴径校核而支架则是由轴支撑, 且必须通过摩擦轮上的橡胶提供足够的摩擦轮来带动预制板运动,在满足上述条件下,至关重要的保证轴的直径,通过弯扭组合计算出轴的最小直径,由于支架上的轴所受的扭矩不大,所以此处只校核轴的弯曲应力。由弯曲的强度条件: maxmaxMW(3.2)结合下图 3-2,轴的受力示意图图 3-2 轴的受力示意图河海大学文天学院毕业设计(论文)23得:max16.34801307.2MF lKNmmN m (3.3)33630.032.6 103232dWm(3.4)所以: maxmax61307.250.282.6 10MMPaW(3.5)式中:max计算的最大弯曲应力,MPa;maxM轴所承受的最大弯矩,N.m;W抗弯截面系数,W=332d,m3;D轴的直径,m;许用应力,=55 Mpa故轴的强度满足要求,安全。3.3 轴承的选择与校核通过经验分析,轴承的型号选择为 6006,其性能参数如下表 3.1;表 3-1 轴承参数基本尺寸/mmd30D55B13基本额定载荷/kNCr13.2C0r8.3极限转速/ r.min-1油11000脂14000重量/kgW0.113轴承代号6006其他尺寸/mmd238.4D247.7r min1安装尺寸/mmda min36Da max50.0ra max1球径/mmDW7.144球数Z11由于滚动轴承所受的轴向载荷很小,所以,此处不考虑,只考虑径向载荷是否满足要求。通过以上的性能参数知,轴承的基本额定载荷是 13.2KN,而每个轴承所受的实际载荷为 16.34KN0.5=8.17KN,小于 13.2KN.故轴承能够满足强度要求。河海大学文天学院毕业设计(论文)243.4 校核键连接的强度键、轴和轮毂的材料都是钢,由下表查的许用挤压应力p=100120MPa,取其平均值,p=110MPa。键的工作长度:l=L-b=75mm -10mm=65mm(3.6)键与轮毂键槽的接触高度:k=0.5h=0.5*8mm=4mm(3.7)由公式:332102 13.01 452 1075.071104 65 30ppTMPaMPaMPakld (3.8)可见满足连接强度。键的标记为:键 8x60 GB/T 10962003。表 3-2 键连接的许用挤压应力、许用应力/MPa许用挤压应力、许用应力连接工作方式键或毂、轴的材料载荷性质静载荷轻微载荷冲击p静连接钢120-150100-12060-90铸铁70-8050-6030-45p动连接钢5040303.5 齿轮的选择由上一章节中,我们选得伺服电机是欧姆龙 R88M-G6K010H-B(S2)-Z 型伺服电机。其参数为:功率 P=6KW,转速 n=1000r/min。此处我们采用齿轮齿条啮合,所以,我们选择传动比=1.选定齿轮类型、精度等级、材料及齿数选用直齿圆柱齿轮划线装置的纵向和横向的传动速度并不是太高,故选用 7 级精度(GB 10095-88)。材料选择。选择齿轮材料为 40Cr(调制),硬度为 280HBS。选择齿轮齿数 Z=403.5.1 按齿面接触强度设计由设计计算公式,即:3211212.32()EtdHKTZudu(3.9)确定公式内的各计算数值选tK =1.3计算齿轮传递的转矩:5595.5 1095.5 10657300n1000PTN mm(3.10)河海大学文天学院毕业设计(论文)25由机械设计2(以下均省略)表 10-7,选取齿宽系数d0.5由表 10-6 查得材料的弹性影响系数12189.8EZMPa,由图 10-21d 得该齿轮的接触疲劳强度极限lim1600;HMPa应力循环次数960n60 1000 1 (2 8 300 15)4.32 10nNjL (3.11)由图 10-19 取接触疲劳寿命系数10.9HNK;取失效概率为 1%,安全系数 S=1由式 10-12 得1lim10.9 60540NHHKMPaSMPa(3.12)3.5.2 计算1)计算齿轮分度圆直径1dt,代入H中较小的值。3211212.32()EtdHKTZudu=21.3 573001+1189.82.320.51540=77.17mm (3.13)计算圆周速度 v:177.17 1000v4.04/60 100060 1000td nm s(3.14)3)计算齿宽 b:11 77.1777.17dtbd mm(3.15)4) 计算齿宽与齿高之比bh模数:177.172.5730ttdmmmz;(3.16)齿高:h=2.25tm =2.252.57=5.788mm(3.17)77.1713.335.788bh(3.18)5).计算载荷系数:根据 v=4.047 级精度, 由图 10-8 查得动载系数v1.17K , 直齿轮1HFKK由表 10-2 查得使用系数1AK 由表 10-4 用插值法查得 7 级精度,齿轮相对支撑悬臂布置时2.527HK。由b13.33h;2.527HK查图 10-13 得2.40FK。故载荷系数:1 1.17 1 2.5272.957AVHHKK K KK (3.19)6).按实际的载荷系数校正所算得的分度圆直径,由式 10-10a 得:3312.957d77.17101.491.3ttkdkmm(3.20)7)。计算模数:101.493.3830dmz,(3.21)河海大学文天学院毕业设计(论文)263.5.3 按齿根弯曲强度设计由式 10-5 得弯曲强度的设计公式为:322mFaSadFY YKTz(3.22)确定公式内的各计算数值:1)由图 10-20c 查得齿轮的弯曲疲劳强度极限1500aFEMP;2)由图 10-18 取弯曲疲劳寿命寿命系数10.85FNK;3).计算弯曲疲劳许用力取弯曲疲劳安全系数 S=1.4.由式 10-12 得:0.85 500a303.571.4FNFEFKMPMPaS(3.23)4).计算载荷系数 K:1 1.17 1 2.402.808AVFFKK K KK (3.24)5).查取齿形系数查表 10-5 查得:a12.52FY;1.625SaY;(2).设计计算:322 2.808 573002.52 1.625m2.130.5 30303.57mm(3.25)因为齿根弯曲疲劳强度计算的模数小于曲面解除疲劳强度计算的模数 m。而齿轮模数 m 的大小主要取决于弯曲强度所决定的承载能力, 齿面接触疲劳强度所决定的承载能力仅与齿轮直径有关。故 m 的取值为 3。按接触强度算得的分度圆直径 d=101.49mm,算出齿轮齿数:83.33349.101mdz(3.26)取 z=.几何尺寸计算1).分度圆直径:d40 3120mz mm(3.27)2).计算齿宽:0.5 12060dbdmm(3.28)3).计算齿轮齿顶圆和齿根圆:d2126amz;(3.29)2.5112.5fdmzmm(3.30)4)齿轮尺寸图如图 3.3:河海大学文天学院毕业设计(论文)27图 3.3 齿轮尺寸图3.6 电机轴上的键的校核由伺服电机的尺寸参数可知,电机轴的直径42d mm,由上面计算可知57.3TNm键的参数为:h=8mm;b=12mm;L=86.8mm;p=110MPak=0.5h=4mm;l=L-b=74.8mm,所以332102 57.3 107.861104 86.8 42ppTMPaMPaMPakld(3.31)满足强度要求。3.7 滚动导轨副的校核由表 2-2 知,滚动导轨副的61200AMN m;61200BMN m;73140CMN m图 3-4 导轨副弯矩示意图通过上面的计算过程我们可以得知,18400mavMN m远远小于以上要求载荷,所以,其强度远远满足要求。3.8 横梁的校核由下图 3-5 所示,横梁所产生的支撑力12182FFGKN(3.32)河海大学文天学院毕业设计(论文)28图 3-5 横梁受力图1)导轨副弯矩的校核:横梁自重 G=16KN,其对两端的滑动导轨副产生的弯矩:16232000cMG LKNmN mM(3.33)其中73140cMN m。2)横梁体的校核将横梁体等效成高 h=360mm,宽 b=232mm 的矩形截面梁,则2230.232 0.360.00566bhWm(3.34)max32000MMN m(3.35)所以:maxmax320006.40.005MMPaW(3.36)查机械设计手册,该材料为 45 钢,弯曲许用应力 =100aMP因 
max,所以该横梁体满足弯曲强度。因两端采用钢板焊接,此钢板的参数为 h=20mm,b=394mm焊接处剪切应力18SFFKN则:max338000=1.0222 0.02 0.394sFMPabh(3.37)查机械设计手册知 30aMP因为 max,所以满足强度要求。河海大学文天学院毕业设计(论文)29总 结本课题的题目为预制板生产线-自动划线装置的设计,设计此套装置的目的是为了节省在预制板生产时人工划线时所耗费的时间,同时提高了生产效率,节省了人力物力。本装置的设计目的就是为了能够在所铸造的预制板上划出坐标线, 对于其功能结构,是采用伺服电机作为传动源,配合齿轮齿条的啮合传动,使之能够驱动在水平方向的运动。对于其传动过程中的支撑,此处采用滚动导轨作为横梁移动时的支撑。控制等方面采用 PLC 控制,结合伺服电机的驱动,可以精确的定位划线部分所在的位置。综合本装置的各个部分,此装置不仅可以实现自动划线,还可以在机械手的辅助下来用于其它的用途。对于此装置设计过程中所遇到的问题,总结如下:首先,在设计之前要深入调研该项目的市场需求和生产需求, 以利于获得最佳的设计方案; 其次, 对于结构的设计,尽量多用标准件,可以减少生产成本。同时,材料的选择要参考国家标准。最后,对所设计的装置的各个部分进行校核,以获得满足结构强度和生产要求的装置。预制板生产线-自动划线装置可以实现在预制板上划出所需的坐标线,以利于对预制板的后续加工。此装置采用本文所述的设计方法,完全满足生产要求。能够达到生产中所需的精度。河海大学文天学院毕业设计(论文)30致谢经过这几个月的毕业设计,我从中学到了很多书本上没有的东西,感谢汪老师对我的孜孜不倦的指导。她在我做设计的过程中,给了我很多的帮助和教诲。在这段时间,我们每周所做的进展都要向老师汇报,老师也不辞辛苦的一一为我们做指导,指出我们的设计不足,在我们实在想不出解决办法时老师也会用各种方式引导我们,对于老师的这种认真负责的态度,我们非常的感动,因为我们学到了如何去思维,如何去解决工程中的问题。再次,深深的感谢老师。同样也要感谢我的同学,我们在学习三维制图软件的过程中,相互帮助,相互启发,这也让我深深感受到团队的力量是巨大的。当然还有哪些给我提供资料的企业,非常感谢你们的支持。毕业在即,同时也感谢母校传授给我的知识。感谢那些为了我们的未来而不辞辛劳的老师们,你们毫不吝啬的将你们所学的知识传授给了我们,同时又在生活上,工作上给了我们莫大的帮助。我将不负众望,继续前进。河海大学文天学院毕业设计(论文)31参考文献1 公国英.现浇板与预制板的比较J. 油气田地面工程.2003,22(6):90.2 王彤. 机电领域中伺服电机的选择原则N. 应用科技, 2001,21(8):6-8.3 吴宗泽, 罗圣国.机械设计课程设计手册M. 第 3版.北京: 高等教育出版社, 2006.4 濮良贵,纪名刚,陈国定,吴立言.机械设计M. 第 8 版.北京:高等教育出版社,2006.5 王跃进.机械原理M. 第 1 版.北京:北京大学出版社,2009.6 闻邦椿.机械设计手册:第 1-6 卷M.第 5 版.北京:机械工业出版社,2010.7 熊腊森, 刘 松, 吴丰顺. 电弧喷涂枪的研究与设计J.电焊械,2003,33 (10) : 25-38.8 周传宏, 孙健利 . 滚动直线导轨副的运动精度试验研究 J. 机械 设计,2001,(2):20-21.9 赵霞,陈纬. 横移车齿轮齿条有限元计算分析 J. 机械工程与自 动化,2011,(1):60-62.10 戴俊平,关文魁,郭辉. 齿轮齿条进给伺服系统综合模型的研究J. 机械工程与自动化,2011(4):147-148.11 王宏杰,颜国正,丁国清,姚 舜,颜德田. 基于补偿算法的机器人型材自动划线和切割系统J. 上海交通大学学报,2002,(36):991-994.12 秦大同,谢里阳.现代机械设计手册:第一卷M.第一版.北京:化学工业出版社,2011.13 刘鸿文.材料力学M. 第 4 版.北京:高等教育出版社,2004.14 钮新强,覃利明,于庆奎. 三峡工程齿轮齿条爬升式升船机设计J. 中国工程科学, 2011,13(7):96-103.15 何重远.直线滚动导轨J.机床,1985,7:7-8.16 宋海林.直线滚动导轨的特点及选用.机械工程师J.2011,7:34-35.河海大学文天学院毕业设计(论文)32Automatic real-time road marking recognition using afeature driven approachAlireza Kheyrollahi Toby P. BreckonAbstract Automatic road marking recognition is a key problem within the domain of automotive visionthat lends support to both autonomous urban driving and augmented driver assistance such assituationally aware navigation systems. Here we propose an approach to this problem based on theextraction of robust road marking features via a novel pipeline of inverse perspective mapping andmulti-level binarisation. A trained classifier combined with additional rule-based post-processing thenfacilitates the real-time delivery of road marking information as required. The approach is shown tooperate successfully over a range of lighting, weather and road surface conditions.Keywords Computer vision Mobile robotics Road marking recognition Vanishing pointdetection Intelligent vehicles1 IntroductionAutonomous driving and road intelligence have been the focus of attention for many computer visionresearchers over the last 20 years 1. Although significant achievement has been made in developing avehicle that can perform some form of autonomous guided driving, progress has been slow because ofthe problems of speed, safety and the real-time complexity of the on-road situation. A human drivergathers constant and numerous visual information from the road and the surroundings. Our brain is quiteefficient in analysing this information and responding quickly by an appropriate course of action. 
For acomputer vision system to be able to display a similar ability, it must encompass various detectionabilities, each of which has been subject of significant research activity 24.Whilst work on lane detection and tracking is significant 5,22, the literature on road markingrecognition is limited with no reported work for real-time on-road text recognition. While road marking(including arrows) and text recognition is a relatively simple task for human drivers, its automaticdetection would be very usefuland perhaps essential in some casesfor an autonomous vehicle or asan aid to driver situational awareness in an increasingly complex road environment.Here we propose a multi-step processing pipeline for robust recognition of road markings and text.First image frames from an on-board camera are captured and pre-processed to remove the perspectiveeffect via an inverse perspective mapping (IPM) driven by automatic vanishing point (VP) detection.After removing the effects of perspective a multi-level thresholding approach is applied to extract brighton-road objects that contrast against the road surface. These objects are then simplified to a contourrepresentation that is passed to an artificial neural network (ANN) classifier for recognition. The resultsof this per-symbol (i.e. glyph level) classification are post-processed for either driver display orpotential use by an autonomous driving decisionengine. This approach is shown to operate in real-time under a variety of driving, lighting and roadconditions.2 Previous work河海大学文天学院毕业设计(论文)33Prior work in this area is limited and we briefly review the main seminal works in this area 6,7,29,30.Charbonnier et al. 6 report a marking recognition process, which relies on finding stretches of linesusing a horizontal scan line and then using Radon transform to find the most probable two lines whichmake up the start and the end of a rectilinear marking. Recognition of an arrow is based on comparingprojection of the left and right of the identified rectilinear marking. In this work no perspectivecorrection is done prior to recognition process and resulting performance is not real-time.Rebut et al. 7 describe an extensive method recognising four classes of arrows and linear markings.They initially use Hough transform for linear marking detection and an arrow pointer template to findthe location of the arrow symbols. When marking objects are located on the road surface a Fourierdescriptor is then used to extract key features from which a k-Nearest Neighbour (k-NN) classifier isused for final recognition. Training is achieved using a database of sample arrow marking images withfurther samples created by adding noise to the limited initial data set. A Fourier feature descriptor ofdegree 34 resulted in an overall global detection error of 6% but a significant false alarm rate of 30%.Again real-time performance was not achieved as processing was carried out off-line in a post-analysisapplication setting.More recent work on this topic 29 follows a similar shape-based methodology to that proposed herebut is limited to on-road arrow recognition and uses a limited feature set that drives a classifier poorlysuited to the complex alpha-numeric character sequences, under degraded quality conditions, consideredhere. In other recent work 30 an Eigenspace recognition approach is proposed but is reliant on goodautomated road glyph extraction and sample alignment (as per seminal Eigenspace recognitionapproaches 31). 
Unlike the methodology proposed here to deal with such variation under in-situon-vehicle operation under varying environmental conditions, 30 does not address either of theseissues. Detection rates (for a small set of isolated on-road glyphs only) are comparable to those achievedhere but 29,30 do not consider complex sequence recognition in the presence of glyph extraction androad position related noise.By contrast our method uses a range of features that include invariant spatial moments, histogramprojections and normalised angular measurements that are input to a trained neural network forreal-time symbol recognition. This approach offers a significantly lower false alarm rate, within thebounds of real-time performance over a much larger set of symbol classes (6 on-road arrow types and17 alpha-numeric characters) than prior work in the field 6,7,29,30. In addition it facilitates therecognition of complex multi-glyph sequences under both varying road, lighting and marking qualityconditions3 Perspective image correctionAs a pre-processing stage to our feature extraction approaches we first perform perspective correctionon the images obtained from the vehicle mounted, forward facing camera (e.g. Fig. 2). This isperformed via a one-time calibration process of vanishing point (VP) detection and subsequent inverseperspective mapping (IPM).3.1 Vanishing point detectionA vanishing point (VP) is a point in perspective images to which parallel lines converge. Conventional2D images are essentially a transformation of the 3D world onto 2D plane(image plane). Following a classical pinhole camera model 23 parallel lines (e.g. road edges) within河海大学文天学院毕业设计(论文)34the 3D scene appear to meet at a point within the 2D image-dependent camera angle and lenscharacteristics 8,23.An example is shown in the road edges shown in Fig. 1 .Fig. 1Temporal filtering of Canny edge detector output. Upper standard Canny edge detection output. Lower temporalfiltering result of Canny edge image sequence.In general, images that illustrate such a perspective (i.e. perspective images) can have up to three suchvanishing points located on the image boundary, outside the boundary (external) or at infinity (i.e. fardistance within the image, denoted as the infinite vanishing pointe.g. Fig. 2 ).Fig. 2 IPM Transform applied to example road imageThe vanishing point closest to the centre of the image, the dominant vanishing point, is commonly usedfor perspective correction in road scene images via camera calibration. The first stage in this process isthe detection of the VPs within the image.Classical VP detection is based on mapping edge lines detected within the image onto a unit Gaussiansphere as first described by Barnard 8. Each line creates a circum-circle on the sphere with themaximal accumulated intersecting region of these circum-circles defining the vanishingpoint locations. Further developed by various authors 911 and the capability for boundary, externaland infinite VP detection makes this a popular approach.However, recent studies show that such Gaussian sphere techniques, although simplifying theunbounded problem to a bounded search space, can produce spurious and false results especially in thepresence of noise or texture 12,13. An alternative, less prevalent approach is the use of a polar spaceaccumulator as originally described by Nakatani 14. 
As each point can be represented in polar space河海大学文天学院毕业设计(论文)35by a sinusoid, this improvement uses error minimising of the sinusoids to find the convergence 15.Clustering of lines in Hough space has also been proposed as an alternative method whereby regularHough line detection is then followed by clustering line segments to candidate vanishing points 16.In this work we have used a variant on this latter approach 11,12 which also uses the classicalHough transform as in the initial stages of VP detection. The output of Canny edge detection 32performed on a down-scaled (320 240) and Gaussian smoothed version of the image is fed into atemporal filter defined as follows:From Eq. 1, this temporal filter Tt at time t operates over a number of images i in a given calibrationsequence. Fi is the processed image frame at time i and n is the number of cumulative frames used togenerate an accumulated edge output, Tt . This output, Tt , is then normalised before further processingfor Hough-based line detection. As shown in Fig. 1 this temporal filtering will attenuate edgefluctuations that are associated with noise (trees, shadows, etc., shown in Fig. 1 upper) in any givenframe and will enhance edges that are constant over n frames such as the road markings and boundarieswe desire for VP detection (Fig. 1 lower). A short sequence of n frames, readily obtainable at 25 Hzfrom a modern video source, over a short distance of roadway gives a suitably stable scene for such amulti-frame temporal approach to be applicable.This output of Tt is then used to find linear edge features, for VP detection, using a classic Houghtransform method 11 within each frame t based on the previous n frames. The maximally detected llines are extracted from each frame based on their Hough space accumulator value after the exclusion oflines falling within orientation threshold lt of the vertical or horizontal 11,12.From this set ofl lines (here using l = 60), we then find the intersection points of all possible linepairings. These points are then clustered based upon a k-NN clustering approach in 2D image space(here using k = 3 for maximal presence of 3 VP in image). Each resulting cluster is then given asuitability score as follows:The score for a given cluster U is calculated as the sum of Manhattan distances of all intersection points,(xi, yi), within the cluster from the vanishing point of the previous frame (xc, yc),calculated using thesame process with Ttfor frame t 1 (Eq. 2). The Manhattan distance was found to empirically offermore stable results at reduced computational cost than the standard Euclidean approach. Where noprevious VP is available, an arbitrary point (such as (0, 0) is used. A simple averaging on the pointsfrom the winning cluster of this scoring method is used to determine the final candidate vanishing pointfor the frame. This resulting VP is then further averaged with VP of previous frame t 1.This overall VP detection process converges to the correct vanishing point in approximately 100frames (i.e. 4 s video 25 fps) and acts as a one-time computationally expensive calibration processfor a given vehicle camera installation. The detected VP is then used to drive the inverse perspectivemapping (IPM) of the on-vehicle camera image.3.2 Inverse perspective mapping河海大学文天学院毕业设计(论文)36As previously mentioned, the perspective effect within the 2D images introduces artefacts which couldinterfere with successful feature extraction and recognition. 
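Before detailing the IPM construction, the vanishing-point calibration stage of Sect. 3.1 can be made concrete with a short sketch. This is a minimal illustration only, assuming OpenCV/NumPy and 8-bit grey input frames; the temporal filter is reconstructed as a running sum of Canny edge maps (the exact form of Eq. 1 is not reproduced in this copy), the clustering step is a crude stand-in for the paper's k-NN grouping, and the function names and parameter values are illustrative rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code): temporal Canny accumulation, Hough line
# detection and intersection clustering to estimate the dominant vanishing point.
import cv2
import numpy as np

def temporal_edge_filter(gray_frames, n=25):
    """Accumulate Canny edge maps over the last n downscaled 8-bit grey frames and
    normalise to [0, 255] (a reconstruction of the temporal filter T_t, Eq. 1)."""
    acc = np.zeros((240, 320), dtype=np.float32)
    for g in gray_frames[-n:]:
        small = cv2.GaussianBlur(cv2.resize(g, (320, 240)), (5, 5), 0)
        acc += cv2.Canny(small, 50, 150).astype(np.float32) / 255.0
    return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def estimate_vp(edge_img, prev_vp=(0.0, 0.0), l=60):
    """Hough lines -> pairwise intersections -> keep points closest (Manhattan
    distance, Eq. 2) to the previous VP estimate; return the averaged point."""
    lines = cv2.HoughLines(edge_img, 1, np.pi / 180, threshold=80)
    if lines is None:
        return prev_vp
    kept = []
    for rho, theta in lines[:l, 0]:
        deg = np.degrees(theta) % 180.0
        if 15 < deg < 75 or 105 < deg < 165:      # drop near-vertical/horizontal lines
            kept.append((rho, theta))
    pts = []
    for i in range(len(kept)):
        for j in range(i + 1, len(kept)):
            (r1, t1), (r2, t2) = kept[i], kept[j]
            A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) > 1e-6:
                pts.append(np.linalg.solve(A, np.array([r1, r2])))
    if not pts:
        return prev_vp
    pts = np.asarray(pts)
    # crude stand-in for the paper's k-NN clustering: keep the third of the
    # intersection points nearest to the previous frame's estimate
    near = pts[np.argsort(np.abs(pts - prev_vp).sum(axis=1))[: max(1, len(pts) // 3)]]
    return tuple((near.mean(axis=0) + np.asarray(prev_vp)) / 2.0)
```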
This issue is particular prevalent in theexample of ground plane objects that appear at a 4590 angle to the image plane of the camera.This is illustrated in Fig. 2 (left) with regard to the speed limit symbol (40) on the roadway in front ofthe vehicle camera.This effective of perspective can be overcome, although not entirely, by applying an inverseperspective transform using a technique known as inverse perspective mapping (IPM) 24. Theapplication of IPM requires six parameters 5: (1) focal length of camera , (2) height of camera fromground plane h, (3) distance of the camera from the middle of the road d, (4) vertical position of thecamera along the road axis l, (5) pitch angle , (6) yaw angle .Whilst the first four parameters are obtainable empirically from the vehicle camera installation, thefinal yaw and pitch angle parameters are retrieved via the vanishing point (xvp, yvp) identified in theprior detection exercise via Eq. 3 5The IPM transform 24 then maps point (u, v) in the vehicle camera image (dimension M N) topoint (x, y) onto the road plane (Eq. 4). This mapping to the road plane of the image pixels (flattened asz is always zero, Eq. 4) can then be extracted as an image of the ground plane with perspectivedistortion effects removed 24.The required mapping (Eq. 4) although computationally expensive only requires calculation once fora given set of calibrated vanishing points. It can then be stored for use, as a real-time mapping, for allsubsequent image frames from the on-vehicle camera.An example of this inverse mapping transform is shown in Fig. 2 where we see an image frame froman on-vehicle camera (Fig. 2, left) transformed to an inverse perspective mapping image of the roadwayground plane (Fig. 2, right) based on the detection of vanishing points as outlined previously. As isapparent in Fig. 2, the on-road marking in the transformed image have had the effects of perspectiveapparent in the original partially removed. This mapped image is significantly more viable as an inputfor constructing a robust method of road-marking extraction.4 Road-marking extractionRoad-marking extraction involves the binarisation (i.e. threshold based extraction) of the road sceneresulting from the application of the IPM transform to facilitate road surface glyph isolation using acontour-based approach.Achieving robust thresholding in the presence of extreme light and shadow variations is a classicalchallenge in image processing that has plagued earlier work 5,6. Numerous noise sources (shadows,sun/headlight/streetlight reflection, road surface debris and decay) interfere with the process. Broggi 5河海大学文天学院毕业设计(论文)37proposed an image enhancement method using custom localised adaptive thresholding which producedsuccessful results albeit with a significant, non-real-time computational cost.In this work we propose a related adaptive global thresholding approach driven from globalhistogram information on a per image basis within any given road image sequence.Fig. 3 Choosing four thresholds with p = 0.02 and q = 0.17 and a 256 bin cumulative histogram4.1Adaptive image thresholdingIn general an N-value adaptive global threshold approach is employed to create N-separate binaryimages for subsequent shape isolation. The normalised cumulative histogram 17 of the resulting IPMtransformed image (in grey scale) is used to establish these thresholds. 
An upper and lower border, pand q, for the range of interest in this histogram are established as percentile offsets of the normalisedcumulative histogram maximum value (1.0). This range is within the histogram is then equallysub-divided into N 1 subranges via the creation of N thresholds. Here, for road scenes, we use N = 4and create four thresholds from three subranges. For example if p = 0.02 and q = 0.17 (2nd and 17thpercentile) then we then choose cumulative threshold value k = 0.02, 0.07, 0.12, 0.17 (for N = 4) andthen find the corresponding image pixel value thresholds as the lowest index (i) cumulative histogram(Hi) bin with a value greater than or equal to 1.0 k.Fig. 4 Adaptive thresholding under extreme lighting variationsAs illustrated in Fig. 3 for p = 0.02 and q = 0.17 (2ndand 17th percentile) the corresponding upper and河海大学文天学院毕业设计(论文)38lower thresholds fall at 254 and 247 with the two intermediate thresholds (7th and 12th percentile)falling at the equidistance index positions of 252 and 250. Overall, this algorithmic approach isolatesN-boundaries based on the cumulative distribution of the pixel values within the IPM transformedimage from which N binary images, corresponding to differing shape isolation characteristics, can thusbe extracted.A remaining problem is that the distribution of the IPM image will vary substantially depending onthe presence/absence and size of any road markings in the image frame. Using fixed values for p and qthus leads to spurious false-positive glyph detection due to poor threshold selection under certainconditions. This is dealt with by reference to the overall mean intensity of the grey scale image frame,avg(Image), and the scaling of p and q on a per frame basis as i = C(i/avg(Image) where constant C isset empirically to C = 128 for 8-bit grey scale images and i = p, q.The most challenging glyph extraction conditions are generally found in bright sunlight conditions. InFig. 4 (right) we see an example of thresholding using the proposed approach under such conditions. Ofthe four binary images produced (Fig. 4, left) we see the arrow glyph easily becomes disconnected in allbut one (Fig. 4, top leftmost). This use of a multi-level adaptive threshold approach facilitates robustconnected glyph extraction even in the presence of extreme lighting variations and noise (e.g. shadowsof Fig. 4, right).Overall the approach performs well as a robust, real-time methodology for glyph extraction from theroad surface that operates successfully both under daylight and night driving conditions over a widerange of illumination conditions.4.2 Shape isolationFrom these binary images, a set of connected image contours are extracted using a backtrackingapproach 17 prior to simplification into a closed polygon shape representation using the DouglasPeuker derivative of 18. This is performed on over all four versions of the road surface IPMtransformed image that result from earlier multi-level adaptive thresholding.Figure 5 shows some examples of the closed contour con- figurations extracted from these images fordiffering types of road-marking glyph. On the left (in Fig. 5), we see the IPM input image whilst on theright we show the simplified polygon shapes extracted from the four levels of binary thresholdingapplied to the IPM input. Notably the complexity of extracted contours does vary significantlyFig. 
5 Examples of shape isolation in the post-adaptive threshold images4.3 Shape post-processing河海大学文天学院毕业设计(论文)39In order to simplify the later recognition task, and also as an initial method of false-positive glyphfiltering we perform two additional stages of shape post-processing: complexity rejection andorientation normalisation.Complexity rejection considers the complexity of the resulting simplified polygon representationextracted from the image contours 18 with a view to excluding overly simple or complex shapecontours from further processing. At present this is performed using explicit minimal and maximalbounds on the number of segments each polygon contains. Currently, polygons with less than threesegments or more than 35 segments are excluded (by empirical selection). Segments below thiscomplexity are commonly found to be the rectilinear lane marks on the road surface (e.g. Fig. 2,left/right) after the contour smoothing applied by 18. Those above this complexity are almostinvariably blocks of foreground segmentation noise originating from the binarisation process (e.g. Fig. 4,top left) rather than genuine road-marking glyphs.In addition we also perform orientation normalisation of the extracted shape segments in order tocompensate for both distortion noise in the IPM transform (Sect. 3, e.g. Fig. 2) and any misalignment ofthe vehicle camera to the roadway (e.g. due to vehicle turning/over-taking/lane transition or camber inthe road itself).This is performed by identification of the object axis of the closed contour itself via reference to thecentral moments of shape, i j 19. As shown in Eq. 5 axis angle can be derived as the arctangent ofthe first and second order central moments.5 Road marking recognitionThe final stage of our pipeline is road marking recognition based on feature extraction, neural networkclassification and final post-processing. This is performed on the remaining extracted road markingobjects with overly complex shapes rejected and the remaining shapes normalised to a common axisorientation as previously defined.5.1 Feature extractionFeature extraction extracts a feature vector representation for each potential glyph extracted and retainedin earlier shape isolation and post-processing. In feature extraction for character recognition tasks suchas this, it is generally recommended to use a ratio of 510 times the number of features per class as tothe number of classes in the recognition problem20. Here with defined 23 classes (one per glyph) we employ a feature vector of 118 elements (ratio 9.5 features to classes).This feature vector is primarily constructed from the aspect ratio of the glyph (i.e. height divided bywidth), normalised central moment measurements of shape 20, Hu spatial moments 25 andhorizontal and vertical projection of the glyph. In addition a novel set of glyph features is alsointroduced based on the fuzzy zoning of angles extracted from the simplified polygon representationextracted from the glyph contour.For fuzzy zoning of the angles, we calculate the angle of the contour at each edge connection on the河海大学文天学院毕业设计(论文)40contour outline, normalise them (range 0179) and thus create two sets of 9-element features (oneeach for acute and obtuse angles) via fuzzy set grouping based on spatial position (fuzzy zoning).These zones are defined as a 9-way (3 3) grid division mapped onto the extracted area of theglyph (as shown in Fig. 7). The exact area (in pixels) of each zone within thegrid is adapted to be anequal division of the overall glyph area. 
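Stepping back briefly to the multi-level thresholding of Sect. 4.1, the threshold selection can be sketched in a few lines. It is a minimal illustration under stated assumptions (8-bit grey IPM input, OpenCV/NumPy); the constant C = 128 and the percentile bounds follow the text, the rescaling of p and q is read as p' = C·p/mean(Image), and the helper name is ours, not the authors'.

```python
# Minimal sketch (not the authors' code) of the N-level global thresholding of
# Sect. 4.1, driven by the normalised cumulative histogram of the IPM image.
import cv2
import numpy as np

def multilevel_binaries(ipm_gray, p=0.02, q=0.17, n_levels=4, C=128):
    """Return n_levels binary images whose thresholds are spread between the
    p-th and q-th cumulative percentiles, with p and q rescaled by the frame's
    mean intensity (assumed form: p' = C * p / mean)."""
    scale = C / max(float(ipm_gray.mean()), 1.0)
    hist = cv2.calcHist([ipm_gray], [0], None, [256], [0, 256]).ravel()
    cum = np.cumsum(hist) / hist.sum()              # normalised cumulative histogram
    binaries = []
    for k in np.linspace(p * scale, q * scale, n_levels):  # e.g. 0.02, 0.07, 0.12, 0.17
        t = int(np.searchsorted(cum, 1.0 - k))      # lowest bin with cum >= 1 - k
        binaries.append(np.where(ipm_gray >= t, 255, 0).astype(np.uint8))
    return binaries
```

Returning to the fuzzy zoning of the angle features: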
Membership of a given pixel to a given zone is not exclusivelydisjunctive with boundary pixels having membership of multiple zones via the definition of overlappingfuzzy zone boundaries (Fig. 7).Fig. 7 Fuzzy angle zones mapped onto an example extracted glyphAccumulations are made on a per zone basis for angular features within the glyph contour. Within thefuzzy border regions such features only partially contribute to the overall zone accumulation relative toits position in the border region. Features within a zone contribute 1.0, those outside0 and a range 01.0 for fuzzy zone contributions. Over all zones the sum of contributions made by agiven feature to all accumulators will always be 1.0. This is illustrated in Fig. 7 where we see anexample glyph mapped to a set of nine (33) fuzzy zones with overlapping boundaries that correspondto these varying feature contributions over multiple zones.One element in the feature vector will thus represent the acute and obtuse accumulator for each of thenine angle zones covering the spatial layout of the glyph. Formally, we can define each suchaccumulator follows for a glyph contour with N angular features:In Eq. 6iis the resulting accumulation for zone i, and (i, j) is fuzzy membership relating to theposition of angular feature j to zone i.Finally, in addition to this concept of fuzzy spatial angular accumulator measures we also employ asecondary linear angular profile measure over the length of the external boundary contour of the glyph(orientated clockwise from the topmost point). This essentially constructs a fuzzy contour orientationprofile for the simplified object. Given the near scale-constancy of road-markings, we consider theexternal boundary contour as N equilength segments over its length (here we empirically use N = 25 forUK road-marking scale). Each segment is represented by the last (polygon joint) angle along its lengthfrom the common contour origin (topmost contour point) for each glyph shape. Where a given segment河海大学文天学院毕业设计(论文)41has no angle over its length it is represented by the angle from the previous segment (initialised from 0for first segment).This angular profile along the length of the exterior glyph contour is then encoded as a normalisedfour-element vector n. This proportionally assigns weights to multiple vector elements to create anovel multi-variate representation for segment angle (degrees) and vector elements i for i = 0 .3 (see Eq. 7).Overall feature extraction results in an 118-element feature vector descriptor of each glyphcomprising of aspect ratio (one element), normalised central moments (seven elements up to order 3),Hu spatial moments (seven elements up to spatial order 3), horizontal/vertical histogram projections (50vertical and 35 horizontal) and fuzzy zoning of angles (nine for acute and nine for obtuse). This featurevector description forms the input representation to a trained neural network classifier used forrecognition.Overall these features are selected to give an overall descriptor that combines established globalmeasures of glyph shape (aspect ration, moments, histogram projections) and relative geometric featureplacement (fuzzy zoning of angles). Here multiple spatial/geometric measures are used in a “rich”descriptor to facilitate (a) differentiation over a larger glyph alphabet and (b) increased robustness tonoise over a large test evaluation set when compared to earlier (feature-sparse or reduced-feature) works6,7,29,30. 
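As a concrete reading of the feature assembly just described, the sketch below computes a reduced descriptor from a binary glyph image. It assumes OpenCV/NumPy and a non-empty 0/255 glyph mask; the zoning here is a hard (non-overlapping) 3x3 grid rather than the paper's fuzzy overlapping zones, the central-moment and angular-profile terms are omitted for brevity, and all function names are illustrative rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a reduced glyph descriptor in the
# spirit of Sect. 5.1: aspect ratio, Hu moments, projections and angle zoning.
import cv2
import numpy as np

def glyph_features(glyph_bin, n_v=50, n_h=35):
    ys, xs = np.nonzero(glyph_bin)
    crop = glyph_bin[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(np.float32)
    aspect = crop.shape[0] / max(crop.shape[1], 1)            # height / width

    hu = cv2.HuMoments(cv2.moments(crop)).ravel()             # 7 spatial moments

    col, row = crop.sum(axis=0), crop.sum(axis=1)
    v_proj = cv2.resize(col.reshape(1, -1), (n_v, 1)).ravel() # fixed-length column
    h_proj = cv2.resize(row.reshape(1, -1), (n_h, 1)).ravel() # and row projections
    v_proj /= max(v_proj.max(), 1e-6)
    h_proj /= max(h_proj.max(), 1e-6)

    # Hard 3x3 zoning of contour-corner angles (acute vs. obtuse accumulators);
    # the paper instead uses overlapping fuzzy zones (Eq. 6).
    acute, obtuse = np.zeros(9), np.zeros(9)
    cnts, _ = cv2.findContours(crop.astype(np.uint8), cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
    poly = cv2.approxPolyDP(max(cnts, key=cv2.contourArea), 2.0, True)[:, 0, :]
    h, w = crop.shape
    for i in range(len(poly)):
        a, b, c = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
        v1, v2 = a - b, c - b
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-6)
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        zone = 3 * min(int(3 * b[1] / h), 2) + min(int(3 * b[0] / w), 2)
        (acute if ang < 90 else obtuse)[zone] += 1

    return np.concatenate([[aspect], hu, v_proj, h_proj, acute, obtuse])
```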
Empirical results show a further increase in feature vector complexity reduces overallperformance of the approach dueto the prevalence of noise (e.g. increased granularity of fuzzy anglezones) whilst earlier work illustrates lesser results over a smaller feature set 6,7,29,30. The use of alarger multifaceted feature set in this work mitigates the overall effect of isolated feature stability undervarying road conditions (Sects. 4/6) when combined with a noise-tolerant classification approach (Sect.5.2). Future work will investigate feature stability under such conditions for the derivation of an optimalsubset where applicable.5.2 Neural network classifierHere we use a single hidden layer artificial neural network classifier 26 with the sigmoid activationbased on training with resilient back propagation (RPROP) 27. This operates on a 118-node inputlayer (same as feature vector), 69-node hidden layer and 23-node output layer (size of glyph alphabet).The number of hidden nodes was set empirically based on minimising the generalisation error overunseen test data sample in order to avoid overfitting 21.Classifier training/testing was performed based on 1022 sample glyphs manually extracted from realroad footage sequences with randomly selected 8020% ratio subsets used for training and testingrespectfully 26. Training takes order of 20s. for the current glyph alphabet in use. Parameter variationsare considered and presented in terms of those selected for the final ANN configuration. Otherparameter choices made are selected empirically and shown to be robust over varying marking quality,weather conditions, scene cluttered and illumination conditions.The current alphabet includes 6 arrow classes (straight, to Left, to Right, double To Left, double ToRight and bifurcation) and 17 characters. Individual glyphs were only included on the basis of suitableavailable training data and the current set could be expanded given sufficient training examples forfurther glyph types. Due to the similarity of three pairs of glyphs (S and 5, O and 0, A and 4) these weredefined as three single glyphs (S/5, O/0, A/4) with further differentiation performed in post-processing河海大学文天学院毕业设计(论文)42with relation to word context.The neural network implementation in use 26 was trained on the basis of an output vector offloating point values (range 1 to +1) where the maximal vector entry indicates the classification of theglyph. This notion of output classification requires two additional thresholds: (a) a maximal score abovewhich the output vector entry must be to be considered a valid classification and (b) a maximalseparation distance between the maximal output vector entry and the second maxima in the output.Imperially a maximal score threshold of 0.7 and a maximal separation distance threshold of 0.2 wereselected to with output results not satisfying these criteria resulting in a null classification for the givenglyph character.5.3 Post-processingAfter individual glyph classification via neural network a secondary stage of post-processing isemployed to identify coherent and consistent glyph patterns (words and symbol structures) on theroadway. In terms of the overall system-level results (Sect. 6) this stage is as important as itspredecessor in false positive elimination. 
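The two classification thresholds quoted above lend themselves to a small decision rule. The sketch below assumes that the 0.2 figure is a minimum margin between the two highest outputs (the text labels both thresholds "maximal", which reads as a slip for the separation criterion); the function name and types are illustrative.

```python
# Minimal sketch (not the authors' code) of the output-vector decision rule of
# Sect. 5.2: accept the top class only if it is confident and well separated.
import numpy as np

def decide_glyph(output_vec, classes, min_score=0.7, min_margin=0.2):
    """Return the winning class label, or None (null classification) if the best
    output is below min_score or too close to the runner-up."""
    order = np.argsort(output_vec)[::-1]
    best, second = output_vec[order[0]], output_vec[order[1]]
    if best >= min_score and (best - second) >= min_margin:
        return classes[order[0]]
    return None
```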
The post-processing strategy has been implemented separatelyfor the arrow and character subsets of the overall glyph alphabet.Arrows Here we have used a simple post-processing method based on a temporal accumulator valueover multiple image frames in the road sequence. This assists in the elimination of “single-frame”false-positive glyph classifications by providing temporal consistency filtering. For every arrow shaperecognised in a frame t, via the previous neural network classifier, value s is added to the accumulator ofthat class,Acct(Ci), as depicted in Eq. 8 where Xi is a glyph instance classified as class Ci .The use of the min() function in Eq. 8 ensures the accumulator for a given class Acct(Ci), will notexceed a threshold . Every class, Ci,for which a glyph instance is not found (i.e. classified) in framet has its accumulator, Acct(Ci) decremented by decay factor s to a minimal value bound of 0. Usingsuch a decay factor ensures that after a few frames, without a classification for class Ci the associatedaccumulator,Acct(Ci), will suitably decay below a defined display threshold.Text patterns The post-processing of text patterns to provide a simplified form of“word recognition”is achieved in two stages: word coherence and matching. The first stage, word coherence, attempts tofind lines of characters that construct individual words whilst the second stage of matching applies afuzzy match against a set of pre-defined dictionary words/patterns.Given the context of on-road marking recognition and the associated limited vocabulary this isdeemed more appropriate than a full classical HMM-based text pattern recognition approach 28. Anempirical study of the vocabulary of words allowed on UK roads shows at most a set of 1015 regular,non-geographic “words” that are encountered within this context.Our dictionary defines 19 exemplar words (road numbers, speeds, directions, labels): A5, 20, 30, 40,CAR, WASH,SOUTH, LANE, SLOW, NO, ENTER, BUS, ONLY, KEEP,CLEAR, M1, A421, HOTEL,A509. The potential performance degradation of expanding this set to sub-100 such words isconsidered to be minimal.Initial word search spatial orders the glyph characters detected by earlier processing (left to right),identifies spatially distinct lines of text on the roadway scene and groups those lines into words based河海大学文天学院毕业设计(论文)43on relative spatial location of the glyphs. In the secondary matching step we make a fuzzy match againstthe dictionary list as follows.For defined dictionary word (glyph sequence) Cp, we match the detected word (glyph sequence) Wqfrom the roadway by maintaining an accumulator value,Cjp, for each characterj of Cp,In doing so, weassume that Cp has m characters and Wq has n characters and n = m is possible. The value of thisaccumulator, f (), is calculated as shown in Eq. 9 for each character, Cjp, which matches to character i ofWq ,denoted Wiq.In Eq. 9,s is an accumulation constant and and () is defined as in Eq. 10 for a given i and n (asexemplars).This function is designed to penalise the score of the detected word if position of eachdetected character is not close to the position of a similar character in the dictionary word that is beingmatched.An example of successful recognition of the glyph sequence corresponding to the word “SLOW” isshown in Fig. 8. Here we see the original roadway camera view (top left view), the IPM transformedimages and thresholded glyph text (bottom left view) and the positional overlay of the detected andrecognized glyph pattern (e.g. 
“SLOW”) overlain onto the IPM transform of the roadway scene(bottom right view). The top right view (Fig. 8) shows the current edge pattern of the roadway forreference. A label of the recognised text is also shown in the main top left view (Fig. 8). Notably thisresult has been obtained at night based on vehicle headlight illumination of the road markings.Fig. 8 Successful recognition of the on-road word “SLOW” in night video footage6 ResultsWe present a series of the results of the proposed approach operating on UK rural and urban roads undervarying weather and lighting conditions. All of the video sequences used were captured at 15 fps using a640 480 resolution digital camera, mounted behind the windscreen, with the speed of the vehiclevarying between 30 70 mph (as per UK traffic regulations). An example of the camera to road河海大学文天学院毕业设计(论文)44perspective is shown in Fig. 9 (top left).Fig. 9 Recognition of two arrows in the same frameImage frames retrieved from the capture device are initially pre-processed via the VP detectionmodule (Sect. 3) for one-time initiation of the IPM transform parameters. These are down-sampled to a320 240 resolution for processing from which a road-way sub-region of interest defined in initialcalibration (e.g. Fig. 9, bottom left). Post-initialisation all video frames were processed in real-time atapproximately 11 15 fps (1.5 GHz CPU, single core in use) with each video frame takingapproximately 6090 ms to process. As expected processing time varies slightly with road markingcomplexity due to the number of glyphs to be processed in a given frame.The approach generally performed very well under varying lighting and weather conditions. Figures 9and 10 show the successful recognition of multiple arrow and multiple line word markings underdaylight conditions. Figure 8 shows a similar successful recognition under night conditions. In all of theexamples (Figs. 8, 9, 10, 11) we see the original roadway camera view (top left view), the IPMtransformed images and thresholded glyph text (bottom left view) and the positional overlay of thedetected and recognized glyph pattern overlain onto the IPM transform of the roadway scene (bottomright view).Fig. 10 Recognition of two words “CAR” and “WASH”河海大学文天学院毕业设计(论文)45By contrast Fig. 11 shows the erroneous detection of the “A40” road marking due to multiplefailures in the detection of the “KEEP CLEAR” marking pattern. This is largely attributable to brokenglyph characters caused by road noise (poor/worn markings), light reflection via the windscreen in frontof the camera and the elongated nature of this particular marking with regard to the camera field of view(FoV). Improved camera placement and FoV engineering could lookto overcome this issue in place of adjustments to the processing methodology itself.Fig. 11 Erroneous recognition of “40” speed limit in place of “KEEP CLEAR”From Table 2, we can see the overall combined recognition result for the initial glyph classificationand the post processing arrow/glyph sequence consistency checks.河海大学文天学院毕业设计(论文)46Direct comparison with prior work in this area 6,7,29,30 is practically difficult and thus notpresented here. Empirical studies on simple non-feature based approaches (e.g. Template matching 23,also 6,7) yield poor performance due to variation in the geometric skew of the glyphs and deal with asignificantly lesser glyph alphabet that considered here. 
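Returning briefly to the arrow post-processing of Sect. 5.3, the bounded accumulator with decay (Eq. 8) can be sketched as a few lines of state-keeping. The values of s, the cap and the display threshold below are placeholders, not those used by the authors.

```python
# Minimal sketch (not the authors' code) of the per-class temporal accumulator
# of Eq. 8: bounded growth on detection, decay towards zero otherwise.
class ArrowAccumulator:
    def __init__(self, classes, s=1.0, cap=5.0, display=3.0):
        self.acc = {c: 0.0 for c in classes}
        self.s, self.cap, self.display = s, cap, display

    def update(self, detected_classes):
        """Update all class accumulators for one frame and return the classes
        currently above the display threshold."""
        for c in self.acc:
            if c in detected_classes:
                self.acc[c] = min(self.acc[c] + self.s, self.cap)  # bounded growth
            else:
                self.acc[c] = max(self.acc[c] - self.s, 0.0)       # decay to zero
        return [c for c, v in self.acc.items() if v >= self.display]
```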
Future work could consider further comparativeevaluation of 6,7,29,30 over the glyph alphabet used here.In order to quantify the performance of the system we present isolated glyph recognition rates for theneural network classifier over a set of extracted and pre-processed (as per Sect. 4) glyph examples inTable 1. In addition we present overall system performance over a set of test sequences in varyingconditions in Table 2. As we see from Table 1, isolated glyph recognition over the test set generallyperforms very well based on the current feature selection used for classification ( 75 100%).Sub-optimal recognition on the bifurcation arrow is encountered in some cases due to its similarity (at agiven scale) to the glyph character “U”.Alphanumeric character recognition was in general verygoodbut suffered in some instances from limited variation in training examples. Improved training, over alarger set of training examples would potentially address both of these issues.These results are shown for a variety of weather, illumination and environmental conditions. Overallwe see 85% successful recognition for arrows and 81% recognition for the 19 dictionary textpatterns/words. False detection are minimal and this is largely aided by the post-processing employedafter initial glyph classification. For reference Table 2 refers to the UK location of the sequence and theenvironment/weather/illumination conditions under which the testing was performed.Table 2Arrow and textrecognition results over sixdifferent video sequences7 SummaryA methodology for the automatic recognition of on road markings is presented that operates, undervarying test conditions, with approximately 81 85% success based on the data sets available forclassifier training. This approach has been shown to work within a viable real-time constraint(1115fps) both in daylight and at night. Prior work in this area is very limited 6,7,29,30 and no prior workon the recognition of on-road textual words has been carried out.Extensive results are shown undervarying conditions, vehicle speeds and environmental conditions.The approach is based on theconstruction of an IPM transformed image of the roadway derived from an initial calibration of河海大学文天学院毕业设计(论文)47automatic vanishing point detection. This topdown roadway view forms the input to robust featureextraction supported by a unique illumination resistant multi threshold.A trained single layer neuralnetwork classifier is then used for individual glyph classification prior toglyph word/patternconstruction via a simple post-processing matching methodology.Future work will investigate the use ofalternative shape features for glyph description, the extension of system glyph vocabulary andword/pattern dictionary and the potential use of alternative classifiers for the core glyph classificationtask.河海大学文天学院毕业设计(论文)48参考文献1. Bishop, R.: Intelligent vehicle applications worldwide. Intell. Syst.Appl. IEEE 15(1), 7881 (2000)2. Campbell, N.W., Pout, M.R., Priestly, M.D.J., Dagless, E.L., Thomas, B.T.: Autonomous road vehicle navigation.Eng.Appl. Artif. Intell. 7(2), 177190 (1994)3. Maurer, M. Dickmanns, E.D.: A system architecture for autonomous visual road vehicle guidance. ITSC 97. IEEEConference on Intelligent Transportation System, pp. 578583 (1997)4. Hoffmann, G.M., Tomlin, C.J., Montemerlo, F., Thrun, S.: Autonomous Automobile Trajectory Tracking forOff-Road Driving: Controller Design, Experimental Validation and Racing. American Control Conference, 2007 ACC07,pp. 22962301 (2007)5. 
References

1. Bishop, R.: Intelligent vehicle applications worldwide. Intell. Syst. Appl. IEEE 15(1), 78–81 (2000)
2. Campbell, N.W., Pout, M.R., Priestly, M.D.J., Dagless, E.L., Thomas, B.T.: Autonomous road vehicle navigation. Eng. Appl. Artif. Intell. 7(2), 177–190 (1994)
3. Maurer, M., Dickmanns, E.D.: A system architecture for autonomous visual road vehicle guidance. In: ITSC '97, IEEE Conference on Intelligent Transportation System, pp. 578–583 (1997)
4. Hoffmann, G.M., Tomlin, C.J., Montemerlo, F., Thrun, S.: Autonomous automobile trajectory tracking for off-road driving: controller design, experimental validation and racing. In: American Control Conference, ACC '07, pp. 2296–2301 (2007)
5. Broggi, A.: Robust real-time lane and road detection in critical shadow conditions. In: Proceedings of the International Symposium on Computer Vision, pp. 353–358 (1995)
6. Charbonnier, P., Diebolt, F., Guillard, Y., Peyret, F.: Road markings recognition using image processing. In: IEEE Conference on Intelligent Transportation System, ITSC '97, pp. 912–917 (1997)
7. Rebut, J., Bensrhair, A., Toulminet, G.: Image segmentation and pattern recognition for road marking analysis. IEEE Int. Symp. Ind. Electron. 1(47), 727–732 (2004)
8. Barnard, S.T.: Interpreting perspective images. Artif. Intell. 21, 435–462 (1983)
9. Magee, M.J., Aggarwal, J.K.: Determining vanishing points from perspective images. Comput. Vis. Graphics Image Processing 26, 256–267 (1984)
10. Quan, L., Mohr, R.: Determining perspective structures using hierarchical Hough transform. Pattern Recognit. Lett. 9(44), 279–286, Elsevier (1989)
11. Lutton, E., Maitre, H., Lopez-Krahe, J.: Contribution to the determination of vanishing points using Hough transform. Pattern Anal. Mach. Intell. 16(4), 430–438 (1994)
12. Cantoni, V., Lombardi, L., Porta, M., Sicard, N.: Vanishing point detection: representation analysis and new approaches. In: Proceedings of the 11th International Conference on Image Analysis and Processing, pp. 90–94 (2001)
13. Almansa, A., Desolneux, A., Vamech, S.: Vanishing point detection without any a priori information. Pattern Anal. Mach. Intell. 25(4), 502–507 (2003)
14. Nakatani, H., Kimura, S., Saito, O.: Extraction of vanishing point and its application to scene analysis based on image sequence. In: Proceedings of the 5th International Pattern Recognition Conference (1980)
15. Matessi, A., Lombardi, L.: Vanishing point detection in the Hough transform space. In: Proceedings of the Fifth International Euro-Par Conference, Toulouse, France, pp. 987–994 (1999)
16. McLean, G.F., Kotturi, D.: Vanishing point detection by line clustering. IEEE Trans. Pattern Anal. Mach. Intell. 17(11), 1090–1095 (1995)
17. Pratt, W.K.: Digital Image Processing, 3rd edn. John Wiley & Sons, New York (2001)
18. Wu, S.T., Marquez, M.R.G.: A non-self-intersection Douglas-Peucker algorithm. In: Proceedings of the XVI Brazilian Symposium on Computer Graphics and Image Processing, pp. 60–66 (2003)
19. Sebe, N., Lew, M.S.: Robust Computer Vision: Theory and Applications. Kluwer, Norwell (2003)
20. Trier, Ø.D., Jain, A.K., Taxt, T.: Feature extraction methods for character recognition - a survey. Pattern Recognit. 29(4), 641–662 (1996)
21. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Oxford (1996)
22. Kastrinaki, V., Zervakis, M., Kalaitzakis, K.: A survey of video processing techniques for traffic applications. Image Vis. Comput. 21, 359–381 (2003)
23. Forsyth, D., Ponce, J.: Computer Vision: A Modern Approach. Prentice-Hall, New Jersey (2003)
24. Bertozzi, M., Broggi, A., Fascioli, A.: Stereo inverse perspective mapping: theory and applications. Image Vis. Comput. J. 8(16), 585–590 (1998)
25. Hu, M.-K.: Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 8, 179–187 (1962)
26. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Oxford (1996)
27. Riedmiller, M., Braun, H.: A direct adaptive method for faster backpropagation learning: the RPROP algorithm. In: Proceedings of the ICNN, San Francisco, pp. 586–591 (1993)
28. Kundu, A., He, Y., Bahl, P.: Recognition of hand-written word: first and second order hidden Markov model based approach. Pattern Recognit. 22(3), 283–297 (1989)
29. Li, Y., He, K., Jia, P.: Road markers recognition based on shape information. In: Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 117–122 (2007)
30. Noda, M., Takahashi, T., Deguchi, D., Ide, I., Murase, H., Kojima, Y., Naito, T.: Recognition of road markings from in-vehicle camera images by a generative learning method. In: Proceedings of the IAPR Conference on Machine Vision Applications, 155 (2009)
31. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3(1), 71–86 (1991)
32. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)

Real-time automatic road marking recognition using a feature-driven approach

Alireza Kheyrollahi, Toby P. Breckon

Abstract: Automatic road marking recognition is a key problem within vehicle vision, supporting both autonomous urban driving and driver-assistance systems offering enhanced situational awareness for navigation. Here we present an approach to this problem based on robust road marking feature extraction via a novel pipeline of inverse perspective mapping and multi-level binarisation. A trained classifier, combined with rule-based post-processing, then facilitates the real-time delivery of the required road marking information. The approach is shown to operate successfully over a range of illumination, weather and road surface conditions.

Keywords: computer vision; mobile robotics; road marking recognition; vanishing point detection; intelligent vehicles

1 Introduction

Autonomous driving and road intelligence have been a focus of attention for many computer vision researchers over the past twenty years [1]. Although considerable results have been achieved in developing tools that can perform some form of autonomously guided driving, progress has been slow due to issues of speed, safety and the real-time complexity of road situations. A driver gathers constant and numerous visual cues from the road and the surrounding environment, and our brains are of course very efficient at analysing this information and responding quickly with appropriate actions [2-4]. For a computer vision system to display a similar capability, it must also incorporate a range of detection capabilities, each of which has been the subject of significant research activity.

Although significant work exists on lane detection and tracking [5,22], work on road marking recognition is limited and no work on real-time recognition of on-road text has been reported in the literature. While the recognition of road markings (including arrows) and text is a relatively simple task for a human driver, automatic detection would be very useful, and perhaps necessary, in some scenarios for an autonomous vehicle, or as an aid to driver situational awareness in an increasingly complex road environment.

Here we present a multi-stage processing pipeline for road marking and text recognition. Image frames captured from the on-board camera are first pre-processed to remove the effect of perspective via an inverse perspective mapping (IPM) driven by automatic vanishing point (VP) detection. A multi-level thresholding approach then extracts bright road marking objects from the road surface. These objects are reduced to a contour representation and recognised by an artificial neural network (ANN) classifier. The resulting per-symbol (i.e. glyph-level) classification is then post-processed for either driver display or potential use by an autonomous driving decision engine. The approach is shown to operate in real time under a wide variety of driving, illumination and road conditions.

2 Prior work

Prior work in this area is limited; we briefly review the main works in the field [6,7,29,30]. Charbonnier et al. [6] report a marking recognition process that relies on finding extended lines using horizontal scan lines, and then uses the Radon transform to find the two most probable straight lines forming the start and end of a marking. Arrow recognition is based on comparing the left and right projections of the detected marking lines. This work performs no perspective correction prior to the recognition process, and the resulting performance is not real-time.

Rebut et al. [7] describe a broader approach recognising four classes of arrow and linear markings. They initially use the Hough transform for linear marking detection and an arrow-head template to locate arrow symbol positions. Once a marking object has been located on the road surface, Fourier descriptors are used to extract key features and a k-nearest-neighbour (k-NN) classifier performs the final recognition. The image database used for training is created from a limited initial data set of sample arrow markings extended with further noise-added samples. Fourier descriptors of degree 34 lead to an overall global detection error of 6% but a significant false positive rate of 30%. Again, real-time performance is not achieved, the processing being carried out off-line in a post-analysis setting.

More recent work on this topic [29] follows a shape-based approach similar to that proposed here, but is limited to on-road arrow recognition and uses a limited feature set driving a classifier unsuited to the complex alphanumeric character sequences, under degraded marking-quality conditions, that are considered here. Other recent work [30] uses a glyph recognition approach that depends on good automated road-surface extraction and sample alignment of the glyphs (via an eigen-subspace recognition approach in the manner of [31]); it does not address the scene variation encountered under the differing environmental conditions of vehicle operation that are handled here. The detection rates reported (for a small set of isolated on-road symbols only) are comparable with those achieved here, but [29,30] do not consider complex sequence recognition in the presence of glyph extraction and road-surface localisation noise.

By contrast, our approach uses a range of features, including invariant spatial moments, projections and normalised angular measures, as the input to a trained neural network for real-time glyph recognition. This approach offers a notably lower false positive rate, within the bounds of real-time performance, over a larger glyph class set (six on-road arrow types and 17 alphanumeric characters) than the prior work in the field [6,7,29,30]. In addition, it facilitates the recognition of complex multi-glyph sequences under varying road surface, illumination and marking-quality conditions.

Fig. 1

3 Perspective image correction

As a pre-processing stage for our feature extraction approach, perspective correction is first performed on the image obtained from the forward-facing on-vehicle camera (e.g. Fig. 2). This is carried out via a one-time calibration process of vanishing point (VP) detection and subsequent inverse perspective mapping (IPM).

3.1 Vanishing point detection

A vanishing point (VP) is a point in a perspective image at which parallel lines converge. A conventional 2D image is essentially a transformation of the 3D world onto a 2D plane (the image). Under a classical pinhole camera model [23], parallel lines in the 3D scene (such as road edges) appear to meet at a point in the 2D image, depending on the camera angle and lens characteristics [8,23]. An example is shown by the road edges in Fig. 1 (top).

In general, an image depicting such a perspective view can have up to three such vanishing points, located within the image boundary, outside it, or at infinity (i.e. at a far distance within the image, termed an infinite vanishing point; e.g. Fig. 2, left). The vanishing point closest to the image centre, the dominant vanishing point, is commonly used for perspective correction of road scene images via camera calibration. The first stage of this process is the detection of the VPs in the image.

Classical VP detection, first described by Barnard [8], is based on mapping edge detections in the image onto a unit Gaussian sphere. Each line creates a great circle on the sphere, and the location of the vanishing point is determined by the region of maximal accumulated intersection of these circles. Further developments by various authors [9,11] added the ability to handle bounded, external and infinite VP detection, making this a popular approach.

Fig. 2

However, recent studies have shown that such Gaussian sphere techniques, although they simplify the unbounded problem domain to a bounded search space, can produce spurious and false results, especially in the presence of noise and texture [12,13]. An alternative, more general approach uses an accumulator in a polar space, as first described by Nakatani [14]. Each point can be represented in the polar space by a sinusoid, and refinements of this use error minimisation to find the convergence of the sinusoidal curves [15]. Line clustering in the space has also been proposed as an alternative, whereby conventional Hough line detection is followed by clustering of the line segments into vanishing point candidates [16].

In this work we use a variant of this approach [11,12], also employing the classical Hough transform in the initial stage of VP detection. Canny edge detection [32] is performed on a down-scaled (320 × 240) and Gaussian-smoothed image input, to which a temporal filter is applied (Eq. 1). From Eq. 1, this temporal filter T_t at time t operates over multiple images i of a given calibration sequence, where F_i is the processed edge image frame at time i and N is the number of accumulated frames used to generate the accumulated edge output T_t. This output, T_t, is then normalised prior to further Hough-based line detection processing. As shown in Fig. 1, the temporal filtering attenuates edges that fluctuate with noise in any given frame (trees, shadows, etc., shown in Fig. 1, top), and reinforces edges that are constant over the N frames, such as the road markings and boundaries from which we wish to detect the VP (Fig. 1, bottom). Since a short sequence of N frames is easily obtained at 25 Hz from a modern video source, and the nearby roadway gives a suitably stable scene over this many frames, the temporal approach is applicable.
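Because Eqs. 1 and 2 are only described in words in this reproduction, the sketch below shows one way the described behaviour could be realised in Python/OpenCV: a running accumulation of Canny edge maps for the temporal filter just described, and the Manhattan-distance cluster score introduced in the next paragraph. The choice of n = 10 frames, the Canny thresholds and the accumulate-then-normalise form are assumptions for illustration, not values taken from the paper.

import cv2
import numpy as np

def temporal_edge_map(frames, n=10):
    """Accumulate Canny edge maps over the last n frames so that edges that are
    stable across frames (road markings, boundaries) are reinforced and
    transient edges (trees, shadows) are suppressed, as described for Eq. 1."""
    acc = np.zeros((240, 320), dtype=np.float32)
    for f in frames[-n:]:
        small = cv2.resize(f, (320, 240))
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)
        acc += cv2.Canny(gray, 50, 150).astype(np.float32)
    # Normalise the accumulated output before Hough-based line detection.
    return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def cluster_score(points, prev_vp):
    """Suitability score of one intersection cluster: the sum of Manhattan
    distances of its points from the previous frame's VP (cf. Eq. 2)."""
    px, py = prev_vp
    return sum(abs(x - px) + abs(y - py) for x, y in points)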
This output T_t is then used for linear edge feature based VP detection, employing the classical Hough transform approach [11] within each frame, based on the prior N frames. The maximal L lines detected in each frame are extracted according to their Hough space accumulator values, after excluding lines that fall within a threshold of the vertical or horizontal orientation [11,12]. From this set of at most L lines (here L = 60 is used) we then find the intersection points of all possible line pairs. These points are then clustered using a nearest-neighbour clustering approach in the 2D image space (here K = 3, for the presence of at most three VPs in the image). Each cluster is given a suitability score as follows: the score U computed for a given cluster is the sum of the Manhattan distances of all its intersection points (x_i, y_i) from the vanishing point (x_c, y_c) of the cluster identified by the same process in the previous frame t-1 (Eq. 2). The Manhattan distance was found empirically to give more stable results at a lower computational cost than the standard Euclidean measure. Where no previous VP is available, an arbitrary point (e.g. (0, 0)) is used. A simple mean of the scores from this scheme selects the winning cluster, which is taken as the final candidate vanishing point for the frame; this VP is then further averaged with the VP of the previous frame t-1. The overall VP detection process converges to the correct vanishing point in approximately 100 frames (i.e. about 4 s of video at 25 fps) and acts as a one-time, computationally expensive calibration process for a given on-vehicle camera installation. The detected VP is then used to drive the inverse perspective mapping (IPM) of the on-vehicle camera image.

3.2 Inverse perspective mapping

As noted previously, perspective effects in the 2D image introduce artefacts that can affect successful feature extraction and recognition. The problem is particularly prevalent where objects on the ground plane appear at a 45–90° angle to the image plane of the camera. An example is the speed limit marking ("40") on the road in front of the vehicle camera in Fig. 2 (left). This perspective effect can be overcome, although not completely, using a transformation technique known as inverse perspective mapping (IPM) [24].

Application of the IPM requires six parameters [5]: (1) the focal length of the camera, (2) the height h of the camera above the ground plane, (3) the distance d of the camera from the middle of the road, (4) the position of the camera along the road axis l, (5) the pitch angle and (6) the yaw angle. While the first four parameters can be obtained empirically from the on-vehicle camera installation, the final yaw and pitch parameters are retrieved from the vanishing point (x_vp, y_vp) determined in the preceding detection, via Eq. 3 [5].

The IPM transform [24] then maps a point (u, v) in the on-vehicle camera image (of dimension M × N) to a point (x, y) on the road plane (Eq. 4). The image pixels mapped to the road plane (z is always zero, i.e. a flat-road assumption in Eq. 4) can then be extracted as a ground-plane image with the perspective distortion effects removed [24]. The required mapping (Eq. 4), although computationally expensive, only needs to be computed once for a given calibrated vanishing point. It can then be stored and used as a real-time lookup mapping for subsequent image frames from the on-vehicle camera.

An example of this inverse mapping transform is shown in Fig. 2, where we see an image frame from the on-vehicle camera (Fig. 2, left) transformed into an inverse perspective mapped image of the roadway ground plane (Fig. 2, right), based on the vanishing point detection outlined previously. As the markings in Fig. 2 show, the effect of the angle evident in the original is largely removed in the transformed road image. This mapped image is significantly more viable as the input for constructing a robust road marking extraction approach.

4 Road marking extraction

Road marking extraction consists of binarisation (i.e. threshold-based extraction) of the road scene resulting from the IPM transform, applied to facilitate road-surface glyph isolation using a contour-based approach. Achieving robust threshold segmentation under extreme variations of light and shadow is a classical challenge in image processing that troubled earlier work [5,6]. Many sources of noise (shadows, reflections of the sun, headlights, street lights, road-surface debris and wear) interfere with the process. Broggi [5] proposed the use of a custom locally adaptive threshold that produced successful results, although at the notable, non-real-time computational cost of an image enhancement approach. In this work we propose an adaptive global thresholding approach driven on a per-image basis by the global histogram information of any given road image sequence.

4.1 Adaptive image thresholding

In general, an n-value adaptive global thresholding approach is used to create n separate binary images for subsequent shape isolation. The normalised cumulative histogram [17] of the IPM transformed (greyscale) image is used to establish these thresholds. The upper and lower boundaries, p and q, of the range of interest are set as percentile offsets from the maximum of the normalised cumulative histogram (≤ 1). The histogram within this range is then equally subdivided into n-1 sub-ranges through the creation of n thresholds. Here, for road scenes, we use n = 4, with three sub-ranges creating four thresholds. For example, if p = 0.02 and q = 0.17 (the second and seventeenth percentiles), then cumulative thresholds of k = 0.02, 0.07, 0.12, 0.17 (n = 4) are selected, and the corresponding image pixel-value threshold is found as the lowest index i of the cumulative histogram (h_i) bucket whose value is greater than or equal to 1-k.

Fig. 3

As shown in Fig. 3, p = 0.02 and q = 0.17 (the second and seventeenth percentiles) correspond to upper and lower bounds falling at pixel values 254 and 247, with the two intermediate values (the seventh and twelfth percentiles) falling at the equally spaced index positions 252 and 250. Overall, this algorithmic approach separates the IPM transformed image into n binary images based on boundaries in the cumulative distribution of pixel values, corresponding to different shape isolation characteristics that can thus be extracted.

A remaining issue is that the distribution of the IPM image varies substantially depending on the presence or absence, and the size, of any road markings in the image frame. Using fixed values of p and q therefore leads to false positive glyph detections under certain conditions due to the threshold selection. This is handled by reference to the mean intensity of the greyscale image frame being processed, avg(image), with p and q scaled on a per-frame basis as i = c(i / avg(image)), where the constant c is empirically set to c = 128 for 8-bit greyscale images and i = p, q.

The most challenging glyph extraction conditions are typically found in bright sunlight. In Fig. 4 (right) we see an example of thresholding using the proposed approach under such conditions. Of the four binary images produced (Fig. 4, left), the arrow inscription easily becomes disconnected in some, but not in all of them (Fig. 4, far left). The use of a multi-level adaptive thresholding approach thus facilitates robust extraction of connected glyphs even under extreme illumination variation and noise (such as the shadows of Fig. 4, right). Overall the approach performs well as a robust, real-time method for extracting glyphs from the road surface, operating successfully in both daytime and night-time driving over a wide range of lighting conditions.

Fig. 4

4.2 Shape isolation

From these binary images, a set of connected image contours is extracted using a back-tracking approach [17] before being simplified into a closed polygonal representation using a derivative of the Douglas-Peucker approach [18]. This is performed on all four versions of the IPM transformed road-surface image resulting from the earlier multi-level adaptive thresholding. Figure 5 shows some examples of the closed contours and the different types of road marking glyph shapes extracted from these images. On the left (Fig. 5) we see the IPM input image, while alongside we are shown the simplified polygonal shapes extracted from the four binary threshold levels applied to that IPM input. Notably, the complexity of the extracted contours also varies significantly.

Fig. 5

4.3 Shape post-processing

To simplify the later recognition task, and as an initial means of filtering false positive glyphs, two post-processing stages are performed: shape complexity rejection and orientation normalisation. Complexity rejection considers the complexity of the simplified polygonal representation of the image contour [18] in order to exclude overly simple or overly complex shape contours from further processing. Currently this is done using explicit minimum and maximum bounds on the number of segments each polygon contains: polygons with fewer than three segments or more than 35 segments are excluded (chosen empirically). Below this segment count are typically the contours of straight lane markings on the road surface (e.g. Fig. 2, left/right) after the contour smoothing of [18] has been applied; above this complexity are almost always blocks of foreground segmentation noise originating from the binarisation process (e.g. Fig. 4, top left) rather than genuine road marking glyphs.

In addition, we also perform orientation normalisation of the extracted shapes to compensate for distortion noise in the IPM transform (Sect. 3, e.g. Fig. 2) and for any misalignment of the on-vehicle camera with the roadway (due to the vehicle turning, overtaking/lane transition, or curvature of the road itself). This is performed by identifying the axis of the object from the closed contour itself, by reference to the central moments of the shape [19]. As Eq. 5 shows, the axis angle is derived from the arctangent of the first and second order central moments. As shown in Fig. 6, the rotation of the extracted shape is thus normalised to a standard position.

Fig. 6
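As a concrete illustration of the glyph extraction stage just described, the sketch below derives the n pixel-value thresholds from the normalised cumulative histogram (Sect. 4.1) and normalises contour orientation from the central moments (Sect. 4.3). Only p, q and n are taken from the text above; the per-frame mean-intensity rescaling of p and q is omitted for brevity, and the moment-based angle formula is the standard one rather than a transcription of the paper's Eq. 5.

import cv2
import numpy as np

def multilevel_thresholds(ipm_gray, p=0.02, q=0.17, n=4):
    """Pick n pixel-value thresholds from the bright end of the normalised
    cumulative histogram, between the p-th and q-th percentile offsets
    (Sect. 4.1)."""
    hist = cv2.calcHist([ipm_gray], [0], None, [256], [0, 256]).ravel()
    cum = np.cumsum(hist) / hist.sum()          # normalised cumulative histogram
    ks = np.linspace(p, q, n)                   # e.g. 0.02, 0.07, 0.12, 0.17
    # Lowest histogram index whose cumulative value reaches 1 - k, for each k.
    return [int(np.searchsorted(cum, 1.0 - k)) for k in ks]

def normalise_orientation(contour):
    """Rotate a glyph contour to a standard position using the axis angle
    derived from its second order central moments (cf. Eq. 5)."""
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return contour                          # degenerate contour, leave as-is
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    rot = cv2.getRotationMatrix2D((cx, cy), np.degrees(theta), 1.0)
    pts = contour.reshape(-1, 2).astype(np.float32)
    return cv2.transform(pts[None, :, :], rot)[0]

np.searchsorted returns the lowest histogram index whose cumulative value is greater than or equal to 1-k, which matches the selection rule described in Sect. 4.1.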
5 Road marking recognition

The final stage of our pipeline is the recognition of road markings based on feature extraction, neural network classification and post-processing. This is performed on the extracted road marking objects that remain after the rejection of overly complex shapes and the normalisation of the remaining shapes to a common axis orientation, as defined previously.

5.1 Feature extraction

Feature extraction produces a feature vector representation for every potential glyph extracted and retained by the earlier shape isolation and post-processing. In feature extraction for tasks such as character recognition, it is generally recommended that the number of features be 5–10 times the number of classes in the recognition problem [20]. With the 23 classes defined here (one per glyph), we employ a feature vector of 118 elements (a ratio of roughly five features per class). This feature vector is primarily made up of the aspect ratio of the glyph (i.e. height divided by width), normalised central moment measures of the shape [20], Hu spatial moments [25] and the horizontal and vertical projections of the inscription. In addition, a novel glyph feature is introduced, based on fuzzy zoning of the angles extracted from the simplified polygonal representation of the glyph contour.

For the fuzzy angular zoning, we compute the contour angle at each edge junction of the contour and bin the angles (range 0–179°), creating two sets of nine-element features (one for acute and one for obtuse angles) grouped by spatial position on the basis of fuzzy sets (fuzzy zoning). These zones are defined as a nine-bin (3 × 3) partitioning grid mapped onto the region of the extracted glyph (as shown in Fig. 7). The exact area (in pixels) of each zone adapts so that the grid equally divides the overall glyph region. The membership of a given pixel in a given zone is not fully disjunct: border pixels have membership of multiple zones through the definition of overlapping fuzzy zone boundaries (Fig. 7).

Fig. 7

Angular features within the glyph contour are accumulated in each zone on this basis. A feature lying in a fuzzy border region contributes only partially to the overall zone accumulation, in proportion to its position within that border region: a feature inside a zone contributes 1, one outside contributes 0, and contributions within a fuzzy border region lie in the range 0–1. The sum of the contributions of any given feature over all zones is always 1. This is illustrated in Fig. 7, where we see an example glyph mapped onto the set of nine (3 × 3) fuzzy zones with overlapping boundaries, corresponding to these varying feature contributions over multiple zones. One element of the feature vector then represents each of the nine angular zones covering the spatial layout of the glyph, for acute and for obtuse angles. Formally, each accumulator can be defined over the N angular features of a glyph contour as a_i = Σ_j μ(i, j) (Eq. 6), where a_i is the resulting accumulation for zone i and μ(i, j) is the fuzzy membership of angular feature j with respect to zone i.

In addition to this fuzzy spatial angle accumulator measure, we also employ a secondary linear angular distribution measure along the length of the external boundary contour of the glyph (traversed clockwise from the topmost point). This essentially constructs a fuzzy angular profile of the simplified contour object. Given the near constancy of road marking dimensions, we consider the external boundary contour as N segments of equal length (here we empirically use N = 25 for the scale of UK road markings). Each segment is represented by the last (polygon-joint) angle encountered along its length from the common contour origin (the topmost contour point), for each glyph shape. A segment with no angle along its length is represented by that of the preceding segment (with the first segment initialised from an angle of 0). This angular profile along the length of the outer glyph contour is then encoded as a standard four-element vector, with proportional weights assigned to create a new multi-variable representation relating segment angle (in degrees) to vector elements i = 0–3 (see Table 7).

Overall, feature extraction results in a 118-element feature vector descriptor for each glyph, comprising the aspect ratio (one element), normalised central moments (seven elements, up to 3rd order), Hu spatial moments (seven elements, up to 3rd order), horizontal/vertical histogram projections (50 vertical and 35 horizontal elements) and the fuzzy angular zoning (nine acute and nine obtuse elements). This feature vector descriptor forms the input representation to a trained neural network classifier used for recognition.

Together, this feature selection gives an overall descriptor combining global measures of the glyph shape (aspect ratio, moments, histogram projections) with relative geometric feature positions (fuzzy angular zoning). The use of multiple spatial/geometric measures in this "rich" descriptor facilitates (a) differentiation over a large glyph alphabet and (b) increased robustness to noise over a large test evaluation set, in comparison to the earlier (feature-sparse or reduced-feature) works [6,7,29,30]. Empirically, further increases in feature vector complexity (e.g. increasing the granularity of the fuzzy angular zones) were found to reduce overall performance due to the prevalence of noise, while the earlier works show lesser results on smaller feature sets [6,7,29,30]. The larger, multi-faceted feature set used in this work mitigates the overall effect of isolated feature instability under differing road conditions (Sects. 4, 6) when combined with a noise-tolerant classification approach (Sect. 5.2). Future work will investigate the suitability of deriving an optimal feature subset with regard to feature stability under such conditions.

5.2 Neural network classification

Here we use a single hidden layer artificial neural network classifier [26] trained with resilient backpropagation (RPROP) [27]. It has an input layer of 118 nodes (matching the feature vector), a hidden layer of 69 nodes and an output layer of 23 nodes (the size of the glyph alphabet). The number of hidden nodes was determined empirically on the basis of minimising the generalisation error on unseen sample test data, so as to avoid over-fitting [21]. Classifier training and testing is based on 1022 sample glyphs manually extracted from 26 video sequences of real roads, randomly divided into subsets with an 80/20% ratio used for training and testing respectively. Training was performed over the glyph alphabet currently in use, and parameter variations were considered around those of the finally proposed ANN configuration. The remaining parameter choices were selected empirically and proved robust to differing marking quality, weather conditions, scene clutter and illumination conditions. Owing to the similarity of three glyph pairs (S/5, O/0 and /4), each pair is defined as a single symbol and further differentiated contextually in combination with post-processing.

Following the procedure of [26], the neural network is trained to output a vector of floating point values (range -1 to +1), with the maximal vector entry indicating the classification of the glyph. This output classification concept requires two additional thresholds: (a) a minimum score that the maximal output vector entry must exceed to be considered a valid classification, and (b) a minimum distance between the maximal output vector entry and the second largest output. Empirically, a maximal score threshold of 0.7 and a maximal distance threshold of 0.2 were selected; outputs that do not meet these criteria produce a null classification for the given glyph features.

5.3 Post-processing

Following glyph classification by the neural network, a two-stage post-processing step identifies coherent, consistent glyph patterns (words and symbol structures) on the roadway. In terms of the overall system-level results (Sect. 6), this stage is as important as its predecessors in eliminating false positives.
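To tie the classifier description of Sects. 5.1–5.2 together, the sketch below assembles a reduced feature vector and applies a single-hidden-layer network with the 0.7 score and 0.2 margin acceptance rules quoted above. It is illustrative only: just a subset of the 118 features is computed (aspect ratio, Hu moments, projections), the fuzzy angular zoning terms are omitted, tanh units are assumed, and the weight matrices are placeholders for weights that would come from RPROP training.

import cv2
import numpy as np

def glyph_features(binary_glyph):
    """A reduced version of the Sect. 5.1 descriptor: aspect ratio, the seven
    Hu moments and coarse horizontal/vertical projections (the fuzzy angular
    zoning terms of the full 118-element vector are omitted here)."""
    h, w = binary_glyph.shape
    hu = cv2.HuMoments(cv2.moments(binary_glyph, binaryImage=True)).ravel()
    v_sum = binary_glyph.sum(axis=0, dtype=np.float32)[None, :]
    h_sum = binary_glyph.sum(axis=1, dtype=np.float32)[None, :]
    v_proj = cv2.resize(v_sum, (50, 1)).ravel()   # 50 vertical projection bins
    h_proj = cv2.resize(h_sum, (35, 1)).ravel()   # 35 horizontal projection bins
    feats = np.concatenate(([h / float(w)], hu,
                            v_proj / (v_proj.max() + 1e-6),
                            h_proj / (h_proj.max() + 1e-6)))
    return feats.astype(np.float32)

def classify(features, w_hidden, w_out, labels, score_t=0.7, margin_t=0.2):
    """Single-hidden-layer forward pass; accept the top class only if it scores
    above 0.7 and beats the runner-up by at least 0.2, otherwise return a null
    classification (Sect. 5.2). w_hidden and w_out are placeholder weights."""
    hidden = np.tanh(features @ w_hidden)
    out = np.tanh(hidden @ w_out)
    order = np.argsort(out)[::-1]
    best, second = out[order[0]], out[order[1]]
    if best >= score_t and (best - second) >= margin_t:
        return labels[order[0]]
    return None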