
Intelligent Robots

Industrial robots have been developed to perform a wide variety of manufacturing tasks, such as machine loading/unloading, welding, painting, and assembling. However, most industrial robots have very limited sensing ability. For example, if assembly parts are not presented to a robot in a precise, repetitive fashion, the robot simply cannot perform its job, because it cannot find the part and pick it up. If an intruding object happens to enter the work space of a robot, the robot may collide with it, resulting in damage to both, because the robot cannot see it as an obstacle. In other words, existing robots are far from being intelligent: they cannot hear, see, or feel (the force, shape, texture, temperature, weight, distance and speed of objects in their grasp or nearby). Imagine the difficulty involved in teaching a person with all these handicaps to perform precise assembly operations. This explains why we need intelligent robots.

An entirely new phase in robotic applications has been opened by the development of "intelligent robots". An intelligent robot is basically one that is capable of sensing its surroundings and possesses enough intelligence to respond to a changing environment in much the same way as we do. Such ability requires the direct application of sensory perception and artificial intelligence. Much of the research in robotics has been, and still is, concerned with how to equip robots with visual sensors (the "eyes") and tactile sensors (the "fingers"). Artificial intelligence will enable the robot to respond and adapt to changes in its tasks and its environment, and to reason and make decisions in reaction to those changes. In the light of these requirements, an intelligent robot may be described as one that is equipped with various kinds of sensors and possesses sophisticated signal-processing capabilities and the intelligence to make decisions, in addition to general mobility.

One important capability that humans demonstrate when performing intelligent tasks is receiving sensory information from their surroundings. Human senses provide a rich variety of sensory inputs that are critical to many of the activities performed, and much effort has been made to give intelligent robots similar sensory abilities. Among these senses, vision is the most important, as it is estimated that up to 80% of sensory information is received through vision. Vision can be bestowed on robotic systems by using imaging sensors in various ways. To improve the accuracy of a task, vision can help adjust the robot hand precisely by means of optical feedback control using visual sensors. Determining the location and orientation of the parts to be picked up, and recognizing them, is another important application. For example, vision is necessary to guide a seam-welding robot in mating two parts together and welding them. Regardless of the application, the vision system must contain an illumination source, an imaging sensor, an image digitizer and a system-control computer.

If ambient lighting serves as the illumination source, the imaging process is passive. This type of imaging is common in military applications, since the scene being viewed is beyond the viewer's control. In industrial applications, by contrast, controlled illumination, or active imaging, can be used freely. The imaging sensor of a robot vision system is defined as an electro-optical device that converts an optical image into a video signal.
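A minimal sketch of the digitize-and-analyze idea above, assuming the digitizer delivers the image as a 2-D array of grey levels and that controlled illumination makes the part appear much brighter than the background; the threshold value and the example image are hypothetical:

    # Locate a bright part in a digitized grey-level image by simple thresholding.
    import numpy as np

    def locate_part(image: np.ndarray, threshold: int = 128):
        """Return the centroid (row, col) of pixels brighter than `threshold`,
        or None if no such pixels exist. `image` holds grey levels 0-255
        as delivered by the A/D converter."""
        rows, cols = np.nonzero(image > threshold)
        if rows.size == 0:
            return None
        return rows.mean(), cols.mean()

    # Hypothetical 5 x 5 digitized image with a bright 2 x 2 "part" near the centre.
    img = np.zeros((5, 5), dtype=np.uint8)
    img[2:4, 2:4] = 200
    print(locate_part(img))  # (2.5, 2.5)

Under controlled illumination a fixed threshold is usually enough; with passive, ambient lighting a real system would need adaptive thresholding or richer feature extraction.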
The image sensor is usually either a TV camera or a solid-state device such as a charge-coupled device (CCD). The latter offers greater sensitivity, a longer service life and lighter weight, and is therefore preferred to the TV camera. The camera system contains not only the camera detector but also, very importantly, a lens system. The lens determines the field of view, the depth of focus and other optical factors that directly affect the quality of the image detected by the camera. Novel techniques, such as a fish-eye lens used to obtain a 360-degree field of view without the need for focus adjustment, have recently been investigated and have proved very useful in mobile robots.

Both TV cameras and CCDs produce an image by generating, at every pixel, an analogue value proportional to the light intensity. To enable a digital computer to work with these analogue signals, the camera's output must be converted to a digital code by an analogue-to-digital (A/D) converter as the camera scans the scene, and stored in the random-access memory (RAM) of the computer. The computer then analyzes the digital image and extracts such image information as the edges, regions, boundaries, colors and textures of the objects in the image. Finally, the computer interprets, or understands, what the image represents in terms of knowledge about the scene and gives the robot a symbolic description of its environment.

Next to vision in importance is tactile sensing, or touch. Consider how a blind person can do delicate jobs relying on a sensitive sense of touch; likewise, a blind robot can be extremely effective in performing an assembly task using only a sense of touch. Touch is of particular importance for providing the feedback needed to grip delicate objects firmly without damaging them.

To simulate the tactile sense of the human hand, a complete tactile-sensing system must perform three fundamental sensing operations: (1) joint-force sensing, which senses the forces applied to the robot's hand, wrist and arm joints; (2) touch sensing, which senses the pressure applied at various points on the gripper's surface; and (3) slip sensing, which senses any movement of the object while it is being grasped.

The joint forces are usually sensed using strain gauges arranged in the robot's wrist assembly. A strain gauge is a force-sensing element whose resistance changes in proportion to the force applied to it. The simplest touch sensor is a gripper equipped with an array of miniature microswitches. This type of sensor can only determine the presence or absence of an object at a particular point, or at an array of points, on the robot hand. A more advanced type of touch sensor uses arrays of pressure-sensitive piezoelectric material (conductive rubber, conductive foam, etc.), which conducts electrical current when stressed. This arrangement allows the sensor to perceive changes in force and pressure across the robot hand. Tactile-sensor matrices range in size from 8 × 8 to 80 × 80 two-dimensional arrays. Since the force at each point can be determined, the distribution of force over the hand's surface, and hence the shape of the grasped object, can be determined. The force data can be used to display, on a TV-like screen, the shape of the object and the distribution of force on its surface.

Slip sensing is required for a robot to apply the optimum amount of grasping force to a delicate, fragile object. This ability prevents damage to the object and allows it to be picked up without the danger of being dropped.
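A minimal sketch of reading the kind of tactile array described above, reducing an 8 × 8 grid of per-point force readings to a total grip force and a centre of pressure; the units, readings and scaling are assumed for illustration:

    # Summarize an 8 x 8 tactile array as total force and centre of pressure.
    import numpy as np

    def grip_summary(forces: np.ndarray):
        """forces: 8 x 8 array of per-point force readings (hypothetical units).
        Returns (total_force, (row_centre, col_centre)) of the distribution."""
        total = forces.sum()
        if total == 0:
            return 0.0, None
        rows, cols = np.indices(forces.shape)
        centre = ((rows * forces).sum() / total, (cols * forces).sum() / total)
        return float(total), centre

    # Example: an object pressing mostly on the lower-right corner of the pad.
    pad = np.zeros((8, 8))
    pad[5:8, 5:8] = 1.5
    print(grip_summary(pad))  # (13.5, (6.0, 6.0))

The same per-point data, plotted rather than summarized, gives the TV-like display of object shape and force distribution mentioned above.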
The method used to detect slip is based on sensing any vibration or movement between the object and the gripper, no matter how slight. The gripping force is increased step by step until the object has been firmly grasped and no further slip occurs.

The integration of tactile sensing and vision can dramatically enhance adaptive robotic assembly tasks. An example of this kind of sensor combination would be a vision sensor used to locate and identify objects and to determine the position of the robot itself, combined with a tactile sensor used to detect the distribution of force and pressure and to determine the torque, weight, center of mass and compliance of the material being handled. This hand-eye coordination for general-purpose manipulation would be extremely powerful in the industrial world.

Another important sense for a robot is range, or depth, information. Range data are necessary for a robot to build up three-dimensional spatial information when it navigates in the real world. Such information is required whether the robot is stationary and navigating its gripper, or mobile and navigating its body. For example, consider an industrial bin-picking operation in which a sensing robot can locate objects of interest in the bin even though it does not know exactly where they are. The sequence of operations for such a robot might go something like this:

(1) Scan the parts in the bin and locate the object of interest in three-dimensional space.
(2) Determine the relative position and orientation of the object.
(3) Move the gripper and/or the robot body to the object's location.
(4) Position and orient the gripper according to the object's position and orientation.
(5) Pick up the object.
(6) Place the object at the required location.

The first and second steps in this sequence determine the distance and orientation to the object in three-dimensional (3-D) space.

Range sensors fall into two categories: passive devices and active devices. As a passive device, a vision system equipped with a stereo camera pair can be used to determine depth by triangulation. However, such vision systems are relatively sophisticated and expensive. As an active device, ultrasonic ranging systems have been widely used to give environmental information to mobile robots. Most ultrasonic ranging systems are based on some form of time-of-flight measurement. The system works in the following manner. A transducer emits ultrasonic waves toward an object; some of the waves echo from the object and return to the transducer. The ultrasonic sensor determines the range by measuring the elapsed time between transmission and reception, that is, the time of flight, from which the distance d can be calculated by

    d = c × t_of / 2

where t_of is the time of flight and c is the speed of sound.

Proximity sensing detects objects that exist, or are moving, in close vicinity to the robot's hands or legs. To detect objects at short distances, or objects moving with high speed and acceleration, proximity sensors usually work by measuring the distortion of an electromagnetic field when the object lies near the sensor, or the frequency shift of a sonic wave when the object moves toward or away from the robot (making use of the Doppler effect). Proximity sensing is widely used in intelligent robots to detect obstacles before a possible collision.

For some specialized intelligent robots, chemical sensors that simulate the senses of smell and taste of animals may be important.
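To make the time-of-flight relation above concrete, here is a minimal sketch of the range calculation; the speed-of-sound constant and the example timing are assumed values, and the function is illustrative rather than a driver for any particular sensor:

    # Ultrasonic ranging: distance from measured echo round-trip time.
    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

    def range_from_time_of_flight(t_of: float) -> float:
        """Return the distance in metres for a round-trip echo time t_of in seconds,
        using d = c * t_of / 2 (the wave travels out and back)."""
        return SPEED_OF_SOUND * t_of / 2.0

    # Example: an echo received 5.8 ms after transmission corresponds to about 1 m.
    print(range_from_time_of_flight(0.0058))  # ~0.99

The division by two reflects the round trip: the measured time covers the path to the object and back.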
A robot that can detect the odor of a specific substance, cocaine for instance, could help customs officers track drug traffickers; a robot that can sense the subtle odor of a dangerous pollutant could help conservationists detect a leak.

Another feature that distinguishes intelligent robots from conventional robots is the ability to alter an action plan in response to unexpected changes in the environment. A robot plan comprises a series of actions that the robot will take, step by step, to achieve its goal. Robot planning is a subject of artificial intelligence, especially when the plan involves interfering actions. Actions interfere when one action invalidates an already-achieved precondition that is needed by another action to be taken later in the plan. Obviously, interfering actions will render all or part of the plan invalid. There is no general solution to interfering actions in robot planning: in some complex cases there seems to be no way to remove all interferences simply by changing the order in which the actions take place, although for simple cases robot planning is a comparatively easy problem in artificial intelligence.

Based on the data provided by its sensing systems, which reflect the changes in its environment, the planning system of an intelligent robot decides how to alter its action plan, either partly or completely, to respond to those changes without requesting human intervention.

With senses and adaptive control, an intelligent robot appears autonomous and is able to adapt itself to its jobs and environment. For example, it can recognize objects, distinguish obstacles from the surroundings, take action to avoid obstacles, adjust its position and attitude and the forces it applies to objects when performing jobs, arrange a proper working order for better efficiency, coordinate its two hands, and so on.

Finally, let us take a brief look at an example of an intelligent robot. This mobile robot, called Help Mate, was developed by Transition Research Corporation (TRC) in Danbury, Connecticut, U.S.A. Help Mate currently carries off-schedule (late) meal trays from the dietary department to the nursing stations at Danbury Hospital. Future applications include the transportation of pharmacy and laboratory supplies, patient records, and mail. A robot that can perform such transportation tasks can relieve nurses of a frustrating and time-consuming chore and free up time for patient care. Approximately 90 late meal trays are hand-delivered each day at Danbury Hospital, a 450-bed facility. Late meal trays result from new admissions, diet-order changes, transferred patients, special requests, and so on.

Help Mate stands 54 inches high with a footprint of 28 × 30 inches. It weighs 350 pounds, can carry loads of up to 50 pounds, and travels at 500 mm/s in Danbury Hospital, except in patient corridors, where it travels at 300 mm/s. Its maximum velocity is 1 m/s and its maximum acceleration is 1 m/s². The front view shows the placement of the sensors, warning lights and emergency stop switches; the rear view shows the backpack and the user-interface control panel. A flashing amber light on top of the robot, visible from any direction, warns people that Help Mate is moving autonomously under program control.

To carry late meal trays from the dietary department to nursing stations distributed among the six buildings and twenty floors of Danbury Hospital, Help Mate must be able to call and ride the elevators.
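Returning briefly to the interfering-actions problem described earlier, the sketch below shows, under an assumed STRIPS-style representation (the action names, preconditions and effects are hypothetical), how a plan can be scanned for an action that deletes a condition a later action still needs:

    # Detect interfering actions: an earlier action deletes a precondition
    # that a later action requires and that is not re-established in between.
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        name: str
        preconditions: set = field(default_factory=set)
        adds: set = field(default_factory=set)
        deletes: set = field(default_factory=set)

    def find_interferences(plan):
        conflicts = []
        for i, earlier in enumerate(plan):
            for later in plan[i + 1:]:
                for cond in earlier.deletes & later.preconditions:
                    restored = any(cond in a.adds for a in plan[i + 1:plan.index(later)])
                    if not restored:
                        conflicts.append((earlier.name, later.name, cond))
        return conflicts

    # Hypothetical plan: releasing the part too early invalidates the
    # "part_held" precondition needed by the later insertion step.
    plan = [
        Action("grasp_part", preconditions={"part_on_table"}, adds={"part_held"}, deletes={"part_on_table"}),
        Action("release_part", preconditions={"part_held"}, adds={"part_on_table"}, deletes={"part_held"}),
        Action("insert_part", preconditions={"part_held"}, adds={"part_inserted"}),
    ]
    print(find_interferences(plan))  # [('release_part', 'insert_part', 'part_held')]

In simple cases a conflict like this can be removed by reordering the actions; in complex cases, as noted above, no ordering may remove every interference.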
Help Mate communicates with the elevator controller via infrared transmitter/receiver pairs mounted on the robot and in each elevator. It calls the elevator, enters when the doors open, closes the doors, and requests the desired floor. Once the elevator reaches the desired floor and the doors open, Help Mate leaves the elevator, signals for the doors to close, and continues on its way to the nursing station.

Vision, sonar and infrared proximity sensors are used in navigation and obstacle avoidance. An array of Polaroid ultrasonic proximity sensors is used to measure the range to obstacles and walls, while infrared proximity sensors are used to detect edges and the retro-reflective tape used as beacons during navigation. An auxiliary subsystem handles operator-interface functions, and a drive subsystem steers the vehicle and provides odometry. All of the sensor subsystems, together with the drive and auxiliary subsystems, are coordinated by the 68000 processor through a master/slave protocol.

The most significant capability of Help Mate is its autonomous navigation in hallways, elevators, lobbies, and doorways. Given a perfect drive system and an environment free of obstacles, odometry-based navigation would suffice. In the real world, however, sensors must be used to recognize known landmarks, to avoid a wide variety of obstacles, including humans, and to correct accumulating odometry errors. Sensor data are fused into a composite local map, and navigation information is then derived from that map. Help Mate uses as many natural landmarks as possible for navigation and registration; these local landmarks are encoded into a database. Raw sensor data are processed in real time on a 68000 microprocessor to recognize landmarks and to "register" the local model against a prior world model.

An operator inserts his or her ID card to unlock the backpack and inserts a payload, such as meal trays. The operator then selects a destination from the computer's operation menu with the keyboard and removes the card. Once the card is removed, the backpack locks securely. The payload may only be removed at the destination, and only by using the specific card issued to each nursing station. When Help Mate arrives at the specified destination, it sounds a buzzer to alert the unit secretary that a package has arrived. The unit secretary inserts her card and gains access to the operation menu system. The backpack is unlocked, and the secretary takes the contents of the delivery, signs the delivery slip and places it back in the backpack. She then removes her card, which locks the backpack and dispatches Help Mate to its next destination.

From this example we can see that the long-awaited development of robots for domestic service is rapidly becoming feasible. For the elderly, intelligent robots could provide simple home-cleaning, food-preparation, security and entertainment services. As the cost of computers and electro-mechanical devices continues to decrease, intelligent robots for transportation, hospital and home services will become economically feasible.

The near future will see the prosperity of intelligent robots. In unmanned workshops, intelligent robots will be the major work force, taking over delicate, boring and dangerous jobs from human workers. In some environments, such as underground, underwater and outer space, intelligent robots will be the only laborers, working tirelessly. By 2007, intelligent robots, as trail-blazers, will land on Mars to undertake pioneering expeditionary missions before astronauts set forth for Mars. Believe it or not!