Design of a Four-Bar-Linkage Tracked Search-and-Rescue Robot (Disaster Rescue Robot)
Henan Polytechnic University, Wanfang College of Science and Technology
Undergraduate Graduation Design (Thesis) Mid-Term Check Form

Supervisor: Deng Le    Title: ____    School (Department): School of Mechanical and Power Engineering    Teaching and Research Office: ____
Topic: Four-Bar-Linkage Tracked Search-and-Rescue Robot
Student name: Qian Longfei    Class: Mechanical Design Class 2 of 2008    Student ID: 0828070150

I. Quality of topic selection (covering four aspects: 1. whether the topic fits the professional training objectives and embodies the requirement of comprehensive training; 2. difficulty of the topic; 3. workload of the topic; 4. degree of connection with production, scientific research, economic, social, cultural, and laboratory-construction practice):

The selected topic is closely tied to the knowledge learned from textbooks, is close to actual production conditions, and is fairly representative; it leaves very large room for development and admits flexible and varied design ideas. For an undergraduate majoring in mechanical design the difficulty of the topic is relatively manageable. The main work is the structural design and obstacle-crossing analysis of a four-bar-linkage tracked search-and-rescue robot, which targets the requirements of mechanical design students; the control part of the robot still requires close cooperation with the computer, communications, and electronics disciplines. The structural design covers the robot's overall dimensions, the arrangement of its internal components, and the way it climbs over obstacles. The robot adopts a four-bar-linkage tracked layout and consists mainly of a frame and two symmetrically arranged track-deformation modules. On the two sides of the frame are track-deformation modules based on a parallelogram structure, each composed of a four-bar deformation mechanism, a main driving wheel, a passive wheel, and the track wound around the track wheels. The four-bar deformation mechanism consists of a connecting rod, a driving crank, and a driven crank; it provides the driving force and can rotate about the frame to deform the track, giving the robot extra auxiliary motion when crossing obstacles. The topic fully fits the professional training objectives, belongs to mechanical design, and provides good guidance for the further study of students about to graduate; it is not limited to fundamental mechanical knowledge but also involves materials science, mechanics, and other disciplines, giving us some exposure to interdisciplinary work, so the requirement of comprehensive training is also fully embodied.

II. Completion of the opening report:

The opening report has been completed. Starting from the actual working environment, a clear design direction has been determined, and a certain understanding of the four-bar-linkage tracked search-and-rescue robot has been acquired. The topic has been designed and analyzed, with breakthrough progress. The review of the relevant literature has been finished and an overall analysis of the topic has been made; the opening report was completed to a relatively high standard.

III. Staged results:

1. The opening report has been completed; the overall layout scheme and the main structural parameters have been determined, and the selection of some standard parts and the design calculations of most components have been finished.
2. The drawing of some part drawings is essentially complete, and the design specification has begun to be compiled.
3. The English translation has been essentially completed; some structural designs are now being checked.

IV. Main outstanding problems:

The study of fundamental professional knowledge is not deep enough, design experience is lacking, the collected references are limited, the grasp of the design theme is insufficient, and simple problems are not solved flexibly enough. The structure in the design is rather complex, and the analysis of the robot's obstacle-crossing ability is fairly difficult; data analysis and deformation sketches are combined to analyze the robot's ability to climb over a 90-degree obstacle. At the same time, detailed issues in the robot's internal structure, such as the design of the motor, reducer, and gears and the internal layout, and how to make the arrangement both reasonable and correct, still need to be resolved.

V. Supervisor's comments on the student's work, discipline in labor and study, and progress of the graduation design (thesis) during the graduation practice:

Supervisor: ____    Date: ____

Multi-Degree-of-Freedom Walking Robot

Masayuki INABA, Fumio KANEHIRO, Satoshi KAGAMI, Hirochika INOUE
Department of Mechano-Informatics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, 113 Tokyo, JAPAN

Abstract

Focusing attention on flexibility and intelligent reactivity in the real world, it is more important to build not a robot that won't fall down, but a robot that can get up if it does fall down. This paper presents research on a two-armed bipedal robot, an ape-like robot, which can perform biped walking, rolling over, and standing up. The robot consists of a head, two arms, and two legs. The control system of the biped robot is designed based on the remote-brained approach, in which a robot does not carry its own brain within the body but talks with it over radio links. This approach enables a robot to have both a heavy brain with powerful computation and a lightweight body with multiple joints. The robot can keep its balance while standing using tracking vision, detect whether it has fallen down with a set of vertical sensors, and perform a getting-up motion by coordinating its two arms and two legs. The developed system and experimental results are described with illustrated real examples.

1 Introduction

As human children show, the capability of getting up is indispensable in order to learn biped locomotion. To build a robot that tries to learn biped walking automatically, the body should be designed with structures that support getting up, as well as sensors to know whether it is lying down or not. When a biped robot has arms, it can perform various behaviors as well as walking.
Research on biped walking robots has mainly focused on the dynamics of walking, treating it as an advanced control problem. However, focusing attention on intelligent reactivity in the real world, it is more important to build not a robot that won't fall down, but a robot that can get up if it does fall down. In order to build such a robot, the robot needs a sensing system to keep its body balanced and to know whether it has fallen down or not. Although vision is one of the most important sensing functions of a robot, it is hard to build a robot with a powerful vision system on its own body because of the size and power limitations of vision systems. If we want to advance research on vision-based robot behaviors requiring dynamic reactions and intelligent reasoning based on experience, the robot body has to be lightweight enough to react quickly and must have many DOFs in actuation to show a variety of intelligent behaviors. As for legged robots, there is only a little research on vision-based behaviors. The difficulty in advancing experimental research on vision-based legged robots is caused by the limitations of the vision hardware; it is hard to keep developing advanced vision software on limited hardware. In order to solve these problems and advance the study of vision-based behaviors, we have adopted a new approach: building remote-brained robots. The body and the brain are connected by wireless links, using wireless cameras and remote-controlled actuators. As a robot body does not need computers on board, it becomes easier to build a lightweight body with many DOFs in actuation. In this research, we developed a two-armed bipedal robot using the remote-brained robot environment and made it perform balancing based on vision and getting up through cooperation of the arms and legs. The system and experimental results are described below.
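The remote-brained split described in this introduction (a heavy off-board brain, a body carrying only receivers and servos) can be sketched as a small reference-value codec. The paper states that one radio band encodes eight position reference values; the byte layout and angle scaling below are illustrative assumptions, not the robot's actual radio protocol.

```python
import struct

# Hypothetical encoding: one radio band carries eight position
# reference values, here taken as joint angles in degrees and scaled
# to one unsigned byte each. The format and scaling are assumptions
# for illustration only.
def encode_band(refs_deg):
    assert len(refs_deg) == 8, "one band carries eight reference values"
    # map -90..+90 deg onto 0..255
    raw = [int(round((a + 90.0) / 180.0 * 255)) for a in refs_deg]
    return struct.pack("8B", *raw)

def decode_band(frame):
    # body-side decoding back to angles in degrees
    raw = struct.unpack("8B", frame)
    return [r / 255.0 * 180.0 - 90.0 for r in raw]

if __name__ == "__main__":
    refs = [0.0, 15.0, -30.0, 45.0, -90.0, 90.0, 10.0, -5.0]
    frame = encode_band(refs)
    back = decode_band(frame)
    # 8-bit quantization keeps the round-trip error below ~0.36 deg
    print(max(abs(a - b) for a, b in zip(refs, back)))
```

Thirteen such bands, each carrying eight values, would cover the 104 actuators mentioned later; a brain-side loop would re-encode and resend frames every 20 msec.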
2 The Remote-Brained System

The remote-brained robot does not carry its own brain within the body. It leaves the brain in the mother environment and communicates with it by radio links. This allows us to build a robot with a free body and a heavy brain. The connection link between the body and the brain defines the interface between software and hardware. Bodies are designed to suit each research project and task. This enables us to advance research with a variety of real robot systems. A major advantage of remote-brained robots is that the robot can have a large and heavy brain based on super-parallel computers. Although hardware technology for vision has advanced and produced powerful compact vision systems, the size of the hardware is still large. A wireless connection between the camera and the vision processor has been a research tool. The remote-brained approach allows us to progress in the study of a variety of experimental issues in vision-based robotics. Another advantage of the remote-brained approach is that robot bodies can be lightweight. This opens up the possibility of working with legged mobile robots. As with animals, if a robot has four limbs it can walk. We are focusing on vision-based adaptive behaviors of four-limbed robots, mechanical animals, experimenting in a field as yet not much studied. The brain is raised in the mother environment, inherited over generations. The brain and the mother environment can be shared with newly designed robots. A developer using the environment can concentrate on the functional design of a brain. For robots whose brain is raised in a mother environment, the brain can benefit directly from the mother's evolution, meaning that the software gains power easily when the mother is upgraded to a more powerful computer. Figure 1 shows the configuration of the remote-brained system, which consists of a brain base, a robot body, and a brain-body interface.
In the remote-brained approach, the design and performance of the interface between brain and body is the key. Our current implementation adopts a fully remote-brained approach, which means the body has no computer on board. The current system consists of the vision subsystems, the non-vision sensor subsystem, and the motion control subsystem. The brain base can receive video signals from the cameras on the robot bodies. The vision subsystems are parallel sets, each consisting of eight vision boards. A body has just a receiver for motion instruction signals and a transmitter for sensor signals. The sensor information is transmitted by a video transmitter. It is possible to transmit other sensor information, such as touch and servo error, through the video transmitter by integrating the signals into a video image. The actuator is a geared module which includes an analog servo circuit and receives a position reference value from the motion receiver. The motion control subsystem can handle up to 104 actuators through 13 wave bands and sends the reference values to all the actuators every 20 msec.

3 The Two-Armed Bipedal Robot

Figure 2 shows the structure of the two-armed bipedal robot. The main electric components of the robot are joint servo actuators, control signal receivers, an orientation sensor with a transmitter, a battery set for the actuators and sensors, and a camera with a video transmitter. There is no computer on board. A servo actuator includes a geared motor and an analog servo circuit in its box. The control signal to each servo module is a position reference. The torques of the available servo modules cover 2 kgcm to 14 kgcm, with a speed of about 0.2 sec/60 deg. The control signal transmitted on a radio link encodes eight reference values. The robot in Figure 2 has two receiver modules on board to control 16 actuators. Figure 3 explains the orientation sensor, which uses a set of vertical switches. The vertical switch is a mercury switch.
When the mercury switch (a) is tilted, the drop of mercury closes the contact between the two electrodes. The orientation sensor mounts two mercury switches as shown in (b). The switches provide a two-bit signal to detect four orientations of the sensor, as shown in (c). The robot has this sensor at its chest, and it can distinguish four orientations: face up, face down, standing, and upside down. The body structure is designed and simulated in the mother environment. The kinematic model of the body is described in an object-oriented Lisp, EusLisp, which has enabled us to describe the geometric solid model and the window interface for behavior design. Figure 4 shows some of the classes in the programming environment for remote-brained robots written in EusLisp. The hierarchy of the classes provides us with rich facilities for extending the development of various robots.

4 Vision-Based Balancing

The robot can stand up on two legs. As it can change the center of gravity of its body by controlling the ankle angles, it can perform static bipedal walks. During static walking the robot has to control its body balance if the ground is not flat and stable. In order to perform vision-based balancing, a high-speed vision system is required to keep observing the moving scene. We have developed a tracking vision board using a correlation chip. The vision board consists of a transputer augmented with a special LSI chip (MEP: Motion Estimation Processor), which performs local image block matching. The inputs to the MEP are an image as a reference block and an image for a search window. The size of the reference block is up to 16 by 16 pixels. The size of the search window depends on the size of the reference block and is usually up to 32 by 32 pixels, so that it can include 16 x 16 possible matches. The processor calculates 256 values of SAD (sum of absolute differences) between the reference block and the 256 blocks in the search window and also finds the best matching block, that is, the one with the minimum SAD value.
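A minimal software re-creation of the block matching that the MEP chip performs, together with the rotated template method, can be sketched in NumPy. The sizes follow the text (a 16 by 16 reference block, a 32 by 32 search window, 256 candidate offsets); the nearest-neighbour rotation and the fixed candidate angles are assumptions for illustration, not the chip's implementation.

```python
import numpy as np

def sad_search(reference, window, n_offsets=16):
    """Slide the reference block over the search window and compute the
    SAD (sum of absolute differences) at n_offsets x n_offsets candidate
    positions (256 candidates, as in the MEP description), returning the
    offset of the best-matching block and its SAD value."""
    h, w = reference.shape
    ref = reference.astype(np.int64)
    sads = np.empty((n_offsets, n_offsets), dtype=np.int64)
    for dy in range(n_offsets):
        for dx in range(n_offsets):
            cand = window[dy:dy + h, dx:dx + w].astype(np.int64)
            sads[dy, dx] = np.abs(cand - ref).sum()
    best = np.unravel_index(np.argmin(sads), sads.shape)
    return best, sads[best]

def rotate_nn(img, deg):
    """Nearest-neighbour rotation about the image centre, used here to
    generate the rotated candidate templates."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-map each output pixel back into the source image
    sx = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    sy = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    sxi = np.rint(sx).astype(int)
    syi = np.rint(sy).astype(int)
    out = np.zeros_like(img)
    ok = (sxi >= 0) & (sxi < w) & (syi >= 0) & (syi < h)
    out[ok] = img[syi[ok], sxi[ok]]
    return out

def rotated_template_search(reference, window, angles=(-10, 0, 10)):
    """Rotated template method: match several pre-rotated copies of the
    reference and keep the (angle, offset) pair with the lowest SAD."""
    results = []
    for ang in angles:
        off, sad = sad_search(rotate_nn(reference, ang), window)
        results.append((sad, ang, off))
    sad, ang, off = min(results)
    return ang, off, sad
```

For example, embedding the reference block at offset (5, 7) of a random 32 by 32 window recovers that offset with zero SAD at the unrotated template.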
Block matching is very powerful when the target moves only in translation. However, the ordinary block matching method cannot track the target when it rotates. In order to overcome this difficulty, we developed a new method which matches candidate templates to the real rotation of the target. The rotated template method first generates all the rotated target images in advance; several adequate candidates of the reference template are then selected and matched while tracking the scene in the front view. Figure 5 shows a balancing experiment in which the robot stands on a tilting board and visually tracks the scene in front of it. It remembers the vertical orientation of an object as the reference for visual tracking and generates several rotated images of the reference image. If the vision system tracks the reference object using the rotated images, it can measure the body rotation. In order to keep the body balanced, the robot feedback-controls its body rotation to control the center of gravity of the body. The rotational visual tracker can track the image at video rate.

5 Biped Walking

If a bipedal robot can control its center of gravity freely, it can perform a biped walk. As the robot shown in Figure 2 has degrees of freedom in the left and right directions at the ankles, it can perform bipedal walking in a static way. The motion sequence of one cycle of biped walking consists of eight phases, as shown in Figure 6. One step consists of four phases: move-gravity-center-on-foot, lift-leg, move-forward-leg, place-leg. As the body is described in a solid model, the robot can generate a body configuration for move-gravity-center-on-foot according to the parameter of the height of the center of gravity.
After this movement, the robot can lift the other leg and move it forward. In lifting a leg, the robot has to control its configuration in order to keep the center of gravity above the supporting foot. As the stability of balance depends on the height of the center of gravity, the robot selects suitable knee angles. Figure 7 shows a sequence of experiments with the robot walking on two legs.

6 Rolling Over and Standing Up

Figure 8 shows the sequence of rolling over, sitting, and standing up. This motion requires coordination between the arms and legs. As the robot's foot contains a battery, the robot can make use of the weight of the battery for the roll-over motion. When the robot throws up its left leg and moves the left arm back and the right arm forward, it can obtain a rotary moment around the body. Once the body starts turning, the right leg moves back and the left foot returns to its position so that the robot lies on its face. This roll-over motion changes the body orientation from face up to face down, which can be verified by the orientation sensor. After reaching the face-down orientation, the robot moves its arms down to sit on its two feet. This motion causes a slipping movement between the hands and the ground. If the length of the arms is not enough to carry the center of gravity of the body onto the feet, the sitting motion requires a dynamic pushing motion by the arms. The standing motion is controlled so as to keep the balance.

7 Integration through Building a Sensor-Based Transition Net

In order to integrate the basic actions described above, we adopted a method of describing a sensor-based transition network, in which transitions are considered according to sensor status. Figure 9 shows a state transition diagram of the robot which integrates the basic actions: biped walking, rolling over, sitting, and standing up. This integration provides the robot with the capability of continuing to walk even when it falls down.
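The sensor-based integration can be sketched as a small transition table. The two-bit orientation decoding mirrors the chest-mounted mercury-switch sensor described earlier; the particular bit assignments, state names, and recovery sequences below are illustrative assumptions rather than the paper's exact network.

```python
# Orientations reported by the two-mercury-switch sensor (two bits give
# four orientations). Which bit pattern maps to which orientation is an
# assumption for illustration.
ORIENTATIONS = {
    (0, 0): "standing",
    (0, 1): "face-up",
    (1, 0): "face-down",
    (1, 1): "upside-down",
}

# Recovery plans per detected lying orientation: from the back the robot
# rolls over first; from the face it skips the roll-over and sits directly.
RECOVERY = {
    "face-up": ["neutral-pose", "roll-over", "sit", "stand-up"],
    "face-down": ["neutral-pose", "sit", "stand-up"],
}

def plan(bits):
    """Return the sequence of actions needed to resume biped walking
    from the given two-bit orientation reading."""
    orientation = ORIENTATIONS[bits]
    if orientation == "standing":
        return ["biped-walk"]
    return RECOVERY.get(orientation, []) + ["biped-walk"]
```

For example, a face-up reading yields the full roll-over sequence before walking resumes, while a face-down reading goes straight to sitting.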
The ordinary biped walk is composed by taking two states, Left-leg Fore and Right-leg Fore, successively. The poses in Lie on the Back and Lie on the Face are the same as the one in Stand; that is, the shape of the robot body is the same but the orientation is different. The robot can detect whether it lies on its back or its face using the orientation sensor. When the robot detects a fall, it changes its state to Lie on the Back or Lie on the Face by moving to the neutral pose. If the robot gets up from Lie on the Back, the motion sequence is planned to execute the Roll-over, Sit, and Stand-up motions. If the state is Lie on the Face, it does not execute Roll-over but moves its arms up to perform the sitting motion.

8 Concluding Remarks

This paper has presented a two-armed bipedal robot which can perform static biped walking, rolling over, and standing up. The key to building such behaviors is the remote-brained approach. As the experiments have shown, wireless technologies permit robot bodies free movement; this also seems to change the way we conceptualize robotics. In our laboratory it has enabled the development of a new research environment, better suited to robotics and real-world AI. The robot presented here is a legged robot. As legged locomotion requires dynamic visual feedback control, its vision-based behaviors can prove the effectiveness of the vision system and the remote-brained system. Our vision system is based on a high-speed block matching function implemented with a motion estimation LSI. The vision system provides the mechanical bodies with dynamic and adaptive capabilities in interaction with humans. The mechanical dog has shown adaptive behaviors based on distance measurement by tracking. The mechanical ape has shown tracking and memory-based visual functions and their integration in interactive behaviors.
The research with a two-armed bipedal robot provides us with a new field of intelligent robotics research because of the variety of possible behaviors created by the flexibility of the body. The remote-brained approach will support learning-based behaviors in this research field. The next tasks in this research include how to learn from human actions and how to allow the robots to improve their own learned behaviors.