Toward a Ballbot for Physically Leading People: A Human-Centered Approach

Zhongyu Li and Ralph Hollis

Abstract— This work presents a new human-centered method for indoor service robots to provide people with physical assistance and active guidance while traveling through congested and narrow spaces. As most previous work is robot-centered, this paper develops an end-to-end framework which includes a feedback path of the measured human positions. The framework combines a planning algorithm and a human-robot interaction module to guide the led person to a specified planned position. The approach is deployed on a person-size dynamically stable mobile robot, the CMU ballbot. Trials were conducted in which the ballbot physically led a blindfolded person to navigate safely through a cluttered environment.

I. INTRODUCTION AND RELATED WORK

An autonomous robot that could lead frail, elderly, or sight-impaired people through a building, especially an unfamiliar one, could be very useful. Two major types of robotic navigation aids have been developed: robotic canes [1]–[3] and robotic walkers [4]–[6]. However, almost all previous robotic mobility aids are not aware of the led person's exact position and thus cannot take the person's motion and configuration into consideration. Moreover, previous obstacle-avoidance algorithms [5], [7], [8] are based on spatial information sensed relative to the robot itself, not to the led person. This could lead to dangerous situations if the robot-leading-human system enters a narrow space. In a nutshell, previous robotic mobility aids are robot-centered, which is a fundamental drawback.

In this paper we seek to ascertain the feasibility of a robotic navigation aid using an experimental platform, the CMU ballbot [9]. The ballbot is a person-size omnidirectional robot with intrinsic omnidirectional compliance owing to its balancing controller. It currently has a pair of simple 2-DOF arms with series elastic actuators (SEAs) and simple knob-like "hands" [10].
Figure 1 shows a photograph of the robot, including the main features that pertain to our study. Thanks to its dynamic stability and the resulting slim size and graceful motions, the ballbot shows potential to provide sight-impaired or elderly people with active guidance and physical assistance in cluttered indoor environments.

Besides basic balancing and navigation to goal locations [11], successfully accomplishing the leading task requires the solution of three major sub-problems, which are the subject of this paper:

Prob. 1 Keeping track of the position of the led person;
Prob. 2 Human-robot motion planning that accounts for the person's position relative to obstacles in the environment;
Prob. 3 Communicating via force or speech dialog between the robot and person concerning the safe navigation task.

The authors are with The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA. Zhongyu Li is also with The School of Mechanical Engineering, Zhejiang University, Hangzhou, 310027, China.

Fig. 1: The CMU ballbot leading a blindfolded person to avoid obstacles (cardboard boxes) with physical assistance. The CMU ballbot: 2-DOF arm mechanism (top left), LiDAR used for navigation (bottom left), pan/tilt turret used to track the led person (top right), and main and yaw drives (bottom right).

Effective solutions to the above problems could lead to a human-centered method in which the robot's motion depends on information relative to the led person.

A. Related work

Some solutions for the problems mentioned above have been proposed. For Prob. 1, previous blind-leading robots pay most attention to sensing the environment [2], [6], but turn a blind eye to the persons they are leading. One robotic walker [12] demonstrated the capacity to track a user's movement, but it is unable to navigate or avoid obstacles. A possible way to solve Prob. 1 is to utilize 2D LiDARs.
Two common difficulties in following a human with a 2D LiDAR are 1) tracking the position of the human over the full 360° range, and 2) keeping the followed person within the scanning plane. Most previous techniques aiming to address these problems resort to deploying multiple 2D LiDARs at different locations [13] and heights [14], or to using 3D LiDARs [15]. These approaches, however, incur both high cost and high computational complexity.

For Prob. 2, various path planning algorithms have demonstrated the capacity to guide a person with limited vision [5], [7], [8]. However, most previous work only generates a path for the robot itself and usually ignores the spatial relationship between the robot and the led person. The state of the art does not provide specific planned positions for the led person.

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019. 978-1-7281-4003-2/19/$31.00 © 2019 IEEE

Fig. 2: Feedback framework for the ballbot leading people (blocks: LiDAR-based human following system, path planning for the robot, human-robot configuration planning, and physical and speech-based HRI; the measured human position is fed back and compared against the desired one). When the ballbot is approaching obstacles and comes to a stop, the feedback loop begins to update over time to ensure that the led person follows a planned safe path.

To address Prob. 3, previous work exploits interactive force [1], vibrotactile signals [6], or qualitative speech [8] to inform the led person about the existence of surrounding obstacles and how to avoid them. But again, because the state of the art lacks knowledge of the planned and actual positions of the led person, it is barely able to provide clear and accurate feedback instructions to fine-tune the led person's position.
Most systems only provide desired directions to the person, like "please turn right/left." Furthermore, previous robotic navigation aids are built on specially designed systems, like robotic canes and walkers, and thus have limited applications. Recently, researchers have tried to use general robot platforms, like the PR2 [16], to provide similar assistance, but due to the bulky size of this statically stable robot, the PR2 was later reported as not ideal for the blind-leading task [17]. Currently, there is no complete work demonstrating a solution to all three of these problems together, or one using a general robot platform.

B. Contributions

This paper makes several contributions. For Prob. 1, we are the first to include a feedback path for the movement of the led person. A new and simple human following system is developed to track the 3D position of the person's movement close to the robot, based on a single 2D LiDAR mounted on the pan/tilt turret of the ballbot (Sec. II). To solve Prob. 2, besides path planning for the robot itself, we propose a model that describes the configuration of the robot-leading-human system in order to find specific safe positions for the led person along the robot's path. Sec. III describes our human-robot configuration planner. To address Prob. 3, Sec. IV discusses a new Human-Robot Interaction (HRI) module that combines coarse interactive forces with speech cues to fine-tune the led person's position. This HRI module is capable of providing clear quantitative instructions, such as "move 2 feet left." Furthermore, the leading task is accomplished with an end-to-end human-centered feedback framework that, to our knowledge, is the first to address all the mentioned problems for navigation aids on a general robot platform. The system is illustrated in Fig. 2. When approaching obstacles, the ballbot comes to rest and starts the feedback loop.
It guides the led person to an obstacle-free position based on the comparison between the planned and measured positions of the person. Experimental validations and conclusions evaluating the proposed methods are presented in Sec. V and Sec. VI, respectively. A benchmark of the blind-leading capacity of the CMU ballbot discussed in this paper against current state-of-the-art blind-leading robots is shown in Table II.

II. HUMAN FOLLOWING NEARBY THE ROBOT

A. Sensory turret on the ballbot

For human-robot interaction, the ballbot has a pan/tilt sensory turret with multiple sensors on its top. As shown in Fig. 1, these are an Asus Xtion RGB-Depth camera, a Hokuyo UST-05LN 5 m 2D laser range finder (LiDAR), and a pair of speakers. The turret can pan and tilt independently of the ballbot's body yaw movement. For each revolute joint, a PID position controller is applied. Given that the inertia of the turret is relatively small compared to the ballbot's body, the control architecture of the ballbot [9] can compensate the disturbance due to turret motion without modification for our current study.

B. 3-dimensional 2D-LiDAR-based human following system

Awareness of a human presence near the ballbot is critical for human-robot interaction tasks. At first glance, the RGB-D camera on the turret would seem to be a good sensor for this function. However, the image area that its lens can cover is limited by the field of view, and its working distance for proper operation is [0.8 m, 3.5 m] [18]. When the ballbot is leading a person by the hand, the human-robot separation is very small, usually below 0.5 m, which is below the camera's lower limit for sensing depth. To address this problem, the 2D LiDAR deployed on the pan/tilt turret is chosen.

Fig. 3: Human-robot relative positions when the LiDAR tilts down. θp and θt are tuned in real time to follow the led person. wb, wt, lt, ht are constant geometry parameters.
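The paper states only that each turret joint uses a PID position controller, without further detail. A minimal discrete-time PID sketch is given below; the gains, timestep, and the integrator plant in the test are illustrative assumptions, not the ballbot's actual controller.

```python
class PID:
    """Discrete-time PID position controller for one turret joint.

    The paper only states that each revolute joint uses a PID
    position controller; gains and timestep here are assumptions.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # accumulated error for the I term
        self.prev_error = 0.0   # previous error for the D term

    def update(self, target, measured):
        """Return the control effort for one timestep."""
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Because the turret's inertia is small relative to the body, such a joint-level loop can run independently of the balancing controller, as the paper notes.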
Given that the turret's LiDAR is 1.68 m above the floor when the tilt angle is 0°, the LiDAR is tilted downward while panning, as demonstrated in Fig. 3, in the case where the person to follow is shorter than 1.68 m. Therefore, the detector should be capable of classifying the laser scan patterns read from both the human's head and upper body, shown as patterns 2 and 1 in Fig. 3, respectively.

1) Human detection and tracking: The Hokuyo LiDAR has a 180° scan range with 0.5° resolution. It updates a group of vectors of distances in the LiDAR frame at a frequency of 10 Hz. After clustering [19], the laser reading is written as a set of scan clusters of the 3D locations of laser points. Then, a random forest classifier, extended from the binary classification in [13], is trained to classify the clusters as the human's head, upper body, or non-human objects. For each scan cluster recognized as a human by the detector, whether head or upper body, a linear Kalman filter, as demonstrated in [13], is applied to track its motion. The state variable to track at the kth time is x_k = [x̄_k, ȳ_k, v_{x,k}, v_{y,k}]. Here, (x̄, ȳ) is the position of the human, taken as the mean coordinate of the cluster, and (v_x, v_y) is its velocity.

2) Initialization: During initialization, the ballbot should focus on the person who is being led. The person is verbally instructed to stand in a given area beside the ballbot. If no person is detected when the tilt angle of the LiDAR is 0°, the ballbot's turret LiDAR tilts down to 90° slowly (at 0.15 rad/s) to scan that area vertically. This process returns a group of segments with the same (x, y) but varied z, each detected as either a human's head or upper body. We use (x̄_k, ȳ_k) as the tracked real-time 2D coordinates of the initialized person's position, and the time-invariant value z̄ as the smallest height of the segment classified as the head of that person (near his/her neck).
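The constant-velocity Kalman tracker described above can be sketched as follows. The paper follows [13] but gives no numeric values, so the noise covariances Q and R, the initial covariance, and the class structure are illustrative assumptions.

```python
import numpy as np

class HumanTracker:
    """Linear Kalman filter over the state [x, y, vx, vy] under a
    constant-velocity model, fed by LiDAR cluster centroids.
    Q, R, and the initial covariance are illustrative assumptions."""

    def __init__(self, dt=0.1, q=0.05, r=0.02):
        # Constant-velocity transition: position advances by velocity * dt.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        # We observe only the cluster centroid (x, y).
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        """One predict-update cycle given a measured centroid z = (x, y)."""
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update
        innov = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

Running the predict step without an update gives the short-horizon trajectory prediction used later in the pipeline.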
3) 3-dimensional human following: If the person is shorter than the LiDAR, and the LiDAR stays at a fixed tilt angle, undetectable zones such as position 3 in Fig. 3 can still exist. Thus, besides rotating the pan joint to follow the person's footprint in the 2D plane, the tilt angle must be adjusted in real time to follow the height of the scanned segment on the person. At the kth time, the pan angle θp, the tilt angle θt, and the 3D coordinate (x̄, ȳ, z̄) of the segment that the turret should follow have the geometric relationships:

θp(x̄, ȳ) = arctan(x̄/ȳ),  (1)

θt(x̄, ȳ, z̄) = { 0, if z̄ ≥ hb;  θ1 + θ2, if z̄ < hb },  (2)

where
θ1 = arctan(wt/lt) + arcsin( (√(wt² + lt²) / √(d² + (ht − z̄)²)) sin θ1 ),
θ2 = arctan(lt/wt) + arctan( d/(ht − z̄) ),
and d = √(x̄² + ȳ²) − wb.

The quantities wb, wt, lt, ht are constant geometry parameters of the ballbot and LiDAR tilt, respectively, as shown in Fig. 3. According to Eq. (2), once the tracking height z̄ is determined after initialization, the tilt angle can be tuned to maintain the height of the scanned segment on the led person. In this way, the LiDAR does not lose sight of the led person if he/she moves toward or away from the ballbot. Taking the scenario in Fig. 3 as an example, the 3D human following system operates as follows: at the kth time, the person to follow is standing at position 2, and the tilt angle is set to θt^k(x̄_k, ȳ_k, z̄) based on Eq. (2). If the person moves to position 3 at the (k+1)th time, to keep the height of the scanned segment at z̄, the tilt angle is updated to θt^{k+1}(x̄_{k+1}, ȳ_{k+1}, z̄).

4) Human trajectory prediction: For the tracked person, a prediction of the person's trajectory over the next T seconds is obtained using the Kalman filter. The prediction of the state variable x_{k+1} at the (k+1)th time is determined only from x_k via

x_{k+1} = F_{k+1} x_k + w_{k+1},  (3)

where w_{k+1} ~ N(0, Q_{k+1}). Only the next 3 s of the human trajectory are predicted (to be compatible with Sec. III-A).
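The pan/tilt following rule of Eqs. (1)–(2) can be sketched as below. This is a simplification, stated as an assumption: the exact Eq. (2) accounts for the small LiDAR mount offsets (wt, lt), which this sketch ignores by tilting straight toward the tracked height z̄; the switching height is taken to be the LiDAR height ht = 1.68 m, and the offset wb is an assumed value.

```python
import math

H_T = 1.68   # LiDAR height above the floor at zero tilt [m] (from the paper)
W_B = 0.20   # assumed horizontal offset wb of the tilt axis [m]

def pan_angle(x, y):
    """Eq. (1): pan the turret toward the person's footprint.
    atan2 matches arctan(x/y) for points in front of the robot (y > 0)."""
    return math.atan2(x, y)

def tilt_angle(x, y, z):
    """Simplified Eq. (2): if the tracked segment height z is below the
    LiDAR, tilt down so the scan plane passes through it; otherwise scan
    level. The mount offsets wt, lt of the exact formula are ignored."""
    if z >= H_T:
        return 0.0
    d = math.hypot(x, y) - W_B   # horizontal distance d from the tilt axis
    return math.atan2(H_T - z, d)  # tilt-down angle from horizontal
```

As in the paper's example, when the person moves between positions the same z̄ is re-used, so only (x̄, ȳ) changes between calls and the scan plane stays on the led person.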
III. HUMAN-ROBOT CONFIGURATION PLANNING

A. Path planning for the ballbot: differentially flat navigation

The ballbot localizes itself using another 2D Hokuyo LiDAR mounted in its body. To derive dynamically feasible trajectories for the ballbot, a method of fast, analytic differentially flat trajectory generation has been developed [11]. A typical planning result is shown as the green path in Fig. 4a. As discussed in [11], to perform safe navigation maneuvers a trajectory replanner only considers the nearest 3 s trajectory segment (yellow path in Fig. 4a). This comprises 1.5 s of the desired trajectory to execute based on the updated ballbot localization, followed by another 1.5 s of trajectory as a backup, which allows the ballbot to come to a full stop if necessary.

B. The human-robot configuration

Ideally, for each of the ballbot's positions v_b = (x_b, y_b) ∈ R², a kinematically feasible obstacle-free position v_h = (x_h, y_h) for the human should be determined. Kinematic feasibility requires that the planned human position not be so far from the ballbot that the person cannot keep holding onto the ballbot's hand. To guarantee this, several quantities must be considered: the distance between human and robot, the human's configuration relative to the robot and the moving direction, and the ballbot's whole-body configuration. To obtain a simple but effective model, we treat the desired human position as the end-effector position of the ballbot. The desired human position in the fixed map frame is directly derived from the ballbot's forward kinematics:

[x_h, y_h, z_h, 1]ᵀ = [ [cθ_y, −sθ_y, 0, x_b], [sθ_y, cθ_y, 0, y_b], [0, 0, 1, 0], [0, 0, 0, 1] ] · T_b^h(θ) · [0, 0, 0, 1]ᵀ + [cθ_y Δx − sθ_y Δy, sθ_y Δx + cθ_y Δy, 0, 0]ᵀ,  (4)

Fig. 4: (a): The ballbot shown executing a trajectory among static obstacles (white boxes) alone. The desired path is shown in green. In blue is a re-planned path which the ballbot will execute, and in red is the backup path [11]; (b): Human-Robot Configuration.
The planned human position for the current trajectory segment is the green bounding cylinder. The human-robot quadrilateral is the yellow bounding box. The incoming ballbot trajectory segment is cyan; (c): Configuration planning from a room to the hallway, avoiding obstacles and passing through a narrow doorway.

In Eq. (4), c and s are short for cos and sin, θ = (θ1, θ2) ∈ S² is the 2-DOF revolute joint of the ballbot's arm that the human is holding, θ_y ∈ S¹ is the ballbot's body yaw joint, T_b^h(θ) is the transformation matrix from the ballbot's base frame (attached to the center of the ball's footprint) to the end of its hand, and Δx, Δy, both tuned to −0.25 m, are the offsets from the ballbot's hand to the desired human position in the ballbot's odometry frame.

To obtain a robust planning result, the Human-Robot Polygon (HR-PY, here a rectangle) that encloses both the human and the ballbot, shown in Fig. 4b, is considered instead of the human's position alone. The rectangle's half-width is set to 0.25 m as a safe buffer, and its length l_p and orientation θ_p are obtained via

l_p = ‖v_b − v_h‖ + 0.25 × 2,
θ_p = arctan( (v_b(2) − v_h(2)) / (v_b(1) − v_h(1)) ),  (5)

where 0.25 m is a buffer as well. Hence, the configuration space of the HR-PY is C = R² × S² × S¹, which is parameterized by the ballbot's configuration q_b = (x_b, y_b, θ1, θ2, θ_y). Because the human is very difficult to model in the leading task, the experiments described in Sec. V are needed to validate that the proposed model effectively assigns kinematically feasible positions for the human.

C. Path planning for the human: HRC planning

The ballbot is an omnidirectional single-wheeled mobile robot with a human-size footprint which can perform graceful motion. As a result, a person can fit anywhere the ballbot can go. To exploit this advantage and to reduce planning complexity, we plan the Human-Robot Configuration (HRC) at each of the ballbot's waypoints v_b = (x_b, y_b) derived from the trajectory planned in Sec. III-A.
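The geometry of the HR-PY in Eq. (5), together with the hand-offset term of Eq. (4), can be sketched as below. Note the assumptions: desired_human_position keeps only the yaw-rotated offsets (Δx, Δy) and drops the arm transformation T_b^h(θ), so it is a simplification of Eq. (4), and the function names are illustrative.

```python
import math

BUFFER = 0.25  # [m] safety buffer from the paper

def desired_human_position(xb, yb, yaw, dx=-0.25, dy=-0.25):
    """Simplified Eq. (4): rotate the hand-to-human offsets (dx, dy)
    by the body yaw and attach them to the ballbot position.
    The arm transformation T_b^h(theta) is omitted in this sketch."""
    c, s = math.cos(yaw), math.sin(yaw)
    return (xb + c * dx - s * dy, yb + s * dx + c * dy)

def hr_polygon(vb, vh):
    """Eq. (5): length and orientation of the human-robot rectangle
    enclosing the ballbot waypoint vb and the planned human position vh,
    with the buffer added at both ends."""
    lp = math.hypot(vb[0] - vh[0], vb[1] - vh[1]) + BUFFER * 2
    theta_p = math.atan2(vb[1] - vh[1], vb[0] - vh[0])
    return lp, theta_p
```

Evaluating hr_polygon at each planned waypoint yields the rectangle whose collision-freedom the HRC planner must check, rather than checking the human's point position alone.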
Moreover, to emulate the scenario where a hu
