




Collaborative Human-Augmented SLAM

Abbas Sidaoui, Imad H. Elhajj, Daniel Asmar

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019.

A. Sidaoui and I. Elhajj are with the Electrical and Computer Engineering Department, Maroun Semaan Faculty of Engineering and Architecture, American University of Beirut, 1107 2020 Riad El Solh, Beirut, Lebanon (e-mail: ams108@mail.aub.edu; ie05@aub.edu.lb). D. Asmar is with the Mechanical Engineering Department of the same faculty (e-mail: da20@aub.edu.lb).

Abstract— In this paper we propose a collaborative SLAM system for a team of three heterogeneous agents: a robot, a human operator, and an augmented-reality head-mounted display (AR-HMD). The system allows for online editing of a map produced by a robot running SLAM: through hand gestures, the user can edit in real time the robot's map, which is augmented on top of the physical environment. Moreover, the proposed system leverages the built-in SLAM capabilities of the AR-HMD to correct the robot's map and to map areas that are not yet discovered by the robot. Our method aims to combine the unique and complementary capabilities of each of the three agents to produce the maximum possible mapping accuracy in the minimum amount of time. The proposed system is implemented on ROS and Unity. The experiments performed demonstrate considerably superior SLAM outputs in terms of reducing mapping time, eliminating map post-processing, and increasing mapping accuracy.

I. INTRODUCTION

After more than two decades of research, Simultaneous Localization and Mapping (SLAM) has become today the prime algorithm for localizing an agent in indoor environments and in many outdoor environments devoid of GPS. On the other hand, SLAM has not been as successful on the mapping side: the maps it produces are relatively poor and serve little more than to satisfy the coupled localization problem. The errors in most SLAM solutions can be attributed to errors in the range and bearing sensors used, to the motion and observation models, or to the inability of a sensor to represent the environment accurately [1]. For example, if a 2D LiDAR is used in SLAM, it will not detect transparent objects or obstacles lower or higher than its detection plane (e.g., an overhanging ledge or a hole in the ground), so these objects will not be registered as obstacles in the map. Although the resulting maps are considered accurate from the robot's point of view, errors such as these could have devastating effects on the robot during navigation.

To mitigate this issue, most maps today are post-processed: a long and dull process, which is difficult to do after the navigation is complete. Any map edits usually require several trips to the mapped site for the sake of re-measuring and correcting dimensions, as well as marking forbidden areas that were not flagged by the robot during navigation. A much better approach would be the ability to correct the maps in situ and in real time, in tandem with the robot. The idea we are proposing in this paper is just that: to use different agents working together to produce accurate maps during SLAM.

Through collaborative mapping, global maps can be built by merging sub-maps built by different agents [2], or by directly fusing sensory data from these agents [3]. Moreover, one agent can localize itself in a map built by another [4]. In all cases, better mapping accuracy is achieved through redundancy, or through the fact that one agent can depend on the complementary, beneficial properties of the other. Recent advances in Augmented Reality (AR) have widened the human-robot interaction (HRI) and human-in-the-loop scenarios, where the operator can program [5], control [6], or collaborate with a robot to enhance its performance [1].
Moreover, the latest AR head-mounted displays, such as the Microsoft HoloLens, have built-in visual SLAM algorithms that perform spatial mapping to produce 3D meshes of the operating environments [7]. Using an AR-HMD in collaborative SLAM allows for intuitive human interaction with the robot and for robot-HMD-human collaborative SLAM. Moreover, it enhances the performance of the entire system by including a human in the loop to supervise and assist in the map-building procedure.

The strength of our proposed work lies in combining the unique and complementary traits of each agent to yield a highly accurate map in the minimum amount of time. Our system allows a user wearing an AR-HMD to view the map produced by a robot and to edit it in real time through an intuitive interface. Moreover, based on the user's request, the 3D mesh produced by the AR-HMD is used to correct the robot's map or to map areas that are not yet covered by the robot. The final global map is produced on demand and in real time by merging the robot's map, the map produced from the AR-HMD, and the user edits. This global map is augmented in real time on the physical environment through the AR-HMD, and it is used at the same time by the robot as a cost map for autonomous navigation.

[Figure 1 (panels a-f): Sample demonstration of the proposed collaborative SLAM system.]

Fig. 1 shows a sample demonstration of the proposed system. Fig. 1a shows the operator wearing an AR-HMD and, near him, the robot performing SLAM. The occupancy grid map being produced by the robot is rendered in the user's view through a Microsoft HoloLens (Fig. 1b), where the table, which is higher than the robot's LiDAR plane, is not represented in the map. Fig. 1c shows the operator adding the cells occupied by the table, where white cells represent occupied areas, and Fig. 1d shows the boundaries of the table added by the user. Fig. 1e shows how the table is detected by the HoloLens and the created 3D mesh. Finally, Fig. 1f shows the corresponding projection of this table merged into the occupancy map.

The contributions of this work are: (1) a fully integrated, online AR system that can be used to apply human-robot-HMD collaborative mapping to any SLAM algorithm that uses grid-based cost maps for autonomous navigation; (2) the modeling of a virtual AR sensor that is used to produce 2D maps from the mesh built by an AR-HMD; and (3) an online solution that allows merging of maps produced by a robot, by an HMD, and by a human operator.

The remainder of the paper is organized as follows: Section II presents a brief literature review, Section III presents our proposed methodology, and Section IV presents how the system is implemented. Experiments and results are presented in Section V, and finally Section VI concludes the paper.

II. RELATED WORK

A. Collaborative SLAM

The idea of collaborative SLAM is not new. Schmuck and Chli [2] presented a collaborative keyframe-based SLAM architecture consisting of multiple UAV agents. In their work, each agent performs short-memory SLAM, while a central station takes care of fusing sub-maps into a global, optimized map and sending it back to the agents. However, since the visual odometry in the system relies on monocular cameras, the system is vulnerable to sudden changes in scene depth or lighting conditions.
Dubé et al. [3] proposed an online pose-graph SLAM for multiple mobile robots using 3D LiDARs. Incremental pose-graph optimization is performed using sequential constraints and 3D segment matching. This centralized approach is based on a master agent that receives odometry and scan-matching factors from the robots in order to perform pose-graph optimization. Since these methods adopt a centralized approach for map merging, any connectivity loss will cause major errors in the SLAM estimates. Surmann et al. [8] proposed a collaborative SLAM method that merges point clouds from a camera and a rotating 2D LiDAR in order to localize a mobile robot in a map produced by a UAV; however, the UAV is dependent on GPS for scan registration and localization. Fankhauser et al. [4] localized a legged robot in a map produced by a UAV. The problem with such approaches is that one agent will fail to localize when navigating in an area that is not yet mapped by the second agent. In our system, delays and short communication losses are not a limitation, since each agent runs SLAM on its own. Moreover, the proposed system benefits from one agent (e.g., the AR-HMD) to correct the mapping errors of the other agent (e.g., the robot), under the supervision of a human operator.

B. Augmented Reality in Human-Robot Interaction

Integrating AR technology in human-robot interaction is relatively new but is gaining interest with the advancement of AR hardware. AR-HMDs allow the human operator to see the real robot and the environment while visual information is augmented on top of the physical surroundings, letting the human interact intuitively and effectively with the robot. Recently, Krupke et al. [6] used a Microsoft HoloLens to control a robotic arm in a pick-and-place application: the users select the objects to be picked and the location where they are to be placed, virtually observe the planned motions, and confirm them before the robot starts moving. Their results demonstrated that using the user's heading to select an object is less demanding, more accurate, and requires less time than using their fingers to point in mid-air. AR was also used by Liu et al. [9] to teach a robot how to open bottles: via a virtual gripper, the user is shown the forces exerted by the robot as well as the execution plan. Using an AR-HMD to program robots and view their plans was presented in [10], [5], and [11]. In the context of navigation, Zolotas et al. [12] proposed an AR system to help users control their wheelchairs, providing them with a rear-view display, a predicted path of the wheelchair augmented on the ground, circles highlighting the obstacles present in the predicted path, and arrows that show the user and controller commands. Finally, Reardon et al. [13] proposed the use of AR in collaborative search-and-rescue missions; their system consists of a robot running SLAM and communicating with a HoloLens to display paths for the operator once the robot finds its goal.

Our system makes use of an AR-HMD to (1) visualize the map created by a robot performing SLAM, aligned with the physical environment, (2) evaluate the map's correctness, and (3) edit the map in real time through intuitive gestures. Moreover, our proposed system benefits from the visual SLAM capabilities of the HoloLens to collaborate with the robot in map building.

C. Human-Augmented Mapping

Although maintaining full autonomy in robotic tasks is the ultimate goal for researchers and designers, achieving fully autonomous mapping and effectively exploring an unknown environment is still a non-trivial task [14].
To increase mapping accuracy, Topp and Christensen [15] introduced the concept of Human-Augmented Mapping (HAM). They demonstrated, along with [16] and [17], that allowing the human operator to guide the robot and add semantic information to the map increases the overall accuracy of SLAM. To correct scan alignments when using 3D scanners, Vieira and Ventura [18] applied virtual forces exerted by a user through a GUI; this method cannot build complete maps, since it lacks localization. In our previous work [19], we proposed a real-time interactive system that allows a human to correct SLAM maps. Results of this system led us to improve it and develop a second human-in-the-loop Augmented SLAM system (A-SLAM) [1], in which the operator can correct the robot's pose estimates when its uncertainty increases. Moreover, in that work we implemented our AR application on a HoloLens, where the users could view and edit the robot's map superposed onto the real environment. This paper extends our previous work by developing and implementing an algorithm to merge, in real time, heterogeneous maps (3D, 2D, human-augmented) generated by different sources (AR-HMD, robot, human). This includes an algorithm to convert a 3D mesh constructed by the AR-HMD into a 2D occupancy grid.

III. SYSTEM OVERVIEW

This section presents the system overview and the methodology we propose for each module of our human-HMD-robot collaborative mapping. Fig. 2 shows the operation flowchart of our system. A human operator is equipped with an AR-HMD while a robot performs SLAM inside the same area and produces a 2D occupancy-grid cost map for navigation. For the augmented map to be aligned with the physical environment, the operator has to initialize its position by looking at an AR marker placed on the robot; this is a one-time task at startup (a sketch of this alignment is given after this overview). Re-initializing the augmented map is required whenever the AR-HMD detects that it has lost tracking, or upon the user's demand; re-initialization can be done manually or through the AR marker.

The users can import the updated map from the robot at any time, and the augmented map will be updated on the real environment through the AR-HMD. If any part of the map does not match the physical environment, or if the users wish to add parts of the map that have not yet been traversed by the robot, they can manually edit the map. The users can activate the AR-HMD auto-mapping feature if they find it difficult to draw the boundaries of an obstacle, or if they want the AR-HMD to perform the map correction/completion automatically. In that case, a part of the 3D mesh created by the AR-HMD is projected in 2D for the purpose of correcting/completing the initial map. The user's heading determines what part of the mesh is used, and manual edits can be performed even while HMD auto-mapping is activated. The augmented map is updated in real time whenever the AR-HMD mapping is activated, the user performs edits, or the updated cost map is imported from the robot. The augmented map can be sent to the robot at any time, where it is merged with the current SLAM map to update the cost map for path planning and autonomous navigation. When requesting the map again, the users receive the new, updated cost map.

[Figure 2: Operation flowchart. After initialization, the loop covers importing the robot's map, showing the augmented map, building/updating the AR-HMD map, applying user edits, re-initializing on tracking loss, sending the augmented map to the robot, and updating the robot's cost map.]
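As an illustration of the initialization step, the following is a minimal Python sketch, not the authors' implementation (their stack is ROS/Unity): since the marker is mounted on the robot and the robot knows its own pose in the map, a single marker detection fixes the HMD pose in the map frame. The 4x4 homogeneous-matrix representation and the names T_map_marker and T_hmd_marker are our assumptions.

```python
import numpy as np

def align_hmd_to_map(T_map_marker: np.ndarray,
                     T_hmd_marker: np.ndarray) -> np.ndarray:
    # T_map_marker: marker pose in the robot's map frame (known,
    # because the marker sits on the robot and the robot localizes
    # itself through SLAM).
    # T_hmd_marker: marker pose as detected by the AR-HMD.
    # Returns T_map_hmd, the HMD pose in the map frame:
    # T_map_hmd = T_map_marker * inv(T_hmd_marker).
    return T_map_marker @ np.linalg.inv(T_hmd_marker)

# Hypothetical example: marker 0.3 m above the map origin,
# detected 1 m in front of the HMD.
T_map_marker = np.eye(4); T_map_marker[2, 3] = 0.3
T_hmd_marker = np.eye(4); T_hmd_marker[0, 3] = 1.0
print(align_hmd_to_map(T_map_marker, T_hmd_marker))
```

Once T_map_hmd is known, every HMD pose and mesh vertex can be expressed in the robot's map frame, which is what the modules below assume.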
Fig. 3 illustrates the high-level architecture of our proposed system. Below is a detailed description of the three main modules.

[Figure 3: System architecture. The robot runs SLAM, the cost-map merger, and the path planner; the AR-HMD runs visual SLAM, the AR-HMD map builder, and the AR map merger; the human contributes perception and input through heading and gestures.]

A. AR HMD Map Builder

This module produces an AR-HMD occupancy grid map in two steps: (1) raycasting, to detect the obstacles, and (2) map updating, to build and update the AR-HMD occupancy grid map. The produced AR-HMD map has the same size and resolution as the robot's map, and its frame is aligned with the robot's map frame, so both maps share the same indexes i and j. We assume that the AR-HMD device used is able to create a 3D mesh or point cloud of the environment, and that the relative transformation between the robot's map frame and the AR-HMD frame is known. Therefore, the pose of the human operator and the heading vector in the robot's map frame are known in real time. All poses and distances below are calculated in the map's frame.

Raycasting is a well-known technique used to detect the heading of and distance to obstacles. We consider the AR-HMD as a virtual 3D LiDAR that sends a series of rays that originate from the HMD and have defined directions. Our raycasting approach is presented in Algorithm 1 and illustrated in Fig. 4. Each set of rays has a frame that is rotated by an angle $\alpha$ around the vertical axis of the AR-HMD; within each set, every ray's direction is then obtained by rotating the set's frame by an angle $\beta$. The direction of every ray is calculated through the function RayDirection, which composes the two rotations applied to the user's heading vector $\hat{\mathbf{h}}$:

$$\mathbf{d}(\alpha,\beta) = R_v(\alpha)\, R_h(\beta)\, \hat{\mathbf{h}} \tag{1}$$

$$R_v(\alpha) = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{2}$$

$$R_h(\beta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & -\sin\beta \\ 0 & \sin\beta & \cos\beta \end{bmatrix} \tag{3}$$

Each ray is then cast through the function CastRays, which returns a true value in the flag hit, together with the hit point, if the ray intersects the 3D mesh. The horizontal distance d between the HMD position and the hit point is then calculated; if this distance indicates that the detected surface is not ground, the hit point is added to the obstacles list O_L. Algorithm 1 thus iterates over the yaw and pitch angles, computes each ray's direction with RayDirection, casts it with CastRays, and records every non-ground hit as an obstacle. Each cell c of the AR-HMD map that contains an obstacle point is then updated as occupied, while cells covered only by rays with no obstacle detection are updated as free. Rays are cast only within a window centered on the user's current heading, whose center is defined by the heading angles for yaw and pitch.
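A minimal Python sketch of this virtual-LiDAR loop follows. The `cast_ray(origin, direction)` helper stands in for the HMD's mesh-intersection query (CastRays in the paper); the angle window, angular step, and the simple hit-height ground test are illustrative stand-ins rather than the paper's exact parameters.

```python
import numpy as np

def ray_direction(alpha, beta, heading):
    """Eqs. (1)-(3): rotate the heading vector by yaw alpha about the
    vertical axis, then by pitch beta."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    R_v = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    R_h = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
    return R_v @ R_h @ heading

def build_hmd_map(origin, heading, cast_ray, grid, resolution,
                  half_window=np.deg2rad(30), step=np.deg2rad(2),
                  ground_z=0.2):
    """Cast a fan of rays around the user's heading and mark grid
    cells that contain non-ground hit points as occupied (1).
    cast_ray(origin, direction) -> 3D hit point, or None on no hit."""
    obstacles = []
    for alpha in np.arange(-half_window, half_window, step):
        for beta in np.arange(-half_window, half_window, step):
            p = cast_ray(origin, ray_direction(alpha, beta, heading))
            if p is None:
                continue
            if p[2] > ground_z:  # illustrative non-ground test
                obstacles.append(p)
                i, j = int(p[0] / resolution), int(p[1] / resolution)
                if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                    grid[i, j] = 1  # occupied
    return obstacles
```

The per-cell occupied/free values produced this way are exactly the "AR-HMD map" inputs consumed by the merger rules described next.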
The AR Map Merger handles merging the maps from the robot and from the AR-HMD Map Builder, and applies the human edits to the merged map, following the decision rules in Table 1. It also sends the merged augmented map to the robot, where it is combined with the robot's own map according to the rules in Table 2, and to the user interface (UI) to be visualized. The UI allows the operator to interact with the SLAM process and correct the map intuitively through heading-based interaction: a virtual ray is cast from the HoloLens position in the heading direction, so the user's gaze can be determined whenever this ray intersects any virtual object or any part of the 3D mesh built by the HoloLens. Through our UI, (1) the augmented map is rendered onto the physical world (see Fig. 1), and (2) menu items (see Fig. 6) are placed to the left of the user. The menu allows the user to import/send augmented maps.

TABLE 1. DECISION RULES FOR THE AR MAP MERGER ALGORITHM
(inputs: current merged map, human edits, AR-HMD map; output: new merged map)

Merged map  Human edits  AR-HMD map  Output
Occupied    Free         Free        Free
Occupied    Free         Occupied    Free
Occupied    Free         Unknown     Free
Occupied    No edits     Free        Occupied
Occupied    No edits     Occupied    Occupied
Occupied    No edits     Unknown     Occupied
Free        Occupied     Free        Occupied
Free        Occupied     Occupied    Occupied
Free        Occupied     Unknown     Occupied
Free        No edits     Free        Free
Free        No edits     Occupied    Occupied
Free        No edits     Unknown     Free
Unknown     Occupied     Free        Occupied
Unknown     Occupied     Occupied    Occupied
Unknown     Occupied     Unknown     Occupied
Unknown     Free         Free        Free
Unknown     Free         Occupied    Occupied
Unknown     Free         Unknown     Free
Unknown     No edits     Free        Free
Unknown     No edits     Occupied    Occupied
Unknown     No edits     Unknown     Unknown

TABLE 2. DECISION RULES FOR THE COST-MAP MERGER ALGORITHM
(inputs: merged map, robot's map; output: robot's cost map)

Merged map  Robot's map  Output
Free        Free         Free
Free        Occupied     Occupied
Free        Unknown      Free
Occupied    Free         Occupied
Occupied    Occupied     Occupied
Occupied    Unknown      Occupied
Unknown     Free         Free
Unknown     Occupied     Occupied
Unknown     Unknown      Unknown
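The decision rules of Tables 1 and 2 reduce to a few comparisons per cell. The following Python sketch encodes them directly; the integer state encoding and function names are our own, but the rule outcomes follow the tables.

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, -1
NO_EDIT = None  # the human left this cell untouched

def ar_merge(merged, edit, hmd):
    """Table 1: per-cell AR map merger (runs on the HMD side)."""
    if edit == FREE:
        # A human 'free' edit wins, except that freeing an unknown
        # cell yields to a current HMD obstacle detection.
        return OCCUPIED if (merged == UNKNOWN and hmd == OCCUPIED) else FREE
    if edit == OCCUPIED:
        return OCCUPIED
    # No human edit: keep occupied cells, adopt HMD obstacles,
    # otherwise prefer any 'free' evidence over 'unknown'.
    if merged == OCCUPIED or hmd == OCCUPIED:
        return OCCUPIED
    if merged == FREE or hmd == FREE:
        return FREE
    return UNKNOWN

def costmap_merge(merged, robot):
    """Table 2: per-cell cost-map merger (runs on the robot side);
    'occupied' always dominates."""
    if merged == OCCUPIED or robot == OCCUPIED:
        return OCCUPIED
    if merged == FREE or robot == FREE:
        return FREE
    return UNKNOWN

assert ar_merge(OCCUPIED, FREE, OCCUPIED) == FREE        # human frees a wall
assert ar_merge(UNKNOWN, NO_EDIT, OCCUPIED) == OCCUPIED  # HMD finds a table
```

In effect, an explicit human edit dominates the AR-HMD reading (except that a 'free' edit on an unknown cell yields to a fresh HMD obstacle detection), while on the robot side 'occupied' always dominates, the conservative choice for navigation.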