




Multi-Sensor 6-DoF Localization For Aerial Robots In Complex GNSS-Denied Environments
J. L. Paneque, J. R. Martínez-de Dios and A. Ollero

Abstract—The need for robots autonomously navigating in more and more complex environments has motivated intense R…

…a testing scenario near Seville, see Fig. 3-right, namely Karting; and at a road bridge, see Fig. 3-left, namely Bridge. The robot position ground-truth in the Karting and ETSI scenarios was provided by the RTK-GPS, whereas in the Bridge scenario it was given by a Leica TotalStation.

The experiments were performed in a realistic way, similar to the envisioned Inspection and Maintenance (I&M) operation in AEROARMS. First, a multi-sensor map of the scenario was built in prior manually-assisted flights. Next, a number of flights in fully-autonomous navigation mode were performed, and the proposed method computed pose estimates in real time using that map.

Figure 4 shows the results obtained in one bridge inspection experiment performed in December 2018. The robot 3D localization obtained with our method is shown in blue, and the ground-truth localization in magenta. Only the geometrical component of the map is shown for better visualization.

Fig. 3. Pictures of the experiment scenarios: left) Karting and right) Bridge.

Fig. 4. Results in one bridge inspection experiment. The figure shows the geometrical map component (in green), the localization obtained by our method (in blue) and the ground-truth localization (in magenta).

We compared our method to two of the most widespread and publicly available localization techniques: LOAM (Lidar Odometry and Mapping) [14], based on LIDAR, and ORB-SLAM2 (Oriented FAST and Rotated BRIEF SLAM2) [2], based on camera. We used the latest available code releases of LOAM and ORB-SLAM2. The LOAM code was modified to properly use all 32 channels of our LIDAR. ORB-SLAM2 was used in stereo mode. For a fairer comparison, the optional sensors (IMU, altimeter and UWB sensors) were not integrated in our method. LOAM, ORB-SLAM2 and our method were all initialized with the take-off pose of each flight. ORB-SLAM2 was used in localization mode with a precomputed map. The measurements were logged and processed off-line with the three methods. Camera images were logged at 60 Hz and LIDAR measurements at 10 Hz.

Figure 5 shows the 3D localization errors versus ground-truth in one Karting experiment. The errors were generally low for all methods in the X axis, but LOAM gave worse results during the beginning of the flight due to the lack of rich geometrical features when flying at low altitude after take-off. LOAM and ORB-SLAM2 were not accurate during mid-flight, since the robot was flying parallel to a pipe to perform inspection: the multi-modal solutions caused by symmetries in the scenario made them lose accuracy. For ORB-SLAM2, the Z estimation is best when the robot is near the ground at the end of the flight (since there are rich visual features on the floor), whereas LOAM gives better results at higher altitudes. Our method provided low-variability errors along the full flight. The results obtained in the rest of the experiments supported similar conclusions.

Table I shows the absolute translation RMS errors (trms) and maximum errors (tmax) in several experiments performed in the three scenarios. All were performed in exactly the same conditions as stated above.
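For reference, absolute translation errors of this kind can be reproduced from logged trajectories with a short script. The sketch below (Python with numpy) is only an illustration under stated assumptions, not the evaluation code used in the paper: the function name translation_errors is hypothetical, the estimated and ground-truth trajectories are assumed to be timestamped arrays in a common frame (as when all methods are initialized with the take-off pose), and poses are associated by nearest timestamp.

```python
import numpy as np

def translation_errors(est_t, est_xyz, gt_t, gt_xyz, max_dt=0.05):
    """Absolute translation errors between an estimated trajectory and ground truth.

    est_t, gt_t    : (N,), (M,) timestamps in seconds
    est_xyz, gt_xyz: (N, 3), (M, 3) positions in a common frame
    max_dt         : maximum timestamp difference (s) accepted for an association
    Returns (t_rms, t_max) in the same units as the positions.
    """
    # Associate each estimated pose with the closest ground-truth pose in time.
    idx = np.searchsorted(gt_t, est_t)
    idx = np.clip(idx, 1, len(gt_t) - 1)
    prev_closer = np.abs(gt_t[idx - 1] - est_t) < np.abs(gt_t[idx] - est_t)
    idx[prev_closer] -= 1
    valid = np.abs(gt_t[idx] - est_t) <= max_dt

    # Per-pose Euclidean translation error, then RMS and maximum values.
    err = np.linalg.norm(est_xyz[valid] - gt_xyz[idx[valid]], axis=1)
    t_rms = float(np.sqrt(np.mean(err ** 2)))
    t_max = float(np.max(err))
    return t_rms, t_max
```

The nearest-timestamp association and the 0.05 s gate are assumptions chosen to match the 10-60 Hz logging rates mentioned above; a different association or an additional trajectory alignment step would change the numbers.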
TABLE I
COMPARISON WITH OTHER METHODS IN DIFFERENT SCENARIOS

            ORB-SLAM2 [2]       LOAM [14]           Our method
Experiment  trms (m)  tmax (m)  trms (m)  tmax (m)  trms (m)  tmax (m)
Karting1    0.56      1.89      0.62      1.24      0.13      0.28
Karting2    0.18      0.72      0.31      0.85      0.11      0.19
Karting3    0.51      0.97      0.40      1.12      0.21      0.26
ETSI1       0.69      1.34      1.31      2.08      0.23      0.34
ETSI2       0.50      0.81      0.74      1.35      0.13      0.21
ETSI3       0.23      0.42      0.41      0.70      0.11      0.19
Bridge1     1.68      3.12      0.49      0.78      0.14      0.23
Bridge2     1.19      2.41      0.53      0.91      0.18      0.25
Bridge3     0.84      1.73      0.34      0.69      0.21      0.32

Fig. 5. 3D localization errors in the experiment in Fig. 4 obtained by the proposed method, LOAM and ORB-SLAM2 (X, Y and Z errors in m versus time in s).

While LOAM and ORB-SLAM2 tend to give accurate results in many experiments, at different times during the experiments they were affected by the lack of features and by different scenario symmetries. LOAM had most problems when landing and taking off, and when flying near the bridge and the pipe to perform inspection. ORB-SLAM2 had most problems when visual features were very far from the robot, which is a common situation in large scenarios such as the Bridge experiments. These situations affect their overall accuracy, but mostly their error variability. Our method takes advantage of the synergies of both sensors, which, in combination with the multi-hypothesis framework, leads to a significantly less variable solution. Also, our method assumes that a map with rich information is available and focuses only on the optimization of the robot pose.

C. Analysis

Our method uses visual and LIDAR features and can also integrate measurements from other frequently-used sensors if available. This section briefly describes how integrating different sensors influences the resulting accuracy and the computational burden. The implementation is analysed with four sensor sets: S1) camera+IMU+altimeter, S2) LIDAR+IMU+altimeter, S3) LIDAR+camera and S4) LIDAR+camera+IMU+altimeter+UWB. Table II compares their performance, focusing on the mean error and the computational time (in ms) spent in the Update stage (ut), which concentrates most of the burden of our method.

TABLE II
PERFORMANCE OF THE METHOD WITH DIFFERENT SENSOR SETUPS

            S1                 S2                 S3                 S4
Experiment  trms (m)  ut (ms)  trms (m)  ut (ms)  trms (m)  ut (ms)  trms (m)  ut (ms)
Karting1    0.39      6        0.34      48       0.13      55       0.09      27
Karting2    0.22      7        0.15      51       0.11      69       0.06      31
Karting3    0.34      6        0.35      44       0.21      61       0.12      23
ETSI1       0.46      8        0.62      65       0.23      73       0.11      28
ETSI2       0.37      6        0.51      61       0.13      91       0.09      42
ETSI3       0.18      7        0.24      78       0.11      85       0.06      33
Bridge1     1.36      11       0.18      81       0.14      157      0.11      59
Bridge2     1.27      9        0.43      89       0.18      175      0.12      65
Bridge3     0.72      12       0.29      78       0.21      147      0.08      56

The best accuracy is obtained with S3 and S4, which exploit the combination of LIDAR and camera features. On the other hand, the computational burden is significantly higher in configurations S2-S4 than in S1. This is due to the presence of 3D LIDAR features, whose iterative matching against the map is computationally expensive. The burden is alleviated in S2 by using an altimeter, and in S4 by using all the optional sensors: altimeter, IMU and UWB. S3 and S4 have similar errors, but integrating the optional sensors significantly reduces the burden in S4. These measurements are used in Step 1 of the Prediction stage to reject hypotheses with low likelihood and save burden in Stage 2. In the Bridge experiments, which have the largest and most varied map, the Update stage of S4 takes around 60 ms at most. Even in these cases, this is fast enough for our sensors' frame rates.
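As an illustration of this kind of likelihood-based gating, the sketch below shows how a cheap measurement such as an altimeter reading could be used to discard unlikely pose hypotheses before the expensive LIDAR/camera matching. It is a simplified stand-in under stated assumptions, not the paper's implementation: the Hypothesis structure, the function name prune_by_altimeter, the Gaussian measurement model, the noise value sigma and the relative-likelihood threshold are all assumptions, and each hypothesis is reduced to a 3D position plus a weight.

```python
import math
from dataclasses import dataclass

@dataclass
class Hypothesis:
    xyz: tuple          # hypothesised robot position (x, y, z) in the map frame
    weight: float       # accumulated likelihood of the hypothesis

def prune_by_altimeter(hypotheses, measured_alt, sigma=0.3, min_rel_likelihood=1e-3):
    """Keep only hypotheses whose predicted altitude is consistent with the altimeter.

    A Gaussian measurement model p(z | h) = N(z; h.z, sigma^2) is assumed here;
    hypotheses whose likelihood falls below a fraction of the best one are dropped,
    so the costly feature-matching update only runs on the survivors.
    """
    likelihoods = []
    for h in hypotheses:
        residual = measured_alt - h.xyz[2]
        likelihoods.append(math.exp(-0.5 * (residual / sigma) ** 2))

    best = max(likelihoods)
    survivors = []
    for h, lik in zip(hypotheses, likelihoods):
        if lik >= min_rel_likelihood * best:
            # Fold the measurement likelihood into the hypothesis weight.
            survivors.append(Hypothesis(h.xyz, h.weight * lik))
    return survivors
```

The same gating idea extends to UWB ranges or IMU-predicted attitude; in every case the aim is to spend the Update-stage budget only on hypotheses that remain plausible.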
Reducing the number of hypotheses or the number of employed features can reduce the Update time further if necessary, at the cost of accuracy.

VI. CONCLUSIONS

This work is motivated by aerial robots that need robust and accurate pose estimates for safe autonomous navigation in complex GNSS-denied industrial and urban scenarios. This paper presents a robust multi-sensor, multi-hypothesis localization method. It is based on three main ideas. First, it integrates camera and LIDAR features in the same statistical framework, benefiting from their synergies and improving robustness and accuracy in scenarios with low or varying densities of features. Second, to cope with the potentially strong symmetries in the scenarios, it adopts a multi-hypothesis approach where the different hypotheses are updated using the consistency between the gathered measurements and a pre-existing multi-sensor map. Third, its computational burden has been carefully addressed so that it operates in real time, using feature and hypothesis filtering, efficient hypothesis refinement, and codification in a multi-core implementation. Like many other robustness-driven methods, it assumes that a map of the scenario is available. This assumption is valid in the envisioned I&M applications, in which many flights are performed in the same moderately-changing scenario.

The proposed method has been compared to other well-known techniques and validated for closed-loop aerial robot navigation in three different urban and industrial scenarios. The integration of next-generation 3D solid-state LIDARs, with higher scan rates but significantly narrower fields of view, opens interesting challenges to be researched. Also, the extension of the method to consider semantic information is expected to provide additional robustness. These topics are the object of ongoing research.

ACKNOWLEDGMENTS

This work was performed in the EU project AEROARMS (H2020-ICT-2014-1-644271) and in ARM-EXTEND, funded by the Spanish R&D plan (DPI2017-89790-R). The research of J. L. Paneque is supported by the Spanish Ministerio de Educación y Formación Profesional FPU Program.

REFERENCES

[1] A. Ollero et al., "The AEROARMS project: Aerial robots with advanced manipulation capabilities for inspection and maintenance," IEEE Robotics and Automation Magazine, vol. 25, no. 4, 2018.
[2] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras," IEEE Transactions on Robotics, 2017.
[3] S. Kohlbrecher, J. Meyer, O. von Stryk, and U. Klingauf, "A flexible and scalable SLAM system with full 3D motion estimation," in IEEE Intl. Symp. on Safety, Security and Rescue Robotics (SSRR), 2011.
[4] A. Pumarola, A. Vakhitov, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer, "PL-SLAM: Real-time monocular visual SLAM with points and lines," in IEEE ICRA, 2017.
[5] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM," in European Conf. on Computer Vision (ECCV), 2014.
[6] L. Kneip, M. Chli, and R. Siegwart, "Robust Real-Time Visual Odometry with a Single Camera and an IMU," in BMVC, 2011.
[7] K. Kapach and Y. Edan, "Evaluation of grid-map sensor fusion mapping algorithms," in IEEE Intl. Conf. on Systems, Man and Cybernetics, 2007.
[8] G. Nützi, S. Weiss, D. Scaramuzza, and R. Siegwart, "Fusion of IMU and vision for absolute scale estimation in monocular SLAM," Journal of Intelligent and Robotic Systems, vol. 61, 2011.
[9] J. Zhang, M. Kaess, and S. Singh, "Real-time depth enhanced monocular odometry," in IEEE IROS, 2014.
[10] J. Zhang and S. Singh, "Visual-lidar odometry and mapping: Low-drift, robust, and fast," in IEEE ICRA, 2015.
[11] I. Cvišić, J. Ćesić, I. Marković, and I. Petrović, "SOFT-SLAM: Computationally Efficient Stereo Visual SLAM for Autonomous UAVs," Journal of Field Robotics, 2017.
[12] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry," in IEEE ICRA, 2014.
[13] J. R. Martinez-de Dios, A. Torres-Gonzalez, J. L. Paneque, D. Fuego-Garcia, J. R. A. Ramirez, and A. Ollero, …