Sparse-3D Lidar Outdoor Map-Based Autonomous Vehicle Localization

Syed Zeeshan Ahmed, Vincensius Billy Saputra, Saurab Verma, Member, IEEE, Kun Zhang and Albertus Hendrawan Adiwahono

(The first two authors contributed equally to this work. The authors are with the Department of Robotics and Automation Systems, Institute of Infocomm Research, A*STAR, Singapore 138632.)

Abstract: Difficulties in capturing unique structures in the outdoor environment hinder map-based Autonomous Vehicle (AV) localization performance. Accordingly, this necessitates the use of high-resolution sensors to capture more information from the environment. However, this approach is costly and limits the mass deployment of AVs. To overcome this drawback, in this paper we propose a novel outdoor map-based localization method for Autonomous Vehicles in urban environments using sparse 3D lidar scan data. In the proposed method, a Point-to-Distribution (P2D) formulation of the Normal Distributions Transform (NDT) approach is applied in a Monte Carlo Localization (MCL) framework. The formulation improves the measurement model of localization by taking individual lidar point measurements into consideration. Additionally, to apply the localization to scalable outdoor environments, a flexible and efficient map structure is implemented. The experimental results indicate that the proposed approach significantly improves the localization and its robustness in outdoor AV environments, especially with limited sparse lidar data.

I. INTRODUCTION

Localization systems are highly essential for mobile robots, especially Autonomous Vehicles, in order to prevent any potential accidents [1]. A map-based lidar scan approach is one of the most prominent methods towards localization, in which the information of the environment is pre-processed and stored as a set of lidar scan data [2]. With this approach, several works in the literature have proposed localization solutions with assistance from unique structures such as planes and edges [3], [4]. However, especially in outdoor environments, the visibility of such unique structures to lidar sensors is very limited. This limitation is generally due to challenges such as dynamic environments and, occasionally, limited availability and high variance of the structures [5], [6].

To compensate for such limitations, the amount of sensor data collected is generally increased, typically in three ways. Firstly, by increasing the number of beams in the sensor, denser data can be obtained. Secondly, multiple sensors can be utilized, maximizing the field of view and the information about the surroundings [7]. Lastly, the lidar data can be fused with other sensors, such as vision cameras [8]-[10]. Note that these approaches commonly lead to higher implementation costs. Furthermore, such approaches lead to significantly higher computation requirements, and thus more CPU computation is required to maintain real-time processing. With the recent availability of cost-effective lidar sensors in the market, which output sparse scan data, it is important that AV localization can make use of these devices despite their limited data. Such cost-effective sparse lidar sensors will enable broader AV market acceptance and faster AV deployment.

In the literature, there exist some notable works on the implementation of outdoor localization using sparse lidar sensors.
In [11], adaptive update of the orientation of the sparse lidar sensor is performed to capture the whole environment. However, such an update step physically limits the speed of the robot, since the sensor needs to slowly capture the environment by examining all potential orientations. In contrast, a different approach is implemented in [12], where the 3D map is compressed onto the 2D horizontal plane such that a sparse 2D lidar sensor is sufficient to assist in the robot localization. By compressing the map, however, a lot of essential map information is lost, thus limiting its application to very specific outdoor environments. Since a dense lidar sensor can easily help localize in most outdoor environments, in [13] the sparse lidar scan data is accumulated temporally to retain a sufficiently large/dense amount of data for localization using conventional localization approaches. This approach, however, assumes that the robot motion is relatively slow and that there exist sufficient environmental structures (such as in urban areas), so that the accumulated data truly captures the environment.

In this paper, we present a novel approach towards achieving high-accuracy map-based localization for AVs in outdoor environments using sparse lidar scan data. Typically, in the literature, Monte Carlo Localization (MCL), a probabilistic localization approach, is the most commonly used approach for map-based localization, mainly because of its inherent robustness against dynamic environmental variations and map change [14]. Map-based localization methods determine the position of a vehicle by comparing lidar scan data to a pre-built map [2]. The map is generally structured as a 3D grid of cells, with each cell containing its occupancy information [15]. In recent times, the map structure has been improved, with each cell containing Normal Distributions Transform (NDT) parameters [16]-[18]. NDT is a piece-wise continuous representation of space in the map, as a set of normal-distribution parameters, namely mean and covariance. The most widely accepted Distribution-to-Distribution (D2D) formulation of NDT [3], however, finds it difficult to construct a well-defined normal distribution with sparse lidar scan data.

In this paper, we propose a novel Point-to-Distribution (P2D) formulation of NDT in the MCL framework to handle sparse lidar scan data. In this approach, each lidar scan point's likelihood of belonging to a particular cell within the map is calculated given the cell's NDT parameters. To the best of our knowledge, this is the first attempt in the literature to apply the P2D-NDT formulation in the MCL framework with an experimental setup. Accordingly, the main novelty of the proposed work is to use significantly sparser lidar scan data for outdoor AV localization while maintaining similar accuracy and sampling rate in comparison with the state-of-the-art approaches.

The rest of this paper is organized as follows. Section II describes the MCL framework used in this paper. Next, Section III details the sparse lidar scan data based measurement model. Later, Section IV examines the test setup and verification results of the proposed approach. Lastly, Section V concludes the work.

II. MONTE CARLO LOCALIZATION

Monte Carlo Localization (MCL) is a probabilistic method to estimate the pose of a robot (vehicle) from sensor inputs and an environment map. It is based on the recursive Bayes filter and importance sampling. In particular, MCL estimates the posterior of a vehicle's pose p(x_t | u_t, z_t, m), given a map m, a control input u_t and an observation z_t, where x_t = [x_t, y_t, a_t] is the vehicle position (latitude, longitude, altitude) at discrete time instant t. Here the posterior density is approximated by a set of N weighted particles \{x_t^k, w_t^k\}_{k=1}^{N}, where w_t^k is the weight of particle k at time t [19]. Our MCL approach is summarized in four main steps, as described below; a short code sketch of one iteration follows the list.

1) Initialization: A one-time step to create N equally weighted (w_{t=0}^k = 1/N) particles which are randomly distributed within an area centered around an initial estimate. In practice, the initial estimate can be a predefined parameter or obtained from an available reliable source, such as a GNSS device.

2) Prediction: This step propagates each particle using a motion model p(x_t | u_t, x_{t-1}), where a control input u_t is applied to each particle such that its position changes from x_{t-1} to x_t. The control input is the odometry, which is obtained from the wheel encoders and an IMU. Noise sampled from a zero-mean random distribution corresponding to the odometry drift is added to u_t.

3) Update: In the update step, the weight of each particle w_t^k is calculated based on the measurement model p(z_t | x_t, m), which is the likelihood of observing z_t given the particle position x_t and the map m.

4) Resampling: This step chooses a new set of particles for the next iteration. Particles with higher weight get a higher chance of being sampled. The final pose output is then calculated as a weighted average of all the particles before resampling.
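As a minimal, illustrative sketch only (not the authors' implementation), one MCL iteration combining the prediction, update and resampling steps could look as follows in Python/NumPy. The pose is reduced to [x, y, heading] for brevity, `measurement_likelihood` is a hypothetical placeholder for any of the measurement models of Section III, the odometry increment is assumed to be already expressed in the global frame, and simple multinomial resampling is used.

```python
import numpy as np

def mcl_step(particles, weights, odom_delta, scan, ndt_map,
             measurement_likelihood, motion_noise=(0.05, 0.05, 0.01)):
    """One MCL iteration over an (N, 3) array of [x, y, heading] particles.
    `measurement_likelihood(scan, pose, ndt_map)` stands in for the OCC,
    D2D-NDT or P2D-NDT scores described in Section III."""
    N = len(particles)

    # 2) Prediction: apply the odometry increment plus zero-mean noise
    #    modelling the odometry drift.
    noise = np.random.normal(0.0, motion_noise, size=(N, 3))
    particles = particles + odom_delta + noise

    # 3) Update: weight each particle by the likelihood of the current scan
    #    given its pose and the map (cf. Eq. (2)), then normalize.
    scores = np.array([measurement_likelihood(scan, p, ndt_map) for p in particles])
    weights = weights * scores
    weights /= np.sum(weights)

    # Pose output: weighted average of the particles before resampling.
    pose_estimate = np.average(particles, axis=0, weights=weights)

    # 4) Resampling: draw particles in proportion to their weights and
    #    reset the weights to uniform.
    idx = np.random.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N), pose_estimate
```

Initialization (step 1) would simply draw the N particles around the GNSS-based initial estimate and set each weight to 1/N.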
III. MEASUREMENT MODELS

In this section, the measurement models used in the update step of the MCL framework are elaborated. First, we describe the existing models and their functionality. Later, we extend upon these models to propose our P2D-NDT model, which handles sparse outdoor lidar scan data.

A. Occupancy (OCC) Measurement Model

This model is the conventional approach towards calculating each particle's weight based on an occupancy map [19], which is structured as a 3D grid of cells, each containing the probability of being occupied. First, the lidar scan data is transformed to the global coordinate frame based on each particle's pose x_t^k. Next, a matching score L_t^k for each particle is calculated as

    L_t^k = \sum_{i=1}^{N_p} \sum_{j=1}^{N_m} \delta(z_t \mid x_t^k, m),    (1)

where N_p represents the total number of points in the current lidar scan and N_m represents the total number of cells within the given map m. The indicator function \delta(\cdot) has a value of 1 only if, for a given point p_i in the lidar scan z_t = \{p_i\}_{i=1}^{N_p} \subset \mathbb{R}^3, the matching cell within the map m is occupied. Thus, the matching score L_t^k represents the total number of matches found between \{p_i\}_{i=1}^{N_p} and m. In practice, for maps with small cell size, this is an approximation of the likelihood-field formulation in [19]. Lastly, the weight w_t^k of each particle k is calculated as

    w_t^k = \frac{1}{\sum_{k=1}^{N} w_{t-1}^k} \, w_{t-1}^k L_t^k.    (2)

The occupancy (OCC) measurement model is based only on the occupancy information. Thus, when sparse lidar scan data is used, the weight obtained for most particles is nearly constant and the resultant weight distribution among the particles has high variance.
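For illustration only (not the paper's code), the counting score of Eq. (1) and the weight update of Eq. (2) can be sketched as below. The occupancy map is assumed to be a set of occupied integer cell indices, and `transform_points`, which applies the particle pose to the scan, is a hypothetical helper.

```python
import numpy as np

def occ_matching_score(scan_points, particle_pose, occupied_cells, cell_size,
                       transform_points):
    """Eq. (1): count how many scan points, transformed into the global frame
    by the particle pose, fall into an occupied map cell."""
    pts_global = transform_points(scan_points, particle_pose)   # (Np, 3) array
    cell_idx = np.floor(pts_global / cell_size).astype(int)     # discretize to cells
    return sum(1 for idx in map(tuple, cell_idx) if idx in occupied_cells)

def update_weights(prev_weights, scores):
    """Eq. (2): scale each previous weight by its particle's matching score,
    normalized by the sum of the previous weights."""
    prev_weights = np.asarray(prev_weights, dtype=float)
    return prev_weights * np.asarray(scores, dtype=float) / np.sum(prev_weights)
```

With only a handful of scan points, most particles obtain nearly the same count, which is exactly the weakness the P2D-NDT model below is designed to address.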
B. D2D-NDT Measurement Model

NDT was originally introduced by Biber and Strasser in [18] and later implemented by [3] in the MCL framework for indoor robot localization. NDT is based on a piece-wise continuous representation of points in space, as a set of normal-distribution parameters, namely mean and covariance. These NDT parameters are stored in a 3D grid structure that captures the distribution of the points within each cell. Therefore, the NDT map can be defined as a collection of these parameters, m = \{\mu_j, \Sigma_j\}_{j=1}^{N_m}.

Distribution-to-Distribution (D2D) is the most widely accepted measurement model approach with the NDT map. In the D2D-NDT measurement model [3], apart from the map, the lidar scan data is also represented using NDT parameters. In this approach, first, the N_p points from the lidar scan data (in local coordinates) are segregated into N_v (\ll N_p) uniformly sized voxels (3D cells). Next, the mean \mu_i and covariance \Sigma_i for each voxel i are calculated. Thus, the lidar scan data is now represented as a set of N_v NDT parameters, p = \{\mu_i, \Sigma_i\}_{i=1}^{N_v}. The matching score for each particle is now calculated as

    L_t^k = \sum_{i=1}^{N_v} \sum_{j=1}^{N_m} d_1 \exp\!\left( -\frac{d_2}{2} \, \bar{\mu}_{ij}^{T} \left( R_t^k \Sigma_i (R_t^k)^{T} + \Sigma_j \right)^{-1} \bar{\mu}_{ij} \right),    (3)

where \bar{\mu}_{ij} = R_t^k \mu_i + T_t^k - \mu_j, and d_1 and d_2 are scaling factors. Also, R_t^k and T_t^k respectively represent the rotation matrix and translation vector of the k-th particle's pose x_t^k. The weight w_t^k for each particle is calculated as described in (2).

The D2D model is based on the assumption that we have a sufficiently high number of points N_p such that the points can be further segregated into N_v voxels. Unfortunately, this assumption breaks down when sparse lidar scan data is used, since the ill-conditioning of \Sigma_i leads to degeneracy in (3). Thus, we propose to use all N_p points directly for computing the particle's matching score in our P2D measurement model, as described below.

C. P2D-NDT Measurement Model

In our proposed approach, each lidar scan data point p_i is transformed into global coordinates based on each particle's pose x_t^k. Then, a matching score for each particle is calculated as

    L_t^k = \sum_{i=1}^{N_p} \sum_{j=1}^{N_m} \left[ d_1 + d_2 \exp\!\left( -\frac{d_3}{2} \, (p_i - \mu_j)^{T} \Sigma_j^{-1} (p_i - \mu_j) \right) \right],    (4)

where d_1, d_2 and d_3 are tunable scalars. Again, the weight of each particle w_t^k is calculated as mentioned in (2). Note that the matching score obtained in (4) represents the overall likelihood of a given lidar scan \{p_i\}_{i=1}^{N_p} with respect to (w.r.t.) all the cells in the given map m. Practically, however, in order to improve the online computational efficiency, (4) is implemented with only the map cells closest to p_i, because the farther cells have a negligible effect on the matching score value.

Our proposed Point-to-Distribution (P2D) NDT measurement model has been designed to tackle: (i) the high variance in weight distribution among particles generally observed in conventional occupancy (OCC) measurement models, and (ii) the ill-conditioned covariance \Sigma_i of the lidar scan voxels (in the D2D-NDT model) that leads to degeneracy in (3), which occurs when sparse lidar scan data is used with D2D-NDT measurement models.
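To make the per-point evaluation of Eq. (4) concrete, here is a hedged NumPy sketch (not the authors' implementation). It assumes the NDT map is given as arrays of cell means and covariances, uses a hypothetical `nearest_cells` helper to restrict the inner sum to the map cells closest to each transformed point, and takes R and T as the rotation and translation of the particle pose; the default d_1, d_2, d_3 follow the values reported in Section IV.

```python
import numpy as np

def p2d_ndt_score(scan_points, R, T, cell_means, cell_covs, nearest_cells,
                  d1=0.1, d2=0.9, d3=0.1):
    """Eq. (4): accumulate, over every scan point and its nearby NDT cells,
    d1 + d2 * exp(-d3/2 * (p - mu)^T Sigma^{-1} (p - mu)).
    `nearest_cells(p)` returns the indices of the map cells closest to p;
    limiting the inner sum this way is the efficiency shortcut described in
    the text, since far-away cells contribute negligibly."""
    pts_global = scan_points @ R.T + T      # transform the scan by the particle pose
    score = 0.0
    for p in pts_global:
        for j in nearest_cells(p):
            diff = p - cell_means[j]
            maha = diff @ np.linalg.inv(cell_covs[j]) @ diff   # Mahalanobis term
            score += d1 + d2 * np.exp(-0.5 * d3 * maha)
    return score
```

In practice the per-cell inverse covariances would be precomputed when the map is built and the nearby cells looked up directly from the grid index; the sketch keeps the direct form for readability.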
IV. TESTS AND RESULTS

In order to verify the performance and robustness of our proposed P2D-NDT measurement model in comparison to the original OCC and state-of-the-art D2D-NDT models, we have carried out extensive outdoor testing on our test vehicle, as described in this section. In particular, we first summarize the experimental test set-up used. Next, we examine the effect of the resolution of each cell in the map, and of the sparsity of the online lidar scan data, on the performance of the aforementioned measurement models. Lastly, we examine the robustness of the models by comparing their respective performance in two variants of outdoor environments and at two different vehicle speeds.

Fig. 1. Hardware setup on the test vehicle, describing the installation position of the sensors.

A. Test Setup

1) Hardware: All the experimental trials were carried out on a test vehicle set up as shown in Fig. 1. The vehicle is equipped with a 16-layer lidar, dual wheel encoders and an Inertial Measurement Unit (IMU). The output from the dual wheel encoders and the IMU is used to generate the odometry used by MCL in the motion model, and the output of the 16-layer lidar is used in the measurement model. The algorithms are implemented in C++ in order to achieve a high sampling rate of 10 Hz. The experimental trials are run on a workstation computer equipped with a quad-core Intel Core-i7 CPU and 16 GB of RAM.

2) Test tracks: The test vehicle was driven around two test tracks (designated AV test sites in Singapore), each representative of a different environment that an AV might encounter. First, Track-1 is a 5.8 km route through dense and narrow urban canyon roads. In contrast, Track-2 is a 3.8 km route through wide open expressway roads.

3) Ground Truth: The ground-truth location of the AV is obtained via a high-precision dual-antenna Real-Time Kinematic (RTK) GPS module installed on the AV. The sensor has a minimal localization error of about 2 cm while the AV is in motion. However, RTK-GPS has occasional limited availability due to environmental conditions. To overcome this limitation, we employ a GraphSLAM approach, i.e. we fuse the sensor outputs from the IMU, GPS and lidar scan data to obtain the ground-truth 3D pose of the AV with minimal error at all time instants [20]-[22].

4) Map Structure & Map Management: The large map file size (due to the large area of operation in outdoor environments) typically cannot be loaded into computer memory at once. Thus, to handle this challenge, the map is divided into uniform chunks called map-patches, as shown in Fig. 2. This allows us to dynamically load only the relevant map-patches into memory, instead of the entire map. The size of each map-patch is fixed to 200 × 200 m² based on empirical optimization of several parameters such as file size and loading time; a small sketch of this patch management follows.

Fig. 2. (Top view, Track-1) The map is segregated into 200 × 200 m² patches.
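The following is an illustrative sketch only (not the paper's implementation) of such patch-based map management: patches are keyed by their grid indices, loaded on demand around the vehicle, and released once the vehicle moves away. The `load_patch_from_disk` callback, the `radius` parameter and the eviction policy are assumptions made for the example.

```python
class PatchedMap:
    """Keep in memory only the map-patches around the vehicle. Patches are
    square tiles of side `patch_size` (200 m in the paper), keyed by integer
    grid indices."""

    def __init__(self, load_patch_from_disk, patch_size=200.0, radius=1):
        self.load_patch = load_patch_from_disk   # callback: (ix, iy) -> patch data
        self.patch_size = patch_size
        self.radius = radius                     # rings of patches kept around the vehicle
        self.cache = {}                          # (ix, iy) -> patch data

    def update(self, vehicle_x, vehicle_y):
        """Load the patches surrounding the vehicle and drop far-away ones."""
        cx = int(vehicle_x // self.patch_size)
        cy = int(vehicle_y // self.patch_size)
        wanted = {(cx + dx, cy + dy)
                  for dx in range(-self.radius, self.radius + 1)
                  for dy in range(-self.radius, self.radius + 1)}
        for key in wanted - set(self.cache):
            self.cache[key] = self.load_patch(*key)   # load missing patches
        for key in set(self.cache) - wanted:
            del self.cache[key]                       # free patches left behind

    def patch_at(self, x, y):
        """Return the cached patch containing world position (x, y), if loaded."""
        return self.cache.get((int(x // self.patch_size), int(y // self.patch_size)))
```

The measurement model then queries only the cached patches, so the memory footprint stays bounded regardless of the total map area.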
B. Cell resolution

In this section, we examine the effects of cell resolution on the performance of the measurement models. Fig. 3 encapsulates the overall localization error of the three measurement models w.r.t. the variation in cell resolution, measured with the AV moving at about 40 km/h on Track-1. The values of the tunable parameters used in the experimental trials are d_1 = 0.1, d_2 = 0.9 and d_3 = 0.1. These values were set manually after examining a few experimental trials.

Fig. 3. Effect of cell resolution on the overall average localization error for the different measurement models. At small cell resolution values, D2D leads to relatively larger errors due to the model's inability to generate a viable distribution within a small cell using sparse lidar scan data. At large cell resolution values, OCC has significantly larger errors, since the large occupancy cells are unable to capture the essential amount of lidar information. Overall, P2D is observed to have the best localization accuracy across all cell resolutions.

Fig. 4. Effect of cell resolution on the average loading time of the relevant map-patches around the vehicle. A cell resolution of 1 m provides an optimal balance between loading time and localization accuracy.

From the figure, on one hand, we observe that when the cell resolution values are very low (around 0.3 m), the OCC and P2D models provide very low overall error (i.e. better localization accuracy). The OCC approach leads to low error because, with better cell resolution (i.e. a lower value), OCC gains substantially more information about the environment
