
Automatic Calibration of Multiple 3D LiDARs in Urban Environments

Jianhao Jiao, Yang Yu, Qinghai Liao, Haoyang Ye, Rui Fan, Ming Liu

Abstract—Multiple LiDARs have progressively emerged on autonomous vehicles for rendering a rich view and dense measurements. However, the lack of precise calibration negatively affects their potential applications. In this paper, we propose a novel system that enables automatic multi-LiDAR calibration without any calibration target, prior environment information, or manual initialization. Our approach starts with a hand-eye calibration that aligns the motion of each sensor. The initial results are then refined by an appearance-based method that minimizes a cost function constructed from point-to-plane distances. Experimental results on simulated and real-world data demonstrate the reliability and accuracy of our calibration approach. The proposed approach can calibrate a multi-LiDAR system with rotation and translation errors below 0.04 rad and 0.1 m respectively for a mobile platform.

I. INTRODUCTION

Accurate extrinsic calibration has become increasingly essential for the broad application of multiple sensors, and numerous works have studied it [1]-[3].

Over the past decades, LiDARs have emerged as a dominant sensor in mobile robotics thanks to their active sensing, which provides accurate and stable distance measurements. They have been widely utilized in mapping [4] and object detection [5]. However, LiDARs provide a limited spatial resolution of measurements and are sensitive to occlusion. These drawbacks limit their potential applications in robotic systems. Fig. 1 (bottom) displays two examples in a point cloud captured by the top LiDAR. In block A, the pedestrians and vehicles are scanned with only a few points, making the detection of these objects challenging. In block B, points cluster together because of occlusion. Therefore, employing a multi-LiDAR setup on self-driving cars is necessary.

Traditional calibration techniques for multiple sensors rely on either markers placed in scenes or hand-labeled correspondences. However, these approaches suffer from impracticality and limited scalability to the multi-LiDAR configuration. Additionally, there are surprisingly few discussions on calibrating multiple 3D LiDARs. The majority of current approaches involve one or more of the following assumptions:

prior knowledge about the structure of the environment [6], the usage of additional sensors [7], and user-provided initial values of the extrinsic parameters [8].

[Footnote] This work was supported by the National Natural Science Foundation of China (Grant No. U1713211) and the Research Grants Council of Hong Kong SAR, China (Project No. 11210017 and No. 21202816), awarded to Prof. Ming Liu. Jianhao Jiao, Yang Yu, Qinghai Liao, Haoyang Ye, Rui Fan, and Ming Liu are with RAM-LAB, Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR, China. {jjiao, yang.yu, qinghai.liao, hy.ye, eeruifan, eelium}@ust.hk

Fig. 1. (Top) Our vehicle carries a multi-LiDAR system (top, front, and tail LiDARs; mounting offsets 1.28 m, 1.2 m, 2.4 m) with unknown extrinsic parameters. (Bottom) A point cloud captured by l_1. The white boxes indicate two drawbacks of a single-LiDAR configuration: (A) measurement sparsity and (B) occlusion.

It is also not trivial to register point clouds accurately without an initial guess. Inspired by the progress of hand-eye calibration, we find that the extrinsic parameters can be recovered from the individual motion of each LiDAR.

Moreover, the geometric features in the environment also impose constraints on the sensors' relative transformations. Hence, we conclude that the complementary use of these two approaches is a promising solution for calibrating multiple LiDARs. In this paper, we propose a novel system^1 that allows automatic calibration of multiple LiDARs in urban environments. It consists of three components: motion estimation of each sensor, motion-based initialization, and appearance-based refinement. We show a variety of experiments to demonstrate the reliability and accuracy of this approach. The contributions of this paper are summarized as follows:

- A pipeline to automatically calibrate the extrinsic parameters of multiple LiDARs, which removes the assumptions of calibration targets, prior knowledge about the surroundings, and user-provided initial values.
- The use of a motion-based method for initialization and an appearance-based method for refinement.
- Extensive experiments on simulated and real-world data.

The rest of the paper is organized as follows. In Sect. II, the relevant literature is discussed.

An overview of the system pipeline is given in Sect. III. The methodology of our approach, which includes motion estimation, automatic initialization, and refinement, is introduced in Sect. IV, followed by experimental results presented in Sect. V.

^1 Code is available at

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019. 978-1-7281-4003-2/19/$31.00 ©2019 IEEE

Fig. 2. The full pipeline of the proposed approach (left) and the fused point clouds after calibration (right). [Block diagram: three 10 Hz LiDAR streams feed LiDAR odometry and mapping (Sect. IV-A, motion estimation); linear rotation and translation calibration provides the initialization (Sect. IV-B); an overlap filter and registration perform the refinement (Sect. IV-C).] The red, green, and purple point clouds are captured by the top, front, and tail LiDAR respectively.

Finally, Sect. VI summarizes the paper and discusses possible future work.

II. RELATED WORK

Besides multi-LiDAR calibration, there are also extensive discussions on calibration among LiDARs, cameras, and IMUs. In this section, we categorize them as appearance-based and motion-based methods.

A. Appearance-based approaches

Appearance-based approaches, which recover the spatial offset using appearance cues in the surroundings, can be considered a category of registration problem. The key challenge is searching for correspondences among the data. Artificial markers that are observable to the sensors have been prevalently used to acquire correspondences. Our previous work [6] calibrated the same sensors using three planar surfaces. For calibrating a LiDAR-camera setup, Zhou et al. [3] demonstrated a technique that forms line and plane constraints between the two sensors in the presence of a chessboard, while Liao et al. [9] published a toolkit using an arbitrary polygon, which is more general. However, all these methods require markers in the scene. Our approach only utilizes common features such as edges and planes in outdoor environments.

Automatic markerless calibration in arbitrary scenes has led the trend recently. He et al. [7] extracted geometric features among scan points to achieve robust registration, and their work was extended to a challenging scenario [10]. Levinson [8] first put forward an online calibration for a camera-LiDAR system, accomplished by aligning edge points with image contours and minimizing a cost function. However, its success highly relies on the initialization given by users. Compared with these methods, our approach can roughly recover the extrinsics from the sensors' motion, which enables calibration without human intervention.

B. Motion-based approaches

Motion-based approaches treat calibration as the well-researched hand-eye calibration problem [11], where the extrinsic parameters are computed by combining the motions of all available sensors. The hand-eye calibration problem usually refers to solving for X in AX = XB, where A and B are the motions of two sensors and X is their relative transformation. As described in [12], this problem has been addressed since the 1980s. Ongoing research focuses on the calibration of multiple sensors in outdoor environments. Heng et al. [1] proposed CamOdoCal, a versatile algorithm with bundle adjustment to calibrate four cameras. Taylor et al. [2] provided a more general solution for multi-modal sensors. As presented in [13], motion-based approaches can also be utilized to estimate the temporal offset between sensors. Furthermore, several state-of-the-art visual-inertial navigation systems adopt motion-based approaches for online calibration [14]. Although motion-based methods have been extensively developed, their accuracy is sensitive to the accumulated drift of the estimated motion. In contrast, our method takes advantage of geometric features to improve the calibration of multiple LiDARs.
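As a toy numerical illustration (not code from the paper) of the AX = XB identity: if X is the extrinsic between two rigidly mounted sensors and B is one sensor's incremental motion, the other sensor's motion must be the conjugate A = X B X^{-1}. All matrices below are made-up examples:

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous transform rotating about z by theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    return T

# Hypothetical extrinsic X: pose of sensor b in the frame of sensor a.
X = rot_z(0.3)
X[:3, 3] = [1.2, -0.4, 0.5]

# One incremental motion B of sensor b between two timestamps.
B = rot_z(0.1)
B[:3, 3] = [0.5, 0.02, 0.0]

# Because both sensors sit on the same rigid body, sensor a must move by
# A = X B X^{-1}, which is exactly the hand-eye identity A X = X B.
A = X @ B @ np.linalg.inv(X)
print(np.allclose(A @ X, X @ B))  # True
```

Hand-eye solvers run this construction in reverse: given many (A, B) pairs corrupted by odometry noise, they estimate the X that best satisfies the identity.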

III. OVERVIEW

The notations are defined as follows. We denote by [0, K] the time interval during calibration, and define l_{i_k} as the sensor coordinate system of the i-th LiDAR at timestamp k. The x, y, and z axes of the coordinate systems point forward, left, and upward respectively. We denote by I the number of LiDARs to be calibrated, and by l_1 the reference LiDAR. The transformation, rotation, and translation from a to b are denoted by T^a_b, R^a_b, and t^a_b respectively. T^{l_1}_{l_i}, or equivalently (R^{l_1}_{l_i}, t^{l_1}_{l_i}), are the unknown transformations from l_1 to l_i to be solved. The LiDARs are synchronized, i.e., their point clouds are captured at the same time. We assume that the vehicle is able to perform sufficient motion over a planar surface. With these notations and assumptions, the calibration problem can be defined as:

Problem: Given a sequence of point clouds during calibration, compute the extrinsics of a multi-LiDAR system by combining motion information with the surrounding appearance.

The pipeline of our proposed calibration system consists of three phases, as illustrated in Fig. 2. The first phase takes point clouds as input and produces the incremental motions of each LiDAR within a time interval [k-1, k] (Sect. IV-A).

The second phase initializes T^{l_1}_{l_i} using a least-squares solution (Sect. IV-B). Finally, the third phase utilizes the appearance cues in the surroundings to register the different LiDARs for refinement (Sect. IV-C).

IV. METHODOLOGY

A. Motion Estimation

To calculate a set of incremental motions between consecutive frames of each LiDAR, the LeGO-LOAM algorithm [15] is used. This method makes use of line and edge features in the environment to estimate each sensor's ego-motion. In our implementation, the individual transformations of all the sensors and the extracted ground points are used to provide constraints on the extrinsic parameters, as discussed in Sect. IV-B and Sect. IV-C.

Fig. 3. The transformations between different LiDARs at timestamps k-1 and k.

B. Initialization

With l_1 as the reference sensor, we present a method of calibrating l_1 with each l_i pairwise.

To simplify the notation, we replace l_1, l_i with a, b to indicate the coordinate systems of the reference sensor and the target sensor respectively. The constant transformation between two LiDARs can be initialized by aligning their estimated motions. Fig. 3 depicts the relationship between the motions of two LiDARs and their relative transformation. As the vehicle moves, the extrinsic parameters can be recovered from these poses for any k:

    T^{a_{k-1}}_{a_k} T^a_b = T^a_b T^{b_{k-1}}_{b_k},    (1)

which can be decomposed into its rotation and translation components:

    R^{a_{k-1}}_{a_k} R^a_b = R^a_b R^{b_{k-1}}_{b_k},    (2)

    (R^{a_{k-1}}_{a_k} - I_3) t^a_b = R^a_b t^{b_{k-1}}_{b_k} - t^{a_{k-1}}_{a_k}.    (3)

The method described in [1] is used to solve these two equations. Based on (2), the pitch-roll rotation is calculated directly from the estimated rotations, while the yaw rotation and the translation are computed jointly from (3).

1) Outlier Filter:

The poses of the sensors satisfy two constraints that are independent of the extrinsic parameters, presented as the screw motion in [16]:

    \theta^{a_{k-1}}_{a_k} = \theta^{b_{k-1}}_{b_k},    (4)

    r^{a_{k-1}}_{a_k} \cdot t^{a_{k-1}}_{a_k} = r^{b_{k-1}}_{b_k} \cdot t^{b_{k-1}}_{b_k},    (5)

where \theta denotes the angle of a rotation matrix R and r is the corresponding rotation axis^2. The screw motion residuals comprise a rotation residual and a translation residual, computed as |\theta_a - \theta_b| and \|r_a \cdot t_a - r_b \cdot t_b\|_2. We adopt these residuals to evaluate the performance of the preceding motion estimation phase. Fig. 4 shows the screw motion residuals in a real-world example, where we find that the estimated motion is very noisy. Hence, a motion pair is filtered out as an outlier if both its rotation and translation residuals exceed the thresholds \epsilon_r and \epsilon_t.

Fig. 4. An example of the screw motion residuals in rotation and translation for a set of estimated motions. \epsilon_r and \epsilon_t are the thresholds used to filter the outliers.

2) Pitch-roll rotation computation: It is inefficient to solve (2) using rotation matrices, since the orthogonality constraint must be explicitly considered. For this reason, we employ a quaternion q = [q_w, q_x, q_y, q_z] in Hamilton notation to represent rotation. Equation (2) can thus be rewritten by substituting q^a_b = (q^a_b)_z \otimes (q^a_b)_{yx}:

    q^{a_{n-1}}_{a_n} \otimes (q^a_b)_{yx} = (q^a_b)_{yx} \otimes q^{b_{n-1}}_{b_n}
    \;\Rightarrow\; \big[ Q_1(q^{a_{n-1}}_{a_n}) - Q_2(q^{b_{n-1}}_{b_n}) \big] (q^a_b)_{yx} = Q^{n-1}_n (q^a_b)_{yx} = 0,    (6)

where

    Q_1(q) = \begin{bmatrix} q_w I_3 + [q_{xyz}]_\times & q_{xyz} \\ -q_{xyz}^T & q_w \end{bmatrix}, \quad
    Q_2(q) = \begin{bmatrix} q_w I_3 - [q_{xyz}]_\times & q_{xyz} \\ -q_{xyz}^T & q_w \end{bmatrix}    (7)

are the matrix representations of left and right quaternion multiplication, [q_{xyz}]_\times is the skew-symmetric matrix of q_{xyz} = [q_x, q_y, q_z], and \otimes is the quaternion multiplication operator. With N pairs of filtered rotations, we can formulate an over-constrained linear system:

    \begin{bmatrix} Q^0_1 \\ \vdots \\ Q^{N-1}_N \end{bmatrix}_{4N \times 4} (q^a_b)_{yx} = Q_N (q^a_b)_{yx} = 0.    (8)

Using the SVD Q_N = U S V^T, (q^a_b)_{yx} is computed as a weighted sum of v_3 and v_4:

    (q^a_b)_{yx} = \lambda_1 v_3 + \lambda_2 v_4,    (9)

where v_3 and v_4 are the last two column vectors of V, and \lambda_1, \lambda_2 are two scalars.

^2 The rotation angle and axis are calculated using the log(\cdot) operator, with \theta = \|\log(R)^\vee\| and r = \log(R)^\vee / \theta.
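To make this step concrete, below is a minimal numpy sketch (not the authors' code) of building the stacked matrix in (8) and extracting its null vector via the SVD. Quaternions are stored as (x, y, z, w); the synthetic check uses general, non-planar rotations so that the null space is one-dimensional and the smallest right singular vector alone recovers the rotation. Under planar motion the null space becomes two-dimensional, which is exactly what the combination of (9) and the subsequent constraint resolves:

```python
import numpy as np

def skew(v):
    return np.array([[0., -v[2], v[1]], [v[2], 0., -v[0]], [-v[1], v[0], 0.]])

def Q1(q):
    """Left quaternion-multiplication matrix; q = (x, y, z, w)."""
    v, w = q[:3], q[3]
    return np.block([[w * np.eye(3) + skew(v), v[:, None]],
                     [-v[None, :], np.array([[w]])]])

def Q2(q):
    """Right quaternion-multiplication matrix."""
    v, w = q[:3], q[3]
    return np.block([[w * np.eye(3) - skew(v), v[:, None]],
                     [-v[None, :], np.array([[w]])]])

def qmul(q, p):
    return Q1(q) @ p

def qconj(q):
    return np.array([-q[0], -q[1], -q[2], q[3]])

def qaxis(axis, theta):
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return np.concatenate([np.sin(theta / 2) * axis, [np.cos(theta / 2)]])

def solve_rotation(qa_list, qb_list):
    """Stack Q1(qa) - Q2(qb) over all motion pairs, cf. (8), and take the
    right singular vector of the smallest singular value."""
    QN = np.vstack([Q1(qa) - Q2(qb) for qa, qb in zip(qa_list, qb_list)])
    return np.linalg.svd(QN)[2][-1]

# Synthetic check: qa = qx (x) qb (x) qx^{-1} for a made-up extrinsic qx.
rng = np.random.default_rng(1)
qx = qaxis([0.2, 0.5, 1.0], 0.4)
qb_list = [qaxis(rng.normal(size=3), 0.3 + 0.1 * i) for i in range(5)]
qa_list = [qmul(qmul(qx, qb), qconj(qx)) for qb in qb_list]
q_est = solve_rotation(qa_list, qb_list)
print(min(np.linalg.norm(q_est - qx), np.linalg.norm(q_est + qx)) < 1e-6)  # True
```

The sign of the recovered quaternion is arbitrary (q and -q encode the same rotation), so the check compares against both.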

Therefore, (q^a_b)_{yx} can be obtained by additionally solving

    q_x q_y = -q_z q_w, \qquad \|(q^a_b)_{yx}\| = 1,    (10)

for \lambda_1 and \lambda_2, where q_w, q_x, q_y, q_z are the elements of the quaternion in (9).

3) Yaw rotation and translation computation: Due to our planar motion assumption, the translational offset along the z axis is unobservable. Consequently, we set t_z = 0 and rewrite (3) by removing its third row:

    R_1 \begin{bmatrix} t_x \\ t_y \end{bmatrix} - \begin{bmatrix} \cos\gamma & -\sin\gamma \\ \sin\gamma & \cos\gamma \end{bmatrix} t_1 = -t_2,    (11)

where t_x and t_y are the unknown translations along the x and y axes, and \gamma is the unknown rotation angle around the z axis. R_1 is the 2x2 upper-left submatrix of (R^{a_{n-1}}_{a_n} - I_3), t_1 = [t_{11}, t_{12}] consists of the first two elements of R^a_{b,yx} t^{b_{n-1}}_{b_n} (with R^a_{b,yx} the pitch-roll rotation recovered above), and t_2 denotes the first two elements of t^{a_{n-1}}_{a_n}. We can rewrite (11) as a matrix-vector equation:

    \underbrace{\begin{bmatrix} R_1 & -J \end{bmatrix}}_{G_{2\times4}} \begin{bmatrix} t_x \\ t_y \\ \cos\gamma \\ \sin\gamma \end{bmatrix} = -t_2, \quad \text{where } J = \begin{bmatrix} t_{11} & -t_{12} \\ t_{12} & t_{11} \end{bmatrix}.    (12)

We can then construct a linear system from (12) with the filtered motions:

    \underbrace{\begin{bmatrix} G^0_1 \\ \vdots \\ G^{N-1}_N \end{bmatrix}}_{A_{2N\times4}} \underbrace{\begin{bmatrix} t_x \\ t_y \\ \cos\gamma \\ \sin\gamma \end{bmatrix}}_{x_{4\times1}} = \underbrace{\begin{bmatrix} -(t_2)^0_1 \\ \vdots \\ -(t_2)^{N-1}_N \end{bmatrix}}_{b_{2N\times1}},    (13)

where x is obtained by applying the least-squares approach.

Fig. 5. The FOV of each LiDAR and the overlapping region are visualized.

C. Refinement

In this section, we combine the coarse initialization results with the sensor measurements to refine the extrinsic parameters.
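For concreteness, the yaw-translation least-squares step (11)-(13) that produced this coarse initialization can be sketched as follows. All motions and the extrinsic are synthetic, and the pitch-roll part R^a_{b,yx} is taken as identity so that t_1 reduces to the first two elements of t_b:

```python
import numpy as np

def rotz3(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def solve_yaw_translation(Ra_list, t1_list, t2_list):
    """Stack G x = -t2 over all motion pairs, cf. (12)-(13),
    with x = (tx, ty, cos g, sin g)."""
    G, b = [], []
    for Ra, t1, t2 in zip(Ra_list, t1_list, t2_list):
        R1 = Ra[:2, :2] - np.eye(2)          # upper-left 2x2 of (Ra - I3)
        J = np.array([[t1[0], -t1[1]], [t1[1], t1[0]]])
        G.append(np.hstack([R1, -J]))
        b.append(-t2)
    x = np.linalg.lstsq(np.vstack(G), np.hstack(b), rcond=None)[0]
    tx, ty, c, s = x
    return tx, ty, np.arctan2(s, c)

# Synthetic planar motions consistent with a made-up extrinsic
# (tx, ty, gamma) via the hand-eye identity Ta = X Tb X^{-1}.
gamma, t_ab = 0.3, np.array([1.1, -0.4, 0.5])
R = rotz3(gamma)
Ra_list, t1_list, t2_list = [], [], []
for th, tb in [(0.2, [1., 0., 0.]), (0.5, [0., 1., 0.]),
               (-0.3, [0.7, 0.3, 0.]), (0.8, [-0.5, 1.2, 0.])]:
    Rb, tb = rotz3(th), np.array(tb)
    Ra = R @ Rb @ R.T
    ta = R @ tb + t_ab - Ra @ t_ab        # translation rows of Ta = X Tb X^{-1}
    Ra_list.append(Ra)
    t1_list.append(tb[:2])                # R_yx assumed identity in this sketch
    t2_list.append(ta[:2])

tx, ty, g = solve_yaw_translation(Ra_list, t1_list, t2_list)
print(np.allclose([tx, ty, g], [1.1, -0.4, 0.3]))  # True
```

Note how t_z never enters the solve: for planar (z-axis) motions the third column of (R_a - I_3) vanishes, which is why that component is unobservable here and must be recovered later from the ground planes.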

First, the ground points are utilized to recover the unknown t_z. We then estimate a set of transformations \mathcal{T}^a_b = \{(T^a_b)_k\} from a to b at each timestamp by registering P_{a_k} and P_{b_k}. To improve the registration accuracy, we also apply an overlap filter to retain the points that lie in the overlapping regions of the two sensors.

1) Ground plane alignment: In the motion estimation phase, we extracted K pairs of ground points G_{a_k}, G_{b_k} from the reference and target LiDARs. Since the segmented point clouds are noisy, we apply random sample consensus (RANSAC) plane fitting to reject outliers. Denoting by c^a_k, c^b_k the centroids of G_{a_k}, G_{b_k} after filtering, we use the mean value of (c^a_k - R^a_b c^b_k)_z over all timestamps to determine t_z.

2) Overlap Filter: Precise registration between LiDARs is challenging since their overlapping field of view (FOV) is both limited and unknown. To tackle this issue, we employ an overlap filter that retains points lying within the counterpart LiDAR's FOV, as shown by the gray area in Fig. 5. We denote the original point clouds captured by a and b at timestamp k as P_{a_k} and P_{b_k} respectively. A point p \in P_{b_k} can be transformed from b to a using the initial T^a_b:

    \tilde{p} = T^a_b p.    (14)

Fig. 6. The curves of the calibration error and the registration error between l_1 and l_3 at R.T 2.
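A minimal sketch of this transform-and-search overlap test built on (14); a brute-force distance check stands in for the KD-tree, and the clouds, radius, and transform are toy values:

```python
import numpy as np

def transform(T, P):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array, cf. (14)."""
    return P @ T[:3, :3].T + T[:3, 3]

def overlap_filter(Pa, Pb, T_ab, r):
    """Retain points within distance r of the other (transformed) cloud;
    a brute-force stand-in for the paper's KD-tree query."""
    Pb_in_a = transform(T_ab, Pb)
    d = np.linalg.norm(Pa[:, None, :] - Pb_in_a[None, :, :], axis=2)
    Sa = Pa[d.min(axis=1) <= r]
    Sb = Pb[d.min(axis=0) <= r]
    ratio = min(len(Sa) / len(Pa), len(Sb) / len(Pb))
    return Sa, Sb, ratio

# Toy example: identity extrinsic, one overlapping and one distant point.
Pa = np.array([[0., 0., 0.], [10., 10., 10.]])
Pb = np.array([[0., 0., 0.05]])
Sa, Sb, ratio = overlap_filter(Pa, Pb, np.eye(4), r=0.1)
print(len(Sa), len(Sb), ratio)  # 1 1 0.5
```

Frames whose overlap ratio clears a threshold (0.8 in the text) would then be passed on to registration; for real clouds, a KD-tree makes the radius query scale far better than the quadratic distance matrix used here.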

We then employ a KD-tree search to realize the overlap filter, constructing S_{a_k} and S_{b_k}:

    S_{a_k} = \{ p_1 \in P_{a_k} : d(p_1, \tilde{p}_2) \le r, \; \exists\, p_2 \in P_{b_k} \},
    S_{b_k} = \{ p_2 \in P_{b_k} : d(\tilde{p}_2, p_1) \le r, \; \exists\, p_1 \in P_{a_k} \},    (15)

where d(\cdot, \cdot) is the Euclidean distance and r is a distance threshold. Pairs with an overlap ratio larger than 0.8 are selected as the inputs for the following registration step.

3) Registration: To estimate the relative transformations, point-to-plane iterative closest point (ICP) is used. After registering all the point cloud pairs captured at different timestamps, we obtain a set of transformations \mathcal{T}^a_b. By comparing the registration errors with the calibration errors computed against the ground truth of each element, we find that their values are approximately linearly related. An example of these errors over time is depicted in Fig. 6, where the curves of the registration error and the calibration error show similar trends, especially at the positions of the peaks. Hence, we apply this criterion to all calibration cases: from \mathcal{T}^a_b, only the transformations with the minimum registration error are selected as candidates, and the refinement results are computed as their mean values.

V. EXPERIMENT

In this section, we divide the evaluation into two separate steps. First, the initial calibration experiments are presented with simulated data and real sensor sets. Then we test the refinement on real sensor data and demonstrate that the initial results can be improved using appearance cues.

A. Implementation Details

We use the ICP library [17] to process point clouds. With empirically set parameters \epsilon_r = 0.01, \epsilon_t = 0.01, r = 10, our method obtains promising results. Since th
