Articulated Multi-Perspective Cameras and Their Application to Truck Motion Estimation

Xin Peng, Jiadi Cui and Laurent Kneip
Mobile Perception Lab, ShanghaiTech University

Abstract: While monocular and stereo camera based motion estimation has reached a level of maturity that enables industrial use, the community keeps exploring novel multi-sensor solutions to meet the high robustness and accuracy requirements of certain applications such as autonomous vehicles. The present paper focuses on motion estimation with multi-perspective camera systems. In particular, we look into the intricate case in which the cameras are distributed over an articulated body, a scenario that occurs in truck motion estimation where additional cameras are installed on the trailer. The resulting articulated multi-perspective camera is analyzed in theory and practice, and we show that, by taking the non-holonomic constraints of the vehicle into account, a single point correspondence measured from the trailer is sufficient to render the additional unknown parameters given by the internal joint configuration before and after a relative displacement fully observable. Optimizing over all parameters enhances the accuracy of the motion estimation of the entire system with respect to using the cameras on each rigid part alone. Results are confirmed on both simulated and real data.

I. INTRODUCTION

Recent years have shown an increasing interest in using visual localization and mapping solutions for perception in emerging applications such as autonomous driving. It is generally agreed that cameras represent an attractive exteroceptive sensor returning highly dense and rich data even in the face of a low cost, energy, or space budget.
While first solutions to the continuous visual odometry or visual SLAM problem were affected by robustness issues and confined to small, laboratory-scale environments [8], [2], modern solutions for stereo [7], [4] or even monocular camera setups [18] achieve an impressive level of stability and accuracy. This is particularly true for hybrid sensor configurations that make use of additional interoceptive or exteroceptive sensors sensing, for example, the geometry of the environment (i.e. Lidar) or the dynamic state of the vehicle (i.e. IMU).

The automotive industry currently expresses interest in exploiting so-called surround-view camera systems composed of multiple cameras mounted on either side of a vehicle, thus forming an omni-directional, multi-perspective camera array. Such systems can already be found on today's automotive products to provide assistance in more controlled and repeatable scenarios such as low-speed maneuvering in a parking lot. While predominantly used as a passive visualization assistance to human drivers, the community currently investigates whether such sensor systems could serve as a primary exterior perception modality for autonomous valet parking [3]. Successful visual SLAM based on omni-directional multi-perspective cameras (MPC) has already been demonstrated on several occasions. Representative literature reaches from more fundamental, geometric investigations [21], [25], [12], [15], [20], [11], [13] to the presentation of full-scale, real-time systems [6], [5], [26]. Our work falls into the former category and analyses the geometry of multi-perspective camera systems mounted on a ground vehicle.

Fig. 1. The primary application scenario behind our work: vision-based motion estimation for trucks. The figure shows the fields-of-view of two cameras mounted on a truck, one in the front, and one on the trailer. We call this setup an articulated multi-perspective camera, and it is the main subject of our research.
More specifically, we look at the interesting scenario of vehicles that comprise a towed trailer. The installation of a surround-view MPC on such a vehicle bears certain difficulties, as the avoidance of self-occlusions requires a distribution of the cameras over both the front and the trailer part of the vehicle, thus causing a situation in which the relative transformation between the cameras is no longer constant. A vehicle with a towed trailer can be described as an articulated body, and the multiple cameras distributed over the body can hence be described as an articulated multi-perspective camera (AMPC). The contributions of our work are as follows:

- We introduce and analyse the AMPC. We list the basic constraints that govern its two-view geometry, and conclude the resulting number of degrees of freedom.
- We look into the most important application scenario of an AMPC, and prove that taking non-holonomic motion constraints of the truck into account enables full calibration in the two-view scenario. In particular, if the motion of one body is known, both the joint angle before and after the motion can be recovered from only a single feature correspondence.
- We prove that even in the case of cameras that have non-overlapping fields of view, optimizing over all extrinsic calibration parameters (i.e. the relative pose of the entire AMPC as well as the internal joint configuration before and after a motion) maintains the benefit of superior motion estimation accuracy with respect to more traditional camera arrangements.

The paper is organized as follows. After a discussion on further related work in Section II, Section III derives the AMPC from the MPC. Section IV then introduces our complete solution in the scenario of an articulated ground vehicle.

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019. 978-1-7281-4003-2/19/$31.00 © 2019 IEEE
To conclude, Section V presents results on both simulated and real data.

II. FURTHER RELATED WORK

Complete visual SLAM with multi-perspective camera systems remains a challenging problem, particularly in the case of non-overlapping fields of view. [6] presents a loosely-coupled framework which operates with a pair of cameras pointing into opposite directions. However, they only evaluate their framework on small-scale datasets with arbitrary motion. As demonstrated in [1], the observability of scale may be affected by degenerate conditions, one of which is in particular given by the non-holonomic motion along circular arcs occurring in steering ground vehicles. Further complete pipelines for non-overlapping systems mounted on ground vehicles are presented in [3] and [5], which circumvent the scale observability issue by taking additional sensor information into account. [3] makes use of additional wheel odometry signals, while [5] includes readings from an inertial measurement unit. Another recent contribution [26] comes from the open-source community, which however exploits a small overlap between two cameras.

The literature also contains a vast body of more fundamental investigations of the relative pose problem. An up-to-scale estimate of the relative pose between two calibrated views can be obtained from five point-feature correspondences [14], [16], [19], [24]. The case of non-holonomic ground vehicle motion has been addressed in [23] and [22], where the latter work in particular demonstrates that, as soon as the camera has a non-zero, orthogonal displacement from the non-steering axis, the scale of the motion does in fact become observable if sufficient relative rotation is contained in the displacement. Multi-perspective camera systems can be described more generally using the generalised camera model, with both linear [17], [21] and non-linear solutions [12], [25] having been presented in the literature.
Unless we are in the special case of a pure translation, these solvers can generally render scale observable, even in the absence of any overlap in the cameras' fields of view. A relatively complete collection of solvers for both normal and generalised cameras is provided by the OpenGV library [10]. A dedicated generalized solver for the non-holonomic planar motion of steering ground vehicles is presented in [15].

The work of [27] proposes a method for a constrained, non-rigid stereo camera system. To the best of our knowledge, our work is the first to look at the case of a non-overlapping multi-camera system where the cameras are distributed over two bodies that are connected through an articulation joint.

Fig. 2. An articulated multi-perspective camera: two bodies A and B that are connected by a joint undergo a relative displacement. The joint configuration before and after the displacement is described by the 3D rotation matrices Q and Q'. Measurements are taken by cameras rigidly mounted on either body.

III. THE ARTICULATED MULTI-PERSPECTIVE CAMERA

Before we start introducing the AMPC, it is worth clarifying what we understand by an MPC. We distinguish an MPC from a generalized camera by the fact that the rays originate from one of a discrete, finite set of camera centers. The corresponding camera frames furthermore have constant, known extrinsic transformations with respect to some body frame. There are no further assumptions as to what camera type is used and whether or not the fields of view of any two distinct cameras are overlapping. We merely assume that the motion of the entire system is sufficiently small such that correspondences between images taken by the same camera over time can be obtained. The type of camera is left general by describing point observations by spatial direction vectors of unit norm originating from one of the camera centers.
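As a concrete illustration of this measurement model, the following sketch converts a pixel into a unit-norm bearing vector originating at a camera centre. The pinhole intrinsics K and the camera position are hypothetical values chosen for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal length 400 px, principal point
# at the centre of a 640x480 image)
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])

# Camera position inside the body frame; the camera-to-body rotation is
# assumed to be identity, matching the paper's convention
t_AC = np.array([0.5, -0.1, 0.3])

def bearing(u, v):
    """Unit direction vector of the ray through pixel (u, v)."""
    f = np.linalg.solve(K, np.array([u, v, 1.0]))
    return f / np.linalg.norm(f)

# An MPC measurement is then the pair (ray origin, ray direction)
f = bearing(400.0, 300.0)
print(t_AC, f, np.linalg.norm(f))   # the direction has unit norm
```

Any other central camera model would do equally well, as long as each observation reduces to such an origin-direction pair.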
In the following, we will review the generalized essential matrix constraint adapted to the above-defined MPC, and introduce the additional degrees of freedom of the AMPC. The section concludes with a brief discussion on the observability of the joint angle parameters.

A. Review of MPC two-view geometry

Though Figure 2 already describes the AMPC, it may serve well to describe a plain multi-perspective camera as well by only considering body A in both views. Let C_A be a camera installed on body A. Without loss of generality, let the 3D rotation from the camera to the body frame be equal to identity (if different, known extrinsic calibration parameters may be applied to reinstall this situation). Let t_{AC} furthermore be the 3D position of this camera inside the reference frame of body A. Now let {f_i, f'_i} be corresponding direction vectors to a common landmark observed by C_A in both views. Let R_{AA'} and t_{AA'} furthermore be the transformation variables from A' to A, such that a 3D point expressed in A' can be transformed to A using the equation p_A = R_{AA'} p_{A'} + t_{AA'}. As illustrated in [12], the generalized essential matrix constraint in this scenario leads to

\begin{pmatrix} f_i \\ t_{AC} \times f_i \end{pmatrix}^{\top}
\begin{pmatrix} \lfloor t_{AA'} \rfloor_{\times} R_{AA'} & R_{AA'} \\ R_{AA'} & 0 \end{pmatrix}
\begin{pmatrix} f'_i \\ t_{AC} \times f'_i \end{pmatrix} = 0.   (1)

A 3D Euclidean transformation has 6 degrees of freedom, and hence six correspondences across the two views are necessary in order to solve for the relative pose [25]. Note however that, in order to distinguish this case from the central relative pose problem and thus render scale observable, the correspondences need to be observed by at least two distinct cameras C_{A1} and C_{A2} (though Figure 2 only shows one example correspondence measured by one camera on body A, (1) is valid for any camera mounted on A).

B. The AMPC

The AMPC distinguishes itself from a standard multi-perspective camera array by the presence of a second body B which is connected to A through a joint.
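The constraint (1) expresses that the two measurement rays, written as Plücker vectors, intersect at the landmark. A minimal numerical check with synthetic values (all numbers below are illustrative, not from the paper) can be sketched as follows:

```python
import numpy as np

def skew(v):
    """Matrix form of the cross product: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Ground-truth relative pose of body A: p_A = R_AAp @ p_Ap + t_AAp
R_AAp = rot_z(0.3)
t_AAp = np.array([1.0, 0.2, 0.0])

# Camera position in the body frame (camera rotation = identity, w.l.o.g.)
t_AC = np.array([0.5, -0.1, 0.3])

# A landmark expressed in both body frames
p_A = np.array([6.0, 1.0, 0.5])
p_Ap = R_AAp.T @ (p_A - t_AAp)

# Unit bearing vectors of the landmark in both views
f = (p_A - t_AC) / np.linalg.norm(p_A - t_AC)
fp = (p_Ap - t_AC) / np.linalg.norm(p_Ap - t_AC)

# Generalized essential matrix of eq. (1) and the Pluecker ray vectors
E = np.block([[skew(t_AAp) @ R_AAp, R_AAp],
              [R_AAp, np.zeros((3, 3))]])
l = np.concatenate([f, np.cross(t_AC, f)])
lp = np.concatenate([fp, np.cross(t_AC, fp)])

residual = l @ E @ lp
print(abs(residual))   # numerically zero: the rays intersect at the landmark
```

With noisy measurements the residual no longer vanishes exactly, and a solver minimizes it over the pose parameters instead.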
In the most general case, this means that the problem is complicated by an individual 3D rotational joint configuration in each view. Let us assume that there is a camera C_B installed on B. The rotation from C_B to B is, without loss of generality, again assumed to be identity, and the position of this camera inside B is given by t_{BC}. Denoting the relative transformation of B by R_{BB'} and t_{BB'}, it is obvious that the generalized essential matrix constraint may equally well be applied to body B. Assuming that {f_j, f'_j} is a feature correspondence measured by C_B, we obtain

\begin{pmatrix} f_j \\ t_{BC} \times f_j \end{pmatrix}^{\top}
\begin{pmatrix} \lfloor t_{BB'} \rfloor_{\times} R_{BB'} & R_{BB'} \\ R_{BB'} & 0 \end{pmatrix}
\begin{pmatrix} f'_j \\ t_{BC} \times f'_j \end{pmatrix} = 0.   (2)

Again, note that this constraint equally applies to any camera mounted on B. However, finding the relative transformation for each body part individually does not completely solve the pose estimation problem, as it may still leave the interior joint angle configuration in each view unobserved. Now let us denote the joint configuration in the first and second view by Q and Q'. Q is defined to be the rotation from B to A. A loop constraint on rotations then simply leads to the following additional constraint on the joint configuration:

R_{AA'} Q' = Q R_{BB'}  \Leftrightarrow  R_{BB'} = Q^{\top} R_{AA'} Q'.   (3)

Constraining the origins of A and B to coincide with the joint centre, which again can be done without loss of generality, leads to the following additional constraints on the relative displacements:

t_{BB'} = Q^{\top} t_{AA'},   (4)
R_{BB'}^{\top} t_{BB'} = Q'^{\top} R_{AA'}^{\top} t_{AA'}.   (5)

C. Solution and observability

An AMPC has 12 degrees of freedom, and a set of parameters that fully constrains the relative pose and the internal AMPC configuration is given by the relative pose of part A (i.e. R_{AA'} and t_{AA'}) and the joint configuration before and after the displacement (i.e. Q and Q'). It is clear that this set of parameters can be solved in two steps. First, the relative displacement of A can be solved using at least six correspondences measured by cameras installed on A, substituted into (1).
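The loop constraints (3)-(5) can be verified with a short numerical sketch: we compose the world poses of B and B' (with both body origins at the joint centre) and check that the resulting relative displacement of the back matches the closed-form expressions. All numeric values are arbitrary illustrative choices.

```python
import numpy as np

def rot(axis, a):
    """Rodrigues rotation by angle a about the given axis."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * K @ K

# Joint configurations (rotation from B to A) in both views, and the
# relative displacement of the front body A
Q = rot([0.2, -0.3, 1.0], 0.4)
Qp = rot([0.0, 0.1, 1.0], -0.2)
R_AAp = rot([0.1, 0.2, 1.0], 0.3)
t_AAp = np.array([1.0, 0.1, 0.0])

# Relative displacement of B, via eqs. (3) and (4)
R_BBp = Q.T @ R_AAp @ Qp
t_BBp = Q.T @ t_AAp

# Sanity check with an arbitrary landmark: B has rotation Q and its origin
# at the joint centre; B' has rotation R_AAp @ Qp and origin t_AAp
p_w = np.array([4.0, -1.0, 2.0])
p_B = Q.T @ p_w
p_Bp = (R_AAp @ Qp).T @ (p_w - t_AAp)
assert np.allclose(p_B, R_BBp @ p_Bp + t_BBp)   # consistent rigid motion

# Eq. (5) follows from combining (3) and (4)
assert np.allclose(R_BBp.T @ t_BBp, Qp.T @ R_AAp.T @ t_AAp)
print("loop constraints (3)-(5) hold")
```

The landmark check confirms that (3) and (4) together describe exactly the rigid displacement of the back body, so no additional degrees of freedom hide in the composition.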
The internal joint configurations can then be identified by substituting (3) and (4) into (2), thus leading to the constraint

\begin{pmatrix} Q f_j \\ Q (t_{BC} \times f_j) \end{pmatrix}^{\top}
\begin{pmatrix} \lfloor t_{AA'} \rfloor_{\times} R_{AA'} & R_{AA'} \\ R_{AA'} & 0 \end{pmatrix}
\begin{pmatrix} Q' f'_j \\ Q' (t_{BC} \times f'_j) \end{pmatrix} = 0.   (6)

With R_{AA'} and t_{AA'} already known, six correspondences in cameras mounted on B are sufficient to determine Q and Q'. However, it is important to consider that Q and Q' may become unobservable. With the relative displacement of A and B given, the observability of Q and Q' can be analysed by looking at equations (3) and (4) and the question whether or not they permit a unique solution of Q and Q'. Assuming for example that the relative rotation for both A and B is identity, (3) simply constrains Q to be equal to Q'. (4) still does not fully constrain Q, as it is easy to see that substituting Q <- XQ, where X is an arbitrary rotation about the baseline axis t_{AA'}, would essentially not change the result. We now turn our attention to the most important practical application of an AMPC: a truck with trailer exerting planar motion.

IV. AMPC APPLIED TO TRUCK MOTION ESTIMATION

A is from now on called the front, and B the trailer or back. We furthermore define the body frames such that the x-axis is pointing forward, the y-axis to the left, and the z-axis upward. We make the common assumption of planar motion in ground vehicle applications. The relative displacement of the front (from now on simply denoted R and t) hence takes the special form of a pure rotation about the z-axis and a zero displacement in the vertical direction.

Fig. 3. AMPC in the special case of a truck with trailer. The relative displacements of either body become planar motions. All relative rotations, including Q and Q', become pure rotations about z, and the joint configuration can hence be described by angles θ and θ'. Both bodies also contain a non-steering wheel axis with a distance along x of s_A and s_B from the joint centre.
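The degenerate case discussed in Section III-C, where both relative rotations are identity and (4) is the only remaining constraint on Q, can be checked numerically. The specific axis and angle values below are arbitrary illustrative choices.

```python
import numpy as np

def rot(axis, a):
    """Rodrigues rotation by angle a about the given axis."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * K @ K

# Pure translation of both bodies: R_AA' = R_BB' = I, so eq. (3) forces
# Q' = Q and eq. (4) is the only constraint left on Q
t_AAp = np.array([2.0, 0.5, 0.1])
Q = rot([0.3, 0.7, 1.0], 0.6)
t_BBp = Q.T @ t_AAp

# Substitute Q -> X @ Q, with X an arbitrary rotation about the baseline
X = rot(t_AAp, 1.234)
Q_alt = X @ Q

# ...and eq. (4) cannot tell the two configurations apart
assert np.allclose(Q_alt.T @ t_AAp, t_BBp)
print("joint configuration unobservable under pure translation")
```

Since X^T leaves its own rotation axis t_{AA'} fixed, Q_alt produces exactly the same t_{BB'}, which is the one-parameter ambiguity described above.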
The joint angle configuration matrices Q and Q' also turn into univariate expressions of the form

Q = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix},
Q' = \begin{pmatrix} \cos\theta' & -\sin\theta' & 0 \\ \sin\theta' & \cos\theta' & 0 \\ 0 & 0 & 1 \end{pmatrix},   (7)

where θ and θ' denote the z-rotation of the back with respect to the front in both views, respectively. Figure 3 describes our scenario. In the following, we will briefly review existing options for calculating the motion of the front, followed by our core contribution of recovering the internal joint angle configuration both before and after a relative displacement, and notably from a single feature point correspondence.

A. Front motion estimation

The classical solutions for estimating the motion of the front are given by either stereo or wheel-odometry-supported monocular visual odometry. [23] furthermore shows that, by exploiting non-holonomic constraints which enforce locally circular motion, an up-to-scale estimate of the relative pose can be recovered from a single point correspondence if the corresponding front camera has a zero distance along x from the non-steering wheel axis (i.e. (t_{AC})_x = s_A, where s_A denotes the distance along x between the non-steering axis and the joint). The non-holonomic motion of steering ground vehicles along circular arcs is described by the Ackermann model. If the orthogonal distance between the camera and the axis in the horizontal plane deviates from zero (i.e. (t_{AC})_x ≠ s_A), even scale can be recovered by taking an additional second feature correspondence into account [15], [22]. We assume to be in the latter case, and therefore recover the relative transformation parameters R and t from two point correspondences measured by either one or two cameras on the front. While the incidence relation holds for an arbitrary camera, our real data experiments rely on a single front camera.

B. Constraining the internal joint parameters

The most straightforward solution to the calculation of θ and θ' consists of simply considering two correspondences {f_j, f'_j} measured by cameras mounted on the back, and substituting these measurements along with R and t into our modified generalised essential matrix constraint given by (6). Doing so would result in two equations in two unknowns, and therefore fully constrain the problem. As we will show in the following, there exists however a more elegant solution to this problem if we take into account that the back part may also be subject to non-holonomic motion constraints. In particular, we will show that these constraints enable us to find both θ and θ' from only a single point correspondence.
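The planar pipeline described above can be sketched end-to-end on synthetic data: we simulate an Ackermann-style circular-arc displacement of the front, generate one back-camera correspondence, and check that the residual of constraint (6) vanishes at the true joint angles θ, θ' but not at a perturbed configuration. All numeric values (arc radius, camera offset, landmark) are illustrative assumptions, not from the paper.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def ackermann_pose(rho, psi):
    """Pose after driving an arc of angle psi on a circle of radius rho
    (x forward, y left; the locally circular motion of the Ackermann model)."""
    return rot_z(psi), rho * np.array([np.sin(psi), 1.0 - np.cos(psi), 0.0])

def residual_eq6(theta, theta_p, R, t, t_BC, f, fp):
    """Residual of constraint (6) for one back-camera correspondence."""
    Q, Qp = rot_z(theta), rot_z(theta_p)
    E = np.block([[skew(t) @ R, R], [R, np.zeros((3, 3))]])
    l = np.concatenate([Q @ f, Q @ np.cross(t_BC, f)])
    lp = np.concatenate([Qp @ fp, Qp @ np.cross(t_BC, fp)])
    return l @ E @ lp

# Ground truth: joint angles in both views and the front motion
theta, theta_p = 0.25, 0.10
R, t = ackermann_pose(rho=10.0, psi=0.3)

# Back camera (position in frame B) and one landmark in the world frame,
# which we identify with frame A of the first view (origin at the joint)
t_BC = np.array([-2.0, 0.4, 1.0])
p_w = np.array([-8.0, 3.0, 1.5])

# Bearing vectors in the back camera of both views: B has rotation Q and
# the joint origin, B' has rotation R @ Q' and origin t
p_B = rot_z(theta).T @ p_w
p_Bp = (R @ rot_z(theta_p)).T @ (p_w - t)
f = (p_B - t_BC) / np.linalg.norm(p_B - t_BC)
fp = (p_Bp - t_BC) / np.linalg.norm(p_Bp - t_BC)

r_true = residual_eq6(theta, theta_p, R, t, t_BC, f, fp)
r_wrong = residual_eq6(theta + 0.2, theta_p, R, t, t_BC, f, fp)
print(abs(r_true), abs(r_wrong))   # near zero vs. clearly non-zero
```

With two such correspondences, (6) yields the two equations in the two unknowns θ and θ' of the straightforward solution; the paper's more elegant single-correspondence result additionally exploits the non-holonomic motion of the back, which the truncated text above begins to derive.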
