




A Robust Position and Posture Measurement System Using Visual Markers and an Inertia Measurement Unit

Kunihiro Ogata¹, Hideyuki Tanaka¹, and Yoshio Matsumoto¹

Abstract— Automatic control of mobile robots and robot arms requires techniques for estimating the position and orientation of objects with high accuracy and robustness. Although methods using markers or machine learning have been developed, general-purpose and highly accurate estimation has not been realized. Our team developed a high-accuracy visual marker, LentiMark, for high-accuracy estimation of position and posture, but data are lost when the camera cannot obtain marker images. We therefore developed the Marker-IMU system, which integrates visual markers with an inertia measurement unit (IMU). When cameras cannot acquire the image of a visual marker, any missing data are restored from IMU data. However, when calculating positions from acceleration-sensor values, sensor error increases the estimation error. We therefore developed a method for error correction using measurements from before and after the missing data. Evaluation experiments confirm that missing data can be estimated using the proposed method.

I. INTRODUCTION

Detecting objects and estimating their position and posture is indispensable for autonomous robots performing physical tasks, but it is difficult for robots to realize these tasks in varied situations. Support tools must provide a practical way to perform these tasks safely, as task failures can directly create physical hazards. The development of machine-learning technologies has realized recognition, detection, and segmentation of objects from RGB images [1], [2]. By applying image-processing technologies to machine learning in robots, a technique has been developed by which robots can pick up objects based on human voice instructions [3]. However, machine-learning methods depend on training data, and allowing robots to detect objects in varied situations requires enormous amounts of training data. Visual markers are
a low-cost and convenient tool, allowing measurement of an object's position and posture using only a monocular camera and no training data. They have been used in several robot-control applications, such as the navigation of mobile robots and object manipulation via robotic arms. However, conventional markers have two fundamental problems related to estimating orientation: degradation of orientation accuracy in frontal observations [4], and pose ambiguity, which makes it impossible to uniquely determine orientation [5]. Our team developed a new visual marker, LentiMark [6], [7], to address these two biggest problems of conventional visual markers. LentiMark solves the problem of degraded orientation accuracy.

¹Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology (AIST), 6-2-3 Kashiwa-no-ha, Kashiwa-shi, Chiba, Japan. {ogata-kunihiro, hideyuki-tanaka, yoshio-matsumoto}@aist.go.jp

Fig. 1. Overview of the Marker-IMU algorithm.

To address the problem of pose ambiguity, we developed a wavelike two-tone thin structure known as a flip detection pattern (FDP) to uniquely determine orientation [7]. These techniques allow high-accuracy measurement of the position and orientation of visual markers.

Although the position and orientation of a visual marker can be conveniently measured using a monocular camera, when the camera cannot obtain an image of the marker, its position and orientation cannot be calculated. Specifically, this occurs when the camera is positioned outside the marker's angle range, when the marker is out of the camera's field of view, or when the marker moves quickly and the image blurs.

II. OBJECTIVE AND PREVIOUS STUDIES

When a camera cannot obtain a marker image, it is impossible to measure the marker's position or posture, even with our improved marker. We therefore propose a novel method for robustly measuring marker position and posture using our marker techniques together with other sensors. Devices that sense markers from the outside, such as those
using infrared or ultrasound sensors, are affected by measurement range and obstacles in the same way as RGB cameras, so they provide little advantage. In contrast, an inertia measurement unit (IMU), which consists of an acceleration sensor, a gyroscope, and a magnetic sensor, is unaffected by measurement range or obstacles. We therefore aimed at developing a system, shown in Fig. 1, that can robustly measure marker position and posture by combining visual markers with an IMU. We call this measurement system "Marker-IMU". As Fig. 1 shows, when a visual marker is in the camera's measurement range (States A and C), its position and posture can be measured using previous methods [6], [7]. However, when a visual marker is outside the camera's measurement range (State B), position and posture data will be missing. IMU data are used to estimate this missing data.

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019. 978-1-7281-4003-2/19/$31.00 ©2019 IEEE

Systems combining vision and inertia information have been developed in previous studies. For example, Tao developed a system that uses a camera and inertia sensors to measure the motion of a human arm [8]. However, this system requires both images and inertia data, and cannot use inertia data to replace missing image data. A system that uses cameras and inertia sensors to measure the movement locus of a human was also developed [9]; to make these measurements, many visual markers were pasted on the ceiling, and a device consisting of a camera and an inertia sensor was used. Neges et al. developed an algorithm for estimating the locus of a human based on the position and orientation of natural markers (e.g., exit signs) and acceleration data, allowing route estimation using only a camera and IMU mounted on a tablet PC [10]. A body suit with many mounted visual markers and IMUs was developed [11]; the whole-body motion of a subject wearing the suit is estimated based on the markers, the IMUs, and human kinematic information. A method
for estimating the position, velocity, and orientation of a rocket was developed to allow precision planetary landings [12], [13]. This method calculates these parameters using an extended Kalman filter based on a combination of vision and IMU data. Another algorithm calculates the position of a quadrotor by non-linear optimization using color images and an IMU [14].

III. VISUAL MARKER

We developed LentiMark [6] to address the degradation of orientation accuracy in frontal observations, one of the biggest limitations of conventional visual markers.

A. LentiMark

Figure 2 shows a LentiMark, which consists of an AR marker and two one-dimensional moiré patterns in which black parts seemingly move according to the visual line angle (Fig. 2). We call this moiré pattern a variable moiré pattern (VMP), and the darkest black part in a VMP a black peak. Black peaks vary greatly with slight angle changes in frontal observations (Fig. 2(b)). LentiMark realizes accurate and stable orientation estimation using the angle information from the positions of the black peaks in each VMP, even in frontal observations; the estimation error is less than 1°. However, the effective angle range of this method is ±27°, because VMP patterns are unobservable from visual line angles exceeding 27°. We therefore need another technique to eliminate pose ambiguities in this angle range.

Pose ambiguity is another significant problem with conventional visual markers, which flips orientation estimates from valid to invalid values: the absolute values of the estimated visual line angles are almost the same as the true values, but their signs are inverted. We therefore developed a thin quasi-planar structure called a flip detection pattern (FDP).

Fig. 2. Overview of the VMP principle.

Figure 3 shows an FDP, which is a wavelike two-tone thin structure whose surface is a series of triangular prisms. One side of each prism is black and the other is white. The apparent black-white ratio of the prisms thus varies according to the viewed
direction.

Fig. 3. Overview of the FDP principle.

Figure 3(a) shows the newly developed LentiMark with FDPs on opposite sides of the VMPs. In this study, we used this LentiMark, consisting of VMPs and FDPs.

B. Marker Detection Algorithm

The following describes the method for detecting visual markers in an input image acquired by a monocular camera. Figure 4 shows the algorithm flow.

Fig. 4. Algorithm for detecting visual markers in an image.

The input image is first binarized, and 2D barcodes are identified in the binarized image. If no 2D barcode is detected, there are no visual markers in the image. Next, the four black dots surrounding the 2D barcode are detected. If no black dots are detected, there are no visual markers in the image. If both the 2D barcode and the set of black dots are found, a visual marker is considered detected. The position and posture are then calculated, and measurement accuracy is improved using the VMPs [6]. If a flip is detected, the orientation is corrected [7].

IV. MARKER-IMU SYSTEM ALGORITHM

We aim to develop a system that can robustly estimate position and posture through the use of IMUs and visual markers. An IMU can accurately measure its posture, but it cannot estimate its position. We thus propose a method for estimating the position of a Marker-IMU based on IMU acceleration data.

Data from acceleration sensors include gravitational acceleration, so it is necessary to remove the gravitational component from the acceleration-sensor data:

$$ {}^{i}\ddot{p}_{wog} = {}^{w}R_{i}^{-1}\left({}^{w}R_{i}\,{}^{i}\ddot{p}_{r} - {}^{w}g_{c}\right) = {}^{i}\ddot{p}_{r} - {}^{w}R_{i}^{-1}\,{}^{w}g_{c} \quad (1) $$

where ${}^{i}\ddot{p}_{r}$ is the value from the acceleration sensor in its sensor coordinate system, ${}^{w}R_{i}$ is the IMU orientation in the world coordinate system as estimated from its gyroscope, acceleration, and magnetic sensor values, ${}^{w}g_{c} = (0, 0, g)^{T}$ is the gravitational acceleration vector, and ${}^{i}\ddot{p}_{wog}$ is the IMU acceleration with the gravitational component removed.
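As a minimal sketch of the gravity-removal step of Eq. (1) (numpy assumed; the function name and the sign convention of the accelerometer reading are our assumptions, not the paper's implementation):

```python
import numpy as np

def remove_gravity(p_ddot_r, w_R_i, g=9.81):
    """Remove the gravitational component from a raw accelerometer
    reading, following Eq. (1).

    p_ddot_r : (3,) raw acceleration in the IMU sensor frame
    w_R_i    : (3, 3) IMU orientation in the world frame
    """
    w_g_c = np.array([0.0, 0.0, g])    # gravity vector in the world frame
    # ^i p_wog = ^i p_r - ^wR_i^{-1} ^w g_c  (R^{-1} = R^T for rotations)
    return p_ddot_r - w_R_i.T @ w_g_c

# Assumed example: IMU at rest and level (sensor frame aligned with the
# world frame), reading +g along z; the gravity-free result is ~zero.
at_rest = remove_gravity(np.array([0.0, 0.0, 9.81]), np.eye(3))
```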
Integrating the visual marker with the IMU requires matching their coordinate systems. The position and orientation of a visual marker are calculated in the camera coordinate system, while the position and orientation of the IMU are calculated in the world coordinate system. The camera orientation with respect to the world coordinate system is unknown, so the visual marker orientation at a given time cannot be estimated from the IMU orientation alone. But since the visual marker and the IMU are integrated, we can assume that their changes in pose with respect to their initial poses are the same. This is described as

$$ {}^{w}R_{i} = T_{i}\,{}^{w}R_{i,init}, \qquad {}^{c}R_{m} = T_{m}\,{}^{c}R_{m,init}, $$
$$ T_{i} = {}^{w}R_{i}\,{}^{w}R_{i,init}^{-1}, \qquad T_{m} = {}^{c}R_{m}\,{}^{c}R_{m,init}^{-1} \quad (2) $$

where ${}^{w}R_{i,init}$ is the initial IMU orientation, $T_{i}$ is the relative IMU orientation with respect to its initial value, ${}^{c}R_{m}$ is the visual marker orientation with respect to the camera coordinate system, ${}^{c}R_{m,init}$ is the initial visual marker orientation, and $T_{m}$ is the relative visual marker orientation with respect to the camera coordinate system. $T_{i}$ and $T_{m}$ are rotation matrices determined by the movement of the Marker-IMU. $T_{m}$ can be estimated from $T_{i}$ by a coordinate transform determined by the IMU specification and the mounting arrangement between the visual marker and its IMU. The transform applied here is

$$ {}^{c}\hat{R}_{m} = U(T_{i})\,{}^{c}R_{m,init} \quad (3) $$

where $U(A)$ is a function that transforms $A$ to an arbitrary coordinate system and ${}^{c}\hat{R}_{m}$ is the marker's orientation matrix as estimated from the IMU's orientation matrix. Using Eq. (3), the marker's orientation can be estimated from the IMU's orientation and the initial orientation of the visual marker. From Eqs. (1), (2), and (3), the marker's acceleration in the camera coordinate system, ${}^{c}\ddot{p}_{wog}$, can be estimated as

$$ {}^{c}\ddot{p}_{wog} = {}^{c}\hat{R}_{m}\,{}^{m}T_{i}\,{}^{i}\ddot{p}_{wog} \quad (4) $$

where ${}^{m}T_{i}$ is a matrix that translates from the IMU's coordinate system to the visual marker's coordinate system. ${}^{c}\ddot{p}_{wog}$ is integrated twice to estimate the Marker-IMU position.
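The chain of Eqs. (2)-(4) can be sketched as below. The mounting-dependent transform $U$ is left as an identity placeholder, since its actual form depends on how the marker and IMU are attached; all names here are our assumptions:

```python
import numpy as np

def marker_accel_in_camera(w_R_i, w_R_i_init, c_R_m_init, m_T_i, p_ddot_wog,
                           U=lambda T: T):
    """Estimate the marker acceleration in the camera frame from IMU data,
    following Eqs. (2)-(4).

    w_R_i, w_R_i_init : (3, 3) current and initial IMU orientations (world)
    c_R_m_init        : (3, 3) initial marker orientation (camera frame)
    m_T_i             : (3, 3) IMU-to-marker coordinate transform
    p_ddot_wog        : (3,)   gravity-free IMU acceleration, from Eq. (1)
    U                 : mounting-dependent transform (identity placeholder)
    """
    # Eq. (2): relative IMU rotation with respect to its initial pose
    T_i = w_R_i @ w_R_i_init.T
    # Eq. (3): marker orientation predicted from the IMU orientation
    c_R_m_hat = U(T_i) @ c_R_m_init
    # Eq. (4): gravity-free acceleration expressed in the camera frame
    return c_R_m_hat @ (m_T_i @ p_ddot_wog)
```

With all rotations at identity (device not yet moved, frames aligned), the acceleration passes through unchanged, which is a quick sanity check of the frame bookkeeping.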
Since acceleration-sensor values include error with respect to the true values, accumulated sensor error increases the position error when the position is calculated by second-order integration of the acceleration. Kourogi developed a method for estimating the trajectories of walking humans using acceleration and magnetic sensors, a gyroscope, and a camera [15]; he integrated a human walking model with a Kalman filter to minimize position error. In this study, we propose a method for correcting position error by assuming that the sensor error can be modeled as a function of time. Acceleration values with respect to the camera coordinate system are modeled with a sensor-error term as

$$ {}^{c}\ddot{p}_{wog} = {}^{c}\ddot{p}_{true} + e(t) \quad (5) $$

where ${}^{c}\ddot{p}_{true}$ is the true Marker-IMU acceleration, $e(t)$ is the error model for acceleration, and $t$ is time. For simplicity, $e(t)$ is approximated by a polynomial, and the positional error is defined as in Eq. (6). The $x$ component is handled as

$$ e_{x}(t) = \iint e\,dt\,dt = \sum_{k=0}^{n} a_{x,k}\,t^{k} \quad (6) $$

where $e_{x}(t)$ is the $x$-axis error model and $a_{x,k}$ are the polynomial coefficients. For simplicity, the analogous expressions for $y$ and $z$ are omitted. From Eqs. (5) and (6), the Marker-IMU position with the integration error removed can be obtained as

$$ {}^{c}p_{true} = {}^{c}p_{wog} - \sum_{k=0}^{n} a_{x,k}\,t^{k} \quad (7) $$

Identifying the parameters of Eq. (6) requires boundary conditions. When the camera detects a visual marker, its position and posture can be measured; if no visual marker is detected, position and posture data will be missing (see Fig. 1). Data from before and after the missing interval are thus used as boundary conditions, allowing identification of the error-model parameters. Figure 5 shows an overview of the boundary conditions.

Fig. 5. Boundary conditions for calculating the parameters of the polynomial equation.

In Fig. 5, the camera can obtain marker positions within intervals A and C but not within B. Taking points a and b as boundary conditions, we obtain

$$ \sum_{k=0}^{n} a_{x,k}\,t_{a}^{k} = 0, \qquad \sum_{k=0}^{n} a_{x,k}\,t_{b}^{k} = x_{wog} - x_{b} \quad (8) $$

where $x_{wog}$ is the position obtained by integrating the acceleration between a and b. The parameters of the error model can be identified by solving these simultaneous equations. However, the solutions are indefinite if there are fewer equations than parameters ($n \geq 2$), in which case convergence calculations are needed to obtain the parameters. In this study we therefore used the simplest model, setting $n = 2$ and $s = t - t_{a}$ and assuming the positional error to be a quadratic function. The model equation is thus

$$ e_{x}(t) = \iint a_{x}\,dt\,dt = a_{x}s^{2} = a_{x}(t - t_{a})^{2} \quad (9) $$

and $a_{x}$ can be solved for from the boundary condition.

V. EVALUATION EXPERIMENT

A. Experimental Conditions

To verify the effectiveness of the proposed method, we simultaneously measured data from a visual marker and an IMU. The IMU is an MTw device (Xsens), which connects to a PC via wireless communication; the MTw calculates orientation data with high accuracy using a real-time Kalman filter. Visual marker images were obtained by a web camera mounted on a desktop PC (Dell). We used the LEAG SDK (LEAG Solutions) to compute the positions and postures of visual markers. Since the IMU sampling time (about 5 ms) is shorter than that of the visual marker (about 150 ms), sensing data were obtained using multithreaded processing; the IMU thus obtains more sensor data than the visual marker. Figure 6 shows an overview of the coordinate systems for the Marker-IMU and for camera images.

Fig. 6. Overview and coordinate system for the Marker-IMU.

B. Results

IMU and visual marker data were simultaneously measured, and visual marker data were assumed to be partially missing. Marker-IMU positions and postures were then estimated by the proposed method. The numbers of missing marker data were set to (1) 20 (about 3.0 sec), (2) 10 (about 1.5 sec), and (3) 1 (about 0.3 sec), and Marker-IMU positions were estimated over these ranges. Marker-IMU postures were estimated over the entire interval. An experimenter carried the Marker-IMU by hand, moving it randomly. Figures 7, 8, and 9 show the experimental results. Figure 7 shows the posture results, including roll angle (around the x axis), pitch angle (around the y axis), and yaw angle (around the z axis), calculated as ${}^{c}\hat{R}_{m}$ and ${}^{c}R_{m}$. Figures 8 and 9 show positions as estimated from the initial and terminal values in each figure.
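The drift correction of Sec. IV that produces these position estimates, i.e. the quadratic error model of Eq. (9) with the boundary conditions of Eq. (8), can be sketched as follows (numpy assumed; function and variable names are ours):

```python
import numpy as np

def correct_drift(t, x_wog, x_b):
    """Correct integration drift over a marker-missing gap using the
    quadratic error model of Eq. (9), per axis.

    t     : (N,) sample times over the gap; t[0] = t_a, t[-1] = t_b
    x_wog : (N,) positions from double-integrated acceleration; x_wog[0]
            matches the last marker observation, so the error at t_a is 0
    x_b   : true position from the first marker observation after the gap
    """
    s = t - t[0]                        # s = t - t_a, so e_x(t_a) = 0
    a_x = (x_wog[-1] - x_b) / s[-1]**2  # boundary condition at b, Eq. (8)
    return x_wog - a_x * s**2           # Eq. (7) with e_x(t) = a_x s^2

# Assumed example: a constant accelerometer bias produces exactly
# quadratic drift, which this model removes completely.
t = np.linspace(0.0, 1.5, 11)
truth = np.sin(t)                       # hypothetical true trajectory
drift = 0.2 * t**2                      # quadratic drift from constant bias
corrected = correct_drift(t, truth + drift, truth[-1])
```

The design choice mirrors the paper's simplification: with a single coefficient per axis, the zero-error condition at point a is satisfied by construction, and the single boundary condition at point b determines $a_x$ in closed form, with no convergence calculation.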
In these figures, green lines indicate data calculated by the proposed method, black lines indicate data integrated without the proposed method, and red dots are the visual marker data taken as true values.

Fig. 7. Roll, pitch, and yaw orientation angles.

Fig. 8. Marker-IMU positions with 20 missing data.

Fig. 9. Marker-IMU positions with 10 missing data.

Figure 7 shows that the proposed method properly estimated postures. In Figs. 8 and 9, the black lines deviate from the true values over time, while the green lines fit the true values. Moreover, the error between true and estimated values in Fig. 9 is smaller than in Fig. 8. Figure 10 shows the mean absolute error between true and estimated values.

Fig. 10. Average error between true and estimated values.

Fewer missing data result in smaller error between true and estimated values. Using the proposed method, however, the error between true and estimated values is sufficiently small regardless of the number of missing data. This confirms the proposed method's validity.

C. Estimating Missing Marker Data

Measurements were arranged so that some visual marker data would be missing, and this missing data was estimated using the proposed method. Figure 11 shows the measured images. When visual marker data can be obtained, three lines are displayed on the visual marker (see Fig. 6). In Fig. 11(a), the visual marker was hidden from the camera as the Marker-IMU rotated, so visual marker data could not be measured. Similarly, in Fig. 11(b), the Marker-IMU was moved outside the measurement range: specifically, it was moved upward in the negative y-axis direction in the image until it left the measurement range, then returned to the image center.

Fig. 11. Measured images including missing data.

Figures 12, 13, 14, and 15 show the experimental results. Missing positional data were replaced using the proposed method, and orientation data were estimated across the entire interval.

Fig. 12. Estimated positions when the Marker-IMU was rotated, as in Fig. 11(a).

These experimental results demonstrate continuity in da…
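The error summary of Fig. 10 reduces to a per-axis mean-absolute-error computation, which can be sketched as follows (numpy assumed; the function name and sample values are ours, not the paper's data):

```python
import numpy as np

def mean_abs_error(estimated, true):
    """Per-axis mean absolute error between estimated and true positions,
    as used for the comparison in Fig. 10 (same units as the inputs)."""
    est = np.asarray(estimated, dtype=float)
    tru = np.asarray(true, dtype=float)
    return np.mean(np.abs(est - tru), axis=0)

# Hypothetical 2-sample, 2-axis trajectories for illustration.
mae = mean_abs_error([[1.0, 2.0], [3.0, 4.0]],
                     [[1.0, 1.0], [3.0, 2.0]])
```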