Robot companion localization at home and in the office

Arnoud Visser, Jurgen Sturm, Frans Groen
Intelligent Autonomous Systems, Universiteit van Amsterdam
http://www.science.uva.nl/research/ias/

Abstract

The abilities of mobile robots depend greatly on the performance of basic skills such as vision and localization. Although great progress has been made in exploring and mapping extensive public areas with large holonomic robots on wheels, less attention has been paid to the localization of a small robot companion in a confined environment such as a room in an office or at home. In this article, a localization algorithm for the popular Sony entertainment robot Aibo inside a room is worked out. This algorithm can provide localization information based on the natural appearance of the walls of the room. The algorithm starts by making a scan of the surroundings, turning the head and the body of the robot on a certain spot. The robot learns the appearance of the surroundings at that spot by storing color transitions at different angles in a panoramic index. The stored panoramic appearance is used to determine the orientation (including a confidence value) relative to the learned spot for other points in the room. When multiple spots are learned, an absolute position estimate can be made. The applicability of this kind of localization is demonstrated in two environments: at home and in an office.

1 Introduction

1.1 Context

Humans orientate easily in their natural environments. To be able to interact with humans, mobile robots also need to know where they are. Robot localization is therefore an important basic skill of a mobile robot such as a robot companion like the Aibo. Yet, the Sony entertainment software contained no localization software until the latest release¹. Still, many other applications for a robot companion - like collecting a newspaper from the front door - strongly depend on fast, accurate and robust position estimates. As long as the localization of a walking robot like the Aibo is based on odometry after sparse observations, no robust and accurate position estimates can be expected.

Most of the localization research with the Aibo has concentrated on the RoboCup. At the RoboCup², artificial landmarks such as colored flags, goals and field lines can be used to achieve localization accuracies below six centimeters [6, 8]. The price that these RoboCup approaches pay is their total dependency on artificial landmarks of known shape, position and color. Most algorithms even require manual calibration of the actual colors and lighting conditions on a field, and are still quite susceptible to disturbances around the field, for instance produced by brightly colored clothes in the audience.

The interest of the RoboCup community in more general solutions has been (and still is) growing over the past few years. The almost-SLAM challenge³ of the 4-Legged league is a good example of the state of the art in this community. For this challenge, additional landmarks with bright colors are placed around the borders of a RoboCup field. The robots get one minute to walk around and explore the field.
Then the normal beacons and goals are covered up or removed, and the robot must move to a series of five points on the field, using the information learnt during the first minute. The winner of this challenge [6] reached the five points by using mainly the information of the field lines. The additional landmarks were only used to break the symmetry of the soccer field.

A more ambitious challenge is formulated in the newly founded RoboCup @Home league⁴. In this challenge the robot has to safely navigate toward objects in a living-room environment. The robot gets 5 minutes to learn the environment. After the learning phase, the robot has to visit 4 distinct places/objects in the scenario, at least 4 meters away from each other, within 5 minutes.

¹ Aibo Mind 3 remembers the direction of its station and toys relative to its current orientation.
² RoboCup Four Legged League homepage, last accessed in May 2006, http://www.tzi.de/4legged
³ Details about the Simultaneous Localization and Mapping challenge can be found at http://www.tzi.de/4legged/pub/Website/Downloads/Challenges2005.pdf
⁴ RoboCup @Home League homepage, last accessed in May 2006, http://www.ai.rug.nl/robocupathome/

1.2 Related Work

Many researchers have worked on the SLAM problem in general, for instance on panoramic images [1, 2, 4, 5]. These approaches are inspiring, but only partially transferable to the 4-Legged league. The Aibo is not equipped with an omni-directional high-quality camera. The camera in the nose has a horizontal opening angle of only 56.9 degrees and a resolution of 416 x 320 pixels. Further, the horizon in the images is not constant, but depends on the movements of the head and legs of the walking robot. So each image is taken from a slightly different perspective, and the path of the camera center is only in first approximation a circle. Furthermore, the images are taken while the head is moving. When moving at full speed, this can give a difference of 5.4 degrees between the top and the bottom of the image, so the image appears tilted as a function of the turning speed of the head. Still, the location of the horizon can be calculated by solving the kinematic equations of the robot.

To process the images, a 576 MHz processor is available in the Aibo, which means that only simple image processing algorithms are applicable. In practice, the image is analyzed by following scan-lines with a direction relative to the calculated horizon. In our approach, multiple sectors above the horizon are analyzed, with in each sector multiple scan-lines in the vertical direction. One of the more general approaches [3] also divides the image into multiple sectors, but there the image is omni-directional and each sector is characterized by its average color. Our method analyzes each sector on a different characteristic feature: the frequency of color transitions.

2 Approach

The main idea is quite intuitive: we would like the robot to generate and store a 360° circular panorama image of its environment while it is in the learning phase. After that, it should align each new image with the stored panorama, and from that the robot should be able to derive its relative orientation (in the localization phase). This alignment is not trivial, because the new image can be translated, rotated, stretched and perspectively distorted when the robot does not stand at the point where the panorama was originally learned [11]. Of course, the Aibo is not able (at least not in real time) to compute this alignment on full-resolution images. Therefore a reduced feature space is designed so that the computations become tractable⁵ on an Aibo. So, a reduced circular 360° panorama model of the environment is learned.

Figure 1 gives a quick overview of the algorithm's main components. The Aibo performs a calibration phase before the actual learning can start. In this phase the Aibo first decides on a suitable camera setting (i.e. camera gain and the shutter setting) based on the dynamic range of brightness in the auto-shutter step. Then it collects color pixels by turning its head for a while, and finally clusters these into the 10 most important color classes in the color clustering step, using a standard implementation of the Expectation-Maximization algorithm assuming a Gaussian mixture model [9]. The result of the calibration phase is an automatically generated lookup table that maps every YCbCr color onto one of the 10 color classes and can therefore be used to segment incoming images into their characteristic color patches (see figure 2(a)). These initialization steps are worked out in more detail in [10].

⁵ Our algorithm consumes approximately 16 milliseconds per image frame, therefore we can easily process images at the full Aibo frame rate (30 fps).

Figure 1: Architecture of our algorithm.

Figure 2: Image processing: from the raw image to the sector representation. (a) Unsupervised learned color segmentation. (b) Sectors and frequent color transitions visualized. This conversion consumes approximately 6 milliseconds/frame on a Sony Aibo ERS-7.
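To make the calibration step more concrete, the sketch below shows how the color clustering could look in an off-line prototype: sampled YCbCr pixels are clustered into 10 color classes with a standard EM implementation for Gaussian mixtures and turned into a lookup table. Only the number of classes and the use of a standard EM/Gaussian-mixture implementation come from the text; the choice of scikit-learn, the table resolution and all function names are assumptions, and the on-board Aibo implementation is of course not Python.

```python
# Illustrative sketch of the calibration step: cluster sampled YCbCr pixels
# into 10 color classes with EM (Gaussian mixture) and build a lookup table
# that maps any YCbCr triple onto its color class. Not the on-board code.
import numpy as np
from sklearn.mixture import GaussianMixture

N_CLASSES = 10          # number of color classes used in the paper
LUT_STEP = 8            # quantization step of the lookup table (assumption)

def learn_color_classes(pixels_ycbcr: np.ndarray) -> GaussianMixture:
    """Fit a Gaussian mixture model to the sampled pixels (N x 3 array)."""
    gmm = GaussianMixture(n_components=N_CLASSES, covariance_type="full",
                          max_iter=100, random_state=0)
    gmm.fit(pixels_ycbcr)
    return gmm

def build_lookup_table(gmm: GaussianMixture) -> np.ndarray:
    """Precompute the color class for every quantized YCbCr value."""
    levels = np.arange(0, 256, LUT_STEP)
    grid = np.stack(np.meshgrid(levels, levels, levels, indexing="ij"), axis=-1)
    classes = gmm.predict(grid.reshape(-1, 3)).astype(np.uint8)
    return classes.reshape(len(levels), len(levels), len(levels))

def classify(lut: np.ndarray, y: int, cb: int, cr: int) -> int:
    """Segment a single pixel by table lookup (fast enough for on-line use)."""
    return int(lut[y // LUT_STEP, cb // LUT_STEP, cr // LUT_STEP])
```

With LUT_STEP = 8 the table has 32 x 32 x 32 entries, small enough to keep in memory; whether the original implementation quantizes the color space at this resolution is not stated in the text.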
2.1 Sector signature correlation

Every incoming image is now divided into its corresponding sectors⁶. The sectors are located above the calculated horizon, which is obtained by solving the kinematics of the robot. Using the lookup table from the unsupervised learned color clustering, we can compute the sector features by counting, per sector, the transition frequencies between each pair of color classes in the vertical direction. This yields a histogram of 10x10 transition frequencies per sector, which we subsequently discretize into 5 logarithmically scaled bins. In figure 2(b) the most frequent color transitions for each sector are displayed. Some sectors have multiple color transitions in the most frequent bin, other sectors have a single or no dominant color transition. This is only a visualization; not only the most frequent color transitions, but the frequencies of all 100 color transitions are used as the characteristic feature of the sector.

In the learning phase we estimate all these 80 x (10x10) distributions⁷ by turning the head and body of the robot. We define a single distribution for a currently perceived sector by

P_{current}(i, j, bin) = \begin{cases} 1 & \text{if } discretize(freq(i, j)) = bin \\ 0 & \text{otherwise} \end{cases}    (1)

where i, j are indices of the color classes and bin is one of the five frequency bins. Each sector is seen multiple times, and the many frequency-count samples are combined into a distribution learned for that sector by the equation

P_{learned}(i, j, bin) = \frac{count_{sector}(i, j, bin)}{\sum_{bin' \in frequencyBins} count_{sector}(i, j, bin')}    (2)

After the learning phase we can simply multiply the current and the learned distribution to get the correlation between a currently perceived and a learned sector:

Corr(P_{current}, P_{learned}) = \prod_{i, j \in colorClasses,\; bin \in frequencyBins} P_{learned}(i, j, bin)^{P_{current}(i, j, bin)}    (3)

⁶ 80 sectors corresponding to 360°; with an opening angle of the Aibo camera of approx. 50°, this yields between 10 and 12 sectors per image (depending on the head pan/tilt).
⁷ When we use 16-bit integers, a complete panorama model can be described by (80 sectors) x (10 colors x 10 colors) x (5 bins) x (2 bytes) = 80 KB of memory.
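The following sketch illustrates how the sector signatures of equations (1)-(2) and the correlation of equation (3) could be computed. The dimensions (10 color classes, 5 bins, 80 sectors) come from the text; the bin edges of the logarithmic discretization, the smoothing floor and all names are assumptions.

```python
# Sketch of the sector signature and correlation of Eqs. (1)-(3).
# Dimensions (10 color classes, 5 bins, 80 sectors) follow the paper;
# the logarithmic bin edges and the smoothing floor are assumptions.
import numpy as np

N_COLORS, N_BINS, N_SECTORS = 10, 5, 80
BIN_EDGES = np.array([0, 1, 2, 4, 8])   # assumed logarithmically scaled bins

def discretize(freq: np.ndarray) -> np.ndarray:
    """Map a 10x10 matrix of transition counts onto one of the 5 bins."""
    return np.digitize(freq, BIN_EDGES) - 1          # values in 0..4

def current_distribution(freq: np.ndarray) -> np.ndarray:
    """Eq. (1): indicator distribution of the currently perceived sector."""
    p = np.zeros((N_COLORS, N_COLORS, N_BINS))
    i, j = np.indices(freq.shape)
    p[i, j, discretize(freq)] = 1.0
    return p

class SectorModel:
    """Accumulates observations of one sector during the learning phase."""
    def __init__(self):
        self.counts = np.zeros((N_COLORS, N_COLORS, N_BINS))

    def add_observation(self, freq: np.ndarray) -> None:
        self.counts += current_distribution(freq)

    def learned_distribution(self) -> np.ndarray:
        """Eq. (2): normalize the counts over the frequency bins."""
        total = self.counts.sum(axis=2, keepdims=True)
        total[total == 0] = 1.0                       # avoid division by zero
        return self.counts / total

def correlation(p_current: np.ndarray, p_learned: np.ndarray,
                floor: float = 1e-3) -> float:
    """Eq. (3): likelihood of the observed bins under the learned model."""
    selected = np.where(p_current > 0, np.maximum(p_learned, floor), 1.0)
    return float(np.prod(selected))
```

The floor in the correlation prevents a single color transition that was never observed during learning from driving the whole product to zero; whether the original implementation handles this case the same way is not stated in the text.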
2.2 Alignment

After all the correlations between the stored panorama and the signatures of the new image have been evaluated, we would like to find an alignment between the stored and the seen sectors so that the overall likelihood of the alignment becomes maximal. In other words, we want to find a diagonal path with minimal cost through the correlation matrix. This minimal path is indicated by green dots in figure 3. The path is extended to a green line for the sectors that are not visible in the latest perceived image. We consider the fitted path to be the true alignment and extract the rotational estimate \varphi_{robot} from the offset between its center pixel and the diagonal (\Delta_{sectors}):

\varphi_{robot} = \frac{360^{\circ}}{80} \cdot \Delta_{sectors}    (4)

This rotational estimate is the difference between the solid green line and the dashed white line in figure 3, indicated by the orange halter. Further, we estimate the noise by fitting a second path through the correlation matrix, far away from the best-fitted path:

SNR = \frac{\sum_{(x,y) \in minimumPath} Corr(x, y)}{\sum_{(x,y) \in noisePath} Corr(x, y)}    (5)

The noise path is indicated in figure 3 with red dots.

Figure 3: Visualization of the alignment step while the robot is scanning with its head. (a) Robot standing on the trained spot (matching line is just the diagonal). (b) Robot turned right by 45 degrees (matching line displaced to the left). The green solid line marks the minimum path (assumed true alignment) while the red line marks the second-minimal path (assumed peak noise). The white dashed line represents the diagonal, while the orange halter illustrates the distance between the found alignment and the center diagonal (\Delta_{sectors}).
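As an illustration of the alignment idea, the sketch below scores every circular shift of the currently visible sectors against the stored panorama and converts the best shift into a rotation estimate and an SNR-like confidence. This is a simplified stand-in: the method described above fits a minimal-cost path through the correlation matrix (which can absorb small per-sector deviations), whereas the sketch assumes a rigid shift; the distance threshold for the noise path and all names are assumptions.

```python
# Simplified stand-in for the alignment step (Section 2.2): instead of the
# paper's minimal-cost path through the correlation matrix, every circular
# shift of the visible sectors is scored against the stored panorama model.
import numpy as np

N_SECTORS = 80
DEG_PER_SECTOR = 360.0 / N_SECTORS            # 4.5 degrees, cf. Eq. (4)

def sector_correlation(p_current, p_learned, floor=1e-3):
    """Eq. (3), as in the previous sketch."""
    return float(np.prod(np.where(p_current > 0,
                                  np.maximum(p_learned, floor), 1.0)))

def align(visible, panorama):
    """visible: dict {sector index under the assumed heading: P_current};
    panorama: list of 80 learned distributions. Returns (rotation_deg, snr)."""
    scores = np.zeros(N_SECTORS)
    for shift in range(N_SECTORS):
        scores[shift] = sum(
            sector_correlation(p_cur, panorama[(idx + shift) % N_SECTORS])
            for idx, p_cur in visible.items())
    best = int(np.argmax(scores))
    # wrap the sector offset into [-40, 40) and convert it to degrees (Eq. 4)
    delta = (best + N_SECTORS // 2) % N_SECTORS - N_SECTORS // 2
    rotation = DEG_PER_SECTOR * delta
    # Eq. (5)-style confidence: best score versus the best score far away
    far = [s for s in range(N_SECTORS)
           if min(abs(s - best), N_SECTORS - abs(s - best)) > 5]
    snr = scores[best] / max(scores[far].max(), 1e-12)
    return rotation, snr
```

Because the real implementation tolerates small deviations of the matching line from a perfect diagonal, this sketch only captures the rotation-estimation idea, not the exact path fit.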
2.3 Position Estimation with Panoramic Localization

The algorithm described in the previous section can be used to obtain a robust bearing estimate, together with a confidence value, for a single trained spot. As we finally want to use this algorithm for full localization, we extended the approach to support multiple training spots. The main idea is that the robot determines to what extent its current position resembles the previously learned spots and then uses interpolation to estimate its exact position. As we think that this approach could also be useful for the RoboCup @Home league (where robot localization in complex environments like kitchens and living rooms is required), it could become desirable to store a comprehensive panorama model library containing dozens of previously trained spots (for an overview see [1]). However, due to the computation time of the feature space conversion and panorama matching, only a single training spot and its corresponding panorama model can be selected per frame. Therefore, the robot cycles through the learned training spots one by one. Every panorama model is associated with a gradually changing confidence value representing a sliding average of the confidence values we get from the per-image matching. After training, the robot memorizes a given spot by storing the confidence values received from the training spots.

By comparing a new confidence value with its stored reference, it is easy to deduce whether the robot stands closer to or farther from the imprinted target spot. We assume that the imprinted target spot is located somewhere between the training spots. Then, to compute the final position estimate, we simply weight each training spot with its normalized corresponding confidence value:

position_{robot} = \sum_{i} position_{i} \cdot \frac{confidence_{i}}{\sum_{j} confidence_{j}}    (6)

This should yield zero when the robot is assumed to stand at the target spot, or a translation estimate towards the robot's position when the confidence values are not in balance anymore.

To prove the validity of this idea, we trained the robot on four spots on a regular 4-Legged field in our robolab. The spots were located along the axes, approximately 1 m away from the center. As target spot, we simply chose the center of the field. The training itself was performed fully autonomously by the Aibo and took less than 10 minutes. After training was complete, the Aibo walked back to the center of the field. We recorded the found position, kidnapped the robot to an arbitrary position around the field and let it walk back again. Please be aware that our approach for multi-spot localization is at this moment rather primitive and has to be understood only as a proof of concept. In the end, the panoramic localization data from vision should of course be processed by a more sophisticated localization algorithm, like a Kalman or particle filter (not least to incorporate movement data from the robot).
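A minimal sketch of the multi-spot position estimate of equation (6) is given below. The data layout (one sliding-average confidence per training spot, plus the reference value stored after training) follows the description above; dividing by the stored reference is an assumption inspired by the normalization mentioned as future work, and the numbers in the usage example are invented.

```python
# Sketch of the multi-spot position estimate of Eq. (6). Dividing each
# confidence by its stored reference value is an assumption (the paper
# mentions such a normalization only as future work).
import numpy as np

class TrainingSpot:
    def __init__(self, position, reference_confidence):
        self.position = np.asarray(position, dtype=float)   # (x, y) in meters
        self.reference = float(reference_confidence)        # stored after training
        self.confidence = 0.0                                # sliding average

    def update(self, new_confidence, alpha=0.2):
        """Sliding average over the per-image matching confidences."""
        self.confidence = (1 - alpha) * self.confidence + alpha * new_confidence

def estimate_position(spots):
    """Eq. (6): weight every training spot with its normalized confidence."""
    weights = np.array([s.confidence / s.reference for s in spots])
    weights /= weights.sum()          # assumes at least one non-zero confidence
    positions = np.array([s.position for s in spots])
    return weights @ positions

# Example with four spots roughly 1 m from the field center along the axes,
# as in the proof-of-concept experiment described above (values invented):
spots = [TrainingSpot(( 1, 0), 0.8), TrainingSpot((-1, 0), 0.8),
         TrainingSpot(( 0, 1), 0.8), TrainingSpot(( 0,-1), 0.8)]
for s, c in zip(spots, [0.5, 0.3, 0.4, 0.4]):
    s.update(c, alpha=1.0)
print(estimate_position(spots))   # biased towards the spot seen most clearly
```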
3 Results

3.1 Environments

We selected four different environments to test our algorithm under a variety of circumstances. The first two experiments were conducted at home and in an office environment⁸ to measure performance under real-world circumstances. The experiments were performed on a cloudy morning, a sunny afternoon and late in the evening. Furthermore, we conducted exhaustive tests in our laboratory. Even more challenging, we took an Aibo outdoors (see [7]).

⁸ XX office, DECIS lab, Delft

3.2 Measured results

Figure 4(a) illustrates the results of a rotational test in a normal living room. As the error in the rotation estimates ranges between -4.5 and +4.5 degrees, we may assume an error in alignment of a single sector⁹; moreover, the size of the confidence interval can be translated into maximally two sectors, which corresponds to the maximal angular resolution of our approach.

⁹ Full circle of 360° divided by 80 sectors.

Figure 4: Typical orientation estimation results of experiments conducted at home. (a) Rotational test in a natural environment (living room, sunny afternoon). (b) Translational test in a natural environment (child's room, late in the evening). In the rotational experiment on the left the robot is rotated over 90 degrees on the same spot, and every 5 degrees its orientation is estimated. The robot is able to find its true orientation with an error estimate equal to one sector of 4.5 degrees. The translational test on the right is performed in a child's room. The robot is translated over a straight line of 1.5 meter, which covers the major part of the free space in this room. The robot is able to maintain a good estimate of its orientation, although the error estimate increases away from the location where the appearance of the surroundings was learned.

Figure 4(b) shows the effects of a translational dislocation in a child's room. The robot was moved along a straight line back and forth through the room (via the trained spot somewhere in the middle). The robot is able to estimate its orientation quite well on this line. The discrepancy with the true orientation is between +12.1 and -8.6 degrees, close to the walls. This is also reflected in the computed confidence interval, which grows steadily when the robot is moved away from the trained spot. The results are quite impressive given the relatively big movements in a small room and the resulting significant perspective changes in that room.

Figure 5(a) also stems from a translational test (cloudy morning), which has been conducted in an office environment. The free space in this office is much larger than at home. The robot was moved along a 14 m long straight line to the left and right and its orientation was estimated. Note that the error estimate stays low at the right side of this plot. This is an artifact which nicely reflects the repetition of similarly looking working islands in the office. In both translational tests it can be seen intuitively that the rotation estimates are within an acceptable range. This can also be shown quantitatively (see figure 5(b)): both the orientation error and the confidence interval increase slowly and in a graceful way when the robot is moved away from the training spot.

Finally, figure 6 shows the result of the experiment to estimate the absolute position with multiple learned spots. It can be seen that the localization is not as accurate as traditional approaches, but can still be useful for some applications (bearing in mind that no artificial landmarks are required). We repeatedly recorded a deviation to the upper right, which we think can be explained by the fact that different learning spots do not produce equally strong confidence values; we believe we will be able to correct for that by means of confidence value normalization in the near future.

Figure 5: Challenging orientation results. (a) Translational test in a natural environment (office, cloudy morning). (b) Signal degradation as a function of the distance to the learned spot (measured in the laboratory). On the left, a translational test in an office environment over 14 meters along a line 80 centimeters from the (single) learned spot. A translation to the left of the office increases the error estimate, as expected. When translating to the right of the office, the orientation estimate oscillates, but the error estimate stays low. This is due to repeating patterns in the office: after 4 meters there is another group of desks and chairs which resembles the learned appearance. Normally the error estimate increases as a second-order polynomial of the distance, as demonstrated by our qualitative measurements on the right. For these measurements the rotation estimate was recorded on a grid of 99 locations regularly distributed over our laboratory.

4 Conclusion

Although at first sight the algorithm seems to rely on specific texture features of the surrounding surfaces, in practice no such dependency could be found. This can be explained by two reasons: firstly, as the (vertical) position of a color transition is not used anyway, the algorithm is quite robust against (vertical) scaling. Secondly, as the algorithm aligns on many color transitions in the background (typically more than a hundred in the same sector), the few color transitions produced by objects in the foreground (like beacons and spectators) have a minor impact on the match, because their sizes relative to the background are comparatively small. The lack of an accurate absolute position estimate seems to be a clear drawback with respect to the other methods, but bearing information alone can already be very useful for certain applications.
For example, a robot companion can highly benefit from a robust bearing estimate towards the front door. With this bearing estimate the robot can walk in the right direction. Only when the front door is nearly reached, and the robot has to search for a newspaper, could additional distance estimation be advantageous.

The requirements of this approach, both for the robot and for its environment, are quite moderate (on a Sony Aibo ERS-7, the computation time is below 20 ms/frame). The robot itself needs only a simple camera and medium computational power, while most natural environments carry, as shown, enough panoramic information for the algorithm to lock on to. Therefore, this method becomes interesting for the newly established RoboCup @Home league, where fast localization information is needed in natural but completely unknown environments. As the training on a single spot can be completed in less than one minute on a Sony Aibo in an arbitrary place, robot research can move outside the laboratory into non-prepared places. Progress in this domain facilitates the advancement of mobile robots - and thereby robotics research itself - into more natural environments.

Figure 6: The position estimate after learning the appearance of the surroundings at 4 spots (indicated with yellow arrows). The small dots indicate the distribution of positions reached every time the robot tried to return to the central spot after being placed randomly somewhere on the field.

Acknowledgements

The Dutch AIBO Team is financially supported by DECIS Lab, University of Amsterdam, Technical University of Delft, Saxion University of Professional Education, University of Groningen and University of Utrecht. We are grateful for the contributions by all of our team members, working on RoboCup related research and other research. Special thanks to Olaf Booij, David Knibbe and Niek Wijngaards.

References

[1] R. Bunschoten. Mapping and Localization from a Panoramic Vision Sensor. PhD thesis, Universiteit van Amsterdam, November 2003.
[2] J. Gonzalez and S. Lacroix. Rover localization in natural environments by indexing panoramic images. In International Conference on Robotics and Automation 2002, Washington, DC (USA), 2002.
[3] Horst-Michael Gross et al. Omnivision-based probabilistic self-localization for a mobile shopping assistant continued. In F.C.A. Groen, editor, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2003), pages 1505-1511. IEEE omnipress, 2003.
[4] Matjaz Jogan and Ales Leonardis. Robust localization using panoramic view-based recognition. In Proceedings 15th International Conference on Pattern Recognition (ICPR'00), pages 136-139, Barcelona, Spain, 2000.
[5] T. Pajdla and V. Hlavac. Zero phase representation of panoramic images for image based localization. In Proceedings of the 8th International Conference on Computer Analysis of Images and Patterns (CAIP'99), volume 1689 of Lecture Notes in Computer Science, pages 550-557. Springer Verlag, September 1999.
[6] T. Röfer et al. GermanTeam RoboCup 2005. 247 pages, 2005.
[7] J. Sturm, P. van Rossum, and A. Visser. Panoramic localization in the 4-legged league. In Proc. 10th RoboCup International Symposium, Bremen, June 2006. To be published in the Lecture Notes in Artificial Intelligence series, Springer Verlag, Berlin.
[8] J. Sturm, A. Visser, and N. Wijngaards. Dutch Aibo Team: technical report RoboCup 2005. Technical report, Dutch Aibo Team, October 2005.
[9] J.J. Verbeek. Mixture Models for Clustering and Dimension Reduction. PhD thesis, Universiteit van Amsterdam, December 2004.
[10] Arnoud Visser, Paul van Rossum, Joost Westra, Jurgen Sturm, Dave van Soest, and Mark de Greef. Dutch Aibo Team at RoboCup 2006. In Proceedings CD RoboCup 2006, Bremen, Germany, June 2006.
[11] A. Zomet, D. Feldman, S. Peleg, and D. Weinshall. Mosaicing new views: the crossed-slits projection. June 2003.
