



A Novel Semi-Autonomous Control Framework for Retina Confocal Endomicroscopy Scanning*

Zhaoshuo Li1, Mahya Shahbazi1, Niravkumar Patel1, Eimear O'Sullivan2, Haojie Zhang2, Khushi Vyas2, Preetham Chalasani1, Peter L. Gehlbach3, Iulian Iordachita1, Guang-Zhong Yang2, Russell H. Taylor1

Abstract— In this paper, a novel semi-autonomous control framework is presented for enabling probe-based confocal laser endomicroscopy (pCLE) scanning of the retinal tissue. With pCLE, retinal layers such as the nerve fiber layer (NFL) and retinal ganglion cells (RGC) can be scanned and characterized in real time for improved diagnosis and surgical outcome prediction. However, the limited field of view of the pCLE system and the micron-scale optimal focus distance of the probe, which are on the order of physiological hand tremor, act as barriers to successful manual scanning of retinal tissue. Therefore, a novel sensorless framework is proposed for real-time semi-autonomous endomicroscopy scanning during retinal surgery. The framework consists of the Steady-Hand Eye Robot (SHER) integrated with a pCLE system, where the motion of the probe is controlled semi-autonomously. Through a hybrid motion control strategy, the system autonomously controls the confocal probe to optimize the sharpness and quality of the pCLE images, while providing the surgeon with the ability to scan the tissue in a tremor-free manner. The effectiveness of the proposed architecture is validated through experimental evaluations as well as a user study involving 9 participants. It is shown through statistical analyses that the proposed framework reduces the workload experienced by the users in a statistically significant manner, while also enhancing their performance in retaining pCLE images with optimized quality.

I. INTRODUCTION

Retinal detachment is a vision-threatening condition in which the retina separates from the Retinal Pigment Epithelium (RPE) and the choroidal blood vessels that underlie the RPE.
These supportive structures provide nourishment and oxygen to the attached retina. While success following surgical repair depends on a myriad of factors, one important factor correlating with functional recovery of the reattached retina is the duration of detachment [1]. Therefore, it is reasonable to speculate that visualization of changes occurring during detachment of the retina at the cellular level can predict functional outcomes.

One promising approach for real-time imaging and in-vivo characterization of tissues at the cellular level is probe-based confocal laser endomicroscopy [2], [3]. pCLE is a recent optical visualization technique that translates conventional microscopy to a clinical setting. pCLE can be used to facilitate cellular-level imaging of biological tissue at confined sites within the body. The effectiveness of using pCLE in a robot-assisted setting for optical biopsy has been investigated in [4], showing promising results in transvaginal peritoneoscopy.

pCLE is, however, limited by its Field-of-View (FOV). The FOV in pCLE is constrained by the small size of the fiber bundles and is typically less than half a millimeter [5]. Consequently, the number of morphological features that can be viewed at a given instant may limit accurate characterization of the tissue.

*This work was funded in part by: NSF NRI Grants IIS-1327657, 1637789; Natural Sciences and Engineering Research Council of Canada (NSERC) Postdoctoral Fellowship #516873; Johns Hopkins internal funds; Robotic Endobronchial Optical Tomography (REBOT) Grant EP/N019318/1; EP/P012779/1 Micro-robotics for Surgery; and NIH R01 Grant 1R01EB023943-01.
1Authors are with the Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland 21218, USA
2Authors are with the Hamlyn Centre for Robotic Surgery, Imperial College London, SW7 2AZ, London, UK
3Author is with the Johns Hopkins Wilmer Eye Institute, Johns Hopkins Hospital, 600 N. Wolfe Street, Maryland 21287, USA
To address the limited FOV in pCLE, mosaicking algorithms can be used to synthesize a larger view of the tissue [6]. While mosaicking algorithms are able to provide macro coverage of the tissue surface for accurate histopathological analysis, challenges with manual acquisition of high-quality contiguous image streams from the confocal laser endomicroscopy probe largely prohibit successful mosaic synthesis due to [6]: physiological hand tremor, with magnitudes up to a couple of hundred microns [7], which is on the order of magnitude of the probe image; fragility of the detached retina, which extremely limits the tolerable range of forces that can be applied to the tissue; and patient movement as well as motion of the detached retina, which add to the complexities of manual image acquisition.

Moreover, for high-quality image acquisition using pCLE, the probe should be at an optimal micron-scale distance to the tissue surface depending on the distal optics design of the probe. Consistently achieving and maintaining these levels of accuracy is extremely challenging with manual manipulation of the pCLE probe. For example, our pCLE imaging system with a non-contact lens [8] (manufactured at the Hamlyn Centre, Imperial College, London, England) has an optimal focus distance of around 700 µm and a focus range of 200 µm, and the image sharpness drops considerably beyond this focus range. The drop in sharpness occurs in both directions of deviation from the optimal tissue-probe distance [9]. Fig. 1 shows a comparison of the probe's view for four cases of out-of-range, in-focus, back-focus, and front-focus, measured at a tissue distance of 2.34 mm, 1.16 mm, 0.69 mm, and fully in contact with the tissue, respectively.

2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, November 4-8, 2019. 978-1-7281-4003-2/19/$31.00 ©2019 IEEE

[Fig. 1. Sample probe views: (a) Out-of-range, (b) Back-focus, (c) In-focus, and (d) Front-focus view.]
Attempts have been made in the literature at automation of probe handling using both active and passive force-control methodologies. In [10], a hand-held device was presented to enable steady probe-tissue contact using a Cellvizio pCLE system. To maintain a desired contact between the target tissue and the pCLE probe, the device relies on contact force measurements acquired using a force sensor as an additional sensing modality. Work in this area also includes a versatile pick-up probe [5], which uses a low-air-friction bearing with an adaptive force control strategy to enable force-based scanning. Robotic integration of the pCLE probe has also been achieved in [11], [12], by presenting a framework that combined pCLE with Optical Coherence Tomography (OCT) for large-area scanning.

While the above techniques offer promising results, they rely on external sensing modalities to provide force or distance information for axial control of the probe motion (and therefore probe-tissue distance) during scanning procedures. Precise acquisition of such measurements for retinal scanning purposes, however, is very challenging due to the extremely constrained environment of a patient's eyeball and the significantly small FOV. Further, the delicacy of the retinal tissue prohibits the use of contact-based sensing modalities. Therefore, in this paper, a novel image-based control framework is presented for semi-autonomous mosaic-based image acquisition and scanning of the retina at the cellular level in a sensor-less fashion. The proposed approach enables pCLE scanning of the tissue without the need for additional contact/distance-sensing modalities, such as a force sensor or an auxiliary vision system. The setup consists of the Steady-Hand Eye Robot (SHER) [13] (developed at Johns Hopkins University), integrated with a high-speed Line-Scan Confocal Laser Endomicroscopy (LS-CLE) system with a custom non-contact fiber-bundle imaging probe (developed at the Hamlyn Centre).
The proposed shared control framework enables a surgeon to use the confocal microscopy probe to scan over various areas of the tissue surface, while the robot autonomously adjusts the probe-to-tissue distance such that the confocal image quality is optimized. The autonomous adjustment of the probe-to-tissue distance is performed solely based on feedback from the confocal image itself, eliminating the need for any external modality of distance/force measurement. The proposed shared control strategy has been validated through experimental results. A set of user studies involving 9 participants was also conducted to evaluate the effectiveness of the framework. It is shown that the proposed semi-autonomous platform results in a statistically significant reduction in the workload level of the users and an increase in their ability to maintain an optimized view of the pCLE images.

The rest of this paper is organized as follows: Section II presents the proposed methodology. Section III presents the experimental validations. Section IV discusses the user study, and Section V concludes this paper.

II. THE PROPOSED METHODOLOGY

The setup includes the SHER integrated with an LS-CLE system. SHER is a cooperatively controlled, 5-Degree-of-Freedom (DOF) robot for performing vitreoretinal surgeries. The SHER has a bidirectional repeatability of 3 µm and a positioning resolution of 1 µm.

The pCLE system used for image acquisition is a high-speed line-scanning fibre-bundle endomicroscope. The endomicroscope is coupled to a customized probe (Fujikura FGH-30-800G) of a 30,000-core fibre bundle with a distal micro-lens (SELFOC ILW-1.30). The distance between the probe and the lens is optimized so that it has an optimal focus distance of around 700 µm and a focus range of 200 µm.

The proposed control framework consists of: 1) a hybrid semi-autonomous motion controller, 2) a mid-level optimizer, and 3) a low-level controller.
The high-level semi-autonomous hybrid controller includes two components:
- cooperative control [14] of the confocal endomicroscopy probe (shared control between the robot and the surgeon) in directions lateral to the scanning surface, to cancel out hand tremors of the surgeon as s/he moves the probe to scan the region of interest;
- autonomous control of the probe in the axial direction (perpendicular to the surface of the tissue), to control the probe-tissue distance for an optimized image quality.

The control law for the high-level hybrid controller is given as follows:

$$\dot{x}_{des} = K_c\,\dot{x}_{des,c} + K_a\,\dot{x}_{des,a} \qquad (1)$$

where subscripts c and a refer to the cooperative and autonomous components of the desired motion, respectively; $\dot{x}_{des,c}$ and $\dot{x}_{des,a}$ indicate the desired components of the motion derived based on the cooperative mode in the lateral direction and the autonomous mode in the axial direction, respectively. Also, $K_c$ and $K_a$ are projection matrices that extract the lateral and axial motions of the robot along the tool shaft, respectively. The following subsections describe each component in further detail.

A. High-Level Hybrid Controller - Lateral Direction

The desired motion along the lateral direction of the tissue surface, $\dot{x}_{des,c}$, is specified based on an admittance cooperative scheme to provide the surgeon with control over the scanning region. SHER has a 6-DOF force/torque sensor (ATI Nano 17, ATI Industrial Automation, Apex, NC, USA) measuring user interaction forces at the tool handle, which are fed as an input to the admittance control law [13]:

$$\dot{x}_{des,t} = \alpha F_{h,t} \qquad (2)$$
$$\dot{x}_{des,c} = Ad_{rt}\,\dot{x}_{des,t} \qquad (3)$$

where $\dot{x}_{des,t}$ and $\dot{x}_{des,c}$ are the desired robot handle velocities in the tool frame and in the robot frame, respectively.
$F_{h,t}$ is the interaction force applied by the user's hand in the tool frame; $\alpha$ denotes the admittance gain, which can be controlled in real time by the user through a foot pedal; $Ad_{rt}$ is the adjoint transformation associated with the coordinate frame transformation from the tool to the robot coordinate system, as follows:

$$Ad_{rt} = \begin{bmatrix} R_{rt} & \hat{p}_{rt}\,R_{rt} \\ 0 & R_{rt} \end{bmatrix} \qquad (4)$$

where $\hat{p}_{rt}$ is the skew-symmetric matrix associated with the vector $p_{rt}$, the translation component of the tool frame in the robot frame, and $R_{rt}$ is the rotation component of the tool frame in the robot frame. Also, $K_c$ in Equation 1 stands for the projection matrix applied to extract the lateral motions of the robot, given by:

$$K_c = \begin{bmatrix} R^{T}\Sigma_c R & 0_{3\times3} \\ 0_{3\times3} & 1_{3\times3} \end{bmatrix}, \qquad \Sigma_c = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \qquad (5)$$

where $R$ is the orientation of the tissue normal expressed in the base frame of the robot. Due to the small workspace inside the eyeball and, therefore, the small variation of the retinal surface normal, it is assumed that the tissue surface normal is always aligned with the axis of the robot's tool (confocal probe).

B. High-Level Hybrid Controller - Axial Direction

While the lateral motion of the robot is specified based on the cooperative control strategy, the desired motion along the axial direction of the tissue surface, $\dot{x}_{des,a}$, is adjusted autonomously by the robot in such a way that the confocal image quality is optimized. Due to the constrained space inside a patient's eyeball, including a probe-to-tissue distance (depth) sensing apparatus is extremely challenging, if not impossible. Therefore, the image blur quality is used in this paper as an indirect and sensor-less measure of probe-to-tissue distance. The effectiveness of using image blur metrics as a depth sensing modality has been previously validated by our collaborators at Imperial College London [9]. The control strategy presented therein, however, is only effective for contact-based confocal probes, which is not a suitable type of probe for retinal scanning due to the fragility of the retina tissue.
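To make the composition of Eqs. (1)-(5) concrete, the following minimal sketch assembles a desired twist from a hand force. The admittance gain `alpha`, the complementary axial projection `K_a` (built here as the translation-only counterpart of Eq. (5)), and the autonomous axial speed `v_axial` are illustrative assumptions; the excerpt defines only $K_c$ explicitly.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix p_hat such that p_hat @ v = p x v."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def adjoint(R_rt, p_rt):
    """Adjoint transformation Ad_rt of Eq. (4), mapping twists from
    the tool frame to the robot frame."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R_rt
    Ad[:3, 3:] = skew(p_rt) @ R_rt
    Ad[3:, 3:] = R_rt
    return Ad

def hybrid_command(F_h_tool, alpha, R_rt, p_rt, R, v_axial):
    """Desired twist of Eq. (1): cooperative lateral motion (Eqs. (2),
    (3), (5)) plus an autonomous axial component. alpha and v_axial
    stand in for the foot-pedal gain and the image-based axial command."""
    # Eqs. (2)-(3): admittance law in the tool frame, mapped to the robot frame
    xdot_c = adjoint(R_rt, p_rt) @ (alpha * F_h_tool)

    # Eq. (5): lateral selector expressed via the tissue normal orientation R
    Sigma_c = np.diag([1.0, 1.0, 0.0])
    K_c = np.zeros((6, 6))
    K_c[:3, :3] = R.T @ Sigma_c @ R
    K_c[3:, 3:] = np.eye(3)

    # Assumed complementary axial projection (translation along the normal only)
    K_a = np.zeros((6, 6))
    K_a[:3, :3] = R.T @ np.diag([0.0, 0.0, 1.0]) @ R

    # Autonomous axial twist along the tissue normal
    xdot_a = np.zeros(6)
    xdot_a[:3] = v_axial * (R.T @ np.array([0.0, 0.0, 1.0]))

    # Eq. (1): combine the two projected components
    return K_c @ xdot_c + K_a @ xdot_a
```

With aligned frames, the lateral force components pass through the cooperative channel while the axial translation is replaced by the autonomous command, which is exactly the split Eq. (1) encodes.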
Moreover, the algorithm presented in [9] is dependent on characteristics of the tissue, necessitating a pre-operative calibration phase. The calibration process requires pressing the contact-based probe onto the tissue, while collecting a series of images from the pCLE system as well as the corresponding force values applied to the tissue. Performing this calibration process, along with the exertion and measurement of force values, however, is not feasible for retinal scanning due to the constrained working environment and the fragility of the retinal tissue. Therefore, in this paper, we have developed a novel image-based approach for optimizing the image quality and sharpness without the need for any extra sensing modality, while also accommodating the use of the non-contact LS-CLE imaging probe.

[Fig. 2. Evaluation of four image metrics with respect to probe-to-tissue distance. The optimal view is achieved around a probe-to-tissue distance of 0.73 mm.]

For this purpose, we investigated the effectiveness of several blur metrics for use in real-time control of the robot: Crété-Roffet (CR) [15], Marziliano Blurring Metric (MBM) [16], Cumulative Probability of Blur Detection (CPBD) [17], and image intensity. Fig. 2 shows the four metrics calculated for the confocal images during an experiment. In this experiment, the robot was commanded to move from a far distance to almost touching the surface, as also shown in Fig. 2. The optimal view was achieved around a probe-to-tissue distance of 0.73 mm. As can be seen in Fig. 2, all four metrics indicated a consistent pattern around the optimal view (maximized value for CR and MBM, and minimized value for CPBD and intensity). Among these four metrics, however, CR has the lowest level of noise and the highest signal-to-noise ratio. The CR score is a no-reference blur metric with a low implementation cost and high robustness to the presence of noise in the image.
CR is also capable of evaluating both motion blur and focal blur, all of which make it an effective and efficient score for real-time control of robotic systems. Therefore, this metric was chosen for incorporation into our auto-focus control strategy. The working principle of the CR metric is that, compared to blurring a sharp image, re-blurring an already-blurred image will not result in significant changes in the intensities of neighbouring pixels. Given an input image $I$ of pixel size $m \times n$, the image is first convolved with a low-pass filter, obtaining the blurred image $B$. The absolute differences between the before-blur and after-blur images in each pixel neighbourhood are then found as follows along the horizontal and vertical axes:

$$d_{IB,v} = \sum_{i,j=1}^{m,n} \max\big(0,\; |I_{i,j} - I_{i-1,j}| - |B_{i,j} - B_{i-1,j}|\big)$$
$$d_{IB,h} = \sum_{i,j=1}^{m,n} \max\big(0,\; |I_{i,j} - I_{i,j-1}| - |B_{i,j} - B_{i,j-1}|\big) \qquad (6)$$

where subscripts v and h refer to the vertical and horizontal axes, respectively. The differences ($d_{IB,v}$ and $d_{IB,h}$) are then normalized between 0 and 1 using the following equations:

$$blur_v = \frac{d_{I,v} - d_{IB,v}}{d_{I,v}}, \qquad blur_h = \frac{d_{I,h} - d_{IB,h}}{d_{I,h}} \qquad (7)$$

where $d_{I,v}$ and $d_{I,h}$ are the differences in the pixel neighbourhood of the original image, given as follows:

$$d_{I,v} = \sum_{i,j=1}^{m,n} |I_{i,j} - I_{i-1,j}|, \qquad d_{I,h} = \sum_{i,j=1}^{m,n} |I_{i,j} - I_{i,j-1}|$$

The sharpness (quality), $s$, of the given image, $I$, is then calculated as

$$s_I = 1 - \max(blur_v, blur_h) \qquad (8)$$

Fig. 3 shows an illustration of the CR score with respect to the probe-to-tissue distance. An interesting observation is that while the metric is almost symmetric around the optimal probe-to-tissue distance, it has an asymmetric pattern at farther distances. This asymmetric pattern is used in our framework to distinguish a probe-to-tissue distance that is too far. Two thresholds are defined to categorize the probe-to-tissue distance into two cases. Details of the logic are depicted in Algorithm 1.
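The CR score of Eqs. (6)-(8) can be implemented directly. The sketch below uses a simple mean filter as the low-pass filter, which is an assumption (the excerpt does not specify the kernel); sharper inputs yield scores closer to 1.

```python
import numpy as np

def box_blur(I, k=3):
    """Simple k x k mean filter, used here as the low-pass filter of
    the CR metric (the kernel choice is an assumption of this sketch)."""
    p = k // 2
    Ip = np.pad(I, p, mode="edge")
    out = np.zeros(I.shape, dtype=np.float64)
    for di in range(k):
        for dj in range(k):
            out += Ip[di:di + I.shape[0], dj:dj + I.shape[1]]
    return out / (k * k)

def cr_sharpness(image):
    """Sharpness score s_I of Eq. (8), per the CR metric of
    Eqs. (6)-(8). `image` is a 2-D grayscale array."""
    I = image.astype(np.float64)
    B = box_blur(I, 3)  # re-blurred copy of the input

    # Neighbour differences of the original image (denominators of Eq. (7))
    dI_v = np.abs(np.diff(I, axis=0)).sum()
    dI_h = np.abs(np.diff(I, axis=1)).sum()
    if dI_v == 0 or dI_h == 0:  # perfectly flat image carries no sharpness
        return 0.0

    # Eq. (6): variation destroyed by re-blurring, clipped at zero
    dIB_v = np.maximum(0.0, np.abs(np.diff(I, axis=0))
                       - np.abs(np.diff(B, axis=0))).sum()
    dIB_h = np.maximum(0.0, np.abs(np.diff(I, axis=1))
                       - np.abs(np.diff(B, axis=1))).sum()

    # Eq. (7): normalized blur along each axis
    blur_v = (dI_v - dIB_v) / dI_v
    blur_h = (dI_h - dIB_h) / dI_h

    # Eq. (8): sharpness is one minus the worst-case blur
    return 1.0 - max(blur_v, blur_h)
```

Re-blurring a sharp image destroys much of its neighbour variation, so $d_{IB}$ stays close to $d_I$ and the blur terms stay small; an already-blurred image changes little, driving the blur terms toward 1.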
First, the current image-quality score smoothed by a moving-average filter, $\bar{s}_I$, is obtained, and the change in the score with respect to the previous frame, $\Delta s$, is then calculated. The movement of the robot, $\Delta x$, along the normal direction of the tissue, $U$, is also obtained based on the current and previous values of the robot position ($x_{robot,curr}$ and $x_{robot,prev}$, respectively). When the image quality score is below $T_1$ (example shown in Fig. 1a), the probe is specified as too far away from the scanning surface by the system, transferring the full control of
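The excerpt truncates before the remainder of Algorithm 1. Purely as an illustrative sketch, the thresholding step described so far might look as follows; the constant approach speed `v_step` and the sign-based hill climb in the in-range regime are assumptions of this sketch, not the paper's algorithm.

```python
def axial_step(s_bar, s_prev, x_curr, x_prev, T1, v_step):
    """Illustrative sketch of the thresholding step of Algorithm 1.
    s_bar: current smoothed CR score; s_prev: previous score;
    x_curr/x_prev: probe positions along the tissue normal U.
    Returns a regime label and an assumed axial velocity command."""
    ds = s_bar - s_prev   # change in smoothed score with respect to last frame
    dx = x_curr - x_prev  # last axial motion of the probe
    if s_bar < T1:
        # Score below T1: probe flagged as too far from the tissue
        # (cf. Fig. 1a); approach the surface at a constant speed.
        return "too_far", v_step
    # In range: keep moving in the direction that raised the score,
    # reverse otherwise (a gradient-free hill climb on s_bar).
    direction = 1.0 if dx >= 0 else -1.0
    if ds < 0:
        direction = -direction
    return "in_range", direction * v_step
```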