进度表.doc (schedule document)

一种多钻头台式钻床的设计【说明书+CAD】 (Design of a Multi-Bit Benchtop Drill Press [Specification + CAD])


ID: 77608957    Type: shared resource    Size: 2.26 MB    Format: ZIP    Uploaded: 2020-05-06    Uploader: 柒哥    IP region: Hunan
Price: 40 credits
Keywords: specification + CAD; drill bit; benchtop; drill press; design; specification; CAD
Resource description:

To download this design, please purchase credits first. Every file in the resource directory is shown as-is and can be previewed; the material is complete, and purchasing gives you all files listed in the resource directory. Note: files with a .dwg extension are CAD drawings, and .doc/.docx files are Word documents; the originals have no watermark and are editable. See the file preview for details; for questions, contact QQ: 12401814.

Content overview:
A Flexible New Technique for Camera Calibration
Zhengyou Zhang, Senior Member, IEEE
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, November 2000.

Abstract—We propose a flexible new technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one more step from laboratory environments to real world use. The corresponding software is available from the author's Web page.

Index Terms—Camera calibration, calibration from planes, 2D pattern, flexible plane-based calibration, absolute conic, projective mapping, lens distortion, closed-form solution, maximum likelihood estimation, flexible setup.

1 MOTIVATIONS

Camera calibration is a necessary step in 3D computer vision in order to extract metric information from 2D images. Much work has been done, starting in the photogrammetry community (see [2], [4] to cite a few), and more recently in computer vision ([9], [8], [23], [7], [25], [24], [16], [6] to cite a few).
We can classify those techniques roughly into two categories: photogrammetric calibration and self-calibration.

Three-dimensional reference object-based calibration. Camera calibration is performed by observing a calibration object whose geometry in 3D space is known with very good precision. Calibration can be done very efficiently [5]. The calibration object usually consists of two or three planes orthogonal to each other. Sometimes a plane undergoing a precisely known translation is also used [23]. These approaches require an expensive calibration apparatus and an elaborate setup.

Self-calibration. Techniques in this category do not use any calibration object. Just by moving a camera in a static scene, the rigidity of the scene provides in general two constraints [16], [14] on the camera's internal parameters from one camera displacement by using image information alone. Therefore, if images are taken by the same camera with fixed internal parameters, correspondences between three images are sufficient to recover both the internal and external parameters, which allow us to reconstruct 3D structure up to a similarity [15], [12]. While this approach is very flexible, it is not yet mature [1]. Because there are many parameters to estimate, we cannot always obtain reliable results.

Other techniques exist: vanishing points for orthogonal directions [3], [13], and calibration from pure rotation [11], [20].

Our current research is focused on a desktop vision system (DVS) since the potential for using DVSs is large. Cameras are becoming inexpensive and ubiquitous. A DVS aims at the general public, who are not experts in computer vision. A typical computer user will perform vision tasks only from time to time, so they will not be willing to invest money in expensive equipment. Therefore, flexibility, robustness, and low cost are important. The camera calibration technique described in this paper was developed with these considerations in mind.

The proposed technique only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. The pattern can be printed on a laser printer and attached to a "reasonable" planar surface (e.g., a hard book cover). Either the camera or the planar pattern can be moved by hand. The motion need not be known. The proposed approach, which uses 2D metric information, lies between photogrammetric calibration, which uses an explicit 3D model, and self-calibration, which uses motion rigidity or, equivalently, implicit 3D information. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques, the proposed technique is considerably more flexible: anyone can make a calibration pattern by him/herself and the setup is very easy. Compared with self-calibration, it gains a considerable degree of robustness. We believe the new technique advances 3D computer vision one step from laboratory environments to the real world.

Note that Triggs [22] recently developed a self-calibration technique from at least five views of a planar scene. His technique is more flexible than ours, but has difficulty to initialize. Liebowitz and Zisserman [13] described a technique of metric rectification for perspective images of planes using metric information, such as a known angle, two equal though unknown angles, and a known length ratio.
They also mentioned that calibration of the internal camera parameters is possible provided at least three such rectified planes, although no experimental results were shown.

During the revision of this paper, we noticed the publication of an independent but similar work by Sturm and Maybank [21]. They use a simplified camera model (image axes are orthogonal to each other) and have studied the degenerate configurations exhaustively for the case of one and two planes, which are very important in practice if only one or two views are used for camera calibration.

The paper is organized as follows: Section 2 describes the basic constraints from observing a single plane. Section 3 describes the calibration procedure. We start with a closed-form solution, followed by nonlinear optimization. Radial lens distortion is also modeled. Section 4 provides the experimental results. Both computer simulation and real data are used to validate the proposed technique. In the Appendix, we provide a number of details, including the techniques for estimating the homography between the model plane and its image.

2 BASIC EQUATIONS

We examine the constraints on the camera's intrinsic parameters provided by observing a single plane. We start with the notation used in this paper.

2.1 Notation

A 2D point is denoted by $m = [u, v]^T$. A 3D point is denoted by $M = [X, Y, Z]^T$. We use $\tilde{x}$ to denote the augmented vector obtained by adding 1 as the last element: $\tilde{m} = [u, v, 1]^T$ and $\tilde{M} = [X, Y, Z, 1]^T$. A camera is modeled by the usual pinhole: the relationship between a 3D point $M$ and its image projection $m$ is given by

$$s\,\tilde{m} = A\,[R\ \ t]\,\tilde{M}, \qquad \text{with} \qquad A = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad (1)$$

where $s$ is an arbitrary scale factor; $(R, t)$, called the extrinsic parameters, is the rotation and translation which relates the world coordinate system to the camera coordinate system; and $A$ is called the camera intrinsic matrix, with $(u_0, v_0)$ the coordinates of the principal point, $\alpha$ and $\beta$ the scale factors in the image $u$ and $v$ axes, and $\gamma$ the parameter describing the skew of the two image axes. We use the abbreviation $A^{-T}$ for $(A^{-1})^T$ or $(A^T)^{-1}$.

2.2 Homography between the Model Plane and Its Image

Without loss of generality, we assume the model plane is on $Z = 0$ of the world coordinate system. Let us denote the $i$th column of the rotation matrix $R$ by $r_i$. From (1), we have

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,[r_1\ \ r_2\ \ r_3\ \ t] \begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} = A\,[r_1\ \ r_2\ \ t] \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}.$$

By abuse of notation, we still use $M$ to denote a point on the model plane, but $M = [X, Y]^T$ since $Z$ is always equal to zero. In turn, $\tilde{M} = [X, Y, 1]^T$. Therefore, a model point $M$ and its image $m$ are related by a homography $H$:

$$s\,\tilde{m} = H\,\tilde{M} \qquad \text{with} \qquad H = A\,[r_1\ \ r_2\ \ t]. \qquad (2)$$

As is clear, the $3 \times 3$ matrix $H$ is defined up to a scale factor.

2.3 Constraints on the Intrinsic Parameters

Given an image of the model plane, a homography can be estimated (see the Appendix). Let us denote it by $H = [h_1\ \ h_2\ \ h_3]$. From (2), we have

$$[h_1\ \ h_2\ \ h_3] = \lambda\,A\,[r_1\ \ r_2\ \ t],$$

where $\lambda$ is an arbitrary scalar. Using the knowledge that $r_1$ and $r_2$ are orthonormal, we have

$$h_1^T A^{-T} A^{-1} h_2 = 0 \qquad (3)$$

$$h_1^T A^{-T} A^{-1} h_1 = h_2^T A^{-T} A^{-1} h_2. \qquad (4)$$

These are the two basic constraints on the intrinsic parameters, given one homography. Because a homography has 8 degrees of freedom and there are six extrinsic parameters (three for rotation and three for translation), we can only obtain two constraints on the intrinsic parameters. Note that $A^{-T} A^{-1}$ actually describes the image of the absolute conic [15]. In the next section, we will give a geometric interpretation.
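[Editor's note] A minimal numerical sketch of relations (2)-(4), not the author's software: it builds a synthetic camera and pose with assumed example values, forms the plane homography of (2), and checks that constraints (3) and (4) evaluate to (numerically) zero.

    # Sketch only: verify constraints (3) and (4) on a synthetic homography H = A [r1 r2 t].
    import numpy as np

    # Assumed example intrinsics, for illustration only.
    alpha, beta, gamma, u0, v0 = 1000.0, 950.0, 1.5, 320.0, 240.0
    A = np.array([[alpha, gamma, u0],
                  [0.0,   beta,  v0],
                  [0.0,   0.0,   1.0]])

    def rot_xyz(rx, ry, rz):
        """Rotation matrix from three Euler angles (radians)."""
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    R = rot_xyz(0.1, -0.2, 0.05)          # assumed example pose
    t = np.array([0.1, -0.05, 2.0])

    # Homography between the model plane (Z = 0) and its image, up to scale, as in (2).
    H = A @ np.column_stack([R[:, 0], R[:, 1], t])

    B = np.linalg.inv(A).T @ np.linalg.inv(A)   # image of the absolute conic, A^{-T} A^{-1}
    h1, h2 = H[:, 0], H[:, 1]
    print(h1 @ B @ h2)                  # constraint (3): approximately 0
    print(h1 @ B @ h1 - h2 @ B @ h2)    # constraint (4): approximately 0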
2.4 Geometric Interpretation

We are now relating (3) and (4) to the absolute conic [16], [15]. It is not difficult to verify that the model plane, under our convention, is described in the camera coordinate system by the following equation:

$$\begin{bmatrix} r_3 \\ r_3^T t \end{bmatrix}^T \begin{bmatrix} x \\ y \\ z \\ w \end{bmatrix} = 0,$$

where $w = 0$ for points at infinity and $w = 1$ otherwise. This plane intersects the plane at infinity at a line, and we can easily see that $\begin{bmatrix} r_1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} r_2 \\ 0 \end{bmatrix}$ are two particular points on that line. Any point on it is a linear combination of these two points, i.e.,

$$x_\infty = a \begin{bmatrix} r_1 \\ 0 \end{bmatrix} + b \begin{bmatrix} r_2 \\ 0 \end{bmatrix} = \begin{bmatrix} a r_1 + b r_2 \\ 0 \end{bmatrix}.$$

Now, let us compute the intersection of the above line with the absolute conic. By definition, the point $x_\infty$, known as the circular point [18], satisfies $x_\infty^T x_\infty = 0$, i.e., $(a r_1 + b r_2)^T (a r_1 + b r_2) = 0$, or $a^2 + b^2 = 0$. The solution is $b = \pm a i$, where $i^2 = -1$. That is, the two intersection points are

$$x_\infty = a \begin{bmatrix} r_1 \pm i\, r_2 \\ 0 \end{bmatrix}.$$

The significance of this pair of complex conjugate points lies in the fact that they are invariant to Euclidean transformations. Their projection in the image plane is given, up to a scale factor, by

$$\tilde{m}_\infty = A (r_1 \pm i\, r_2) = h_1 \pm i\, h_2.$$

Point $\tilde{m}_\infty$ is on the image of the absolute conic, described by $A^{-T} A^{-1}$ [15]. This gives

$$(h_1 \pm i\, h_2)^T A^{-T} A^{-1} (h_1 \pm i\, h_2) = 0.$$

Requiring that both real and imaginary parts be zero yields (3) and (4).

3 SOLVING CAMERA CALIBRATION

This section provides the details of how to effectively solve the camera calibration problem. We start with an analytical solution, followed by a nonlinear optimization technique based on the maximum-likelihood criterion. Finally, we take into account lens distortion, giving both analytical and nonlinear solutions.

3.1 Closed-Form Solution

Let

$$B = A^{-T} A^{-1} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ B_{12} & B_{22} & B_{23} \\ B_{13} & B_{23} & B_{33} \end{bmatrix}, \qquad (5)$$

with

$$B_{11} = \frac{1}{\alpha^2}, \quad B_{12} = -\frac{\gamma}{\alpha^2\beta}, \quad B_{13} = \frac{v_0\gamma - u_0\beta}{\alpha^2\beta},$$

$$B_{22} = \frac{\gamma^2}{\alpha^2\beta^2} + \frac{1}{\beta^2}, \quad B_{23} = -\frac{\gamma(v_0\gamma - u_0\beta)}{\alpha^2\beta^2} - \frac{v_0}{\beta^2}, \quad B_{33} = \frac{(v_0\gamma - u_0\beta)^2}{\alpha^2\beta^2} + \frac{v_0^2}{\beta^2} + 1.$$

Note that $B$ is symmetric, defined by a 6D vector

$$b = [B_{11}, B_{12}, B_{22}, B_{13}, B_{23}, B_{33}]^T. \qquad (6)$$

Let the $i$th column vector of $H$ be $h_i = [h_{i1}, h_{i2}, h_{i3}]^T$. Then, we have

$$h_i^T B h_j = v_{ij}^T b \qquad (7)$$

with

$$v_{ij} = [h_{i1}h_{j1},\ h_{i1}h_{j2} + h_{i2}h_{j1},\ h_{i2}h_{j2},\ h_{i3}h_{j1} + h_{i1}h_{j3},\ h_{i3}h_{j2} + h_{i2}h_{j3},\ h_{i3}h_{j3}]^T.$$

Therefore, the two fundamental constraints (3) and (4), from a given homography, can be rewritten as two homogeneous equations in $b$:

$$\begin{bmatrix} v_{12}^T \\ (v_{11} - v_{22})^T \end{bmatrix} b = 0. \qquad (8)$$

If $n$ images of the model plane are observed, by stacking $n$ such equations as (8), we have

$$V b = 0, \qquad (9)$$

where $V$ is a $2n \times 6$ matrix. If $n \geq 3$, we will in general have a unique solution $b$ defined up to a scale factor. If $n = 2$, we can impose the skewless constraint $\gamma = 0$, i.e., $[0, 1, 0, 0, 0, 0]\,b = 0$, which is added as an additional equation to (9). (If $n = 1$, we can only solve two camera intrinsic parameters, e.g., $\alpha$ and $\beta$, assuming $u_0$ and $v_0$ are known (e.g., at the image center) and $\gamma = 0$, and that is indeed what we did in [19] for head pose determination based on the fact that eyes and mouth are reasonably coplanar. In fact, Tsai [23] already mentions that focal length from one plane is possible, but incorrectly says that aspect ratio is not.) The solution to (9) is well known as the eigenvector of $V^T V$ associated with the smallest eigenvalue (equivalently, the right singular vector of $V$ associated with the smallest singular value).
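[Editor's note] A sketch of the closed-form step just described, not the author's software: assuming the plane-to-image homographies have already been estimated (one per view), it builds the $2n \times 6$ matrix $V$ of (8)-(9) and takes $b$ as the right singular vector of $V$ with the smallest singular value.

    # Sketch only: solve V b = 0 from a list of 3x3 homographies.
    import numpy as np

    def v_ij(H, i, j):
        """Vector v_ij of (7); i, j are 1-based column indices of H."""
        hi, hj = H[:, i - 1], H[:, j - 1]
        return np.array([hi[0] * hj[0],
                         hi[0] * hj[1] + hi[1] * hj[0],
                         hi[1] * hj[1],
                         hi[2] * hj[0] + hi[0] * hj[2],
                         hi[2] * hj[1] + hi[1] * hj[2],
                         hi[2] * hj[2]])

    def solve_b(homographies, assume_zero_skew=False):
        """Stack two rows per homography as in (8) and solve (9) by SVD."""
        rows = []
        for H in homographies:
            rows.append(v_ij(H, 1, 2))                    # from constraint (3)
            rows.append(v_ij(H, 1, 1) - v_ij(H, 2, 2))    # from constraint (4)
        if assume_zero_skew:                              # extra row used when only n = 2 views
            rows.append(np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0]))
        V = np.vstack(rows)
        _, _, Vt = np.linalg.svd(V)
        return Vt[-1]      # b = [B11, B12, B22, B13, B23, B33], defined up to scale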
Once $b$ is estimated, we can compute all camera intrinsic parameters as follows. The matrix $B$, as described in Section 3.1, is estimated up to a scale factor, i.e., $B = \lambda A^{-T} A^{-1}$ with $\lambda$ an arbitrary scale. Without difficulty, we can uniquely extract the intrinsic parameters from matrix $B$:

$$v_0 = (B_{12}B_{13} - B_{11}B_{23})/(B_{11}B_{22} - B_{12}^2)$$

$$\lambda = B_{33} - [B_{13}^2 + v_0(B_{12}B_{13} - B_{11}B_{23})]/B_{11}$$

$$\alpha = \sqrt{\lambda / B_{11}}$$

$$\beta = \sqrt{\lambda B_{11} / (B_{11}B_{22} - B_{12}^2)}$$

$$\gamma = -B_{12}\,\alpha^2\beta / \lambda$$

$$u_0 = \gamma v_0 / \beta - B_{13}\,\alpha^2 / \lambda.$$

Once $A$ is known, the extrinsic parameters for each image are readily computed. From (2), we have

$$r_1 = \lambda A^{-1} h_1, \qquad r_2 = \lambda A^{-1} h_2, \qquad r_3 = r_1 \times r_2, \qquad t = \lambda A^{-1} h_3,$$

with $\lambda = 1/\|A^{-1}h_1\| = 1/\|A^{-1}h_2\|$. Of course, because of noise in the data, the so-computed matrix $R = [r_1, r_2, r_3]$ does not, in general, satisfy the properties of a rotation matrix. The best rotation matrix can then be obtained through, for example, singular value decomposition [10], [26].

3.2 Maximum-Likelihood Estimation

The above solution is obtained through minimizing an algebraic distance which is not physically meaningful. We can refine it through maximum-likelihood inference.

We are given $n$ images of a model plane and there are $m$ points on the model plane. Assume that the image points are corrupted by independent and identically distributed noise. The maximum-likelihood estimate can be obtained by minimizing the following functional:

$$\sum_{i=1}^{n} \sum_{j=1}^{m} \|m_{ij} - \hat{m}(A, R_i, t_i, M_j)\|^2, \qquad (10)$$

where $\hat{m}(A, R_i, t_i, M_j)$ is the projection of point $M_j$ in image $i$, according to (2). A rotation $R$ is parameterized by a vector of three parameters, denoted by $r$, which is parallel to the rotation axis and whose magnitude is equal to the rotation angle. $R$ and $r$ are related by the Rodrigues formula [5]. Minimizing (10) is a nonlinear minimization problem, which is solved with the Levenberg-Marquardt algorithm as implemented in Minpack [17]. It requires an initial guess of $A, \{R_i, t_i \mid i = 1..n\}$, which can be obtained using the technique described in the previous section.

Desktop cameras usually have visible lens distortion, especially the radial components. We have included these while minimizing (10). Refer to the technical report [26] for more details.

3.3 Summary

The recommended calibration procedure is as follows:

1. Print a pattern and attach it to a planar surface.
2. Take a few images of the model plane under different orientations by moving either the plane or the camera.
3. Detect the feature points in the images.
4. Estimate the five intrinsic parameters and all the extrinsic parameters using the closed-form solution, as described in Section 3.1.
5. Refine all parameters, including lens distortion parameters, by minimizing (10).

There is a degenerate configuration in my technique when planes are parallel to each other. Refer to the technical report [26] for a more detailed description.
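[Editor's note] A sketch, not the author's software, of steps 4 of the summary above: recovering the intrinsics from $b = [B_{11}, B_{12}, B_{22}, B_{13}, B_{23}, B_{33}]$ with the closed-form formulas of Section 3.1, and the extrinsics from each homography, with SVD re-orthogonalization of the rotation. Variable names are ours; the formulas follow the reconstruction above.

    # Sketch only: intrinsics from b, extrinsics from a homography H.
    import numpy as np

    def intrinsics_from_b(b):
        # b is defined up to sign; pick the sign that makes B11 (and lambda) positive.
        if b[0] < 0:
            b = -b
        B11, B12, B22, B13, B23, B33 = b
        v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
        lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
        alpha = np.sqrt(lam / B11)
        beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
        gamma = -B12 * alpha ** 2 * beta / lam
        u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam
        return np.array([[alpha, gamma, u0],
                         [0.0,   beta,  v0],
                         [0.0,   0.0,   1.0]])

    def extrinsics_from_homography(A, H):
        A_inv = np.linalg.inv(A)
        lam = 1.0 / np.linalg.norm(A_inv @ H[:, 0])
        r1 = lam * A_inv @ H[:, 0]
        r2 = lam * A_inv @ H[:, 1]
        r3 = np.cross(r1, r2)
        t = lam * A_inv @ H[:, 2]
        R = np.column_stack([r1, r2, r3])
        # Closest rotation matrix in the Frobenius norm, via SVD.
        U, _, Vt = np.linalg.svd(R)
        return U @ Vt, t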
4 EXPERIMENTAL RESULTS

The proposed algorithm has been tested on both computer simulated data and real data. The closed-form solution involves finding a singular value decomposition of a small $2n \times 6$ matrix, where $n$ is the number of images. The nonlinear refinement within the Levenberg-Marquardt algorithm takes 3 to 5 iterations to converge. Due to space limitation, we describe in this section one set of experiments with real data when the calibration pattern is at different distances from the camera. The reader is referred to [26] for more experimental results with both computer simulated and real data, and to the author's Web page (~zhang/Calib/) for some experimental data and the software.

The example is shown in Fig. 1. The camera to be calibrated is an off-the-shelf PULNiX CCD camera with a 6 mm lens. The image resolution is 640 × 480. As can be seen in Fig. 1, the model plane contains 9 × 9 squares with nine special dots which are used to identify automatically the correspondence between reference points on the model plane and square corners in images. It was printed on A4 paper with a 600 DPI laser printer and attached to a cardboard.

In total, 10 images of the plane were taken (six of them are shown in Fig. 1). Five of them (called Set A) were taken at close range, while the other five (called Set B) were taken at a larger distance. We applied our calibration algorithm to Set A, Set B, and also to the whole set (called Set A+B). The results are shown in Table 1. For intuitive understanding, we show the estimated angle between the image axes, θ, instead of the skew factor γ. We can see that the angle θ is very close to 90°, as expected with almost all modern CCD cameras. The camera's parameters were estimated consistently for all three sets of images, except the distortion parameters with Set B. The reason is that the calibration pattern only occupies the central part of the image in Set B, where lens distortion is not significant and therefore cannot be estimated reliably.

Fig. 1. Two sets of images taken at different distances to the calibration pattern. Each set contains five images. (a) Three images from the set taken at a close distance are shown. (b) Three images from the set taken at a larger distance are shown.

Table 1. Calibration results with the images shown in Fig. 1.

5 CONCLUSION

In this paper, we have developed a flexible new technique to easily calibrate a camera. The technique only requires the camera to observe a planar pattern from a few different orientations. Although the minimum number of orientations is two if pixels are square, we recommend four or five different orientations for better quality. We can move either the camera or the planar pattern. The motion does not need to be known, but should not be a pure translation. When the number of orientations is only two, one should avoid positioning the planar pattern parallel to the image plane. The pattern could be anything, as long as we know the metric on the plane. For example, we can print a pattern with a laser printer and attach the paper to a reasonable planar surface such as a hard book cover. We can even use a book with known size because the four corners are enough to estimate the plane homographies.

Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on a maximum-likelihood criterion. Both computer simulation and real data have been used to test the proposed technique and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique gains considerable flexibility.

APPENDIX: ESTIMATING THE HOMOGRAPHY BETWEEN THE MODEL PLANE AND ITS IMAGE

There are many ways to estimate the homography between the model plane and its image. Here, we present a technique based on a maximum-likelihood criterion. Let $M_i$ and $m_i$ be the model and image points, respectively. Ideally, they should satisfy (2). In practice, they don't because of noise in the extracted image points. Let us assume that $m_i$ is corrupted by Gaussian noise with mean 0 and covariance matrix $\Lambda_{m_i}$. Then, the maximum-likelihood estimation of $H$ is obtained by minimizing the following functional:

$$\sum_i (m_i - \hat{m}_i)^T \Lambda_{m_i}^{-1} (m_i - \hat{m}_i), \qquad \text{where} \qquad \hat{m}_i = \frac{1}{\bar{h}_3^T \tilde{M}_i} \begin{bmatrix} \bar{h}_1^T \tilde{M}_i \\ \bar{h}_2^T \tilde{M}_i \end{bmatrix}, \ \text{with } \bar{h}_i \text{ the } i\text{th row of } H.$$

In practice, we simply assume $\Lambda_{m_i} = \sigma^2 I$ for all $i$. This is reasonable if points are extracted independently with the same procedure. In this case, the above problem becomes a nonlinear least-squares one, i.e., $\min_H \sum_i \|m_i - \hat{m}_i\|^2$. The nonlinear minimization is conducted with the Levenberg-Marquardt algorithm as implemented in Minpack [17]. This requires an initial guess, which can be obtained as follows.

Let $x = [\bar{h}_1^T, \bar{h}_2^T, \bar{h}_3^T]^T$. Then, (2) can be rewritten as

$$\begin{bmatrix} \tilde{M}^T & 0^T & -u\tilde{M}^T \\ 0^T & \tilde{M}^T & -v\tilde{M}^T \end{bmatrix} x = 0.$$

When we are given $n$ points, we have $n$ such equations, which can be written in matrix form as $L x = 0$, where $L$ is a $2n \times 9$ matrix. As $x$ is defined up to a scale factor, the solution is well known to be the right singular vector of $L$ associated with the smallest singular value (or, equivalently, the eigenvector of $L^T L$ associated with the smallest eigenvalue). In $L$, some elements are constant 1, some are in pixels, some are in world coordinates, and some are multiplications of both. This makes $L$ poorly conditioned numerically. Much better results can be obtained by performing a simple data normalization prior to running the above procedure.
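[Editor's note] A minimal sketch of the DLT initialization just described, not the author's software: it builds the $2n \times 9$ matrix $L$, takes the right singular vector with the smallest singular value, and applies a simple data normalization first. The particular normalization used here (zero mean, average distance $\sqrt{2}$) is a common choice and is assumed; the paper only states that some normalization should be applied.

    # Sketch only: DLT estimate of H from point correspondences, with normalization.
    import numpy as np

    def _normalize(pts):
        """Similarity transform T so that T*pts has zero mean and average distance sqrt(2)."""
        mean = pts.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
        return np.array([[scale, 0.0, -scale * mean[0]],
                         [0.0, scale, -scale * mean[1]],
                         [0.0, 0.0, 1.0]])

    def estimate_homography_dlt(model_pts, image_pts):
        """model_pts, image_pts: (n, 2) arrays of corresponding plane/image points."""
        model_pts = np.asarray(model_pts, float)
        image_pts = np.asarray(image_pts, float)
        Tm, Ti = _normalize(model_pts), _normalize(image_pts)
        rows = []
        for (X, Y), (u, v) in zip(model_pts, image_pts):
            Mh = Tm @ np.array([X, Y, 1.0])          # normalized model point, homogeneous
            uh, vh, _ = Ti @ np.array([u, v, 1.0])   # normalized image point
            zeros = np.zeros(3)
            rows.append(np.hstack([Mh, zeros, -uh * Mh]))
            rows.append(np.hstack([zeros, Mh, -vh * Mh]))
        L = np.vstack(rows)
        _, _, Vt = np.linalg.svd(L)
        Hn = Vt[-1].reshape(3, 3)
        H = np.linalg.inv(Ti) @ Hn @ Tm              # undo the normalization
        return H / H[2, 2]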
ACKNOWLEDGMENTS

The author would like to thank Brian Guenter for his software on corner extraction and for many discussions, and Bill Triggs for insightful comments. He would also like to thank Andrew Zisserman for bringing his CVPR 98 work [13] to the author's attention; it uses the same constraint but in a different form. He would also like to thank the members of the Vision Group at Microsoft Research for encouragement and discussions. Anandan and Charles Loop have checked the English of an early version. The constructive comments from the anonymous reviewers, which have helped the author improve the paper, are gratefully acknowledged.

REFERENCES

[1] S. Bougnoux, "From Projective to Euclidean Space under any Practical Situation, a Criticism of Self-Calibration," Proc. Sixth Int'l Conf. Computer Vision, pp. 790-796, Jan. 1998.
[2] D.C. Brown, "Close-Range Camera Calibration," Photogrammetric Eng., vol. 37, no. 8, pp. 855-866, 1971.
[3] B. Caprile and V. Torre, "Using Vanishing Points for Camera Calibration," Int'l J. Computer Vision, vol. 4, no. 2, pp. 127-140, Mar. 1990.
[4] W. Faig, "Calibration of Close-Range Photogrammetry Systems: Mathematical Formulation," Photogrammetric Eng. and Remote Sensing, vol. 41, no. 12, pp. 1479-1486, 1975.
[5] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, 1993.
[6] O. Faugeras, T. Luong, and S. Maybank, "Camera Self-Calibration: Theory and Experiments," Proc. Second European Conf. Computer Vision, pp. 321-334, May 1992.
[7] O. Faugeras and G. Toscani, "The Calibration Problem for Stereo," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 15-20, June 1986.
[8] S. Ganapathy, "Decomposition of Transformation Matrices for Robot Vision," Pattern Recognition Letters, vol. 2, pp. 401 (the preview of the reference list ends here)
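[Editor's note] For readers who only want to run this kind of plane-based calibration rather than reimplement it, OpenCV's calibrateCamera performs a Zhang-style planar calibration (closed-form initialization followed by nonlinear refinement with lens distortion). A hedged usage sketch follows; the chessboard size, square size, and image path are placeholders, not values from the paper (the paper's own pattern uses squares with special dots rather than a plain chessboard).

    # Usage sketch only: plane-based calibration with OpenCV.
    import glob
    import cv2
    import numpy as np

    board_cols, board_rows = 9, 6      # inner corners per row/column (assumed)
    square = 25.0                      # square size in mm (assumed)

    # Model-plane coordinates (Z = 0) of the inner corners.
    objp = np.zeros((board_rows * board_cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib_images/*.png"):       # placeholder path
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    # Returns the RMS reprojection error, the intrinsic matrix A, the distortion
    # coefficients, and per-view extrinsics (rotation vectors and translations).
    rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                     image_size, None, None)
    print("RMS reprojection error:", rms)
    print("Intrinsic matrix:\n", A)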
Notes:
1. Unless otherwise stated, all resources on this site require Office 2007 and a PDF reader installed on your computer. Drawings require CAD software such as AutoCAD, CAXA, Pro/E, UG, or SolidWorks. Please use the latest version of WinRAR to extract compressed files.
2. Documents on this site do not include any third-party attachments or drawings; if you need attachments, please contact the uploader. All rights in the files belong to the uploading user.
3. If a RAR archive on this site contains drawings, a drawing preview will appear in the page content; if there is no drawing preview, there are no drawings.
4. The contents of the files may not be used for commercial or profit-making purposes without the consent of the rights holder.
5. Renrendoc (人人文库网) only provides information storage space and only applies protective processing to the presentation of user-uploaded content; it does not modify or edit the uploaded documents themselves and cannot be held responsible for any downloaded content.
6. If a downloaded file contains infringing or inappropriate content, please contact us and we will correct it immediately.
7. This site does not guarantee the accuracy, safety, or completeness of downloaded resources, nor does it assume liability for any harm or loss caused to you or others through the use of these resources.
Reminder: all resources on Renrendoc are uploaded and shared by users for learning and exchange only; do not use them for other purposes without written authorization from the uploading user.
About this document
Title: 一种多钻头台式钻床的设计【说明书+CAD】 (Design of a Multi-Bit Benchtop Drill Press [Specification + CAD])
Link: https://www.renrendoc.com/p-77608957.html
