




Definition of an Edge

An edge is a significant change in the grayscale values between adjacent pixels in an image. In NI Vision, edge detection works on a 1D profile of pixel values along a search region, as shown in the following figure. The 1D search region can take the form of a line, the perimeter of a circle or ellipse, the boundary of a rectangle or polygon, or a freehand region. The software analyzes the pixel values along the profile to detect significant intensity changes. You can specify characteristics of the intensity changes to determine which changes constitute an edge.

[Figure legend: 1: Search Lines, 2: Edges]

Characteristics of an Edge

The following figure illustrates a common model that is used to characterize an edge.

[Figure legend: gray level intensities along the search direction; 1: Grayscale Profile, 2: Edge Length, 3: Edge Strength, 4: Edge Location]

The following list includes the main parameters of this model.

Edge strength: Defines the minimum difference in the grayscale values between the background and the edge. The edge strength is also called the edge contrast. The following figure shows an image that has different edge strengths. The strength of an edge can vary for the following reasons:
- Lighting conditions: If the overall light in the scene is low, the edges in the image will have low strengths. The following figure illustrates a change in the edge strength along the boundary of an object under different lighting conditions.
- Objects with different grayscale characteristics: The presence of a very bright object causes other objects in the image with lower overall intensities to have edges with smaller strengths.

[Figure panels: A, B, C]

Edge length: Defines the distance over which the desired grayscale difference between the edge and the background must occur. The length characterizes the slope of the edge. Use a longer edge length, defined by the size of the kernel used to detect edges, to detect edges with a gradual transition between the background and the edge.

Edge location: The (x, y) location of an edge in the image.
Edge polarity: Defines whether an edge is rising or falling. A rising edge is characterized by an increase in grayscale values as you cross the edge. A falling edge is characterized by a decrease in grayscale values as you cross the edge. The polarity of an edge is linked to the search direction. The following figure shows examples of edge polarities.

Edge Detection Methods

NI Vision offers two ways to perform edge detection. Both methods compute the edge strength at each pixel along the 1D profile. An edge occurs when the edge strength is greater than a minimum strength. Additional checks find the correct location of the edge. You can specify the minimum strength by using the Minimum Edge Strength or Threshold Level parameter in the software.

Simple Edge Detection

The software uses the pixel value at any point along the pixel profile to define the edge strength at that point. To locate an edge point, the software scans the pixel profile pixel by pixel from the beginning to the end. A rising edge is detected at the first point at which the pixel value is greater than a threshold value plus a hysteresis value. Set the threshold value to define the minimum edge strength required for qualifying edges. Use the hysteresis value to declare different edge strengths for the rising and falling edges. When a rising edge is detected, the software looks for a falling edge. A falling edge is detected when the pixel value falls below the specified threshold value. This process is repeated until the end of the pixel profile. The first edge along the profile can be either a rising or a falling edge.
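The scan described above can be sketched in a few lines. This is an illustrative sketch, not the NI Vision implementation; it simplifies the model by always looking for a rising edge first, and the function name and signature are assumptions.

```python
def find_edges_simple(profile, threshold, hysteresis):
    """Scan a 1D pixel profile for edges.

    A rising edge is reported at the first point whose value exceeds
    threshold + hysteresis; after that, a falling edge is reported
    where the value drops back below threshold, and the scan repeats.
    """
    edges = []
    looking_for_rising = True
    for i, value in enumerate(profile):
        if looking_for_rising and value > threshold + hysteresis:
            edges.append((i, "rising"))
            looking_for_rising = False
        elif not looking_for_rising and value < threshold:
            edges.append((i, "falling"))
            looking_for_rising = True
    return edges

profile = [10, 12, 11, 80, 85, 90, 88, 15, 12, 70, 75, 20]
print(find_edges_simple(profile, threshold=50, hysteresis=5))
# → [(3, 'rising'), (7, 'falling'), (9, 'rising'), (11, 'falling')]
```

Note how the hysteresis only affects the rising-edge test, so the rising and falling edges effectively use different strengths, as described above.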
The following figure illustrates the simple edge model. The simple edge detection method works well when there is little noise in the image and when there is a distinct demarcation between the object and the background.

[Figure legend: gray level intensities vs. pixels; 1: Grayscale Profile, 2: Threshold Value, 3: Hysteresis, 4: Rising Edge Location, 5: Falling Edge Location]

Advanced Edge Detection

The advanced edge detection algorithm uses a kernel operator to compute the edge strength. The kernel operator is a local approximation of a Fourier transform of the first derivative. The kernel is applied to each point in the search region where edges are to be located. For example, for a kernel size of 5, the operator is a ramp function with 5 entries: -2, -1, 0, 1, 2. The kernel size is user-specified and should be based on the expected sharpness, or slope, of the edges to be located. The following figure shows the pixel data along a search line and the equivalent edge magnitudes computed using a kernel of size 5. Peaks in the edge magnitude profile above a user-specified threshold are the edge points detected by the algorithm.

[Figure legend: pixel intensities and edge magnitudes; 1: Edge Location, 2: Minimum Edge Threshold]

To reduce the effect of noise in the image, the edge detection algorithm can be configured to extract image data along a search region that is wider than one pixel. The thickness of the search region is specified by the search width parameter. The data in the extracted region is averaged in a direction perpendicular to the search region before the edge magnitudes and edge locations are detected. A search width greater than 1 can also be used to find a "best" or "average" edge location of a poorly formed object. The following figure shows how the search width is defined.

[Figure legend: 1: Search Width, 2: Search Line]

Subpixel Accuracy

When the resolution of the image is high enough, most measurement applications make accurate measurements using pixel accuracy only.
However, it is sometimes difficult to obtain the minimum image resolution needed by a machine vision application because of limits on the size or price of the available sensors. In these cases, you need to find edge positions with subpixel accuracy.

Subpixel analysis is a software method that estimates the pixel values that a higher resolution imaging system would have provided. In the edge detection algorithm, the subpixel location of an edge is calculated using a parabolic fit to the edge-detected data points. At each edge position of interest, the peak or maximum value is found along with the value of one pixel on each side of the peak. The peak position represents the location of the edge to the nearest whole pixel.

Using the three data points, a parabola y = ax^2 + bx + c is fitted to the data. The procedure for determining the coefficients a, b, and c is as follows. Let the three points, consisting of the whole-pixel peak location and one neighbor on each side, be represented by (x0, y0), (x1, y1), and (x2, y2). Without loss of generality, let x0 = -1, x1 = 0, and x2 = 1. Substituting these points into the equation for the parabola and solving for a, b, and c gives

a = (y0 + y2 - 2*y1) / 2
b = (y2 - y0) / 2
c = y1, which is not needed and can be ignored.

The maximum of the function is computed by taking the first derivative of the parabolic function and setting the result equal to 0. Solving for x yields

x = -b / (2a)

This is the subpixel offset from the whole-pixel location at which the estimate of the true edge location lies. The following figure illustrates how a parabolic function is fitted to the detected edge pixel location using the magnitude at the peak location and the neighboring pixels.
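Combining the ramp-kernel magnitude computation from the advanced edge detection method with the parabolic fit gives the following minimal sketch. Function names, the sample profile, and the peak-selection step are illustrative assumptions, not the NI Vision API.

```python
def edge_magnitudes(profile, kernel_size=5):
    """Edge strength along a 1D profile using a ramp kernel,
    e.g. [-2, -1, 0, 1, 2] for kernel_size 5."""
    half = kernel_size // 2
    kernel = list(range(-half, half + 1))
    return [
        sum(kernel[j + half] * profile[i + j] for j in range(-half, half + 1))
        for i in range(half, len(profile) - half)
    ]

def subpixel_offset(y0, y1, y2):
    """Parabolic fit through (-1, y0), (0, y1), (1, y2);
    returns the peak offset x = -b / (2a)."""
    a = (y0 + y2 - 2.0 * y1) / 2.0
    b = (y2 - y0) / 2.0
    return 0.0 if a == 0 else -b / (2.0 * a)

# Gradual rising edge: the magnitude peak marks the whole-pixel edge,
# and the parabolic fit refines it to a fractional position.
profile = [10, 10, 10, 10, 30, 60, 80, 90, 90, 90, 90]
mags = edge_magnitudes(profile)          # [40, 120, 190, 210, 150, 70, 20]
peak = max(range(1, len(mags) - 1), key=lambda i: mags[i])
offset = subpixel_offset(mags[peak - 1], mags[peak], mags[peak + 1])
edge_location = peak + 2 + offset        # +2 restores the kernel half-width
```

Here `edge_location` evaluates to 4.75, which lands between the profile values 30 and 60, near where the intensity crosses the midpoint of the transition.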
The subpixel location of an edge point is estimated from the parabolic fit.

[Figure legend: 1: Interpolated Peak Location, 2: Neighboring Pixel, 3: Interpolating Function]

With the imaging system components and software tools currently available, you can reliably estimate accuracy to 1/25 of a pixel. However, results from an estimation depend heavily on the imaging setup, such as the lighting conditions and the camera lens. Before resorting to subpixel information, try to improve the image resolution. Refer to system setup and calibration for more information about improving images.

Signal-to-Noise Ratio

The edge detection algorithm computes the signal-to-noise ratio for each detected edge point. The signal-to-noise ratio can be used to differentiate between a true, reliable edge and a noisy, unreliable edge. A high signal-to-noise ratio signifies a reliable edge, while a low signal-to-noise ratio implies that the detected edge point is unreliable.

In the edge detection algorithm, the signal-to-noise ratio is computed differently depending on the type of edges you want to search for in the image. When looking for the first, first and last, or all edges along search lines, the noise level associated with a detected edge point is the strength of the edge that lies immediately before the detected edge and whose strength is less than the user-specified minimum edge threshold, as shown in the following figure.

[Figure legend: 1: Edge 1 Magnitude, 2: Edge 2 Magnitude, 3: Threshold Level, 4: Edge 2 Noise, 5: Edge 1 Noise]

When looking for the best edge, the noise level is the strength of the second strongest edge before or after the detected edge, as shown in the following figure.

[Figure legend: 1: Best Edge Magnitude, 2: Best Edge Noise, 3: Threshold Level]

Calibration Support for Edge Detection

The edge detection algorithm uses calibration information in the edge detection process if the original image is calibrated. For simple calibration, edge detection is performed directly on the image and the detected edge point locations are transformed into real-world coordinates.
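For the simple-calibration case just described, the pixel-to-real-world transformation amounts to applying the calibration mapping to each detected edge point. The sketch below assumes a plain per-axis scale and origin; the function name, parameters, and this simplified mapping are hypothetical, not the NI Vision calibration API.

```python
def to_real_world(edge_points_px, scale_x, scale_y, origin=(0.0, 0.0)):
    """Map detected edge points from pixel coordinates to real-world
    units under a hypothetical simple calibration: a per-axis scale
    (e.g. mm per pixel) applied about a chosen origin."""
    ox, oy = origin
    return [((x - ox) * scale_x, (y - oy) * scale_y)
            for x, y in edge_points_px]

# A point at pixel (10, 20) with 0.5 mm/pixel maps to (5.0, 10.0) mm.
print(to_real_world([(10, 20)], scale_x=0.5, scale_y=0.5))
```

As the text notes, NI Vision returns edge point information in both pixels and real-world units, so a mapping of this general shape is applied per detected point.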
For perspective and non-linear distortion calibration, edge detection is performed on a corrected image. However, instead of correcting the entire image, only the area corresponding to the search region used for edge detection is corrected. Figure A and Figure B illustrate the edge detection process for calibrated images. Figure A shows an uncalibrated, distorted image. Figure B shows the same image in a corrected image space.

[Figure legend: 1: Search Line, 2: Search Width, 3: Corrected Area]

Information about the detected edge points is returned in both pixels and real-world units. Refer to system setup and calibration for more information about calibrating images.

Extending Edge Detection to 2D Search Regions

The edge detection tool in NI Vision works on a 1D profile. The rake, spoke, and concentric rake tools extend the use of edge detection to two dimensions. In these edge detection variations, the 2D search area is covered by a number of search lines over which the edge detection is performed. You can control the number of search lines used in the search region by defining the separation between the lines.

Rake

A rake works on a rectangular search region, along search lines that are drawn parallel to the orientation of the rectangle. Control the number of lines in the area by specifying the separation between the lines. Specify the search direction as left to right or right to left for a horizontally oriented rectangle, or as top to bottom or bottom to top for a vertically oriented rectangle. The following figure illustrates the basics of the rake function.

[Figure legend: 1: Search Area, 2: Search Line, 3: Search Direction, 4: Edge Points]

Spoke

A spoke works on an annular search region, along search lines that are drawn from the center of the region to the outer boundary and that fall within the search area. Control the number of lines in the region by specifying the angle between each line. Specify the search direction as either from the center outward or from the outer boundary to the center.
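The rake and spoke layouts described above can be sketched as simple search-line generators. Function names, parameters, and coordinate conventions are illustrative assumptions, not the NI Vision API; 1D edge detection would then be run along each generated line.

```python
import math

def rake_lines(left, top, width, height, gap):
    """Parallel, left-to-right search lines for a rake over a
    rectangular search region; gap is the line separation in pixels."""
    return [((left, y), (left + width, y))
            for y in range(top, top + height + 1, gap)]

def spoke_lines(cx, cy, r_inner, r_outer, angle_step_deg):
    """Radial, center-outward search lines for a spoke over an
    annular search region; angle_step_deg is the angle between lines."""
    lines = []
    for deg in range(0, 360, angle_step_deg):
        a = math.radians(deg)
        lines.append(((cx + r_inner * math.cos(a), cy + r_inner * math.sin(a)),
                      (cx + r_outer * math.cos(a), cy + r_outer * math.sin(a))))
    return lines

print(len(rake_lines(0, 0, 100, 30, 10)))   # → 4 lines at y = 0, 10, 20, 30
print(len(spoke_lines(0, 0, 5, 20, 45)))    # → 8 lines, 45 degrees apart
```

Either generator makes explicit the trade-off the text describes: a smaller separation (or angle step) yields more search lines, and therefore more detected edge points, at a higher computational cost.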
The following figure illustrates the basics of the spoke function.

[Figure legend: 1: Search Area, 2: Search Line, 3: Search Direction, 4: Edge Points]

Concentric Rake

A concentric rake works on an annular search region. It is an adaptation of the rake to an annular region. Edge detection is performed along search lines that lie in the search region and that are concentric to the outer circular boundary. Control the number of concentric search lines used for the edge detection by specifying the radial distance between the concentric lines in pixels. Specify the direction of the search as either clockwise or anti-clockwise. The following figure illustrates the basics of the concentric rake.

[Figure legend: 1: Search Area, 2: Search Line, 3: Search Direction, 4: Edge Points]

Finding Straight Edges

Finding straight edges is another extension of edge detection to 2D search regions. It involves finding straight edges, or lines, in an image within a 2D search region. Straight edges are located by first locating 1D edge points in the search region and then computing the straight lines that best fit the detected edge points. Straight edge methods can be broadly classified into two distinct groups based on how the 1D edge points are detected in the image.

Rake-Based Methods

A rake is used to detect edge points within a rectangular search region. Straight lines are then fit to the edge points. Three different options are available to determine the edge points through which the straight lines are fit.

First Edges: A straight line is fit through the first edge point detected along each search line in the rake. The method used to fit the straight line is described in dimensional measurements. The following figure shows an example of the straight edge detected on an object using the first dark-to-bright edges in the rake, along with the computed edge magnitudes along one search line in the rake.

Best Edges: A straight line is fit through the best edge point along each search line in the rake.
The method used to fit the straight line is described in dimensional measurements. The following figure shows an example of the straight edge detected on an object using the best dark-to-bright edges in the rake, along with the computed edge magnitudes along one search line in the rake.

Hough-Based Methods

In this method, a Hough transform is used to detect the straight edges, or lines, in an image. The Hough transform is a standard technique used in image analysis to find curves that can be parameterized, such as straight lines, polynomials, and circles. For detecting straight lines in an image, NI Vision uses the parameterized form of the line

ρ = x cos θ + y sin θ

where ρ is the perpendicular distance from the origin to the line and θ is the angle of the normal from the origin to the line. Using this parameterization, a point (x, y) in the image is transformed into a sinusoidal curve in the (ρ, θ), or Hough, space. The following figure illustrates the sinusoidal curves formed by three image points in the Hough space. The curves associated with collinear points in the image intersect at a unique point in the Hough space. The coordinates (ρ, θ) of the intersection are used to define an equation for the corresponding line in the image. For example, the intersection point of the curves formed by points 1 and 2 represents the equation for Line 1 in the image.

The following figure illustrates how NI Vision uses the Hough transform to detect straight edges in an image. The location (x, y) of each detected edge point is mapped to a sinusoidal curve in the (ρ, θ) space. The Hough space is implemented as a two-dimensional histogram whose axes represent the quantized values for ρ and θ. The range for ρ is determined by the size of the search region, while the range for θ is determined by the angle range for straight lines to be detected in the image.
Each edge location in the image maps to multiple locations in the Hough histogram, and the count at each location in the histogram is incremented by one. Locations in the histogram with a count of two or more correspond to intersection points between curves in the (ρ, θ) space. Figure B shows a two-dimensional image of the Hough histogram. The intensity of each pixel corresponds to the value of the histogram at that location. Locations where multiple curves intersect appear darker than other locations in the histogram. Darker pixels indicate stronger evidence for the presence of a straight edge in the original image because more points lie on the line. The following figure also shows the line formed by four edge points detected in the image and the corresponding intersection point in the Hough histogram.

[Figure legend: 1: Edge Point, 2: Straight Edge, 3: Search Region, 4: Search Line]

Straight edges in the image are detected by identifying local maxima, or peaks, in the Hough histogram. The local maxima are sorted in descending order based on the histogram count. To improve the computational speed of the straight edge detection process, only a few of the strongest peaks are considered as candidates for detected straight edges. For each candidate, a score is computed in the original image for the line that corresponds to the candidate. The line with the best score is returned as the straight edge. The Hough-based method can also be used to detect multiple straight edges in the original image. In this case, the straight edges are returned based on their scores.

Projection-Based Methods

The projection-based method for detecting straight edges is an extension of the 1D edge detection process discussed in the advanced edge detection section. The following figure illustrates the projection-based straight edge detection method.
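Returning to the Hough-based method, the voting scheme it describes can be sketched as a small 2D accumulator. This is a minimal illustration, not the NI Vision implementation; the bin counts, the ρ range, and the restriction of θ to [0, π) are assumptions.

```python
import math

def hough_histogram(points, rho_max, rho_bins=64, theta_bins=64):
    """Accumulate Hough votes: each edge point (x, y) votes along its
    sinusoid rho = x*cos(theta) + y*sin(theta). Peaks in the returned
    2D histogram indicate straight edges in the image."""
    hist = [[0] * rho_bins for _ in range(theta_bins)]
    for t in range(theta_bins):
        theta = math.pi * t / theta_bins
        for x, y in points:
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / (2 * rho_max) * (rho_bins - 1))
            if 0 <= r < rho_bins:
                hist[t][r] += 1
    return hist

# Ten collinear points on the horizontal line y = 5: their sinusoids
# all intersect at (rho = 5, theta = pi/2), so one cell collects all
# ten votes -- the "dark pixel" in the histogram image described above.
points = [(x, 5) for x in range(10)]
hist = hough_histogram(points, rho_max=20)
print(max(max(row) for row in hist))  # → 10
```

The candidate peaks would then be sorted by count and scored against the original image, as the text describes.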