
Undergraduate Graduation Project: Foreign Literature and Translation

Title: Food Handling Using Computer Vision
Source:
Publication date: 2008.12
School: School of Mechanical and Electrical Engineering
Major: Mechanical Engineering and Automation
Class: Mechanical 054
Name: 刘翠芹
Student No.: 2005071222
Supervisor: Prof. 董明晓
Translation date: 2009.5.15

Foreign literature:

Food Handling and Packaging Using Computer Vision and Robot

Abstract

Even though the use of robot vision systems in the manufacturing sector is now commonplace, the technology embodied in these devices is poorly matched to the industrial needs of food processors. In particular, food processing imposes special demands upon machinery. For instance, the vision sensor must be programmed to detect the positions of single, isolated objects as well as overlapping or occluded objects. Special grippers have to be designed for handling food articles so that they make minimum contact with the food items and hence cause minimum damage to them. In this project, started over a year ago, a vision guidance system is being developed to meet this objective. The system integrates a modified version of the Hough transform algorithm as the main recognition engine. The methods and procedures were tested on commercially produced beef burgers.

1. Introduction

From the incoming lines down to the packaging lines, locating, recognizing and handling food objects are very important in the food processing industry. These tasks are performed routinely, mainly for quality evaluation and product classification. Such tasks are very labour-intensive and tend to rely heavily on the human operator [1]. The hands of workers handling raw materials of animal origin can be heavily contaminated with faecal and other micro-pathogenic organisms [2]. The study by Trickett [3] has shown a strong link between food poisoning and the hygiene standards of food processors.
Complete automation of food handling and packaging by means of a robotic arm is the most effective way to eliminate the influence of manual handling on the microbiological quality of foods. Robots have successfully been applied in a wide range of food industries, primarily those dealing with well-defined processes and products, not only because they are relatively clean and hygienic, but also because of their flexibility, ruggedness and repeatability. This trend will continue to grow with increasing scrutiny and regulatory enforcement such as Hazard Analysis and Critical Control Points (HACCP), together with companies looking for ways to decrease or eliminate worker exposure to repetitive-motion tasks and harsh environments. However, there are problems and challenges associated with the use of robots in the food industry [4]. Firstly, food products, despite being of the same type, differ in size, shape and other physical variables. This imposes special demands on the machinery that handles them, requiring multiple sensory, manipulation and environmental capabilities beyond those available in robots designed to automate manufacturing tasks. Secondly, the success of applying robots as food handlers hinges upon the success of detecting, locating, recognizing and handling severely overlapping and occluding instances of similar food objects. Thirdly, food objects are often delicate and usually covered with either slippery or viscous substances, making high-speed handling of such targets a very challenging task. Existing contact-based mechanisms such as vacuum suction and clamp gripping are not applicable because they can potentially cause injuries and bruising to food products. Hence further research is needed in order to solve these problems. This paper addresses some of the problems, focusing on the methods used to control the robot directly from the vision sensor, attempting to simulate the way that humans use their eyes to naturally control the motion of their arms.

2. Materials and Methods

2.1 Sample Preparation

The food chosen for this study is a locally produced beef burger. It possesses all the important characteristics unique to food products: it is very fragile and easily deformed. The average beef burger is 8.5 mm in thickness, 46.1 mm in radius and 69.3 g in weight. Surface images of test samples were acquired using an 8-bit robot vision system with a uniform white background. The white background provides excellent contrast between the burger and its surroundings. The exposure was adjusted so that the image intensity histograms were approximately centred at mid-way of the full-scale range. The focal distance was selected to allow single as well as multiple samples to fit in the image frame.

2.2 Robot Vision

The robot vision system used in this study is the Adept Cobra 600, a 4-DOF articulated SCARA robot manufactured by Adept Tech., USA, equipped with the Adept Vision Interface (AVI), an MV-5 Adept controller and a TM1001 CCD monochrome camera manufactured by Pulnix Inc., Canada. The camera was mounted onto link 2 of the robot arm and the scene was illuminated using warm white deluxe (WWX) fluorescent lighting. The camera is fitted with a C-mount adapter to permit the use of a Tamron f/25.5 8-mm lens. The TM1001 camera is connected to the AVI card via a 12-pin Hirose-type camera connector from Hirose Inc., Japan. The robot vision system was operated using Adept's AIMS v4.0 and programming libraries, running on a 1.7 GHz Pentium IV PC with 255 MB of RAM. Figure 1 shows the set-up of the robot vision system.

2.3 Image Processing

The objective of image processing in robot vision applications is mainly to extract meaningful and accurate information from the images, endowing the robots with more sophisticated position control capabilities through the use of vision feedback.
The use of a simple geometric method, such as introducing specially designed cues into the image scene, will not work in this application since the burger images are generally complex, difficult to model and partially or extensively occluded depending on the viewing angle. Figure 2 shows a typical beef-burger image. In order to accurately translate burger positions to robot movements, the burger's geometric features must first be extracted and then matched to the robot's workspace. In this application, one of the useful features which uniquely characterizes the pose of a burger in an arbitrary location is its centroid. This geometric descriptor is applicable since the shape of a burger is approximately circular. Furthermore, this feature is invariant to translation, rotation and scaling. Before computing the centroid of the real burger images, several preprocessing operations need to be performed on each image. An edge detection operation is carried out to detect the contours of the connected and isolated components, effectively transforming the original data into a form suitable for further processing. The edge results of Figure 2 computed using the well-known Sobel and Roberts operators [5] are shown in Figures 3(a)-(d). From these figures it can be seen that the edges determined by these operators comprise many false edges, discontinuities and spurious spots resulting from the uneven and irregular surface of the burger, non-uniform light reflection and shadows. These drawbacks are not acceptable for the application described in this paper. A more sophisticated method is needed in order to obtain acceptable results. The method used to solve these problems was based on the Canny edge detection operator [6]. Interested readers are referred to this publication for a detailed mathematical explanation of this relatively new edge detector. Here only the important principles are presented, in order to facilitate discussion of robot vision applications in food handling.
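For reference, the gradient-magnitude response of the Sobel operator discussed above can be sketched in a few lines of NumPy. This is a minimal illustration on a synthetic step-edge image, not code from the paper; the 3 × 3 kernels are the standard Sobel masks:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edges via the Sobel operator (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    padded = np.pad(img.astype(float), 1)
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Cross-correlate each kernel with the image, one tap at a time.
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# A synthetic 8-bit image with a vertical step edge at column 4:
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 255
mag = sobel_edges(img)  # magnitude peaks along the step, zero in flat regions
```

On real burger images these raw magnitudes contain exactly the false edges and spurious spots noted above, which motivates the switch to the Canny operator.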
The Canny method for edge detection is principally based on some general ideas. Canny was the first to demonstrate that convolving an image with a symmetric 2-D Gaussian filter and then differentiating in the direction of the gradient forms the edge-magnitude image. The presence of edges in the original image gives rise to ridges in the gradient-magnitude image. The objective is to detect the true edge in the right place. This can be done using a method known as non-maximal suppression. Essentially this method works by tracking along the tops of the ridges, retaining only those points at the top of a ridge whilst suppressing all others. The tracking process exhibits hysteresis controlled by two important parameters: the lower threshold value Tlow and the upper threshold value Thigh. If the edge response is above Thigh, then the pixel definitely constitutes an edge and is hence retained. Pixels less than Thigh but greater than Tlow are considered weak edges. Finally, tracking is done to bridge all discontinued edges as well as to eliminate the false ones. Weak edges are retained only if they are connected to a strong edge. The result of these operations is an image of thin lines of edge points with an improved edge-to-noise ratio. Even though this method reduces the effect of noise, the overall quality of the edges depends largely on the optimal selection of the standard deviation σ, which defines the Gaussian mask for Canny's edge detection. Experimentally the optimum value of σ was set to 3, which corresponds to a 25 × 25 kernel. This value is fixed for a given set of background illumination and image gain; a change in any of these external factors, such as illumination, image gain or background colour, will also affect the optimum value of σ. Figures 4(a)&(b) show results for Canny edge detection with σ set to 1 and 3.
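The two-threshold hysteresis step described above can be sketched as follows. This is a simplified illustration only (assuming 8-connectivity and operating directly on a magnitude array, whereas the full Canny detector applies it after non-maximal suppression):

```python
from collections import deque
import numpy as np

def hysteresis(mag, t_low, t_high):
    """Keep strong edges (> t_high) plus weak edges (> t_low) that
    connect, via 8-neighbour chains, to a strong edge."""
    strong = mag > t_high
    weak = (mag > t_low) & ~strong
    keep = strong.copy()
    q = deque(zip(*np.nonzero(strong)))  # seed tracking from strong pixels
    h, w = mag.shape
    while q:
        r, c = q.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and weak[rr, cc] and not keep[rr, cc]:
                    keep[rr, cc] = True
                    q.append((rr, cc))
    return keep

# Ridge responses: 9 and 8 are weak but chained to the strong 20;
# the isolated 9 at the end has no strong neighbour and is discarded.
mag = np.array([[0, 20, 9, 8, 0, 9]], dtype=float)
edges = hysteresis(mag, t_low=5, t_high=10)
```

The chained weak pixels survive while the isolated weak response is suppressed, which is exactly how the tracking bridges discontinued edges yet eliminates false ones.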
Comparing Figure 3 and Figure 4, it can be seen clearly that the edges determined by Canny's operator are less corrupted than the edges detected by either the Sobel or the Roberts operator. The burger edges are more complete in Figure 4, whereas in Figure 3 they are only partially visible and more obscured. Furthermore, the retention of major detail by the Canny operator is very evident. Overlapping and partially occluding burgers are visually recognizable. The Canny operator therefore has the ability to detect the major features of interest in the burger image, allowing the geometric features of the pick-and-place specimen to be accurately determined. The algorithm for determining the pick-and-place specimen is given in the following section.

2.4 Centroid Detection Algorithm

Once the edges of the burgers have been detected, the next step in image analysis is to retrieve and extract the geometric feature which uniquely defines the shape of a burger. One important criterion for this type of shape analysis and retrieval problem is that the method must be invariant to translation, scaling and rotation of images or objects. The use of the Hough transform seems adequate, since this method achieves translation, scaling and rotation invariance by converting a global detection problem in an image space into a more easily solved local peak detection problem in the parameter space [7]. More importantly, the Hough transform allows segmentation of overlapping or semi-occluded objects, which is critical for processing burger images. However, the original Hough transform works well only if the analytic equations of the object borderlines are known and invariant. In the present context these conditions are very difficult to fulfil because the shape of a burger is not a perfect circle. This imperfection is mainly due to the non-rigid properties of the burgers, causing them to be easily deformed when pressed or brought into contact with any rigid surface such as a conveyor belt.
A straightforward application of the Hough transform will yield multiple sets of accumulated votes in the parameter space, corresponding to different shapes and sizes of the objects [7]. This may result in many false alarms. Furthermore, the ambiguity of indexing and the insufficient description of point features may result in false solutions for the recognition of overlapping or semi-occluding objects. In this work we propose a method to solve some of these problems by modifying the Hough transform, employing a recent technique in object recognition based on the centroid-contour distance (CCD) curve. The CCD method is given by Wang [8].

Figure 5: CCD curve of a circle.

The basic idea of this technique can be explained using the illustration in Figure 5. It shows a point Q lying on the contour of a circle which is characterized by a centroid C and a radius R. The angle between the point Q and the centroid is given by θ. Tracing a burger contour can be considered as circling around its centroid. The tracing path, in a clockwise or anticlockwise direction from a fixed starting point, represents a shape contour uniquely. In other words, a contour point sequence corresponds to a shape uniquely if the starting point is fixed. Hence, for a given C and R, a point Q on the contour will satisfy the following criterion exactly if the contour belongs to a perfect circle, i.e.

Q = (R cos θ, R sin θ) (1)

Since in this case a perfect match is impossible to obtain, for the reasons stated previously, a point Q is treated as a point belonging to the edge of a burger if it is bounded by maximum and minimum R values. Mathematically,

(Rmin cos θ, Rmin sin θ) ≤ Q ≤ (Rmax cos θ, Rmax sin θ) (2)

where Rmin ≤ R ≤ Rmax is the range of the burger radius, as shown in Figure 6. This method works by firstly treating all edge pixels in the binarized image resulting from Canny edge detection as probable centroids of the objects. Secondly, for each centroid location, the CCD curves are traced using Eq. 2.
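In effect, Eq. 2 accepts a contour point whenever its centroid-contour distance falls inside the band [Rmin, Rmax]. A minimal sketch of this test (illustrative only, not the authors' code; the band values below are hypothetical, chosen around the 46.1 mm average burger radius):

```python
import numpy as np

def ccd_match(q, centroid, r_min, r_max):
    """Eq. (2) as a radial test: accept contour point q if its
    centroid-contour distance lies within [r_min, r_max]."""
    r = np.hypot(q[0] - centroid[0], q[1] - centroid[1])
    return r_min <= r <= r_max

# Hypothetical band around the average burger radius:
r_min, r_max = 43.0, 49.0
on_edge = ccd_match((46.0, 0.0), (0.0, 0.0), r_min, r_max)  # r = 46, inside band
inside = ccd_match((0.0, 30.0), (0.0, 0.0), r_min, r_max)   # r = 30, outside band
```

Widening the band tolerates the deformation of the non-rigid burgers at the cost of admitting more spurious matches.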
For tracing, θ is varied between 0° and 360°, thereby searching for all pixels which are bounded between these two contours. If the increment value for θ in Eq. 2 is kept at 1°, then the maximum possible number of pixels that can satisfy the circumference criterion for one centroid is 360. Generally, larger increment values, up to a certain limit, improve the computation time of the algorithm. Every pixel in the binarized image considered as a centroid is assumed to be the centre of a circle. Next, the number of instances in which the circumference points of that circle are also edges in the binarized image is determined. If this number is greater than a threshold, the pixel being considered is taken to be the centroid of a burger. In practice, when the above algorithm is applied to burger images, the total number of matches is never the maximum, even for a correct centroid. This is because of inevitable noise, irregular light reflection and burger surface shadows. Thus the threshold value for the number of matches has to be fixed below the maximum value of 360. To determine the correct value of the threshold, the algorithm was applied to a sequence of 19 burger images. The criterion for correctly identifying which burger specimen is most likely to be lifted is that the burger should be minimally overlapped, or maximally exposed. As seen in Figure 2, the burgers that lie on top of the heap as well as on the side of the main pile fulfil this criterion and hence also qualify as pick-and-place specimens. The algorithm is applied to candidate centroid locations from the top-left to the bottom-right pixel; thus, where multiple burgers satisfy the lifting criterion, they are prioritized from top-left to bottom-right. By following this criterion the robot is led to pick and place only those specimens, thereby reducing the likelihood of damaging the overlapped ones. Figure 7 shows the number of matches of each burger centroid in a sequence of images using Eq. 2.
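The voting procedure above can be sketched as follows. This is a simplified, single-radius illustration rather than the authors' implementation: every pixel is tested as a candidate centre (the paper restricts candidates to edge pixels and uses the full [Rmin, Rmax] band of Eq. 2):

```python
import numpy as np

def centroid_votes(edges, radius, step_deg=1):
    """For every pixel taken as a candidate centroid, count how many of the
    sampled circumference points (one per step_deg degrees) land on edge
    pixels; a high count marks a likely burger centroid."""
    h, w = edges.shape
    angles = np.deg2rad(np.arange(0, 360, step_deg))
    votes = np.zeros((h, w), dtype=int)
    for cy in range(h):
        for cx in range(w):
            py = np.rint(cy + radius * np.sin(angles)).astype(int)
            px = np.rint(cx + radius * np.cos(angles)).astype(int)
            ok = (py >= 0) & (py < h) & (px >= 0) & (px < w)
            votes[cy, cx] = np.count_nonzero(edges[py[ok], px[ok]])
    return votes

# Synthetic test: a single circular "burger" contour of radius 5 at (10, 10).
edges = np.zeros((21, 21), dtype=bool)
t = np.deg2rad(np.arange(0, 360))
edges[np.rint(10 + 5 * np.sin(t)).astype(int),
      np.rint(10 + 5 * np.cos(t)).astype(int)] = True
votes = centroid_votes(edges, radius=5)
best = np.unravel_index(votes.argmax(), votes.shape)  # peak at the true centre
```

The quadratic scan over candidate centres is what makes the method computationally intensive, consistent with the 3-4 second detection time reported later.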
3. Experimental Tests and Results

The methods and procedures described in the previous sections were tested using a sequence of burger images. The objective of this experiment is to sort the burgers individually by pick-and-place operations. In so doing, the robot must first examine the presence of burgers in the heap and then detect which burger is most likely to be lifted. Prior to the experiment, the camera was calibrated for the given mounting position, enabling the robot pose with respect to the position and orientation of a burger to be accurately mapped. Figures 8(b)-(f) show the sequence of centroids of minimally overlapped burgers revealed using the modified Hough transform, starting with detection of the 1st burger and ending with detection of the 7th. Only the first seven centroid locations are shown here, even though a total of 19 burgers were successfully located. Each centroid location was fed into a controller which kinematically positioned and orientated the robot's end-effector in 3-D space. In each detection round, the pick-and-place specimen was removed from the heap manually. Clearly from Figure 8, the location of a minimally overlapped or maximally exposed burger was accurately revealed in every pick-and-place cycle. No partially overlapped or occluded burgers were detected. It can therefore be concluded that the proposed method works well for detecting minimally overlapping burgers, which is important in ensuring a correct pick-and-place sequence for the robot. However, one drawback of this technique is that it is very computationally intensive, requiring approximately 3-4 seconds for every result. A time-consuming yet accurate position detection algorithm may limit its applications in the food industry. Hence, special hardware for fast position detection is now being developed using a Field Programmable Gate Array (FPGA) chip.
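The camera calibration step mentioned above is not detailed in the paper; one common approach for a fixed camera mounting is a least-squares 2-D affine map from image pixels to workspace coordinates, fitted from matched calibration point pairs. A hypothetical sketch (the scale, offset and point pairs are invented for illustration):

```python
import numpy as np

def fit_affine(pixels, world):
    """Least-squares 2-D affine map from image pixels to workspace
    coordinates, fitted from matched calibration point pairs."""
    A = np.hstack([pixels, np.ones((len(pixels), 1))])  # rows are [u, v, 1]
    M, *_ = np.linalg.lstsq(A, world, rcond=None)
    return M  # 3x2 matrix: world = [u, v, 1] @ M

# Hypothetical calibration: 1 px = 0.5 mm, with a (100, 50) mm offset.
pixels = np.array([[0, 0], [200, 0], [0, 200], [200, 200]], dtype=float)
world = pixels * 0.5 + np.array([100.0, 50.0])
M = fit_affine(pixels, world)

# Map a detected centroid (in pixels) into workspace millimetres:
centroid_px = np.array([120.0, 80.0, 1.0])
target_mm = centroid_px @ M
```

An affine map suffices when the camera axis is roughly perpendicular to the conveyor plane; a tilted camera would instead require a projective (homography) fit.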
Moreover, a specially designed end-effector is needed to meet the requirements of robotic handling of beef burgers. Clearly the use of conventional grippers is not suitable, since they do not address the task of handling non-rigid materials and they can increase the contamination of beef burgers. In order to solve these problems, a novel non-contact end-effector employing a pneumatic levitation technique [9] is now being investigated.
