Research and Design of a New Front-Mounted Trencher (with 3 CAD Drawings and Opening Report)
February 1, 2000

MACHINE VISION IDENTIFICATION OF TOMATO SEEDLINGS FOR AUTOMATED WEED CONTROL

L. Tian, Member ASAE; D. C. Slaughter, Member ASAE; R. F. Norris

ABSTRACT

A machine vision system to detect and locate tomato seedlings and weed plants in a commercial agricultural environment was developed and tested. Images acquired in agricultural tomato fields under natural illumination were studied extensively, and an environmentally adaptive image segmentation algorithm was developed to improve machine recognition of plants under these conditions. The system was able to identify the majority of non-occluded target plant cotyledons, and to locate plant centers even when the plant was partially occluded. Of all the individual target crop plants, 65% to 78% were correctly identified, and less than 5% of the weeds were incorrectly identified as crop plants.

Keywords. Machine vision, pattern recognition, tomato, weeds.

INTRODUCTION

Agricultural production experienced a revolution in mechanization over the past century. However, due to the working environment, plant characteristics, or costs, there are still tasks which have remained largely untouched by that revolution. Hand laborers in the 1990s may still have to perform tedious field operations that have not changed for centuries. Identifying individual crop plants in the field and locating their exact positions is one of the most important tasks needed to further automate farming. Only with the technology to locate individual plants can smart field machinery be developed to automatically and precisely perform treatments such as weeding, thinning, and chemical application.

Early studies of machine vision systems for outdoor field applications concentrated mainly on robotic fruit harvesting. Parrish and Goksel (1977) first studied the use of machine vision for fruit harvesting. In France, a vision system was developed at the CEMAGREF center to pick apples (Grand d'Esnon et al., 1987).
Slaughter and Harrel (1989) developed a machine vision system that successfully picked oranges in the grove. Fruits generally have regular shapes and are often distinguishable by their unique color when compared to the color of the background foliage. Less work has been done on outdoor plant identification. Jia et al. (1990) investigated the use of machine vision to locate corn plants by finding the main leaf vein from a top view. Unfortunately, this technique is not applicable to most dicot row crops. A group of researchers at the University of California at Davis developed a machine vision guided cultivator for between-row cultivation (Slaughter et al., 1996). This color machine vision system could identify the center of the crop row from field images of crops such as tomato, lettuce, or cotton even when weeds were present; however, it did not identify plants on an individual basis.

Object shape has proven to be one of the most important ways of describing biological targets. Under controlled indoor environments, some researchers have studied the problem of using machine vision to identify individual biological targets. Semantic shape description was used in cytopathology to detect abnormal cells (Tucker, 1979). For non-occluded plant seedling species identification, Guyer et al. (1986) used four semantic shape features in a classifier and showed that up to 91 percent of the sample plants could be correctly recognized. Woebbecke et al. (1992) used a group of semantic shape features in their research on plant species identification. Using five experimental plant species grown in pots in a greenhouse, they observed that the performance of their features was a function of plant growth stage. They also found that the features were sensitive to plant species, with some features being useful for only certain species.
Using leaf shape as a source of information, Franz et al. (1991) tried to create a general description for plant leaves. Experiments showed that completely visible leaves (no occlusion) could be identified by aligning a curvature function of each leaf with leaf curvature models. For partially occluded leaves, a Fourier-Mellin correlation was used to calculate re-sampled curvature functions, which were then aligned with each model.

Machine vision recognition of plant leaf shape is still at the stage of studying individual potted plants viewed under a controlled indoor environment. The differences between the uncontrolled outdoor working environment of agriculture and the controlled environment of indoor facilities require that robotic systems designed for the agricultural sector be more adaptable than their industrial counterparts, and that the sensors for such robotic systems be capable of operating in these unstructured environments.

The purpose of this research was to investigate the feasibility of using a machine vision system to distinguish individual crop plants from weeds in the natural outdoor environment of a commercial agricultural field. The processing field tomato plant (Lycopersicon esculentum L.) was selected as the target crop plant for this study. The seedling stage was selected because it is appropriate for many labor-intensive operations (e.g., weeding and thinning) and because plant leaf occlusion is at its minimum at this early stage.

Many different weed species grow in tomato fields. The common weed species and their growth conditions change from field to field. To simplify the problem of plant identification, the vision system distinguished tomato plants from all other plant species. In this study, all non-tomato plants found in the field were classified as weeds. This approach is
consistent with the use of a machine vision system for automated weeding, which would treat all weeds in the same manner independent of species.

MATERIALS AND METHODS

The experimental data were collected in vivo under normal California commercial farming conditions. The images for this study were taken from 13 different commercial tomato fields in the spring of 1994. The fields selected for this study contained tomato plants which ranged in developmental age from the cotyledon to the first true leaf stage. Tomato varieties included Halley 3155, Brigade, and many other popular commercial varieties grown in Northern California. The most common weeds in the images included Hairy Nightshade (Solanum sarrachoides Sendtner), Ground cherries (Physalis spp.), Jimsonweed (Datura stramonium L.), Yellow Nutsedge (Cyperus esculentus L.), Field Bindweed (Convolvulus arvensis L.), Johnsongrass (Sorghum halepense (L.) Pers.), and Little Mallow (Malva parviflora L.).

The equipment setup for outdoor image collection is shown in figure 1. There were two cameras in the vision system: a front camera for row guidance and a rear camera for the in-row detection of individual plants. The front camera was tilted forward, facing the direction of travel, to allow a 1 to 2 m section of the crop row to be viewed, enabling the vision system to calculate an accurate row center line (Slaughter et al., 1996) and to position the rear camera directly above the seedline. The rear camera was mounted directly behind the vision guided toolbar with the lens pointed vertically down toward the plant row to capture a top view. The image covered an area of approximately 130 x 95 mm of the bed. The longer edge of the image was parallel to the seedline, which was also parallel to the direction of travel. A Sony CCD-VX3 color camcorder (Sony Corp.) was used as the rear camera. Real-time video images were recorded on Hi8 metal video tape.
All images were digitized (off-line) into 24-bit, 640 x 480 pixel RGB color images using a Macintosh IIfx computer, a RasterOps (model 24XLTV) color frame grabber, and Media Grabber 2.1 software.

The crop rows were traversed in the same serpentine pattern in which they were planted. Thus the angle of sunlight changed from row to row, field to field, and time to time. The camcorder shutter speed was set to 1/500 of a second to prevent blurred images, due not only to travel speed but also because outdoor field plants may oscillate at high frequencies in the wind.

Knowledge Based Vision (KBV, America Artificial Intelligence, Inc., Amherst, MA) system software on a Sun Sparcstation (Sun Microsystems, Inc., model IPX) was used for algorithm development and evaluation. Algorithms written in C and Common LISP were integrated into the routine tasks of the KBV system for image analysis. SAS/STAT (SAS Institute Inc., Cary, NC) statistical procedures were used for morphological feature selection and in classical discriminant analysis for system evaluation.

A tomato plant has a general green color, a highly irregular leaf shape, and an open plant structure, all of which contribute to its being a challenging crop plant to identify in the field. To identify tomato plants and distinguish them from weeds, a two-stage plant identification algorithm was developed. The first stage was a color pre-processing operation to segment all plants from the background. The second stage was a pattern recognition analysis to locate the center of, and identify the type of, each individual plant. These algorithms were trained with a training set of 30 images. An independent set of 271 images was used to evaluate the performance of the system. According to the lighting conditions and the plant leaf shapes, these images were divided into three quality groups named high, fair, and poor.
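From the roughly 130 x 95 mm rear-camera field of view and the 640 x 480 digitization described above, the ground sampling distance can be estimated directly. This is a back-of-the-envelope check, not a figure reported in the paper:

```python
# Approximate ground sampling distance of the rear camera, from the
# stated field of view (130 x 95 mm) and digitized image size (640 x 480 px).
fov_mm = (130.0, 95.0)    # (width, height) of the imaged bed area, mm
image_px = (640, 480)     # (width, height) of the digitized image, px

gsd_x = fov_mm[0] / image_px[0]   # mm per pixel, along the seedline
gsd_y = fov_mm[1] / image_px[1]   # mm per pixel, across the seedline

print(f"{gsd_x:.3f} mm/px x {gsd_y:.3f} mm/px")  # about 0.2 mm/px each way
```

At roughly 0.2 mm per pixel, a cotyledon a few millimetres across spans tens of pixels, which is consistent with the pixel-based shape features the authors define later.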
There was no overlap between the images in the training set and those in the evaluation set, and the images in the evaluation set included environmental conditions (e.g., lighting) outside those found in the training set.

To distinguish plant material from background objects in a color image, a color segmentation image processing step was conducted in which objects were classified into one of two classes (plants and background) by their color difference in red, green, blue color space. In this study, the changing illumination conditions encountered in the outdoor fields prevented the use of a static segmentation algorithm.

The idea of an environmentally adaptive segmentation algorithm (EASA) was developed to simulate the human vision system, which makes the adjustments necessary to accommodate the changing lighting environment when operating outdoors. The EASA was designed to learn from local conditions in each field or at each time, including the specific lighting and color conditions of the different crop plant varieties, the weeds, and the soil. The kernel of an EASA was the adaptive, or self-learning, process. Since the general data structure properties of our images were known (e.g., the object should be green and the background anything but green; both the object and background classes are distributed close to the gray scale axis; and the object class is very close to a normal distribution), a modified clustering method called partially-supervised learning (PSL) was introduced. The PSL procedure started from a set of previously selected seed colors for each class. The pixels in the sample images were then clustered using the nearest neighbor technique (Duda et al., 1973) until a required number of classes was found. The program then showed the classification results to the operator by displaying each class of objects in the image using a different color. The operator decided which group(s) of cluster regions should be considered as "object" (tomato cotyledons).
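A minimal sketch of the PSL idea described above: each RGB pixel is assigned to the nearest seed color, and each seed is then moved to the mean of its cluster. The seed colors, pixel values, and two-class setup here are invented for illustration; the paper's actual procedure iterates until a required number of classes is found and then asks the operator to label the clusters:

```python
# Partially-supervised learning (PSL) sketch: assign each RGB pixel to the
# nearest seed color, then update each seed to its cluster mean and repeat.
# Seed colors and pixel values below are made-up illustrative numbers.

def psl_cluster(pixels, seeds, iterations=10):
    """Cluster RGB pixels around initial seed colors (nearest-neighbor rule)."""
    seeds = [list(s) for s in seeds]
    labels = [0] * len(pixels)
    for _ in range(iterations):
        # Assignment step: nearest seed by squared Euclidean distance in RGB.
        for i, p in enumerate(pixels):
            labels[i] = min(
                range(len(seeds)),
                key=lambda k: sum((p[c] - seeds[k][c]) ** 2 for c in range(3)),
            )
        # Update step: move each seed to the mean of its assigned pixels.
        for k in range(len(seeds)):
            members = [p for i, p in enumerate(pixels) if labels[i] == k]
            if members:
                seeds[k] = [sum(p[c] for p in members) / len(members)
                            for c in range(3)]
    return labels, seeds

# Two tentative classes: "plant" (greenish seed) and "background" (gray seed).
pixels = [(40, 120, 35), (50, 130, 40), (120, 110, 100), (130, 120, 110)]
labels, seeds = psl_cluster(pixels, seeds=[(45, 125, 40), (125, 115, 105)])
print(labels)  # greenish pixels -> class 0, grayish pixels -> class 1
```

In the paper's workflow the operator then marks which cluster(s) count as "object" (tomato cotyledons), and class statistics fitted to those clusters are baked into a lookup table for real-time segmentation.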
The mean and covariance matrices for each class were processed using a Bayesian classifier (Duda et al., 1973) to develop a lookup table (LUT) for real-time color image segmentation.

The boundaries of the objects in the image may have to be smoothed to eliminate noise. Logical image processing was used to eliminate noise, smooth irregular object boundaries, and separate objects which slightly touch (Tian et al., 1993). To increase the identification accuracy, a special object partition algorithm called watershed (Vincent et al., 1991) was used to separate partially occluded plant leaves (Tian et al., 1993).

The 13 morphological features identified as appearing to have promise for distinguishing between tomato cotyledons and weeds were:

Perimeter (PRI) was the count of the number of boundary pixels.

Centroid (CEN) was the center of the area of an object, the average location of all its pixels.

Pixel-count (PXC) was the area of an object in the image.

Height (HET) was the difference between the largest and smallest vertical coordinates plus one.

Width (WID) was the difference between the largest and smallest horizontal coordinates plus one.

The major and minor axes (MJX and MNX) were those of the best-fit ellipse for the object.

The ratio of area to length (ATL) was defined as:

    ATL = PXC / MJX

The compactness (CMP) was the ratio of area to perimeter squared. In this study, it was defined as:

    CMP = 16 * PXC / PRI^2

Elongation (ELG) measured how long and narrow an object was. It was calculated as the difference between the lengths of the major and minor axes of the best-fit ellipse, divided by the sum of the lengths:

    ELG = (MJX - MNX) / (MJX + MNX)

The logarithm of the ratio of height to width (LHW) gave a symmetric measure of the aspect ratio of the object. The definition was:

    LHW = log10(HET / WID)
The ratio of perimeter to broadness (PTB) was a measure of the convexity of a region. It was defined as:

    PTB = PRI / (2 * (HET + WID))

The ratio of length to perimeter (LTP) was a measure of the 2-D distribution pattern of the boundary of an object. It was defined as:

    LTP = MJX / PRI

To facilitate their use in a real-time system, the number of features was limited to fewer than 5. Because of the complex intercorrelation between features, two heuristic methods were used to select a final feature subset. Individual feature performance and classical discriminant analysis (SAS) indicated that, of the 13 features studied, the subset of the following 4 features would provide the best results: CMP, ELG, LHW, and LTP (Tian, 1995).

To increase the accuracy and to simplify the algorithm, a Bayesian classifier (Jain, 1989) was used to build a lookup table for real-time implementation. With this classifier, all the objects in an image were classified into either the cotyledon class or the weed class. If a cotyledon was not occluded by weeds or other cotyledons, the classification was likely to be successful. Occluded cotyledons could only be recognized using more sophisticated processes. Identification of occluded cotyledons was found to be necessary in interpreting the position of the whole plant.

Once the cotyledons were found in an image, the whole plant identification process was initiated. The distance between cotyledons was the first property used to determine which cotyledons should be considered part of the same plant. Cotyledons from the same plant are normally close to each other, typically less than half a cotyledon length apart. In a field in which the plants were fairly well separated, a method using this criterion worked fairly well. However, in outdoor field conditions where the crop plants were planted very close to each other, and with weeds located anywhere, this method often led to incorrect cotyledon pairing.
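The four selected shape features (CMP, ELG, LHW, LTP) can be written down directly from the definitions above. The per-object measurements in the example (perimeter, pixel count, axis lengths, height, width) are invented for illustration:

```python
import math

# The four selected shape features, following the definitions in the text.
# Inputs are per-object measurements: PRI (perimeter), PXC (pixel count),
# MJX/MNX (major/minor axis of the best-fit ellipse), HET (height), WID (width).

def cmp_feature(pxc, pri):
    """Compactness: CMP = 16 * PXC / PRI^2."""
    return 16.0 * pxc / pri ** 2

def elg_feature(mjx, mnx):
    """Elongation: ELG = (MJX - MNX) / (MJX + MNX)."""
    return (mjx - mnx) / (mjx + mnx)

def lhw_feature(het, wid):
    """Symmetric log aspect ratio: LHW = log10(HET / WID)."""
    return math.log10(het / wid)

def ltp_feature(mjx, pri):
    """Length-to-perimeter ratio: LTP = MJX / PRI."""
    return mjx / pri

# Made-up measurements for a roughly elliptical, cotyledon-like blob:
pri, pxc, mjx, mnx, het, wid = 60.0, 180.0, 24.0, 10.0, 12.0, 24.0
print(cmp_feature(pxc, pri), elg_feature(mjx, mnx),
      lhw_feature(het, wid), ltp_feature(mjx, pri))
```

All four features are dimensionless ratios, so they are invariant to object scale, which is presumably why they transfer across plants of different sizes and growth stages.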
Figure 4e shows black squares at the predicted stem locations of each plant when the definition of a whole plant was based only upon the distance between the cotyledons. To overcome this problem, additional structural features such as leaf size, relative position, and orientation were used in a syntactic procedure to describe the whole plant structure. An algorithm that uses whole-plant properties of field plants can also overcome some of the problems of partial occlusion in an indirect way. The algorithm developed here was based on the following observations.

Rule 1. There must be an initial tomato cotyledon (ITC) to begin with.

Rule 2. The stem always lies on the line of the extended major axis of a cotyledon.

Rule 3. An occluded cotyledon always belongs to an object with a larger PXC value (binary image object area) than that of the classified cotyledon.

Rule 4. The occluded cotyledon is the nearest object in the weed sub-set in the immediate neighborhood of the classified cotyledon.

Rule 5. Incomplete cotyledons caused by incorrect color, extreme position, or twisted shape always have smaller PXC values than complete cotyledons.

Rule 6. If there is another tomato cotyledon (ATC) within the near neighborhood (defined as a circle of diameter 1.5 * MJX of the ITC), the one to be paired with the ITC has the following properties:
1) its inward end point (EDP) is closest to the MJX of the ITC;
2) its EDP is closest to the CEN of the ITC;
3) its PXC is between 60% and 130% of the PXC of the ITC;
4) the angle (θ) between the MJXs is the smallest and is not greater than 20 degrees, as shown in figure 2.

Rule 7. If there is no cotyledon within the near neighborhood but a possible partially occluded tomato cotyledon (OTC) exists, the one to be paired with the ITC has the following characteristics:
1) the occluded cotyledon to be paired is the one with a PXC larger than that of the ITC, located near the EDP and within an angle θ ≤ 80 degrees, as shown in figure 3;
2) the maximum distance, D in figure 3, between the two boundary intersection points on the radial line from the nearer end of the ITC is greater than 80% of the MJX of the ITC.

Figure 2. Example of the syntactic relationship between the initial tomato cotyledon (ITC) and another tomato cotyledon (ATC) being considered for pairing with the ITC.

Figure 3. Example of the syntactic relationship between the initial tomato cotyledon (ITC) and a partially occluded tomato cotyledon (OTC) being considered for pairing with the ITC.

Rule 8. If there is no cotyledon (or possible occluded cotyledon) within the near neighborhood but there is a possible incomplete tomato cotyledon (ETC), the one to be paired with the ITC has to be:
1) the one with its CEN closest to the MJX of the ITC;
2) the one with its CEN closer to the EDP of the ITC than to the CEN of the ITC.

Rule 9. If there is no other object which will form a cotyledon pair within the near neighborhood, pick an EDP at random and extend the MJX by 25% from this EDP as the stem.

RESULTS AND DISCUSSION

The color segmentation LUT was trained with one or two images from the same image data set being processed, meaning they were from similar field conditions. This is equivalent to an on-demand training procedure in a real-life system. Under real field conditions, the system would capture an image for training each new LUT whenever the lighting or field conditions changed. In the laboratory system, a new LUT was created when a new image was found to be quite different in its lighting and field conditions. Figure 4b shows the result of segmenting a field image using a color LUT.
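The geometric tests of Rule 6 above can be sketched as a single predicate. The thresholds (1.5 * MJX neighborhood, 60-130% area ratio, 20-degree angle) come from the rule; the simplified object representation (centroid, major-axis length and orientation, pixel count) and the end-point (EDP) conditions being omitted are simplifications of this sketch:

```python
import math

# Rule 6 sketch: decide whether another tomato cotyledon (ATC) may be paired
# with the initial tomato cotyledon (ITC). Each object is a dict with "cen"
# (centroid), "mjx" (major-axis length), "angle" (major-axis orientation in
# degrees), and "pxc" (pixel count). Simplified representation; the rule's
# end-point (EDP) conditions are omitted here.

def may_pair(itc, atc):
    # Near neighborhood: within a circle of diameter 1.5 * MJX of the ITC.
    dx = atc["cen"][0] - itc["cen"][0]
    dy = atc["cen"][1] - itc["cen"][1]
    if math.hypot(dx, dy) > 0.75 * itc["mjx"]:  # radius = diameter / 2
        return False
    # Area: ATC's PXC must lie between 60% and 130% of the ITC's PXC.
    if not 0.6 * itc["pxc"] <= atc["pxc"] <= 1.3 * itc["pxc"]:
        return False
    # Angle between major axes must not exceed 20 degrees
    # (axes are undirected, so fold the difference into [0, 90]).
    diff = abs(itc["angle"] - atc["angle"]) % 180
    diff = min(diff, 180 - diff)
    return diff <= 20

itc = {"cen": (100, 100), "mjx": 40, "angle": 10, "pxc": 500}
atc = {"cen": (110, 105), "mjx": 38, "angle": 25, "pxc": 450}
print(may_pair(itc, atc))  # True: close, similar size, axes within 20 degrees
```

In a full implementation this predicate would be evaluated for every candidate in the neighborhood, with the smallest-angle candidate winning, before falling back to Rules 7-9.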
Examples of occluded leaves processed with the watershed algorithm are shown in figure 4d, with the identified tomato cotyledons in black and the weeds in gray.

The four morphological features (CMP, ELG, LHW, and LTP) were found to provide a high degree of separation between the two classes (cotyledon leaves and non-cotyledon leaves), figure 5. Leaf recognition experiments were carried out on the training data set to test the performance of the selected 4-feature subset. The results of this classifier were very promising. In the training set, more than 95 percent of tomato cotyledons were correctly classified, and more than 98 percent of the non-cotyledons were assigned correctly to the weed class. The overall error was 3.27 percent.

An example of final stem location using the whole-plant syntactic algorithm is shown in figure 4f (predicted stems are shown with black squares). When compared with the results using distance-based pairing alone (figure 4e), the improvement in correct stem placement can be seen. This syntactic procedure frequently leads to the correct stem location even when the plants are close to each other.

More than 65% of the tomato seedlings and more than 95% of the weeds in the 271 field trial images of the validation set were correctly recognized by the machine vision algorithm (Table 3). In Table 3, "cotyledon" refers to the results of leaf detection with the 4-feature classifier, and "plant" refers to the percentage of successfully recognized whole tomato plants using the syntactic procedure. In all three image quality groups, the failures were mainly caused by heavily overlapped leaves, plant leaf positions which did not allow their morphological features to be observed (e.g., a vertical leaf), or poor illumination conditions. Algorithm failure caused by the first two reasons happened randomly.
Algorithm failure caused by illumination changes would happen over a sequence of frames. This kind of failure could easily be detected by the computer. In the prototype system, a new classifier would be trained and a new LUT created when a sequence of failed frames was observed.

Table 3. Overall plant recognition results.

Image    Cotyledon leaves          Whole tomato plants       Error (%) (weeds identified
quality  correctly identified (%)  correctly identified (%)  as tomato plants)
high     59.5                      78.3                      3.74
fair     45.59                     67.94                     2.76
poor     40                        65.9                      1.53

To provide an estimate of the maximum crop stand and the weed plants remaining along the seedline after a hypothetical weed control operation using the machine vision algorithm, the number of whole tomato plants correctly recognized and weed leaves incorrectly recognized per meter was calculated (Table 4). Theoretically, an average of 1.2 weed leaves per meter would be left after a weeding operation based upon this machine vision algorithm. In the whole-plant syntactic algorithm, each leaf identified as a tomato cotyledon caused the system to locate a whole plant in the field. So, a weed leaf incorrectly identified as a tomato would have been incorrectly allowed to survive once every 0.82 m after weeding in this study. If thinning is done by removing half of the plants remaining, the number of weeds that might survive would drop to one in every 1.6 m.

Table 4. Plant recognition results along the seedline.

             Tomato                                 Weeds
Image group  actual (plants/m)  found (plants/m)    actual (leaves/m)  missed (leaves/m)
high         16.4               12.8                38.4               0.70
fair         12.1               8.2                 107.3              1.80
poor         13.4               8.8                 70.5               1.09

The percentage of correct whole tomato plant recognition was higher than that for the cotyledons. Many of the plants were identified based only on one cotyledon and the information in the immediate area of that cotyledon.
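The weed-survival spacings quoted above follow from simple arithmetic on the stated average of 1.2 missed weed leaves per meter; a quick check (the small gap to the paper's 0.82 m and 1.6 m figures presumably comes from rounding of the underlying per-group rates):

```python
# Weed-survival arithmetic from the text: if an average of 1.2 incorrectly
# kept weed leaves remain per meter, one weed survives roughly every 1/1.2 m;
# removing half the remaining plants during thinning halves that rate again.
missed_weed_leaves_per_m = 1.2

spacing_after_weeding = 1.0 / missed_weed_leaves_per_m   # ~0.83 m per weed
spacing_after_thinning = 2.0 / missed_weed_leaves_per_m  # ~1.67 m per weed

print(f"{spacing_after_weeding:.2f} m, {spacing_after_thinning:.2f} m")
```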
The overall percentage of successful recognition of tomato plants in the field was greater than 65 percent of the tomato plants in the images. This is much higher than the percentage of plants that typically remain after current weeding and thinning operations. Theoretically, three tomato plants per meter would provide the desired yield in a typical processing tomato field. If the prototype system were used in the field, the number of tomato plants remaining (per meter) after a machine vision based weeding operation would be as shown in Table 4. More than twice the total number of tomato plants needed for the thinning operation would remain after weeding.

CONCLUSION

The feasibility of using an outdoor, natural-light-only machine vision system to distinguish between tomato seedlings and weeds was demonstrated. An environmentally adaptive image segmentation algorithm was used to reduce the problems associated with variations in illumination due to time of day, field conditions, or even the color characteristics of the tomato plants. This algorithm extended the dynamic range of the vision sensing system to meet outdoor lighting conditions.

Object partition methods were developed to minimize the occlusion problem which frequently occurs in a field when tomato and weed seedlings grow close together. The binary watershed algorithm showed that cotyledons could be successfully separated from occluding plants when the overlap was small. Four semantic shape features were used to distinguish tomato cotyledons from weed leaves, and a whole-plant syntactic algorithm was used to predict the stem location of the whole plant. Using this technique, more than 65% of the tomato plants could be successfully detected.

ACKNOWLEDGMENTS

This research was supported by the California Tomato Research Institute (CTRI) and the University of California Integrated Pest Management Project.

REFERENCES

Duda, R. O., and P. E. Hart. 1973. Pattern Classification and Scene Analysis.
Wiley, New York.

Grand d'Esnon, A., R. Pellenc, G. Rabatel, A. Journeau, and M. J. Aldon. 1987. Magasi: a self-propelled robot to pick apples. ASAE Paper No. 87037, ASAE, St. Joseph, MI.

Franz, E., M. R. Gebhardt, and K. B. Unklesbay. 1991. Shape description of completely visible and partially oc