




Artificial Neural Networks — Graduation Thesis Foreign Literature Translation

Textile Research Journal Article

Use of Artificial Neural Networks for Determining the Leveling Action Point at the Auto-leveling Draw Frame

Assad Farooq and Chokri Cherif
Institute of Textile and Clothing Technology, Technische Universität Dresden, Dresden, Germany

Abstract

Artificial neural networks, with their ability to learn from data, have been successfully applied in the textile industry. The leveling action point is one of the important auto-leveling parameters of the drawing frame and strongly influences the quality of the manufactured yarn. This paper reports a method of predicting the leveling action point using artificial neural networks. Various variables affecting the leveling action point were selected as inputs for training the artificial neural networks, with the aim of optimizing the auto-leveling by limiting the leveling action point search range. The Levenberg-Marquardt algorithm is incorporated into the back-propagation to accelerate the training, and Bayesian regularization is applied to improve the generalization of the networks. The results obtained are quite promising.

Key words: artificial neural networks; auto-leveling; draw frame; leveling action point

The evenness of the yarn plays an increasingly significant role in the textile industry, while the sliver evenness is one of the critical factors when producing quality yarn. The sliver evenness is also the major criterion for the assessment of the operation of the draw frame. In principle, there are two approaches to reducing the sliver irregularities. One is to study the drafting mechanism and recognize the causes of irregularities, so that means may be found to reduce them. The other, more valuable, approach is to use auto-levelers [1], since in most cases the doubling is inadequate to correct the variations in the sliver. The control of sliver irregularities can lower the dependence on card sliver uniformity, ambient conditions, and frame parameters. At the auto-leveler draw frame (RSB-D 40) the thickness variations in the fed sliver are continually monitored by a mechanical device (a tongue-groove roll) and subsequently converted
into electrical signals. The measured values are transmitted to an electronic memory with a variable, time-delayed response. The time delay allows the draft between the mid-roll and the delivery roll of the draw frame to be adjusted at exactly the moment when the defective sliver piece, which had been measured by a pair of scanning rollers, finds itself at the point of draft. At this point, a servo motor operates depending upon the amount of variation detected in the sliver piece. The distance that separates the scanning roller pair and the point of draft is called the zero point of regulation, or the leveling action point (LAP), as shown in Figure 1. This leads to the calculated correction being applied to the corresponding defective material [2, 3]. In auto-leveling draw frames, especially in the case of a change of fiber material or batches, the machine settings and process-controlling parameters must be optimized. The LAP is the most important auto-leveling parameter and is influenced by various parameters such as feeding speed, material, break draft gauge, main draft gauge, feeding tension, break draft, and the setting of the sliver guiding rollers.

Figure 1: Schematic diagram of an auto-leveler drawing frame.

Previously, the sliver samples had to be produced with different settings, taken to the laboratory, and examined on the evenness tester until the optimum LAP was found (manual search). The auto-leveler draw frame RSB-D 40 implements an automatic search function for the optimum determination of the LAP. During this function, the sliver is automatically scanned by temporarily adjusting different LAPs, and the resulting values are recorded. During this process, the quality parameters are constantly monitored, and an algorithm automatically calculates the optimum LAP by selecting the point with the minimum sliver CV%. At present a search range of 120 mm is scanned, i.e. 21 points are examined using 100 m of sliver in each case; therefore 2100 m of sliver is necessary to carry out the search function.
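The arithmetic of this search function can be sketched as follows. The figures (a 120 mm search range, 21 candidate points, 100 m of sliver per point) come from the text; the function names and the toy CV% values are illustrative assumptions, not the machine's API or measurements.

```python
# Sketch of the automatic LAP search described above.

def lap_candidates(search_range_mm=120.0, points=21):
    """Evenly spaced candidate leveling action points across the search range."""
    step = search_range_mm / (points - 1)
    return [round(i * step, 1) for i in range(points)]

def sliver_needed_m(points=21, sliver_per_point_m=100):
    """Total sliver consumed by the search function."""
    return points * sliver_per_point_m

candidates = lap_candidates()
print(len(candidates), sliver_needed_m())  # 21 2100

# The optimum LAP is the candidate with the minimum measured sliver CV%;
# measured_cv is only a stand-in for the values recorded by the machine.
measured_cv = {lap: 3.2 + abs(lap - 66.0) / 50.0 for lap in candidates}
optimum_lap = min(measured_cv, key=measured_cv.get)
print(optimum_lap)  # 66.0 for this toy data
```

The 2100 m figure makes the cost of the full scan explicit, which is the motivation for narrowing the search range in the remainder of the paper.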
This is a very time-consuming method, accompanied by material and production losses, and hence directly affects the cost parameters. In this work, we have tried to find out the possibility of predicting the LAP using artificial neural networks, in order to limit the automatic search span and to reduce the above-mentioned disadvantages.

Artificial Neural Networks

The motivation for using artificial neural networks lies in their flexibility and power of information processing, which conventional computing methods do not have. The neural network system can solve a problem "by experience and learning" from the input-output patterns provided by the user. In the field of textiles, artificial neural networks (mostly using back-propagation) have been extensively studied during the last two decades [4-6]. In the field of spinning, previous research has concentrated on predicting the yarn properties and the spinning process performance using the fiber properties, or a combination of fiber properties and machine settings, as the input of the neural networks [7-12]. Back-propagation is a supervised learning technique most frequently used for artificial neural network training. The back-propagation algorithm is based on the Widrow-Hoff delta learning rule, in which the weight adjustment is carried out through the mean square error of the output response to the sample input [13]. The set of these sample patterns is repeatedly presented to the network until the error value is minimized. The back-propagation algorithm uses the steepest descent method, which is essentially a first-order method, to determine a suitable direction of gradient movement.

Overfitting

The goal of neural network training is to produce a network which produces small errors on the training set, and which also responds properly to novel inputs. When a network performs as well on novel inputs as on training set inputs, the network is said to be well generalized. The generalization capacity of the network is largely governed by the network architecture (the number of hidden neurons), and this plays a vital role during the training. A network which is not complex enough to learn all the information in the data is said to be underfitted, while a network that is too complex to fit the "noise" in the data leads to overfitting.
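The underfitting/overfitting trade-off described above can be illustrated with a small sketch, in which polynomial fits of increasing degree stand in for networks of increasing complexity; the sine data, noise level, and model family are illustrative assumptions, not the study's experiments.

```python
import numpy as np

# Overfitting sketch: fit the same noisy samples with models of increasing
# complexity, then compare errors on novel (unseen) inputs.
rng = np.random.default_rng(1)
x_train = np.linspace(-1, 1, 20)
x_novel = np.linspace(-0.95, 0.95, 20)        # unseen points between samples
y_train = np.sin(np.pi * x_train) + rng.normal(0, 0.2, 20)
y_novel = np.sin(np.pi * x_novel) + rng.normal(0, 0.2, 20)

errors = {}
for degree in (1, 3, 19):                     # underfitted, reasonable, overfitted
    coeffs = np.polyfit(x_train, y_train, degree)
    errors[degree] = (
        np.mean((np.polyval(coeffs, x_train) - y_train) ** 2),  # training error
        np.mean((np.polyval(coeffs, x_novel) - y_novel) ** 2),  # error on novel inputs
    )
    print(degree, errors[degree])
# The degree-19 model drives the training error toward zero but has fitted the
# "noise", so its error on novel inputs explodes; degree 1 is underfitted on
# both sets. The well-generalized model sits in between.
```

This is exactly the failure mode that the regularization discussed next is meant to suppress.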
"Noise" means variation in the target values that is unpredictable from the inputs of a specific network. All standard neural network architectures, such as the fully connected multi-layer perceptron, are prone to overfitting. Moreover, it is very difficult to acquire noise-free data from the spinning industry, owing to the dependence of the end products on the inherent material variations and environmental conditions, etc. Early stopping is the most commonly used technique to tackle this problem. It involves the division of the training data into three sets, i.e. a training set, a validation set and a test set, with the drawback that a large part of the data (the validation set) can never be part of the training.

Regularization

The other solution to overfitting is regularization, which is the method of improving the generalization by constraining the size of the network weights. Mackay [14] discussed a practical Bayesian framework for back-propagation networks, which consistently produced networks with good generalization. The initial objective of the training process is to minimize the sum of square errors:

    E_D = \sum_{i=1}^{n} (t_i - a_i)^2    (1)

where the t_i are the targets and the a_i are the neural network responses to the respective targets. Typically, training aims to reduce the sum of squared errors F = E_D. However, regularization adds an additional term, giving the objective function

    F = \beta E_D + \alpha E_W    (2)

In equation (2), E_W is the sum of squares of the network weights, and \alpha and \beta are objective function parameters. The relative size of the objective function parameters dictates the emphasis for training. If \alpha \gg \beta, training will emphasize weight size reduction at the expense of the network errors, thus producing a smoother network response [15]. The Bayesian school of statistics is based on a different view of what it means to learn from data, in which probability is used to represent the uncertainty about the relationship being learned. Before seeing any data, the prior opinions about what the true relationship might be can be expressed in a probability distribution over the network weights that define this relationship. After the program conceives the data, the revised opinions are captured by a posterior distribution over the network weights. Network weights that seemed plausible before, but which do not match the data very well, will now be seen as being much less likely, while the probability for values of the weights that do fit the data well will have increased [16]. In the Bayesian framework the weights of the network are considered random variables. After the data is taken, the posterior probability function for the weights can be updated according to Bayes' rule:

    P(w \mid D, \alpha, \beta, M) = \frac{P(D \mid w, \beta, M) \, P(w \mid \alpha, M)}{P(D \mid \alpha, \beta, M)}    (3)
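This Bayesian update can be illustrated numerically for a single scalar weight on a grid; the Gaussian-shaped prior, the likelihood, and the data values below are toy assumptions, not the model used in the study.

```python
import numpy as np

# Numerical illustration of Bayes' rule for network weights: posterior
# proportional to likelihood times prior, evaluated on a grid over one weight.
w = np.linspace(-3.0, 3.0, 601)                  # grid over the weight
prior = np.exp(-0.5 * w**2)                      # shape of P(w | alpha, M)
data = np.array([1.9, 2.0, 2.1])                 # toy observed targets
lik = np.exp(-0.5 * ((data[:, None] - w) ** 2).sum(axis=0))  # P(D | w, beta, M)

posterior = prior * lik
posterior /= posterior.sum()   # dividing by P(D | alpha, beta, M) normalizes

print(round(float(posterior.sum()), 6))          # 1.0 -- total probability is 1
print(round(float(w[posterior.argmax()]), 2))    # 1.5: the data pull the prior
                                                 # (peaked at 0) toward the
                                                 # best-fit weight (2.0)
```

Weights that fit the data poorly end up with low posterior probability, which is the mechanism by which the framework favors small, well-supported weights.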
In equation (3), D represents the data set, M is the particular neural network model used, and w is the vector of network weights. P(w \mid \alpha, M) is the prior probability, which represents our knowledge of the weights before any data is collected. P(D \mid w, \beta, M) is the likelihood function, which is the probability of the data occurring, given the weights w. P(D \mid \alpha, \beta, M) is a normalization factor, which guarantees that the total probability is 1 [15]. In this study, we employed the MATLAB Neural Networks Toolbox function "trainbr", which incorporates the Levenberg-Marquardt algorithm and the Bayesian regularization theorem (or Bayesian learning) into back-propagation, to train the neural networks, to reduce the computational overhead of the approximation of the Hessian matrix, and to produce good generalization capabilities. This algorithm provides a measure of the network parameters (weights and biases) being effectively used by the network. The effective number of parameters should remain the same, irrespective of the total number of parameters in the network. This eliminates the guesswork required in determining the optimum network size.

Experimental

The experimental data was obtained from Rieter, Ingolstadt, the manufacturer of the draw frame RSB-D 40 [17]. For these experiments, the material selection and experimental design were based on the frequency of particular material use in the spinning industry. For example, carded cotton is the most frequently used material, so it was used as a standard, and the experiments were performed on carded cotton with all possible settings, which was not the case with the other materials. Also, owing to the fact that not all the materials could be processed with the same roller pressure and draft settings, different spin plans were designed. The materials with their processing plans are given in Table 1. The standard procedure of acclimatization was applied to all the materials, and the standard procedure for the auto-leveling settings (sliver linear density, LAP, leveling intensity) was adopted. A comparison of manual and automatic searches was performed, and the better CV% results were achieved by the automatic search function of the RSB-D 40. Therefore the LAP searches were
accomplished by the Rieter Quality Monitor (RQM). An abstract depiction of the experimental model is shown in Figure 2.

Figure 2: Abstract neural network model.

Here the point to be considered is that there is no possibility in the machine to adjust the major LAP-influencing parameter, i.e. the feeding speed. So the feeding speed was considered to be related to the delivery speed and the number of doublings according to equation (4). The delivery speed was varied between 300 and 1100 m/min and the doublings were 5 to 7, to achieve different values of the feeding speed:

    Feeding speed = (Delivered count \times Delivery speed) / (Doublings \times Feed count)    (4)

Training and Testing Sets

For training the neural networks, the experimental data was divided into three phases. The first phase included the experimental data for the initial compilation of the data and subsequent analysis. The prior knowledge regarding the parameters influencing the LAP, i.e. feeding speed, delivery speed, break draft, gauges of break and main draft, and the settings of the sliver guide, was used to select the data. So the first phase contained the experiments in which the standard settings were taken as a foundation and then one LAP-influencing parameter was changed in each experiment. In the second phase, the experiments were selected in which more than one influencing parameter was changed, and the network was allowed to learn the complex interactions. This selection was made on the basis of the ascertained influencing parameters, with the aim of increasing or decreasing the LAP length. The third phase involved the experiments conducted on the pilot-scale machine. These pilot-scale experiments were carried out by the machine manufacturer to get the response for different settings. So these results were selected to assess the performance of the neural networks.

Pre-processing of Data

Normalizing the input and target variables tends to make the training process better behaved by improving the numerical condition of the problem. It can also make training faster and reduce the chances of getting stuck in local minima.
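Min-max scaling of this kind can be sketched as follows; the delivery-speed values are illustrative (chosen to span the 300-1100 m/min range mentioned above), not the experimental draw-frame data.

```python
import numpy as np

# Sketch of min-max normalization for pre-processing: map each variable
# linearly onto a fixed target interval before training.
def scale(x, lo=0.0, hi=1.0):
    """Linearly map the values of x onto the interval [lo, hi]."""
    xmin, xmax = x.min(), x.max()
    return lo + (hi - lo) * (x - xmin) / (xmax - xmin)

delivery_speed = np.array([300.0, 500.0, 700.0, 900.0, 1100.0])  # m/min
scaled_01 = scale(delivery_speed)              # scaled to [0, 1]
scaled_sym = scale(delivery_speed, -1.0, 1.0)  # scaled to [-1, 1]
print(scaled_01.tolist())   # [0.0, 0.25, 0.5, 0.75, 1.0]
print(scaled_sym.tolist())  # [-1.0, -0.5, 0.0, 0.5, 1.0]
```

Both target intervals are cheap to compute, so trying each and keeping the one that trains better (as done in this study) costs little.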
Hence, for the neural network training, because of the large spans of the network input data, the inputs and targets were scaled for better performance. At first the inputs and targets were normalized to the interval [-1, 1], which did not show any promising results. Afterwards the data was normalized to the interval [0, 1], and the networks were trained with success.

Neural Network Training

We trained five different neural networks to predict the LAP by gradually increasing the number of training data sets. The data sets were divided into training and test sets as shown in Table 2. The training was performed with the training sets, and the test sets were reserved to judge the prediction performance of the neural network in the form of error. Figure 3 depicts the training performance of the neural network NN5.

Figure 3: Training performance of NN5.

Results and Discussion

As already mentioned in Table 2, different combinations of the data sets were used to train five networks, keeping a close look at their test performance, i.e. the performance on the unseen data. As Bayesian regularization is said to eliminate the guesswork about the appropriate number of hidden layers and hidden neurons, numbers of hidden neurons ranging from 14 to 22, in two layers, were selected to train the networks. The basic idea behind the selection of the network topology is that the network should be complex enough to learn all the relationships in the data, as the possibility of overfitting was tackled with regularization. The behavior during training and the performance on the test sets can be seen in Figures 4-8.

Figure 4: Testing performance of NN1.
Figure 5: Testing performance of NN2.
Figure 6: Testing performance of NN3.
Figure 7: Testing performance of NN4.
Figure 8: Testing performance of NN5.

The mean absolute error was calculated to compare the predicted and actual values, both during training and testing, and the results are presented in Figure 9. The overall results can be explained by considering the presence or absence of input-output interactions in the training data, and by the increase in the prediction performance with the increase of training data
sets. Figure 9 clarifies the increase in the testing error, even with an increase in the number of training data sets, when training and testing were performed on data sets from different phases (NN2 & NN4, see Table 2). However, the error showed a downwards trend when part of the phase data was used to train the network and the remainder was used for testing, as in NN3 and NN5 in comparison with NN2 and NN4, respectively. The presence of different input-output interactions in the different phases explains this trend. The exceptional behavior of NN1 with respect to the above-mentioned fact is attributed to the relatively small number of data sets in phase 1.

Figure 9: Error comparison between training and test sets.

In order to assess the goodness of fit of NN5, a 10-fold cross-validation was performed, i.e. using 90% of the data for training and 10% for testing, repeating the training 10 times and testing the network each time on the 10% of unseen data. An average R^2 = 0.9622 was reported. The same procedure was adopted for 80% training and 20% test sets, and the calculated value of R^2 was 0.9470. This decrease in the performance is due to only 80% of the data sets being available for training. However, these values confirm a very good fit of the NN5 model.

Conclusion

The artificial neural network model was developed and the networks were trained at the Institute of Textile and Clothing Technology, Technische Universität Dresden. The use of Bayesian regularization to reduce the testing error for practical applications has shown quite promising results. From the testing performance of NN5 as shown in Table 3, a maximum deviation of about 2 mm is observable, which falls well within the 3 mm negligible range for the determination of the LAP. This leads to the conclusion that neural networks can be applied in the future for quick computation of the LAP, with the advantages of fast adjustment and savings of material and time. The accuracy in computation can lead to better sliver CV% and better yarn quality.

Table 3: Test performance of NN5.
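The 10-fold cross-validation procedure used above to assess NN5 can be sketched as follows; here a linear least-squares fit stands in for the neural network, and the data is synthetic, so the R^2 value is only illustrative.

```python
import numpy as np

# Sketch of k-fold cross-validation: train on 90% of the data, score R^2 on
# the held-out 10%, repeat 10 times, and average the scores.
rng = np.random.default_rng(42)
X = np.c_[rng.uniform(0, 1, 100), np.ones(100)]          # feature plus bias
y = 3.0 * X[:, 0] + 0.5 + rng.normal(0, 0.05, 100)       # toy targets

def r_squared(y_true, y_pred):
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

folds = np.array_split(rng.permutation(100), 10)         # ten disjoint test folds
scores = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(100), test_idx)
    w, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
    scores.append(r_squared(y[test_idx], X[test_idx] @ w))

print(round(float(np.mean(scores)), 3))  # close to 1 for this low-noise toy data
```

Because every sample is held out exactly once, the averaged R^2 uses all the data for testing without ever scoring a model on its own training points.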