Machine-Learning Research: Four Current Directions

Thomas G. Dietterich

Machine-learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (1) the improvement of classification accuracy by learning ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models.

The last five years have seen an explosion in machine-learning research. This explosion has many causes: First, separate research communities in symbolic machine learning, computational learning theory, neural networks, statistics, and pattern recognition have discovered one another and begun to work together. Second, machine-learning techniques are being applied to new kinds of problems, including knowledge discovery in databases, language processing, robot control, and combinatorial optimization, as well as to more traditional problems such as speech recognition, face recognition, handwriting recognition, medical data analysis, and game playing.

In this article, I selected four topics within machine learning where there has been a lot of recent activity. The purpose of the article is to describe the results in these areas to a broader AI audience and to sketch some of the open research problems. The topic areas are (1) ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models. The reader should be cautioned that this article is not a comprehensive review of each of these topics. Rather, my goal is to provide a representative sample of the research in each of these four areas. In each of the areas, there are many other papers that describe relevant work. I apologize to those authors whose work I was unable to include in the article.

Ensembles of Classifiers

The first topic concerns methods for improving accuracy in supervised learning. I begin by introducing some notation. In supervised learning, a learning program is given training examples of the form (x1, y1), ..., (xm, ym) for some unknown function y = f(x). The xi values are typically vectors whose components are discrete or real valued, such as height, weight, color, and age. These are also called the features of xi. I use the notation xij to refer to the jth feature of xi. In some situations, I drop the i subscript when it is implied by the context. The y values are typically drawn from a discrete set of classes {1, ..., K} in the case of classification or from the real line in the case of regression. In this article, I focus primarily on classification. The training examples might be corrupted by some random noise.

Given a set S of training examples, a learning algorithm outputs a classifier. The classifier is a hypothesis about the true function f. Given new x values, it predicts the corresponding y values. I denote classifiers by h1, ..., hL. An ensemble of classifiers is a set of classifiers whose individual decisions are combined in some way (typically by weighted or unweighted voting) to classify new examples. One of the most active areas of research in supervised learning has been the study of methods for constructing good ensembles of classifiers. The main discovery is that ensembles are often much more accurate than the individual classifiers that make them up.

An ensemble can be more accurate than its component classifiers only if the individual classifiers disagree with one another (Hansen and Salamon 1990). To see why, imagine that we have an ensemble of three classifiers h1, h2, h3, and consider a new case x. If the three classifiers are identical, then when h1(x) is wrong, h2(x) and h3(x) are also wrong. However, if the errors made by the classifiers are uncorrelated, then when h1(x) is wrong, h2(x) and h3(x) might be correct, so that a majority vote correctly classifies x. More precisely, if the error rates of L hypotheses hi are all equal to some p < 1/2 and if the errors are independent, then the probability that the majority vote is wrong is the area under the binomial distribution where more than L/2 hypotheses are wrong. Figure 1 shows this area for a simulated ensemble of 21 hypotheses, each having an error rate of 0.3. The area under the curve for 11 or more hypotheses being simultaneously wrong is 0.026, which is much less than the error rate of the individual hypotheses. Of course, if the individual hypotheses make uncorrelated errors at rates exceeding 0.5, then the error rate of the voted ensemble increases as a result of the voting. Hence, the key to successful ensemble methods is to construct individual classifiers with error rates below 0.5 whose errors are at least somewhat uncorrelated.

Methods for Constructing Ensembles

Many methods for constructing ensembles have been developed. Some methods are general, and they can be applied to any learning algorithm. Other methods are specific to particular algorithms. I begin by reviewing the general techniques.

Subsampling the Training Examples

The first method manipulates the training examples to generate multiple hypotheses. The learning algorithm is run several times, each time with a different subset of the training examples. This technique works especially well for unstable learning algorithms, that is, algorithms whose output classifier undergoes major changes in response to small changes in the training data. Decision-tree, neural-network, and rule-learning algorithms are all unstable. Linear-regression, nearest-neighbor, and linear-threshold algorithms are generally stable.

The most straightforward way of manipulating the training set is called bagging. On each run, bagging presents the learning algorithm with a training set that consists of a sample of m training examples drawn randomly with replacement from the original training set of m items. Such a training set is called a bootstrap replicate of the original training set, and the technique is called bootstrap aggregation (Breiman 1996a). Each bootstrap replicate contains, on the average, 63.2 percent of the original training set, with several training examples appearing multiple times.

Another training-set sampling method is to construct the training sets by leaving out disjoint subsets. For example, the training set can be divided randomly into 10 disjoint subsets. Then, 10 overlapping training sets can be constructed by dropping out a different one of these 10 subsets. This same procedure is used to construct training sets for tenfold cross-validation; so, ensembles constructed in this way are sometimes called cross-validated committees (Parmanto, Munro, and Doyle 1996).

The third method for manipulating the training set is illustrated by the ADABOOST algorithm, developed by Freund and Schapire (1996, 1995) and shown in figure 2. Like bagging, ADABOOST manipulates the training examples to generate multiple hypotheses. ADABOOST maintains a probability distribution pi(x) over the training examples. In each iteration i, it draws a training set of size m by sampling with replacement according to the probability distribution pi(x). The learning algorithm is then applied to produce a classifier hi. The error rate εi of this classifier on the training examples (weighted according to pi(x)) is computed and used to adjust the probability distribution on the training examples. (In figure 2, note that the probability distribution is obtained by normalizing a set of weights wi over the training examples.) The effect of the change in weights is to place more weight on examples that were misclassified by hi and less weight on examples that were correctly classified. In subsequent iterations, therefore, ADABOOST constructs progressively more difficult learning problems.

The final classifier, hf, is constructed by a weighted vote of the individual classifiers. Each classifier is weighted according to its accuracy for the distribution pi that it was trained on. In line 4 of the ADABOOST algorithm (figure 2), the base learning algorithm Learn is called with the probability distribution pi. If the learning algorithm Learn can use this probability distribution directly, then this procedure generally gives better results. For example, Quinlan (1996) developed a version of the decision tree-learning program c4.5 that works with a weighted training sample. His experiments showed that it worked extremely well. One can also imagine versions of backpropagation that scaled the computed output error for training example (xi, yi) by the weight pi(i). Errors for important (high-weight) training examples would cause larger gradient-descent steps than errors for unimportant (low-weight) examples. However, if the algorithm cannot use the probability distribution pi directly, then a training sample can be constructed by drawing a random sample with replacement in proportion to the probabilities pi. This procedure makes ADABOOST more stochastic, but experiments have shown that it is still effective.

Figure 3 compares the performance of c4.5 to c4.5 with ADABOOST.M1 (using random sampling). One point is plotted for each of 27 test domains taken from the Irvine repository of machine-learning databases (Merz and Murphy 1996). We can see that most points lie above the line y = x, which indicates that the error rate of ADABOOST is less than the error rate of c4.5. Figure 4 compares the performance of bagging (with c4.5) to c4.5 alone. Again, we see that bagging produces sizable reductions in the error rate of c4.5 for many problems. Finally, figure 5 compares bagging with boosting (both using c4.5 as the underlying algorithm). The results show that the two techniques are comparable, although boosting appears to still have an advantage over bagging.
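The voting argument for independent errors can be checked numerically. The short calculation below (not part of the original article) sums the binomial tail for the ensemble of 21 hypotheses with individual error rate 0.3 discussed with figure 1:

```python
from math import comb

def majority_vote_error(L, p):
    """Probability that more than L/2 of L independent classifiers,
    each with error rate p, are simultaneously wrong."""
    k_min = L // 2 + 1  # smallest number of wrong votes that flips the majority
    return sum(comb(L, k) * p**k * (1 - p)**(L - k)
               for k in range(k_min, L + 1))

# 21 hypotheses, each with error rate 0.3: the majority vote is wrong
# only when 11 or more hypotheses err simultaneously.
print(round(majority_vote_error(21, 0.3), 3))  # 0.026, the area quoted for figure 1
```

The computed tail probability matches the 0.026 figure in the text, far below the 0.3 error rate of each individual hypothesis.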
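Bagging as described above is simple enough to sketch directly. The following is a minimal illustration, not Breiman's implementation; the base learner `learn` is a placeholder for any unstable learning algorithm that maps a training sample to a classifier h(x):

```python
import random
from collections import Counter

def bagging_ensemble(learn, examples, n_classifiers=10):
    """Train each classifier on a bootstrap replicate: m examples drawn
    randomly with replacement from the original m-example training set."""
    m = len(examples)
    return [learn([random.choice(examples) for _ in range(m)])
            for _ in range(n_classifiers)]

def predict(ensemble, x):
    """Combine the individual decisions by unweighted majority vote."""
    votes = Counter(h(x) for h in ensemble)
    return votes.most_common(1)[0][0]
```

Because each replicate omits roughly 36.8 percent of the original examples, an unstable learner produces noticeably different classifiers from run to run, which is exactly the disagreement the voting argument requires.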
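The resampling variant of ADABOOST.M1 described above can be sketched as follows. This is an illustrative reconstruction from the description of figure 2, not the authors' code; the base learner `learn`, the guard against a perfect classifier, and the early exit when the weighted error reaches 0.5 are implementation assumptions:

```python
import math
import random

def adaboost(learn, examples, iterations=10):
    """ADABOOST.M1 with resampling: keep a weight per training example,
    draw each round's sample in proportion to the weights, and shift
    weight toward the examples the new classifier got wrong."""
    m = len(examples)
    w = [1.0 / m] * m                      # uniform initial distribution p1
    hypotheses, betas = [], []
    for _ in range(iterations):
        total = sum(w)
        p = [wi / total for wi in w]       # normalize weights into pi
        sample = random.choices(examples, weights=p, k=m)
        h = learn(sample)
        # weighted error of h on the full training set
        eps = sum(pi for pi, (x, y) in zip(p, examples) if h(x) != y)
        if eps > 0.5:                      # base learner too weak; stop
            break
        eps = max(eps, 1e-10)              # guard: perfect classifier
        beta = eps / (1 - eps)             # < 1: shrinks correct examples' weights
        w = [wi * (beta if h(x) == y else 1.0)
             for wi, (x, y) in zip(w, examples)]
        hypotheses.append(h)
        betas.append(beta)
    return hypotheses, betas

def boosted_predict(hypotheses, betas, x, classes):
    """Final classifier hf: weighted vote, each hi weighted by log(1/beta_i),
    i.e., according to its accuracy on the distribution it was trained on."""
    score = {c: 0.0 for c in classes}
    for h, b in zip(hypotheses, betas):
        score[h(x)] += math.log(1.0 / b)
    return max(score, key=score.get)
```

Multiplying the weights of correctly classified examples by beta < 1 and renormalizing is what concentrates the distribution on misclassified examples, producing the progressively harder learning problems described above.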