Foreign Literature Translation: Machine-Learning Research
Machine-Learning Research: Four Current Directions
Thomas G. Dietterich

Machine-learning research has been making great progress in many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (1) the improvement of classification accuracy by learning ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models.

The last five years have seen an explosion in machine-learning research. This explosion has many causes: First, separate research communities in symbolic machine learning, computational learning theory, neural networks, statistics, and pattern recognition have discovered one another and begun to work together. Second, machine-learning techniques are being applied to new kinds of problems, including knowledge discovery in databases, language processing, robot control, and combinatorial optimization, as well as to more traditional problems such as speech recognition, face recognition, handwriting recognition, medical data analysis, and game playing.

In this article, I selected four topics within machine learning where there has been a lot of recent activity. The purpose of the article is to describe the results in these areas to a broader AI audience and to sketch some of the open research problems. The topic areas are (1) ensembles of classifiers, (2) methods for scaling up supervised learning algorithms, (3) reinforcement learning, and (4) the learning of complex stochastic models. The reader should be cautioned that this article is not a comprehensive review of each of these topics. Rather, my goal is to provide a representative sample of the research in each of these four areas. In each of the areas, there are many other papers that describe relevant work. I apologize to those authors whose work I was unable to include in the article.

Ensembles of Classifiers

The first topic concerns methods for improving accuracy in supervised learning. I begin by introducing some notation. In supervised learning, a learning program is given training examples of the form (x_1, y_1), ..., (x_m, y_m) for some unknown function y = f(x). The x_i values are typically vectors of the form <x_{i,1}, x_{i,2}, ..., x_{i,n}> whose components are discrete or real valued, such as height, weight, color, and age. These are also called the features of x_i. I use the notation x_{ij} to refer to the jth feature of x_i. In some situations, I drop the i subscript when it is implied by the context.

The y values are typically drawn from a discrete set of classes {1, ..., K} in the case of classification or from the real line in the case of regression. In this article, I focus primarily on classification. The training examples might be corrupted by some random noise.

Given a set S of training examples, a learning algorithm outputs a classifier. The classifier is a hypothesis about the true function f. Given new x values, it predicts the corresponding y values. I denote classifiers by h_1, ..., h_L. An ensemble of classifiers is a set of classifiers whose individual decisions are combined in some way (typically by weighted or unweighted voting) to classify new examples. One of the most active areas of research in supervised learning has been the study of methods for constructing good ensembles of classifiers. The main discovery is that ensembles are often much more accurate than the individual classifiers that make them up.

An ensemble can be more accurate than its component classifiers only if the individual classifiers disagree with one another (Hansen and Salamon 1990). To see why, imagine that we have an ensemble of three classifiers {h_1, h_2, h_3} and consider a new case x. If the three classifiers are identical, then when h_1(x) is wrong, h_2(x) and h_3(x) are also wrong. However, if the errors made by the classifiers are uncorrelated, then when h_1(x) is wrong, h_2(x) and h_3(x) might be correct, so that a majority vote correctly classifies x. More precisely, if the error rates of L hypotheses h_l are all equal to p < 1/2 and if the errors are independent, then the probability that the majority vote is wrong is the area under the binomial distribution where more than L/2 hypotheses are wrong. Figure 1 shows this area for a simulated ensemble of 21 hypotheses, each having an error rate of 0.3. The area under the curve for 11 or more hypotheses being simultaneously wrong is 0.026, which is much less than the error rate of the individual hypotheses. Of course, if the individual hypotheses make uncorrelated errors at rates exceeding 0.5, then the error rate of the voted ensemble increases as a result of the voting. Hence, the key to successful ensemble methods is to construct individual classifiers with error rates below 0.5 whose errors are at least somewhat uncorrelated.
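To make the arithmetic concrete, the short sketch below recomputes the figure-1 number directly from the binomial distribution. It is plain Python; the function name is mine and does not come from the article.

```python
from math import comb

def majority_vote_error(L: int, p: float) -> float:
    """Probability that more than L/2 of L independent hypotheses,
    each wrong with probability p, are simultaneously wrong."""
    k_min = L // 2 + 1  # smallest number of wrong votes that defeats the majority
    return sum(comb(L, k) * p**k * (1 - p)**(L - k) for k in range(k_min, L + 1))

print(majority_vote_error(21, 0.3))  # ~0.026, versus 0.3 for a single hypothesis
```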
Methods for Constructing Ensembles

Many methods for constructing ensembles have been developed. Some methods are general, and they can be applied to any learning algorithm. Other methods are specific to particular algorithms. I begin by reviewing the general techniques.

Subsampling the Training Examples

The first method manipulates the training examples to generate multiple hypotheses. The learning algorithm is run several times, each time with a different subset of the training examples. This technique works especially well for unstable learning algorithms: algorithms whose output classifier undergoes major changes in response to small changes in the training data. Decision-tree, neural-network, and rule-learning algorithms are all unstable. Linear-regression, nearest-neighbor, and linear-threshold algorithms are generally stable.

The most straightforward way of manipulating the training set is called bagging. On each run, bagging presents the learning algorithm with a training set that consists of a sample of m training examples drawn randomly with replacement from the original training set of m items. Such a training set is called a bootstrap replicate of the original training set, and the technique is called bootstrap aggregation (Breiman 1996a). Each bootstrap replicate contains, on the average, 63.2 percent of the original training set, with several training examples appearing multiple times.

A second training-set sampling method is to construct the training sets by leaving out disjoint subsets of the training data. For example, the training set can be randomly divided into 10 disjoint subsets. Then, 10 overlapping training sets can be constructed by dropping out a different one of these 10 subsets. This same procedure is used to construct training sets for tenfold cross-validation; so, ensembles constructed in this way are sometimes called cross-validated committees (Parmanto, Munro, and Doyle 1996).

The third method for manipulating the training set is illustrated by the ADABOOST algorithm, developed by Freund and Schapire (1996, 1995) and shown in figure 2. Like bagging, ADABOOST manipulates the training examples to generate multiple hypotheses. ADABOOST maintains a probability distribution p_l(x) over the training examples. In each iteration l, it draws a training set of size m by sampling with replacement according to the probability distribution p_l(x). The learning algorithm is then applied to produce a classifier h_l. The error rate e_l of this classifier on the training examples (weighted according to p_l(x)) is computed and used to adjust the probability distribution on the training examples. (In figure 2, note that the probability distribution is obtained by normalizing a set of weights w_l(i) over the training examples.) The effect of the change in weights is to place more weight on examples that were misclassified by h_l and less weight on examples that were correctly classified. In subsequent iterations, therefore, ADABOOST constructs progressively more difficult learning problems.

The final classifier, h_f, is constructed by a weighted vote of the individual classifiers. Each classifier is weighted according to its accuracy for the distribution p_l that it was trained on.

In line 4 of the ADABOOST algorithm (figure 2), the base learning algorithm Learn is called with the probability distribution p_l. If the learning algorithm Learn can use this probability distribution directly, then this procedure generally gives better results. For example, Quinlan (1996) developed a version of the decision-tree learning program C4.5 that works with a weighted training sample. His experiments showed that it worked extremely well. One can also imagine versions of backpropagation that scaled the computed output error for training example (x_i, y_i) by the weight p_l(i). Errors for important training examples would cause larger gradient-descent steps than errors for unimportant (low-weight) examples.

However, if the algorithm cannot use the probability distribution p_l directly, then a training sample can be constructed by drawing a random sample with replacement in proportion to the probabilities p_l. This procedure makes ADABOOST more stochastic, but experiments have shown that it is still effective.
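Because figure 2 is not reproduced in this translation, the following is a minimal sketch of the resampling variant just described, assuming scikit-learn's DecisionTreeClassifier as a stand-in for the base learner Learn (the article's experiments used C4.5); the function names are mine, not the article's. Note that if the weight update is deleted, so that p_l stays uniform, the same resampling loop reduces to bagging.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_m1(X, y, rounds=10, seed=0):
    """AdaBoost.M1 with resampling: each round draws a sample of size m
    according to the current distribution p_l, trains a base classifier,
    and shrinks the weights of the examples it classified correctly."""
    rng = np.random.default_rng(seed)
    m = len(y)
    w = np.ones(m)                    # weights w_l(i), initially uniform
    hypotheses, betas = [], []
    for _ in range(rounds):
        p = w / w.sum()               # the distribution p_l(x)
        idx = rng.choice(m, size=m, replace=True, p=p)
        h = DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx])
        wrong = h.predict(X) != y
        eps = p[wrong].sum()          # weighted training error e_l
        if eps == 0.0 or eps >= 0.5:  # stop if the learner is perfect or too weak
            break
        beta = eps / (1.0 - eps)
        w[~wrong] *= beta             # downweight correctly classified examples
        hypotheses.append(h)
        betas.append(beta)
    return hypotheses, betas

def predict_weighted_vote(hypotheses, betas, X, classes):
    """The final classifier h_f: each h_l votes with weight log(1/beta_l)."""
    votes = np.zeros((len(X), len(classes)))
    for h, beta in zip(hypotheses, betas):
        pred = h.predict(X)
        for j, c in enumerate(classes):
            votes[pred == c, j] += np.log(1.0 / beta)
    return np.asarray(classes)[votes.argmax(axis=1)]
```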
Figure 3 compares the performance of C4.5 to C4.5 with ADABOOST.M1 (using random sampling). One point is plotted for each of 27 test domains taken from the Irvine repository of machine-learning databases (Merz and Murphy 1996). We can see that most points lie above the line y = x, which indicates that the error rate of ADABOOST is less than the error rate of C4.5. Figure 4 compares the performance of bagging (with C4.5) to C4.5 alone. Again, we see that bagging produces sizable reductions in the error rate of C4.5 for many problems. Finally, figure 5 compares bagging with boosting (both using C4.5 as the underlying algorithm). The results show that the two techniques are comparable, although boosting appears to still have an advantage over bagging.

Manipulating the Input Features

A second general technique for generating multiple classifiers is to manipulate the set of input features available to the learning algorithm. For example, in a project to identify volcanoes on Venus, Cherkauer (1996) trained an ensemble of 32 neural networks. The 32 networks were based on 8 different subsets of the 119 available input features and 4 different network sizes. The input-feature subsets were selected (by hand) to group features that were based on different image-processing operations (such as principal component analysis and the fast Fourier transform). The resulting ensemble classifier was able to match the performance of human experts in identifying volcanoes. Tumer and Ghosh (1996) applied a similar technique to a sonar data set with 25 input features. However, they found that deleting even a few of the input features hurt the performance of the individual classifiers so much that the voted ensemble did not perform well.
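As an illustration of the feature-subset idea (not Cherkauer's actual system, which used hand-picked subsets and neural networks of four sizes), here is a minimal sketch: one decision tree per column subset, combined by unweighted voting. The function names, the choice of base learner, and the subset sizes in the usage comment are all mine.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_feature_subset_ensemble(X, y, subsets):
    """Train one classifier per feature subset; `subsets` is a list of
    column-index arrays (chosen by hand in Cherkauer's project)."""
    return [(DecisionTreeClassifier().fit(X[:, cols], y), cols) for cols in subsets]

def majority_vote(ensemble, X):
    """Unweighted vote: the most common prediction wins. Assumes the
    class labels are small nonnegative integers (for np.bincount)."""
    preds = np.stack([h.predict(X[:, cols]) for h, cols in ensemble])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, preds)

# Hypothetical usage: 8 random subsets of 15 of the 25 sonar features.
# rng = np.random.default_rng(0)
# subsets = [rng.choice(25, size=15, replace=False) for _ in range(8)]
# ensemble = train_feature_subset_ensemble(X_train, y_train, subsets)
# y_pred = majority_vote(ensemble, X_test)
```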