Data Preprocessing
School of Software, Nanjing University
Knowledge Discovery in Databases

Chapter 3: Data Preprocessing
- Why preprocess the data?
- Data cleaning
- Data integration and transformation
- Data reduction
- Discretization and concept hierarchy generation
- Summary

Why Data Preprocessing?
- Data in the real world is dirty:
  - incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
  - noisy: containing errors or outliers
  - inconsistent: containing discrepancies in codes or names
- No quality data, no quality mining results!
  - Quality decisions must be based on quality data
  - A data warehouse needs consistent integration of quality data

Major Tasks in Data Preprocessing
- Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration: integration of multiple databases, data cubes, or files
- Data transformation: normalization and aggregation
- Data reduction: obtains a reduced representation that is much smaller in volume but produces the same or similar analytical results
- Data discretization: part of data reduction, but of particular importance, especially for numerical data

Forms of data preprocessing
(figure illustrating the forms of data preprocessing: cleaning, integration, transformation, and reduction)

Data Cleaning
- Data cleaning tasks:
  - Fill in missing values
  - Identify outliers and smooth out noisy data
  - Correct inconsistent data

Missing Data
- Data is not always available; e.g., many tuples have no recorded value for several attributes, such as customer income in sales data
- Missing data may be due to:
  - equipment malfunction
  - being inconsistent with other recorded data and thus deleted
  - data not entered due to misunderstanding
  - certain data not being considered important at the time of entry
  - history or changes of the data not being registered
- Missing data may need to be inferred

How to Handle Missing Data?
- Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
- Fill in the missing value manually: tedious + infeasible?
- Use a global constant to fill in the missing value: e.g., "unknown", a new class?!
- Use the attribute mean to fill in the missing value
- Use the attribute mean for all samples belonging to the same class: smarter
- Use the most probable value to fill in the missing value: inference-based, such as a Bayesian formula or a decision tree

Noisy Data
- Noise: random error or variance in a measured variable
- Incorrect attribute values may be due to:
  - faulty data collection instruments
  - data entry problems
  - data transmission problems
  - technology limitations (e.g., input cache capacity)
  - inconsistency in naming conventions
- Other data problems which require data cleaning:
  - duplicate records
  - incomplete data
  - inconsistent data

How to Handle Noisy Data?
- Binning method: first sort the data and partition it into (equi-depth) bins; then smooth by bin means, bin medians, bin boundaries, etc.
- Clustering: detect and remove outliers
- Combined computer and human inspection: detect suspicious values and have them checked by a human
- Regression: smooth by fitting the data to regression functions

Simple Discretization Methods: Binning
- Equal-width (distance) partitioning:
  - Divides the range into N intervals of equal size: a uniform grid
  - If A and B are the lowest and highest values of the attribute, the width of the intervals is W = (B - A) / N
  - The most straightforward method, but outliers may dominate the presentation; skewed data is not handled well
- Equal-depth (frequency) partitioning:
  - Divides the range into N intervals, each containing approximately the same number of samples
  - Good data scaling
  - Managing categorical attributes can be tricky

Binning Methods for Data Smoothing
* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into (equi-depth) bins:
  - Bin 1: 4, 8, 9, 15
  - Bin 2: 21, 21, 24, 25
  - Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
  - Bin 1: 9, 9, 9, 9
  - Bin 2: 23, 23, 23, 23
  - Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
  - Bin 1: 4, 4, 4, 15
  - Bin 2: 21, 21, 25, 25
  - Bin 3: 26, 26, 26, 34
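
As a concrete illustration of the binning example above, here is a minimal Python sketch of equi-depth binning with smoothing by bin means and by bin boundaries; the helper functions are illustrative, not taken from any particular library.

```python
# Equi-depth binning and smoothing, reproducing the price example from the slide.

def equi_depth_bins(values, n_bins):
    """Sort the values and split them into n_bins bins of (roughly) equal size.
    Any leftover values (len not divisible by n_bins) are dropped in this simple sketch."""
    data = sorted(values)
    size = len(data) // n_bins
    return [data[i * size:(i + 1) * size] for i in range(n_bins)]

def smooth_by_means(bins):
    """Replace every value in a bin by the (rounded) bin mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace every value by the closer of the two bin boundaries (bin min or bin max)."""
    return [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equi_depth_bins(prices, 3)
print(bins)                        # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```
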
Cluster Analysis
(figure: data points grouped into clusters; outliers fall outside all clusters)

Regression
(figure: data points in the x-y plane fitted by the line y = x + 1)

Data Integration
- Data integration: combines data from multiple sources into a coherent store
- Schema integration:
  - Integrate metadata from different sources
  - Entity identification problem: identify real-world entities across multiple data sources, e.g., A.cust-id vs. B.cust-#
- Detecting and resolving data value conflicts:
  - For the same real-world entity, attribute values from different sources may differ
  - Possible reasons: different representations, different scales, e.g., metric vs. British units

Handling Redundant Data in Data Integration
- Redundant data occur often when integrating multiple databases:
  - The same attribute may have different names in different databases
  - One attribute may be a "derived" attribute in another table, e.g., annual revenue
- Redundant data may be detected by correlation analysis
- Careful integration of data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality

Data Transformation
- Smoothing: remove noise from the data
- Aggregation: summarization, data cube construction
- Generalization: concept hierarchy climbing
- Normalization: scale values to fall within a small, specified range
  - min-max normalization
  - z-score normalization
  - normalization by decimal scaling
- Attribute/feature construction: new attributes constructed from the given ones

Data Transformation: Normalization
- Min-max normalization: v' = (v - min_A) / (max_A - min_A) * (new_max_A - new_min_A) + new_min_A
- Z-score normalization: v' = (v - mean_A) / stddev_A
- Normalization by decimal scaling: v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1

Samples
- Min-max normalization: the attribute income has values from 12,000 to 98,000; to map the value 73,600 onto the new range [0.0, 1.0]: (73,600 - 12,000) / (98,000 - 12,000) * (1.0 - 0) = 0.716
- Z-score normalization: if the mean of income is 54,000 and the standard deviation is 16,000, then (73,600 - 54,000) / 16,000 = 1.225
- Normalization by decimal scaling: given an attribute A with values from 986 to 987, the maximum absolute value is 987, so j = 3 (that is, divide by 1,000), and 986 is transformed to 0.986
- (A code sketch of these three methods appears a few slides below.)

Data Reduction Strategies
- A warehouse may store terabytes of data: complex data analysis/mining may take a very long time to run on the complete data set
- Data reduction: obtains a reduced representation of the data set that is much smaller in volume, yet produces the same (or almost the same) analytical results
- Data reduction strategies:
  - Data cube aggregation
  - Dimensionality reduction
  - Numerosity reduction
  - Discretization and concept hierarchy generation

Data Cube Aggregation
- The lowest level of a data cube: the aggregated data for an individual entity of interest, e.g., a customer in a phone-calling data warehouse
- Multiple levels of aggregation in data cubes further reduce the size of the data to deal with
- Reference appropriate levels: use the smallest representation that is sufficient to solve the task
- Queries regarding aggregated information should be answered using the data cube, when possible

Dimensionality Reduction
- Feature selection (i.e., attribute subset selection):
  - Select a minimum set of features such that the probability distribution of the different classes, given the values of those features, is as close as possible to the original distribution given the values of all features
  - Reduces the number of patterns found, making them easier to understand
- There are 2^d possible sub-feature sets of d features
- Testing all these subsets is practically impossible if the number of features is very large, so heuristic methods are often used to solve the problem

Heuristic Feature Selection Methods
- Several heuristic feature selection methods:
  - Best single features under the feature independence assumption: choose by significance tests
  - Best step-wise feature selection: the best single feature is picked first; then the next best feature conditioned on the first, and so on
  - Step-wise feature elimination: repeatedly eliminate the worst feature
  - Best combined feature selection and elimination
  - Decision tree induction: ID3, C4.5, etc.
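
The three normalization methods from the Samples slide above can be checked with a short Python sketch; the function names are illustrative assumptions, and the numbers come from the slide.

```python
# Min-max, z-score, and decimal-scaling normalization, checked against the Samples slide.

import math

def min_max(v, old_min, old_max, new_min=0.0, new_max=1.0):
    """Min-max normalization: rescale v from [old_min, old_max] to [new_min, new_max]."""
    return (v - old_min) / (old_max - old_min) * (new_max - new_min) + new_min

def z_score(v, mean, std):
    """Z-score normalization: number of standard deviations v lies from the mean."""
    return (v - mean) / std

def decimal_scaling(v, max_abs):
    """Decimal scaling: divide by 10^j, with j the smallest integer s.t. max(|v'|) < 1."""
    j = math.floor(math.log10(max_abs)) + 1
    return v / (10 ** j)

print(round(min_max(73600, 12000, 98000), 3))  # 0.716
print(round(z_score(73600, 54000, 16000), 3))  # 1.225
print(decimal_scaling(986, 987))               # 0.986
```
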
Example of Decision Tree Induction
- Initial attribute set: {A1, A2, A3, A4, A5, A6}
- (figure: an induced decision tree that tests A4?, then A1? and A6?, with leaves labeled Class 1 and Class 2)
- Reduced attribute set: {A1, A4, A6}

Data Compression
- String compression:
  - There are extensive theories and well-tuned algorithms
  - Typically lossless
  - But only limited manipulation is possible without expansion
- Audio/video compression:
  - Typically lossy compression, with progressive refinement
  - Sometimes small fragments of the signal can be reconstructed without reconstructing the whole

Data Compression
(figure: lossless compression maps original data to compressed data and back exactly; lossy compression recovers only an approximation of the original data)

Numerosity Reduction
- Parametric methods:
  - Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
  - Regression
  - Log-linear models: obtain the value at a point in m-D space as a product over appropriate marginal subspaces
- Non-parametric methods:
  - Do not assume models
  - Major families: histograms, clustering, sampling

Regression and Log-Linear Models
- Linear regression: data are modeled to fit a straight line; often uses the least-squares method to fit the line
- Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
- Log-linear model: approximates discrete multidimensional probability distributions

Regression Analysis and Log-Linear Models (cont.)
- Linear regression: Y = α + βX
  - The two parameters α and β specify the line and are estimated from the data at hand, using the least-squares criterion on the known values of Y1, Y2, ..., X1, X2, ...
- Multiple regression: Y = b0 + b1 X1 + b2 X2 + ...
  - Many nonlinear functions can be transformed into this form
- Log-linear models: the multi-way table of joint probabilities is approximated by a product of lower-order tables
  - Probability: p(a, b, c, d) = α_ab β_ac χ_ad δ_bcd

Histograms
- A popular data reduction technique
- Divide the data into buckets and store the average (or sum) for each bucket
- Can be constructed optimally in one dimension using dynamic programming
- Related to quantization problems

Clustering
- Partition the data set into clusters, and store only the cluster representations
- Can be very effective if the data is clustered, but not if the data is "smeared"
- Can use hierarchical clustering, stored in multi-dimensional index tree structures
- There are many choices of clustering definitions and clustering algorithms, further detailed in Chapter 8

Sampling
- Allows a mining algorithm to run in complexity that is potentially sub-linear in the size of the data
- Choose a representative subset of the data
  - Simple random sampling may have very poor performance in the presence of skew
- Develop adaptive sampling methods
  - Stratified sampling: approximate the percentage of each class (or subpopulation of interest) in the overall database; used in conjunction with skewed data
- (A code sketch of these sampling schemes appears later in this section.)

Sampling
(figure: SRSWOR (simple random sample without replacement) and SRSWR drawn from the raw data)

Sampling
(figure: raw data versus a cluster/stratified sample)

Discretization
- Three types of attributes:
  - Nominal: values from an unordered set
  - Ordinal: values from an ordered set
  - Continuous: real numbers
- Discretization: divide the range of a continuous attribute into intervals
  - Some classification algorithms only accept categorical attributes
  - Reduce data size by discretization
  - Prepare for further analysis

Discretization and Concept Hierarchy
- Discretization: reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals; interval labels can then be used to replace actual data values
- Concept hierarchies: reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior)
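
Returning to the sampling slides above, here is a minimal Python sketch of simple random sampling without replacement (SRSWOR), with replacement (SRSWR), and stratified sampling; it uses only the standard library, and the function names and the toy customer data are illustrative assumptions.

```python
# Simple random and stratified sampling for numerosity reduction.

import random
from collections import defaultdict

def srswor(data, n):
    """Simple random sample of n items without replacement (SRSWOR)."""
    return random.sample(data, n)

def srswr(data, n):
    """Simple random sample of n items with replacement (SRSWR)."""
    return [random.choice(data) for _ in range(n)]

def stratified_sample(records, key, fraction):
    """Draw the same fraction from every stratum (e.g., every class label),
    so a skewed class distribution is approximately preserved."""
    strata = defaultdict(list)
    for r in records:
        strata[key(r)].append(r)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(random.sample(group, k))
    return sample

print(srswor([4, 8, 15, 16, 23, 42], 3))  # e.g. [16, 42, 4] -- distinct items
print(srswr([4, 8, 15, 16, 23, 42], 3))   # e.g. [8, 8, 23]  -- duplicates possible

# Toy data: 10% "high"-value customers, 90% "low"-value customers.
customers = [{"id": i, "cls": "high" if i % 10 == 0 else "low"} for i in range(100)]
sample = stratified_sample(customers, key=lambda r: r["cls"], fraction=0.2)
print(len(sample), sum(1 for r in sample if r["cls"] == "high"))  # 20 2
```

And to illustrate the concept-hierarchy idea in the slide just above, here is a small sketch that replaces raw age values with the higher-level concepts young / middle-aged / senior; the cut points are illustrative assumptions, not taken from the slides.

```python
# Replace numeric age values by interval labels from a simple concept hierarchy.

def age_concept(age):
    """Map a raw age to a higher-level concept using fixed (assumed) interval boundaries."""
    if age < 35:
        return "young"
    elif age < 60:
        return "middle-aged"
    return "senior"

ages = [23, 31, 45, 52, 67, 70]
print([age_concept(a) for a in ages])
# ['young', 'young', 'middle-aged', 'middle-aged', 'senior', 'senior']
```
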
Discretization and Concept Hierarchy Generation for Numeric Data
- Binning (see the sections before)
- Histogram analysis (see the sections before)
- Clustering analysis (see the sections before)
- Entropy-based discretization (introduced below)
- Segmentation by natural partitioning

Entropy-Based Discretization
- Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
  E(S, T) = (|S1| / |S|) * Ent(S1) + (|S2| / |S|) * Ent(S2)
- The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
- The process is recursively applied to the partitions obtained until some stopping criterion is met, e.g., the information gain from further splitting falls below a threshold
- Experiments show that it may reduce data size and improve classification accuracy
- See the chapter "Concept Description and Discrimination Mining"
- (A code sketch of the binary split appears at the end of this section.)

Segmentation by Natural Partitioning
- The 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals:
  * If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
  * If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
  * If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals

Example of the 3-4-5 Rule
(figure: step-by-step partitioning of the range -$400 to $5,000 by the 3-4-5 rule)

Concept Hierarchy Generation for Categorical Data
- Specification of a partial ordering of attributes explicitly at the schema level by users or experts
- Specification of a portion of a hierarchy by explicit data grouping
- Specification of a set of attributes, but not of their partial ordering
- Specification of only a partial set of attributes

Specification of a Set of Attributes
- Concept hierarchy ...
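
As referenced from the Entropy-Based Discretization slide, here is a minimal Python sketch of choosing the binary split boundary T that minimizes the class-information entropy E(S, T); the toy data and function names are illustrative assumptions.

```python
# Entropy-based binary discretization: pick the boundary T minimizing E(S, T).

import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    h = 0.0
    for count in Counter(labels).values():
        p = count / total
        h -= p * math.log2(p)
    return h

def best_split(samples):
    """Return (T, E(S, T)), where T is the boundary (a midpoint between adjacent distinct
    values) minimizing the weighted entropy of S1 (value <= T) and S2 (value > T)."""
    samples = sorted(samples)                 # sort by attribute value
    values = [v for v, _ in samples]
    labels = [c for _, c in samples]
    n = len(samples)
    best_t, best_e = None, float("inf")
    for i in range(1, n):
        if values[i] == values[i - 1]:
            continue                          # no boundary between equal values
        t = (values[i] + values[i - 1]) / 2
        e = (i / n) * entropy(labels[:i]) + ((n - i) / n) * entropy(labels[i:])
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e

# Toy samples of (attribute value, class label); the best boundary separates the classes.
data = [(10, "no"), (15, "no"), (22, "no"), (35, "yes"), (40, "yes"), (55, "yes")]
print(best_split(data))  # (28.5, 0.0) -- a pure split, so the weighted entropy is 0
```
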
