Chapter 20: Data Analysis
- Decision Support Systems
- Data Warehousing
- Data Mining
- Classification
- Association Rules
- Clustering

Decision Support Systems
- Decision-support systems are used to make business decisions, often based on data collected by online transaction-processing systems.
- Examples of business decisions:
  - What items to stock?
  - What insurance premium to charge?
  - To whom to send advertisements?
- Examples of data used for making decisions:
  - Retail sales transaction details
  - Customer profiles (income, age, gender, etc.)

Decision-Support Systems: Overview
- Data analysis tasks are simplified by specialized tools and SQL extensions
- Example tasks:
  - For each product category and each region, what were the total sales in the last quarter, and how do they compare with the same quarter last year?
  - As above, for each product category and each customer category
- Statistical analysis packages (e.g., S+) can be interfaced with databases
  - Statistical analysis is a large field, but is not covered here
- Data mining seeks to discover knowledge automatically, in the form of statistical rules and patterns, from large databases.
- A data warehouse archives information gathered from multiple sources, and stores it under a unified schema, at a single site.
  - Important for large businesses that generate data from multiple divisions, possibly at multiple sites
  - Data may also be purchased externally

Data Warehousing
- Data sources often store only current data, not historical data
- Corporate decision making requires a unified view of all organizational data, including historical data
- A data warehouse is a repository (archive) of information gathered from multiple sources, stored under a unified schema, at a single site
  - Greatly simplifies querying; permits study of historical trends
  - Shifts decision-support query load away from transaction-processing systems

Data Warehousing: Design Issues
- When and how to gather data
  - Source-driven architecture: data sources transmit new information to the warehouse, either continuously or periodically (e.g., at night)
  - Destination-driven architecture: the warehouse periodically requests new information from data sources
  - Keeping the warehouse exactly synchronized with data sources (e.g., using two-phase commit) is too expensive
    - Usually OK to have slightly out-of-date data at the warehouse
    - Data/updates are periodically downloaded from online transaction processing (OLTP) systems
- What schema to use
  - Schema integration

More Warehouse Design Issues
- Data cleansing
  - E.g., correct mistakes in addresses (misspellings, zip code errors)
  - Merge address lists from different sources and purge duplicates
- How to propagate updates
  - The warehouse schema may be a (materialized) view of the schemas of the data sources
- What data to summarize
  - Raw data may be too large to store on-line
  - Aggregate values (totals/subtotals) often suffice
  - Queries on raw data can often be transformed by the query optimizer to use aggregate values

Warehouse Schemas
- Dimension values are usually encoded using small integers and mapped to full values via dimension tables
- The resultant schema is called a star schema
- More complicated schema structures:
  - Snowflake schema: multiple levels of dimension tables
  - Constellation: multiple fact tables

Data Warehouse Schema (figure: example star schema, with a central fact table and surrounding dimension tables)

Data Mining
- Data mining is the process of semi-automatically analyzing large databases to find useful patterns
- Prediction based on past history
  - Predict whether a credit-card applicant poses a good credit risk, based on some attributes (income, job type, age, ...) and past history
  - Predict whether a pattern of phone calling-card usage is likely to be fraudulent
- Some examples of prediction mechanisms:
  - Classification: given a new item whose class is unknown, predict to which class it belongs
  - Regression formulae: given a set of mappings for an unknown function, predict the function result for a new parameter value

Data Mining (Cont.)
- Descriptive patterns
  - Associations
    - Find books that are often bought by "similar" customers. If a new such customer buys one such book, suggest the others too.
    - Associations may be used as a first step in detecting causation (e.g., an association between exposure to chemical X and cancer)
  - Clusters
    - E.g., typhoid cases were clustered in an area surrounding a contaminated well
    - Detection of clusters remains important in detecting epidemics

Classification Rules
- Classification rules help assign new objects to classes.
  - E.g., given a new automobile-insurance applicant, should he or she be classified as low risk, medium risk, or high risk?
- Classification rules for the above example could use a variety of data, such as educational level, salary, age, etc.
  - ∀ person P, P.degree = masters and P.income > 75,000 ⇒ P.credit = excellent
  - ∀ person P, P.degree = bachelors and (P.income ≥ 25,000 and P.income ≤ 75,000) ⇒ P.credit = good
- Rules are not necessarily exact: there may be some misclassifications
- Classification rules can be shown compactly as a decision tree.

Decision Tree (figure: an example decision tree)

Construction of Decision Trees
- Training set: a data sample in which the classification is already known.
- Greedy top-down generation of decision trees.
  - Each internal node of the tree partitions the data into groups, based on a partitioning attribute and a partitioning condition for the node
  - Leaf node: all (or most) of the items at the node belong to the same class, or all attributes have been considered and no further partitioning is possible.

Best Splits
- Pick the best attributes and conditions on which to partition
- The purity of a set S of training instances can be measured quantitatively in several ways.
  - Notation: number of classes = k, number of instances = |S|, fraction of instances in class i = pi.
- The Gini measure of purity is defined as

    Gini(S) = 1 − Σ_{i=1..k} pi^2

  - When all instances are in a single class, the Gini value is 0
  - It reaches its maximum (of 1 − 1/k) if each class has the same number of instances.
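To make the Gini measure concrete, here is a minimal sketch (not from the slides; the function name and sample labels are invented for illustration):

    from collections import Counter

    def gini(labels):
        """Gini(S) = 1 - sum over classes of (fraction in class)^2."""
        n = len(labels)
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

    print(gini(["good", "good", "good"]))   # 0.0: all instances in one class
    print(gini(["good", "bad"]))            # 0.5: the maximum 1 - 1/k for k = 2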

19、tsas:purity(S1, S2, .,Sr) =TheinformationgainduetoparticularsplitofS intoSi, i=1,2,.,rInformation-gain(S, S1,S2, .,Sr) =purity(S) purity (S1,S2, Sr)ri= 1|Si|S|purity (Si)ki- 1pilog2 piBest Splits(Cont.)Measure of “cost”ofa split:Information-content(S, S1,S2, .,Sr)=Information-gain ratio=Information-

20、gain (S,S1,S2, ,Sr)Information-content (S, S1,S2, .,Sr)Thebestsplitistheone thatgivesthe maximuminformationgainratiolog2ri- 1|Si|S|Si|S| Finding BestSplitsCategoricalattributes (with no meaningfulorder):Multi-waysplit, onechildforeachvalueBinarysplit: tryallpossiblebreakupofvaluesintotwosets,and pic
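The sketch below (illustrative only; the helper names are invented) computes the entropy measure, the weighted purity of a split, and the information-gain ratio, assuming lists of class labels:

    import math
    from collections import Counter

    def entropy(labels):
        """entropy(S) = - sum over classes of pi * log2(pi)."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def split_purity(subsets):
        """Weighted purity of the sets S1, ..., Sr produced by a split."""
        n = sum(len(s) for s in subsets)
        return sum(len(s) / n * entropy(s) for s in subsets)

    def information_content(subsets):
        n = sum(len(s) for s in subsets)
        return -sum(len(s) / n * math.log2(len(s) / n) for s in subsets)

    def gain_ratio(labels, subsets):
        gain = entropy(labels) - split_purity(subsets)
        return gain / information_content(subsets)

    S = ["good"] * 4 + ["bad"] * 4
    print(gain_ratio(S, [["good"] * 4, ["bad"] * 4]))   # 1.0: a perfectly separating split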

Finding Best Splits
- Categorical attributes (with no meaningful order):
  - Multi-way split: one child for each value
  - Binary split: try all possible breakups of the values into two sets, and pick the best
- Continuous-valued attributes (can be sorted in a meaningful order):
  - Binary split: sort the values, and try each as a split point
    - E.g., if the values are 1, 10, 15, 25, split at ≤ 1, ≤ 10, ≤ 15
    - Pick the value that gives the best split
  - Multi-way split: a series of binary splits on the same attribute has roughly equivalent effect

Decision-Tree Construction Algorithm

    procedure GrowTree(S)
        Partition(S);

    procedure Partition(S)
        if (purity(S) > δp or |S| < δs) then return;
        for each attribute A
            evaluate splits on attribute A;
        use the best split found (across all attributes) to partition S into S1, S2, ..., Sr;
        for i = 1, 2, ..., r
            Partition(Si);
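The pseudocode above can be rendered as a small runnable sketch. The version below is an illustration (not the book's code), using the Gini measure and binary splits on numeric attributes, with min_gini and min_size standing in for δp and δs:

    from collections import Counter

    def gini(rows):
        n = len(rows)
        return 1.0 - sum((c / n) ** 2 for c in Counter(r["class"] for r in rows).values())

    def majority(rows):
        return Counter(r["class"] for r in rows).most_common(1)[0][0]

    def grow_tree(rows, attrs, min_gini=0.05, min_size=2):
        """Greedy top-down partitioning: stop when pure enough or too small."""
        if gini(rows) <= min_gini or len(rows) < min_size:
            return {"leaf": majority(rows)}
        best = None
        for a in attrs:                              # evaluate splits on each attribute
            for v in sorted({r[a] for r in rows}):   # try each value as a split point
                left = [r for r in rows if r[a] <= v]
                right = [r for r in rows if r[a] > v]
                if not left or not right:
                    continue
                w = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
                if best is None or w < best[0]:
                    best = (w, a, v, left, right)
        if best is None:                             # no further partitioning is possible
            return {"leaf": majority(rows)}
        _, a, v, left, right = best
        return {"attr": a, "le": v,
                "then": grow_tree(left, attrs, min_gini, min_size),
                "else": grow_tree(right, attrs, min_gini, min_size)}

    rows = [{"income": 90, "class": "excellent"}, {"income": 80, "class": "excellent"},
            {"income": 40, "class": "good"}, {"income": 30, "class": "good"}]
    print(grow_tree(rows, ["income"]))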

Other Types of Classifiers
- Neural-net classifiers are studied in artificial intelligence and are not covered here
- Bayesian classifiers use Bayes' theorem, which says

    p(cj | d) = p(d | cj) p(cj) / p(d)

  where
  - p(cj | d) = probability of instance d being in class cj
  - p(d | cj) = probability of generating instance d given class cj
  - p(cj) = probability of occurrence of class cj
  - p(d) = probability of instance d occurring

Naïve Bayesian Classifiers
- Bayesian classifiers require
  - computation of p(d | cj)
  - precomputation of p(cj)
  - p(d) can be ignored, since it is the same for all classes
- To simplify the task, naïve Bayesian classifiers assume attributes have independent distributions, and thereby estimate

    p(d | cj) = p(d1 | cj) * p(d2 | cj) * ... * p(dn | cj)

- Each of the p(di | cj) can be estimated from a histogram on di values for each class cj
  - The histogram is computed from the training instances
  - Histograms on multiple attributes are more expensive to compute and store
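A minimal naïve Bayesian classifier over categorical attributes might look like the sketch below (illustrative; the record layout and names are invented). It precomputes p(cj) and per-attribute histograms, and ignores p(d):

    from collections import Counter, defaultdict

    def train(rows, attrs):
        prior = Counter(r["class"] for r in rows)    # class counts give p(cj) up to scaling
        hist = defaultdict(Counter)                  # (class, attr) -> value histogram
        for r in rows:
            for a in attrs:
                hist[(r["class"], a)][r[a]] += 1
        return prior, hist

    def classify(prior, hist, attrs, d):
        """Pick cj maximizing p(cj) * product over i of p(di | cj)."""
        def score(c):
            s = prior[c]
            for a in attrs:
                s *= hist[(c, a)][d[a]] / prior[c]   # histogram estimate of p(di | cj)
            return s
        return max(prior, key=score)

    rows = [{"degree": "masters", "class": "excellent"},
            {"degree": "masters", "class": "excellent"},
            {"degree": "bachelors", "class": "good"}]
    prior, hist = train(rows, ["degree"])
    print(classify(prior, hist, ["degree"], {"degree": "bachelors"}))   # good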

Regression
- Regression deals with the prediction of a value, rather than a class.
  - Given values for a set of variables X1, X2, ..., Xn, we wish to predict the value of a variable Y.
- One way is to infer coefficients a0, a1, a2, ..., an such that

    Y = a0 + a1 * X1 + a2 * X2 + ... + an * Xn

- Finding such a linear polynomial is called linear regression.
  - In general, the process of finding a curve that fits the data is also called curve fitting.
- The fit may only be approximate, because of noise in the data, or because the relationship is not exactly a polynomial
  - Regression aims to find coefficients that give the best possible fit.
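As an illustration (data invented), the coefficients of such a linear polynomial can be fitted by least squares, e.g. with numpy:

    import numpy as np

    # Fit Y = a0 + a1*X1 + a2*X2; the rows below were generated with a0=1, a1=2, a2=1.5.
    X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
    Y = np.array([6.0, 6.5, 13.0, 13.5])

    A = np.column_stack([np.ones(len(X)), X])   # prepend a column of 1s for a0
    coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
    print(coeffs)                               # approximately [1.0, 2.0, 1.5]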

Association Rules
- Retail shops are often interested in associations between the different items that people buy.
  - Someone who buys bread is quite likely also to buy milk
  - A person who bought the book Database System Concepts is quite likely also to buy the book Operating System Concepts
- Association information can be used in several ways.
  - E.g., when a customer buys a particular book, an online shop may suggest associated books.
- Association rules:
  - bread ⇒ milk
  - DB-Concepts, OS-Concepts ⇒ Networks
  - Left-hand side: antecedent; right-hand side: consequent
- An association rule must have an associated population; the population consists of a set of instances
  - E.g., each transaction (sale) at a shop is an instance, and the set of all transactions is the population

Association Rules (Cont.)
- Rules have an associated support, as well as an associated confidence.
- Support is a measure of what fraction of the population satisfies both the antecedent and the consequent of the rule.
  - E.g., suppose only 0.001 percent of all purchases include milk and screwdrivers. The support for the rule milk ⇒ screwdrivers is low.
- Confidence is a measure of how often the consequent is true when the antecedent is true.
  - E.g., the rule bread ⇒ milk has a confidence of 80 percent if 80 percent of the purchases that include bread also include milk.
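Support and confidence are easy to compute directly; a minimal sketch over transactions represented as sets (invented data):

    def support(transactions, items):
        """Fraction of transactions that contain every item in `items`."""
        return sum(items <= t for t in transactions) / len(transactions)

    def confidence(transactions, antecedent, consequent):
        """How often the consequent holds when the antecedent holds."""
        return support(transactions, antecedent | consequent) / support(transactions, antecedent)

    transactions = [{"bread", "milk"}, {"bread", "milk", "cereal"}, {"bread", "cereal"},
                    {"milk", "cereal"}, {"bread", "milk", "screwdriver"}]
    print(support(transactions, {"bread", "milk"}))        # 0.6
    print(confidence(transactions, {"bread"}, {"milk"}))   # 0.75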

Finding Association Rules
- We are generally interested only in association rules with reasonably high support (e.g., support of 2% or greater)
- Naïve algorithm:
  - Consider all possible sets of relevant items.
  - For each set, find its support (i.e., count how many transactions purchase all items in the set).
    - Large itemsets: sets with sufficiently high support
  - Use large itemsets to generate association rules.
    - From itemset A, generate the rule A − {b} ⇒ b for each b ∈ A.
      - Support of rule = support(A)
      - Confidence of rule = support(A) / support(A − {b})

Finding Support
- Determine the support of itemsets via a single pass over the set of transactions
  - Large itemsets: sets with a high count at the end of the pass
- If memory is not enough to hold all counts for all itemsets, use multiple passes, considering only some itemsets in each pass
- Optimization: once an itemset is eliminated because its count (support) is too small, none of its supersets needs to be considered
- The a priori technique to find large itemsets:
  - Pass 1: count the support of all sets with just 1 item. Eliminate those items with low support.
  - Pass i: candidates are every set of i items such that all of its (i−1)-item subsets are large
    - Count the support of all candidates
    - Stop if there are no candidates
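A compact (illustrative) rendering of the a priori passes, using frozensets for itemsets:

    from itertools import combinations

    def apriori(transactions, min_support):
        """Pass i keeps only i-item candidates whose (i-1)-item subsets are all large."""
        n = len(transactions)

        def sup(items):
            return sum(items <= t for t in transactions) / n

        large = {frozenset([x]) for t in transactions for x in t
                 if sup(frozenset([x])) >= min_support}               # pass 1
        result, i = set(large), 2
        while large:
            candidates = {a | b for a in large for b in large if len(a | b) == i}
            candidates = {c for c in candidates
                          if all(frozenset(s) in large for s in combinations(c, i - 1))}
            large = {c for c in candidates if sup(c) >= min_support}  # count candidate support
            result |= large
            i += 1                                                    # stops when no candidates
        return result

    transactions = [{"bread", "milk"}, {"bread", "milk", "cereal"},
                    {"bread", "cereal"}, {"milk", "cereal"}]
    print(apriori(transactions, min_support=0.5))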

Other Types of Associations
- Basic association rules have several limitations
- Deviations from the expected probability are more interesting
  - E.g., if many people purchase bread, and many people purchase cereal, quite a few would be expected to purchase both
  - We are interested in positive as well as negative correlations between sets of items
    - Positive correlation: co-occurrence is higher than predicted
    - Negative correlation: co-occurrence is lower than predicted
- Sequence associations/correlations
  - E.g., whenever bonds go up, stock prices go down in 2 days
- Deviations from temporal patterns
  - E.g., deviation from a steady growth
  - E.g., sales of winter wear go down in summer
    - Not surprising: part of a known pattern
    - Look for deviations from the value predicted using past patterns

Clustering
- Clustering: intuitively, finding clusters of points in the given data, such that similar points lie in the same cluster
- Can be formalized using distance metrics in several ways
  - Group points into k sets (for a given k) such that the average distance of points from the centroid of their assigned group is minimized
    - Centroid: the point defined by taking the average of the coordinates in each dimension
  - Another metric: minimize the average distance between every pair of points in a cluster
- Has been studied extensively in statistics, but on small data sets
  - Data-mining systems aim at clustering techniques that can handle very large data sets
  - E.g., the Birch clustering algorithm (more on this shortly)
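The first formalization above (minimize average distance to the group centroid, for a given k) is what the classical k-means algorithm approximates. A small sketch, not from the slides, with invented data:

    import random

    def kmeans(points, k, iters=20, seed=0):
        """Lloyd's algorithm: assign each point to its nearest centroid, then
        recompute each centroid as the coordinate-wise average of its group."""
        random.seed(seed)
        centroids = random.sample(points, k)
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for p in points:
                j = min(range(k),
                        key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
                groups[j].append(p)
            centroids = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centroids[j]
                         for j, g in enumerate(groups)]
        return centroids, groups

    points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
    print(kmeans(points, k=2)[0])   # two centroids, one per natural cluster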

Hierarchical Clustering
- Example from biological classification (the word "classification" here does not mean a prediction mechanism):

    chordata
      mammalia: leopards, humans
      reptilia: snakes, crocodiles

- Other examples: Internet directory systems (e.g., Yahoo; more on this later)
- Agglomerative clustering algorithms
  - Build small clusters, then cluster small clusters into bigger clusters, and so on
- Divisive clustering algorithms
  - Start with all items in a single cluster, repeatedly refine (break) clusters into smaller ones
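An agglomerative pass can be sketched as follows (illustrative only; single-link distance and all names are invented): start with one cluster per point and repeatedly merge the two closest clusters:

    def single_link(c1, c2):
        """Distance between clusters = distance of their closest pair of points."""
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) for p in c1 for q in c2)

    def agglomerate(points, k):
        clusters = [[p] for p in points]
        while len(clusters) > k:
            i, j = min(((i, j) for i in range(len(clusters))
                        for j in range(i + 1, len(clusters))),
                       key=lambda ij: single_link(clusters[ij[0]], clusters[ij[1]]))
            clusters[i] += clusters.pop(j)   # merge the two closest clusters
        return clusters

    points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8)]
    print(agglomerate(points, k=2))   # [[(1,1),(1,2),(2,1)], [(8,8),(9,8)]]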

Clustering Algorithms
- Clustering algorithms have been designed to handle very large data sets
- E.g., the Birch algorithm
  - Main idea: use an in-memory R-tree to store points that are being clustered
  - Insert points one at a time into the R-tree
