Chapter 3: The Methods of Data Analysis (lecture slides)

Chapter 4  The Methods of Data Analysis

6.1 Data normalization

Data normalization is the basis for comparing experiments within large series when the experimental conditions may not be identical. Normalization ensures that the experimental quality of the data is comparable, provided sound mathematical algorithms are employed. Normalization includes various options to standardize data, to adjust background levels, and to correct gradients. The commonly used normalization functions are as follows:

Linear normalization:

x'_i = X'_min + (x_i − x_min)(X'_max − X'_min)/(x_max − x_min)    (6.1)

Ratio normalization:

x'_i = x_i / Σ_j x_j    (6.2)

Z-score normalization:

x'_i = (x_i − μ)/σ    (6.3)

Generally, linear normalization is recommended (if X'_max = 1 and X'_min = 0, x'_i is normalized as a percentage of the range by formula (6.1)). After ratio normalization, the sum of the normalized variables equals 1. Z-score normalization assumes that x_i obeys a Gaussian distribution; if x_i has a different distribution, the normalization will distort the pattern (the variance will be far from the standard deviation) and lead to incorrect pattern recognition.

σ is the population standard deviation; in general, it can be approximated by the sample standard deviation S.
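A minimal pure-Python sketch of the three normalizations, with (6.1) written for a general target range and σ approximated by the sample standard deviation S:

```python
def linear_normalize(xs, new_min=0.0, new_max=1.0):
    """Linear (min-max) normalization, formula (6.1)."""
    x_min, x_max = min(xs), max(xs)
    return [new_min + (x - x_min) * (new_max - new_min) / (x_max - x_min)
            for x in xs]

def ratio_normalize(xs):
    """Ratio normalization, formula (6.2): normalized values sum to 1."""
    total = sum(xs)
    return [x / total for x in xs]

def zscore_normalize(xs):
    """Z-score normalization, formula (6.3), approximating sigma by the
    sample standard deviation S."""
    n = len(xs)
    mean = sum(xs) / n
    s = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return [(x - mean) / s for x in xs]
```

With `linear_normalize([2, 4, 6, 8])` the smallest value maps to 0 and the largest to 1, matching the recommended X'_min = 0, X'_max = 1 case.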

6.2 Simple Linear Regression

Learning objectives:
1. Describe the linear regression model
2. Explain ordinary least squares
3. Compute regression coefficients
4. Evaluate the linear regression model
5. Predict the response variable

6.2.1 Describe the Linear Regression Model

Regression models:
1. Answer the question 'What is the relationship between the variables?'
2. Use an equation relating one numerical dependent (response) variable to one or more numerical or categorical independent (explanatory) variables.
3. Are used mainly for prediction and estimation.

Types of regression models: a simple regression model has one explanatory variable, while a multiple regression model has two or more; in either case the model may be linear or non-linear.

Linear regression model: assumes that the relationship between the variables is a linear function,

Y_i = β_0 + β_1 X_i + ε_i

where Y_i is the dependent (response) variable (e.g., properties), X_i is the independent (explanatory) variable (e.g., structure representation), β_0 is the population Y-intercept, β_1 is the population slope, and ε_i is the random error. The random error ε_i applies both to the observed values and to unsampled observations.

6.2.2 Explain Ordinary Least Squares

Scatter graph:
1. Plot all (X_i, Y_i) pairs.
2. The plot suggests how well the model will fit.

Thinking challenge: how would you draw a line through the points? How do you determine which line 'fits best'?

Least squares (LS):
1. 'Best fit' means the differences between the actual Y values and the predicted Y values are at a minimum; but positive differences offset negative ones, so
2. LS minimizes the sum of the squared differences (SSE).

6.2.3 Compute Regression Coefficients

Goal: minimize the squared error,

SSE = Σ_i (y_i − ŷ_i)² = Σ_i (y_i − b_0 − b_1 x_i)²    (6.4)

Setting the derivatives with respect to b_0 and b_1 to zero gives the sample slope

b_1 = Σ_i (x_i − x̄)(y_i − ȳ) / Σ_i (x_i − x̄)²    (6.5)

and the sample Y-intercept

b_0 = ȳ − b_1 x̄    (6.6)

where x̄ and ȳ are the sample means.

6.2.4 Predict Response Variable

The fitted regression equation ŷ = b_0 + b_1 x is used to predict the response; the sums needed for (6.5) and (6.6) are conveniently organized in a computation table.

Example 1:

The hot-water resistance of vinylon fiber can be measured by the index 'degree of acetalization': the higher this index, the better the hot-water resistance. The table below gives a set of measured data; compute the regression coefficients and the correlation coefficient.

Formaldehyde concentration (g/L):  18     20     22     24     26     28     30
Degree of acetalization (mol%):    26.86  28.35  28.75  28.87  29.75  30.00  30.36
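Formulas (6.5) and (6.6) can be applied to this data directly; a minimal pure-Python sketch (for this data the correlation coefficient comes out to r ≈ 0.9597):

```python
# Least-squares fit for Example 1: degree of acetalization versus
# formaldehyde concentration, using the sample slope (6.5) and
# intercept (6.6).
x = [18, 20, 22, 24, 26, 28, 30]                       # concentration, g/L
y = [26.86, 28.35, 28.75, 28.87, 29.75, 30.00, 30.36]  # acetalization, mol%

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n
s_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
s_xx = sum((xi - x_bar) ** 2 for xi in x)
s_yy = sum((yi - y_bar) ** 2 for yi in y)

b1 = s_xy / s_xx                   # sample slope, formula (6.5)
b0 = y_bar - b1 * x_bar            # sample Y-intercept, formula (6.6)
r = s_xy / (s_xx * s_yy) ** 0.5    # Pearson correlation coefficient

print(f"y = {b0:.4f} + {b1:.4f} x,  r = {r:.4f}")
```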

Background: dissolving polyvinyl alcohol in water and spinning it by the dry or wet method yields a synthetic fiber. Treating polyvinyl alcohol fiber with formaldehyde produces polyvinyl formal fiber, commonly called vinylon. The acetalization of polyvinyl alcohol yields important polymer products: polyvinyl formal (vinylon) and polyvinyl butyral (a good glass adhesive).

6.2.5 Evaluate the linear regression model

Measures of variation in regression:
1. Total sum of squares (SS_yy): measures the variation of the observed Y_i around the mean Ȳ.
2. Explained variation (SSR): variation due to the relationship between X and Y.
3. Unexplained variation (SSE): variation due to other factors.

Total sum of squares: Σ(Y_i − Ȳ)². Explained sum of squares: Σ(Ŷ_i − Ȳ)². Unexplained sum of squares: Σ(Y_i − Ŷ_i)².

1. Coefficient of determination: the proportion of variation 'explained' by the relationship between X and Y,

r² = SSR / SS_yy,  0 ≤ r² ≤ 1

For Example 1, r² = 0.9211. (Illustrative cases: r² = 1 for a perfect fit, r² = 0.8 for a good fit, r² = 0 for no linear relationship.)

2. Pearson product-moment coefficient of correlation, r (the simple coefficient of correlation). For Example 1, r = 0.9597. The coefficient of correlation ranges from −1.0 to +1.0: r = −1.0 is a perfect negative correlation, r = 0 is no correlation, and r = +1.0 is a perfect positive correlation; the degree of negative (or positive) correlation increases as r moves from 0 toward −1.0 (or +1.0). (Illustrative cases: r = 1, r = −1, r = 0.89, r = 0.)

6.2.6 Introduction to non-linear regression

Basic concepts: non-linear models and their linearization. Non-linear regression:
1. The dependent variable y is not linearly related to x.
2. The relationship can often be converted into a linear one by a change of variables, and the parameter estimates are then obtained by the method of least squares. Not all non-linear models can be converted into linear models.

Several common non-linear models:

1. Exponential function. Basic form: y = α·e^(βx). Linearization: taking logarithms of both sides gives ln y = ln α + βx; letting y' = ln y, we have y' = ln α + βx. The shape of the graph depends on the sign of β.

2. Power function. Basic form: y = α·x^β. Linearization: taking logarithms of both sides gives lg y = lg α + β·lg x; letting y' = lg y and x' = lg x, we have y' = lg α + βx'. The shape of the graph depends on β (β > 1, β = 1, 0 < β < 1, −1 < β < 0, β = −1, β < −1).

3. Hyperbolic function. Basic form: 1/y = α + β/x. Linearization: letting y' = 1/y and x' = 1/x, we have y' = α + βx'. The graph differs for β < 0 and β > 0.

4. Logarithmic function. Basic form: y = α + β·lg x. Linearization: letting x' = lg x, we have y = α + βx'. The graph differs for β > 0 and β < 0.

5. S-shaped curve. Basic form: y = 1/(α + β·e^(−x)). Linearization: letting y' = 1/y and x' = e^(−x), we have y' = α + βx'.

Non-linear regression (example):

[Example] To study the relationship between productivity and the defect rate, the data below were recorded. Fit an appropriate model.

Productivity x (week/unit):  1000  2000  3000  3500  4000  4500  5000
Defect rate y (%):           5.2   6.5   6.8   8.1   10.2  10.3  13.0

(Scatter plot of defect rate versus productivity.)

1. Using the linear model y = β0 + β1·x + ε gives y = 2.671 + 0.0018x.
2. Using the exponential model y = α·β^x gives y = 4.05·(1.0002)^x.
3. Using the exponential model y = 4.003·e^(0.000219x).
4. Comparison:
5. The residual sum of squares of the straight line (5.3371) is smaller than that of exponential model 2 (6.11), so the linear model is slightly better than exponential model 2.
6. The residual sum of squares of exponential model 3 is 2.8459, so exponential model 3 fits best.

6.3 Multiple Linear Regression

Learning objectives:
1. Describe the multiple linear regression model
2. Explain ordinary least squares and compute regression coefficients
3. Evaluate the linear regression model
4. Predict the response variable

The model assumes that the regression function E(Y|X) is linear. Linear models are old tools, but they are:
- still very useful
- simple
- able to give an easy interpretation of the regressors' effects
- very broad, since the Xi's can be any functions of other variables (quantitative or qualitative)
- useful to understand, because most other methods are generalizations of them

6.3.1 Describe the Multiple Linear Regression Model

6.3.2 Explain Ordinary Least Squares and Compute Regression Coefficients

Matrix notation: X is the n × (p+1) matrix of input vectors, y is the n-vector of outputs (labels), and β is the (p+1)-vector of parameters. The linear regression model has the form

y = Xβ + ε

where the βj's are unknown parameters or coefficients. Typically we have a set of training data (x1, y1), …, (xn, yn) from which we want to estimate the parameters β. The most popular estimation method is least squares: find the solution β̂ by minimizing the residual sum of squares (RSS),

RSS(β) = Σ_i (y_i − x_iᵀβ)² = (y − Xβ)ᵀ(y − Xβ)

Least squares is a reasonable criterion when the training samples are random, or when the y_i's are conditionally independent given x_i. The geometrical view of least squares: it simply finds the best linear fit to the data, where e_i is the residual of observation i (a line for one covariate, a plane for two covariates).

Solving least squares: by the standard extremum condition, setting the first derivative to zero gives the normal equations

XᵀXβ = Xᵀy

6.3.3 Predict Response Variable

Assuming that XᵀX is non-singular, the normal equations give the unique least squares solution

β̂ = (XᵀX)⁻¹Xᵀy,  with least squares predictions ŷ = Xβ̂

When XᵀX is singular, the least squares coefficients are no longer uniquely defined, and some kind of alternative strategy is needed to obtain a solution: recoding and/or dropping redundant columns, filtering, or controlling the fit by regularization.

Geometrical interpretation of the least squares estimates: the predicted outcomes ŷ are the orthogonal projection of y onto the column space of X (which spans a subspace of Rⁿ).

Technique for multiple regression: computing (XᵀX)⁻¹ directly has poor numeric properties. Instead, use a QR decomposition of X: decompose X = QR, where Q is an n × (p+1) orthogonal matrix (QᵀQ = I) and R is a (p+1) × (p+1) upper triangular matrix. Then (1) compute Qᵀy and (2) solve Rβ = Qᵀy by back-substitution.

Subset selection. Goal: to eliminate unnecessary variables from the model. We consider three approaches:
- Best subset regression: choose the subset of size k that gives the lowest residual sum of squares (RSS). The leaps-and-bounds procedure works with p ≤ 40. How to choose k? Choose the model that minimizes the prediction error (not a topic here). When p is large, searching through all subsets is not feasible; can we instead seek a good path through the subsets?
- Forward stepwise selection: start with the intercept model, and sequentially include the variable that most improves the RSS based on the F statistic; stop when no new variable improves the fit significantly.
- Backward stepwise selection: start with the full model, and sequentially delete the predictor that produces the smallest value of the F statistic, i.e. increases the RSS least; stop when each predictor in the model produces a significant value of the F statistic.
Hybrids between forward and backward stepwise selection exist. These are greedy techniques, not guaranteed to find the best model. Subset selection produces a model that is interpretable and has possibly lower prediction error; it forces some dimensions of X to zero, thus probably decreasing Var(β̂). The optimal subset must be chosen to minimize the prediction error (model selection, not a topic here).

6.3.4 Evaluate the linear regression model

1. The determination coefficient (R square).
2. The multiple correlation coefficient.
3. The adjusted multiple coefficient of determination: R² is corrected using the sample size n and the number of independent variables p,

R²_a = 1 − (1 − R²)(n − 1)/(n − p − 1)

This avoids overestimating R² when independent variables are added; its meaning is similar to that of R², and its value is generally smaller than R².

Example 2: Perform a multiple linear regression on the data below, and compute the regression coefficients and the correlation coefficient. C1, C2 and C3 are the concentrations of the components of a mixture, which are known to be linearly related to the viscosity.

C1     C2     C3     viscosity
0.402  0.153  0.058  0.625
0.503  0.301  0.183  0.826
0.306  0.109  0.224  1.182
0.296  0.365  0.009  1.944
0.309  0.405  0.109  2.372
0.055  0.153  0.156  3.243

With the independent variables not normalized:

y(viscosity) = 2.5447 − 6.8794·C1 + 4.2240·C2 + 2.0547·C3,  R = 0.9926, R² = 0.9852
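This fit can be reproduced numerically. A sketch using numpy, solving the least-squares problem directly and, as described in the text, via a QR decomposition with back-substitution (both routes give the same coefficients):

```python
import numpy as np

# Multiple linear regression for Example 2: viscosity versus the
# component concentrations C1, C2, C3.
C = np.array([[0.402, 0.153, 0.058],
              [0.503, 0.301, 0.183],
              [0.306, 0.109, 0.224],
              [0.296, 0.365, 0.009],
              [0.309, 0.405, 0.109],
              [0.055, 0.153, 0.156]])
y = np.array([0.625, 0.826, 1.182, 1.944, 2.372, 3.243])

X = np.column_stack([np.ones(len(y)), C])      # prepend intercept column

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # direct least-squares solution

Q, R = np.linalg.qr(X)                         # QR route: X = QR
beta_qr = np.linalg.solve(R, Q.T @ y)          # solve R*beta = Q^T y

y_hat = X @ beta
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(beta, r2)                                # r2 ≈ 0.985 for this data
```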

When the independent variables are normalized, with normalization interval [−1, 1], the fit becomes:

y(viscosity) = 1.9503 − 1.5410·C1 + 0.6252·C2 + 0.2209·C3,  R = 0.9926, R² = 0.9852

The adjusted R²_a = 0.963 (n = 6, p = 3).

6.4 Principal Component Analysis

Principal purpose: reducing dimensionality, turning a large body of data into a manageable set. PCA rotates a multivariate data set into a new configuration which is easier to interpret:
- simplify the data
- look at relationships between variables
- look at patterns across the systems under study

PCA: general methodology. From p original variables x1, x2, ..., xp, produce k (k ≤ p) new variables y1, y2, ..., yk:

y1 = a11·x1 + a12·x2 + ... + a1p·xp
y2 = a21·x1 + a22·x2 + ... + a2p·xp
...
yk = ak1·x1 + ak2·x2 + ... + akp·xp

such that the yk's are uncorrelated (orthogonal), y1 explains as much as possible of the original variance in the data set, y2 explains as much as possible of the remaining variance, and so on.

{a11, a12, ..., a1p} is the 1st eigenvector of the correlation/covariance matrix and gives the coefficients of the first principal component; {a21, a22, ..., a2p} is the 2nd eigenvector and gives the coefficients of the 2nd principal component; ...; {ak1, ak2, ..., akp} is the kth eigenvector and gives the coefficients of the kth principal component. The xj's are standardized if the correlation matrix is used (mean 0.0, SD 1.0).

The score of the ith unit on the jth principal component is

y_{i,j} = a_{j1}·x_{i1} + a_{j2}·x_{i2} + ... + a_{jp}·x_{ip}

The amount of variance accounted for by the 1st principal component is λ1, the 1st eigenvalue; by the 2nd principal component, λ2, the 2nd eigenvalue; and so on, with λ1 ≥ λ2 ≥ λ3 ≥ λ4 ≥ .... The average λj = 1 when the correlation matrix is used.

PCA terminology:
- the jth principal component is the jth eigenvector of the correlation/covariance matrix
- the coefficients a_{jk} are the elements of the eigenvectors, and relate the original variables (standardized if using the correlation matrix) to the components
- scores are the values of the units on the components (produced using the coefficients)
- the amount of variance accounted for by a component is given by its eigenvalue λj
- the proportion of variance accounted for by a component is λj / Σλj
- the loading of the kth original variable on the jth component is a_{jk}·√λj, the correlation between the variable and the component
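The eigen-decomposition behind these definitions can be computed through the SVD of the centered data matrix. A minimal numpy sketch on synthetic data (the data matrix and its correlated third column are illustrative assumptions, not from the text):

```python
import numpy as np

# PCA via SVD: center the data, decompose Xc = U D V^T, take the columns
# of V as the principal components, and d_i^2/(n-1) as the eigenvalues
# (variance accounted for by each component).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
X[:, 2] = 2.0 * X[:, 0] + 0.1 * rng.normal(size=20)  # correlated column

Xc = X - X.mean(axis=0)           # center (divide by SD too for correlation)
U, d, Vt = np.linalg.svd(Xc, full_matrices=False)

eigvals = d ** 2 / (len(X) - 1)   # eigenvalues of the covariance matrix
proportion = eigvals / eigvals.sum()  # lambda_j / sum(lambda_j)
scores = Xc @ Vt.T                # score of each unit on each component

print(eigvals, proportion)
```

The scores are uncorrelated across components, and their variances equal the eigenvalues, exactly as the terminology above states.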

How many components to use? If λj < 1, the component explains less variance than an original variable (correlation matrix). Use 2 (or 3) components for visual ease. A scree diagram (eigenvalues plotted in decreasing order) helps with the choice.

PCA on the covariance matrix:
- the variables must be in the same units
- it emphasizes the variables with the most variance
- the mean eigenvalue is not 1.0
- useful in morphometrics and a few other cases

PCA on the correlation matrix:
- the variables are standardized (mean 0.0, SD 1.0)
- the variables can be in different units
- all variables have the same impact on the analysis
- the mean eigenvalue = 1.0

Rotations of principal components (exploratory factor analysis): factors are rotated components (just rotate a few principal components). Varimax tries to maximize the variance of the squared loadings for each factor (orthogonal); it lines up the factors with the original variables and improves the interpretability of the factors. Quartimax tries to minimize the sums of squares of products of loadings (orthogonal).

Procedure for principal components analysis:
1. Decide whether to use the correlation or the covariance matrix
2. Find the eigenvectors (components) and eigenvalues (variance accounted for)
3. Decide how many components to use by examining the eigenvalues (perhaps using a scree diagram)
4. Examine the loadings (perhaps with a vector loading plot)
5. Plot the scores
6. Try rotation, then go to step 4

Singular value decomposition (SVD): the SVD of the matrix X has the form X = UDVᵀ, where U and V are orthogonal matrices and D = diag(d1, …, dr) contains the non-zero singular values of X; r ≤ min(n, p) is the rank of X. The eigenvectors v_i are called the principal components of X.

Example 3: The table below gives the absorbances of a mixture of n components, at 5 different concentrations, measured at 10 wavelengths. Use principal component analysis to determine the eigenvalues, and from them judge the number of principal components of the mixture.

c1    c2    c3    c4    c5
5.3   1.1   6.1   4.1   12.1
17.4  40.6  26.3  53.2  60.4
18.6  57.3  27.5  72.6  65.6
21.9  52.2  31.3  68.7  74.3
7.2   55.2  15.2  63.2  40.2
8.3   39.4  14.5  45.3  33.2
18.4  75.4  29.5  90.8  71.7
20.6  11.1  26.3  24.3  57.3
28.4  25.3  36.9  44.5  80.2
27.3  36.3  35.1  53.5  80.6

The eigenvalues are 96610.99, 4308.51, 3.7543, 2.5971 and 1.7262, so the number of principal components is 2:

First PC:  96610.99 / (96610.99 + 4308.51 + 3.7543 + 2.5971 + 1.7262) = 0.9572
Second PC: 4308.51 / (96610.99 + 4308.51 + 3.7543 + 2.5971 + 1.7262) = 0.0427
PC1 + PC2 = 0.9999

6.5 Principal Component Regression

PCR, or principal component regression, is a simple extension of MLR and PCA. In the first step, the principal components are calculated. The scores of the most important principal components are then used as the basis for the multiple linear regression with the target data y.

The most important point in PCR is the proper selection of the eigenvectors to be included. A plot of the eigenvalues usually indicates the 'best' number of eigenvectors.

Advantages of PCR over MLR: noise remains in the residuals, since the eigenvectors with low eigenvalues represent only the parts of the data with low variance; and the regression coefficients are more stable, because the eigenvectors are orthogonal to each other. (Flowchart of PCR.)

Linear regression by SVD: a general solution to y = Xβ can be written

β̂ = Σ_i ω_i (u_iᵀy / d_i) v_i

The filter factors ω_i determine the extent of shrinking (0 ≤ ω_i ≤ 1) or stretching (ω_i > 1) along the singular directions u_i. For the OLS solution ω_i = 1, i = 1, …, p, i.e. all the directions u_i contribute equally.

PCR: use the linear combinations z_m = X v_m as new features, where v_j is the principal component (column of V) corresponding to the jth largest element of D, i.e. the directions of maximal sample variance. For some M ≤ p, form the derived input vectors [z_1 … z_M] = [X v_1 … X v_M] and regress y on z_1, …, z_M; the mth principal component direction v_m maximizes the sample variance of Xv subject to orthogonality with the earlier directions. The filter factors become ω_i = 1 for i ≤ M and ω_i = 0 for i > M.
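A minimal numpy sketch of this PCR construction on synthetic data (the data and the helper name `pcr` are illustrative assumptions); with M = p it reproduces the OLS fit:

```python
import numpy as np

# Principal component regression: regress y on the scores of the first M
# principal components z_m = X v_m, then map back to the original scale.
def pcr(X, y, M):
    Xc = X - X.mean(axis=0)                    # center the predictors
    y_mean = y.mean()
    U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt.T[:, :M]                            # first M principal directions
    Z = Xc @ V                                 # derived inputs z_1..z_M
    theta = Z.T @ (y - y_mean) / (d[:M] ** 2)  # <z_m, y> / <z_m, z_m>
    beta = V @ theta                           # coefficients, original scale
    intercept = y_mean - X.mean(axis=0) @ beta
    return intercept, beta

# Synthetic regression problem (illustrative assumption, not from the text).
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.01 * rng.normal(size=30)

b0, b = pcr(X, y, M=4)   # M = p: all filter factors 1, equals OLS
```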

That is, PCR discards the p − M smallest-eigenvalue components from the OLS solution; if M = p, PCR gives the OLS solution.

6.6 Introduction of Origin

6.6.1 Introduction

Origin is professional graphing and data analysis software for scientists and engineers. Origin has been growing in popularity among scientists and engineers as a serious data analysis and graphing package since 1991, and is used in hundreds of large corporations and around a thousand colleges and universities worldwide. Through its various versions, OriginLab remains committed to the mission of making Origin the best scientific graphing and data analysis software. Along with its easy-to-use graphical interface, Origin offers intuitive yet powerful research tools for the daily needs of the researcher. The latest version is Origin 7.5.

Menus and menu commands: Origin's menu bar provides commands to perform operations on the active window and to perform general operations such as opening a Help file or turning on the display of a toolbar. The menu bar changes as you change the active window; for example, the worksheet and graph menu bars differ.

Origin is graphing and data analysis software developed by OriginLab (formerly Microcal) in the USA, and is a high-level tool commonly used by researchers and engineers. Since its first release in 1991, its simple operation and open functionality have made it one of the internationally popular analysis packages, widely recognized as a fast, flexible and easy-to-learn engineering plotting tool; its use in China is also increasingly widespread, and the current highest version is Origin 7.5. Other popular visualization and data analysis packages include Matlab, Mathematica and Maple. These are powerful and can meet many needs of scientific work, but using them requires some knowledge of programming and matrices, and familiarity with a large number of functions and commands. Using Origin, by contrast, is as simple as using Excel or Word: most work is done by clicking the mouse and choosing menu commands, with satisfactory results.

Like Excel and Word, Origin is a multiple-document-interface application. All work is saved in a Project (*.OPJ) file, which can contain several child windows such as Worksheet, Graph, Matrix and Excel windows. The child windows are linked to each other, so data can be updated immediately. A child window can be saved with the Project file or saved separately as its own file for use by other programs.

Origin has two main functions: plotting and data analysis. Origin's plotting is mainly template-based, providing more than 50 2D and 3D graph templates; users can plot with these templates or set up templates of their own. Origin's data analysis includes powerful tools such as sorting, computation, statistics, smoothing, fitting and frequency-spectrum analysis; these tools, too, are used simply by clicking a toolbar button or choosing a menu command.

On the basis of Origin 7.0, OriginLab developed OriginPro and add-on modules. Users can build the special tools they need in OriginPro, whose flexible interface is quick and convenient, so users can concentrate on analyzing the data in a graph rather than on handling the graph itself. The add-on modules give Origin and OriginPro special advanced data analysis functions, compensating for Origin 7.0's shortcomings relative to Matlab and Mathematica; users can define their own mathematical functions and plotting templates, add menu commands and command buttons, and call Origin C and NAG functions. (The Origin interface.)

6.6.2 Origin as a data analysis and plotting tool

§1 Overview
§2 Creating a data file
§3 Editing data
§4 Plotting graphs
§5 Editing and formatting graphs
§6 Using the Tools toolbar
§7 Data analysis: curve fitting
§8 Exporting Origin graph files

1.1 Main functions of Origin: plotting from data or functions, and fitting of graphs.

1.2 The Origin workspace: the title bar, the menu bar, the toolbars, the child windows (worksheet windows, graph windows and so on), the Project Explorer (a tool to help you organize your Origin projects), and the status bar.

To open a toolbar, select View: Toolbars from the Origin menu bar; when a workbook (Excel) is active, select Window: Origin Toolbars. This opens the Toolbars dialog box.

The main types of child windows are: the Worksheet window, the Excel Workbook window, the Graph window, the Function Graph window, and the Layout Page window. Note: each child window has its own menu structure, which is displayed when the window is active.

1.3 Basic operating methods:
1. Use the corresponding menu command.
2. Use a toolbar button.
3. Right-click and choose the corresponding command from the shortcut menu.
4. Select an object and double-click it to open its dialog box.

1.4 Starting and exiting Origin. To start Origin, use the desktop shortcut icon, or Start → Programs → Microcal Origin 6.0 → shortcut icon. There are two ways to exit: click the close button in the upper-right corner, or choose File → Exit from the Origin menu. Note: distinguish exiting Origin from closing a child window.

1.5 Saving Origin files. You can save a project, save a child window separately as a file, or save a template as a file. The file types and file extensions that can be saved are:
- Project: OPJ (cannot be saved as a template)
- Graph/Worksheet: OGW (template extension OTW)
- Excel Workbook: XLS (cannot be saved as a template)
- Layout Page: OTP (cannot be saved as a separate file)
- Matrix: OGM (template extension OTM)
- Function Graph: OGG (template extension OTP)
- Notes: TXT (cannot be saved as a template)

§2 Creating a data file. Origin provides several ways to add data to a worksheet:
1. Entering data using the keyboard.
2. Importing a file.
3. Pasting data from another application using the Clipboard.
4. Pasting data from another (or the same) Origin worksheet using the Clipboard.
5. Using an Excel Workbook window (opening or creating an Excel worksheet).
6. Using a function to set column values.

§3 Editing data
3.1 Introduction to the worksheet
3.2 Selection operations
3.3 Editing and modifying data
3.4 Inserting, deleting and rearranging columns
3.5 Inserting and deleting rows
3.6 Deleting a worksheet
3.7 Formatting the worksheet

3.1 Introduction to the worksheet. The worksheet window is organized into vertical columns and horizontal rows; at the intersection of each column and row is a cell. Each cell can contain a single numeric, text, numeric-and-text, date, or time value. Origin projects can contain multiple worksheets.

3.2 Selection operations:
1. Selecting cells: click and drag to select the cells.
2. Selecting rows: select the row heading and drag the mouse, or select the row heading and click with the SHIFT key held down.
3. Selecting adjacent columns: select the first column heading and click-and-drag, or select the column heading and click with the SHIFT key held down.
4. Selecting nonadjacent columns: select the column headings with the CTRL key held down.

3.3 Editing and modifying data.
1. Modifying data. To change a value, select the cell and type the correct value (Origin automatically overwrites the value in the cell). To edit a cell value, press F2 or click at the desired position within the cell. Delete removes one value to the right of the cursor, or deletes all highlighted values.
2. Inserting data within a column. (1) To insert a cell in a column, select the cell that is directly below where you want to insert the new cell. (2) Then select Edit: Insert, or right-click and select Insert from the shortcut menu. (3) The new cell is inserted above the selected cell.
3. Deleting data. To clear the entire contents of a worksheet, select Edit: Clear Worksheet. To delete the contents of a range of cells from the worksheet (keeping the cells), select Edit: Clear. To delete a range of cells from the worksheet (contents and cells together), select Edit: Delete. Note: the Edit: Delete command deletes the selected values and cells, while the keyboard Delete key deletes the worksheet values only.

3.4 Adding, inserting, deleting and moving columns.
Adding columns: select Column: Add New Columns; or click the Add New Columns button on the Standard toolbar; or right-click inside the worksheet window, to the right of the worksheet grid, and select Add New Column from the shortcut menu.
Inserting columns: select Edit: Insert, or right-click and select Insert from the shortcut menu.
Deleting columns: select Edit: Delete, or right-click and select Delete. Note: to clear the column values but keep the columns, select Edit: Clear.
Moving columns: select Column: Move to First or Column: Move to Last. Select first, then operate.

3.5 Inserting and deleting rows. Inserting rows: select Edit: Insert, or right-click and select Insert. Deleting rows: select Edit: Delete, or right-click and select Delete. Select first, then operate.

3.6 Deleting a worksheet from a project. Perform one of the following operations: click the Close Window button in the upper-right corner of the worksheet; or right-click on the worksheet window icon in Project Explorer and select Delete Window from the shortcut menu; or click on the worksheet window icon in Project Explorer and then press Delete.

3.7 Formatting the worksheet. Purpose:
1. Change the column name (Column Name)
2. Change the plot designation (Plot Designation)
3. Change the data type (Display)
4. Change the number format (Format): decimal, scientific notation, engineering notation, or decimal with a thousands separator
5. Change the numeric display (Numeric Display): default decimal display, set the number of decimal places, or set the number of significant digits
6. Change the column width (Column Width)
7. Add a comment to the column label (Column Label)
Method: double-click the worksheet column heading to open the Worksheet Column Format dialog.

§4 Plotting graphs
4.1 Basic terminology of the graph window
4.2 Plotting methods

4.1 Basic terminology of the graph window. Page: each graph window contains a single editable page.
