Statistics, Spring 2008
Lab #1: Data Screening

7. Normality

Below, I describe five steps for determining and dealing with normality. However, the bottom line is that almost no one checks their data for normality; instead they assume normality and use the statistical tests that are based upon assumptions of normality, which have more power (ability to find significant results in the data).

First, what is normality? A normal distribution is a symmetric, bell-shaped curve defined by two things: the mean (average) and the variance (variability).

Second, why is normality important? The central idea behind statistical inference is that as sample size increases, the sampling distribution of the mean approximates a normal distribution (the central limit theorem). Most statistical tests rely upon the assumption that your data are "normal". Tests that rely upon the assumption of normality are called parametric tests. If your data are not normal, you would use statistical tests that do not rely upon the assumption of normality, called non-parametric tests. Non-parametric tests are less powerful than parametric tests, which means they have less ability to detect real differences or variability in your data. In other words, you want to conduct parametric tests because you want to increase your chances of finding significant results.

Third, how do you determine whether data are "normal"? There are three interrelated approaches to determine normality, and all three should be conducted.

First, look at a histogram with the normal curve superimposed. A histogram provides a useful graphical representation of the data. SPSS can also superimpose the theoretical "normal" distribution onto the histogram of your data so that you can compare your data to the normal curve. To obtain a histogram with the superimposed normal curve:

1. Select Analyze - Descriptive Statistics - Frequencies.
2. Move all variables into the "Variable(s)" window.
3. Click "Charts", and check "Histogram, with normal curve".
4. Click OK.

The output below is for "system1".
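The SPSS dialogs above can't be reproduced here, but what the superimposed normal curve represents can be sketched in Python. This is a hypothetical illustration using simulated data, not the lab's actual "system1" variable: it computes a density-scaled histogram and evaluates the theoretical normal curve (with the sample's own mean and SD) at the bin centers, then measures how far the bars sit from the curve.

```python
import numpy as np

# Hypothetical sample standing in for a variable like "system1".
rng = np.random.default_rng(0)
sample = rng.normal(loc=50, scale=10, size=327)  # n = 327, as in the lab data

# Density-scaled histogram of the sample.
counts, edges = np.histogram(sample, bins=20, density=True)
centers = (edges[:-1] + edges[1:]) / 2

# Theoretical normal curve with the sample's own mean and SD,
# evaluated at the bin centers -- the "black line" SPSS superimposes.
mu, sigma = sample.mean(), sample.std(ddof=1)
normal_pdf = np.exp(-((centers - mu) ** 2) / (2 * sigma**2)) / (
    sigma * np.sqrt(2 * np.pi)
)

# A rough numerical version of the eyeball check: the mean absolute
# gap between the histogram bars and the normal curve.
gap = np.mean(np.abs(counts - normal_pdf))
print(f"mean |histogram - normal curve| = {gap:.4f}")
```

For truly normal data the gap stays small; a strongly skewed sample would show bars far above the curve on one side and far below it on the other.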
Notice the bell-shaped black line superimposed on the distribution. All samples deviate somewhat from normal, so the question is how much deviation from the black line indicates "non-normality"? Unfortunately, graphical representations like histograms provide no hard-and-fast rules. After you have viewed many (many!) histograms, over time you will get a sense for the normality of data. In my view, the histogram for "system1" shows a fairly normal distribution.

Second, look at the values of skewness and kurtosis. Skewness involves the symmetry of the distribution. A distribution with normal skewness is perfectly symmetric. A positively skewed distribution has scores clustered to the left, with the tail extending to the right. A negatively skewed distribution has scores clustered to the right, with the tail extending to the left. Kurtosis involves the peakedness of the distribution. A distribution with normal kurtosis is bell-shaped, neither too peaked nor too flat. Positive kurtosis is indicated by a peak; negative kurtosis is indicated by a flat distribution. Descriptive statistics about skewness and kurtosis can be found using the Frequencies, Descriptives, or Explore commands. I like to use the Explore command because it provides other useful information about normality, so:

1. Select Analyze - Descriptive Statistics - Explore.
2. Move all variables into the "Variable(s)" window.
3. Click "Plots", and uncheck "Stem-and-leaf".
4. Click OK.

The Descriptives box gives descriptive statistics about the variable, including the values of skewness and kurtosis, each with its accompanying standard error. Both skewness and kurtosis are 0 in a normal distribution, so the farther away from 0, the more non-normal the distribution. The question is how much skew or kurtosis renders the data non-normal? This is an arbitrary determination, and sometimes difficult to interpret using the values of skewness and kurtosis alone.
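The skewness and kurtosis statistics SPSS reports can be computed directly. A minimal sketch with simulated (hypothetical) data, comparing a roughly normal sample against a positively skewed one:

```python
import numpy as np
from scipy import stats

# Hypothetical simulated data; the lab's real variables come from the SPSS file.
rng = np.random.default_rng(1)
roughly_normal = rng.normal(size=327)          # n = 327, as in the lab
positively_skewed = rng.exponential(size=327)  # scores pile left, tail to the right

# Note: scipy's kurtosis() reports *excess* kurtosis, so 0 indicates a
# normal distribution, matching the SPSS convention described above.
skew_normal = stats.skew(roughly_normal)
skew_pos = stats.skew(positively_skewed)
print(f"skewness (roughly normal):   {skew_normal:.2f}")
print(f"skewness (positively skewed): {skew_pos:.2f}")
print(f"excess kurtosis (roughly normal): {stats.kurtosis(roughly_normal):.2f}")

# The standard error of skewness is roughly sqrt(6/n); a common rule of
# thumb treats skewness beyond about twice its SE as a notable departure.
se_skew = np.sqrt(6 / 327)
print(f"approx. SE of skewness at n = 327: {se_skew:.3f}")
```

The comparison against twice the standard error is one common rule of thumb for the "how much is too much" question raised above, though, as the handout says, the cutoff remains a judgment call.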
Luckily, there are more objective tests of normality, described next.

Third, the descriptive statistics for skewness and kurtosis are not as informative as established tests of normality that take both skewness and kurtosis into account simultaneously. The Kolmogorov-Smirnov (K-S) test and the Shapiro-Wilk (S-W) test are designed to test normality by comparing your data to a normal distribution with the same mean and standard deviation as your sample:

1. Select Analyze - Descriptive Statistics - Explore.
2. Move all variables into the "Variable(s)" window.
3. Click "Plots", uncheck "Stem-and-leaf", and check "Normality plots with tests".
4. Click OK.

The "Tests of Normality" box gives the K-S and S-W test results. If the test is NOT significant, the data are normal, so any p-value above .05 indicates normality. If the test is significant (p less than .05), the data are non-normal. In this case, both tests indicate the data are non-normal. However, one limitation of the normality tests is that the larger the sample size, the more likely you are to get significant results; thus, you may get significant results with only slight deviations from normality. In this case, our sample size is large (n = 327), so the significance of the K-S and S-W tests may indicate only slight deviations from normality. You need to eyeball your data (using histograms) to determine for yourself whether the data rise to the level of non-normal.

The "Normal Q-Q Plot" provides a graphical way to determine the level of normality. The black line indicates the values your sample should adhere to if the distribution were normal. The dots are your actual data. If the dots fall exactly on the black line, your data are normal; if they deviate from the black line, your data are non-normal. In this case, you can see substantial deviation from the straight black line.

Fourth, if your data are non-normal, what are your options to deal with non-normality? You have four basic options.
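Both tests are also available outside SPSS. A hypothetical sketch with a simulated, clearly non-normal sample of the same size as the lab data (n = 327); note that standardizing with the sample's own mean and SD makes the plain K-S p-value approximate, which is why SPSS applies the Lilliefors correction to its K-S output:

```python
import numpy as np
from scipy import stats

# Hypothetical non-normal sample (simulated), n = 327 as in the handout.
rng = np.random.default_rng(2)
sample = rng.exponential(size=327)

# Shapiro-Wilk test of normality.
w_stat, p_sw = stats.shapiro(sample)

# Kolmogorov-Smirnov test against a normal distribution with the
# sample's own mean and standard deviation.
standardized = (sample - sample.mean()) / sample.std(ddof=1)
d_stat, p_ks = stats.kstest(standardized, "norm")

print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {p_sw:.2g}")
print(f"K-S:          D = {d_stat:.3f}, p = {p_ks:.2g}")
# p < .05 on either test -> reject the hypothesis that the data are normal.
```

With this strongly skewed sample, both p-values come out far below .05, mirroring the "non-normal" verdict described above; the same caveat about large samples flagging trivial deviations applies here too.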
a. Option 1 is to leave your data non-normal and conduct the parametric tests that rely upon the assumptions of normality. Just because your data are non-normal does not instantly invalidate the parametric tests. Normality (versus non-normality) is a matter of degree, not a strict cut-off point. Slight deviations from normality may render the parametric tests only slightly inaccurate. The issue is the degree to which the data are non-normal.

b. Option 2 is to leave your data non-normal and conduct the non-parametric tests designed for non-normal data.

c. Option 3 is to conduct "robust" tests. There is a growing branch of statistics called "robust" tests that are just as powerful as parametric tests but account for the non-normality of the data.

d. Option 4 is to transform the data. Transforming your data involves using mathematical formulas to modify the data into normality.

Fifth, how do you transform your data into "normal" data? There are different types of transformations, based upon the type of non-normality. For example, see the handout "Figure 8.1" on the last page of this document, which shows six types of non-normality (e.g., three positively skewed distributions that are moderate, substantial, and severe, and three negatively skewed distributions that are moderate, substantial, and severe). Figure 8.1 also shows the type of transformation for each.
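Figure 8.1 itself is not reproduced in this preview, but the most common transformation it covers, a log transform for positive skew, can be illustrated with hypothetical simulated data:

```python
import numpy as np
from scipy import stats

# Hypothetical substantially positively skewed variable (simulated).
rng = np.random.default_rng(3)
raw = rng.lognormal(mean=0.0, sigma=0.8, size=327)

# The usual remedy for substantial positive skew is a log transform.
# (Add a constant to the variable first if it contains zeros or
# negatives, since the log of those values is undefined.)
transformed = np.log(raw)

skew_before = stats.skew(raw)
skew_after = stats.skew(transformed)
print(f"skewness before transform: {skew_before:.2f}")
print(f"skewness after transform:  {skew_after:.2f}")
```

After transforming, you would rerun the checks above (histogram, skewness and kurtosis, K-S/S-W) to confirm the data now approximate normality; milder positive skew is often handled with a square-root transform instead.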
