模式识别3课件 (Pattern Recognition, Lecture 3 Courseware)

Slide 1. Pattern Recognition

Slide 2. Character Recognition

Slide 3. Feature space
- Circularity of a character; length-to-width (aspect) ratio of a character
- Mapping the input pattern onto points in a feature space

Slide 4. Character Recognition
- Once the input pattern is mapped onto points in a feature space, the purpose of classification is to assign each point in the space a class label.

Slide 5. Pattern recognition system

Slide 6. Feature space (figure: width vs. length, showing decision regions, decision boundaries, and the discriminant function)

Slide 7. Pattern Recognition
- Assign each point in the space a class label:
- statistical methods
- artificial neural network methods
- structural methods

Slide 8. Bayes theorem in general

Slide 9. Bayes decision: minimum error
- The probability of misclassification is minimized by selecting the class having the largest posterior probability.

Slide 10. Bayes decision: minimum risk
- If the likelihood ratio of the two classes exceeds a threshold value (one that is independent of the input pattern), the optimal action is to decide in favor of the first class.
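The minimum-error rule and its likelihood-ratio form can be sketched as follows; the 1-D Gaussian class-conditional densities and the equal priors are illustrative assumptions, not taken from the slides:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """1-D normal density, used here as the class-conditional p(x | w_i)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bayes_decide(x, priors, likelihoods):
    """Minimum-error-rate rule: pick the class with the largest posterior.

    priors[i] = P(w_i); likelihoods[i](x) = p(x | w_i).
    The evidence p(x) is a common factor, so comparing prior * likelihood suffices.
    """
    scores = [p * lik(x) for p, lik in zip(priors, likelihoods)]
    return max(range(len(scores)), key=scores.__getitem__)

# Two hypothetical classes: w0 ~ N(0, 1), w1 ~ N(3, 1), equal priors.
priors = [0.5, 0.5]
liks = [lambda x: gaussian_pdf(x, 0.0, 1.0),
        lambda x: gaussian_pdf(x, 3.0, 1.0)]

def ratio_decide(x):
    """Equivalent likelihood-ratio form: decide w0 iff
    p(x|w0)/p(x|w1) > P(w1)/P(w0), a threshold independent of x."""
    return 0 if liks[0](x) / liks[1](x) > priors[1] / priors[0] else 1
```

With equal priors the threshold is 1, so both forms make the same decision at every input.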

Slide 11. Discriminant functions
- Bayesian decision with the normal density: what form does the discriminant function take?

Slide 12. Discriminant functions, Case 1: Linear Discriminant Function (LDF)

Slide 13. Discriminant functions, Case 2: Linear Discriminant Function (LDF)

Slide 14. Discriminant functions, Case 3: Quadratic Discriminant Function (QDF)
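A minimal numpy sketch of the Gaussian discriminant g_i(x) = -1/2 (x-mu_i)^T Sigma_i^{-1} (x-mu_i) - 1/2 ln|Sigma_i| + ln P(w_i). The correspondence assumed here (shared covariance gives an LDF, arbitrary per-class covariance gives a QDF) follows the later classifier slides; the class parameters below are made up for illustration:

```python
import numpy as np

def gaussian_discriminant(x, mu, Sigma, prior):
    """g_i(x) for a Gaussian class-conditional density.

    With an arbitrary Sigma per class this is quadratic in x (QDF); when all
    classes share one Sigma the quadratic term is common and cancels in
    comparisons, so the rule reduces to a linear discriminant (LDF).
    """
    d = x - mu
    inv = np.linalg.inv(Sigma)
    return (-0.5 * d @ inv @ d
            - 0.5 * np.log(np.linalg.det(Sigma))
            + np.log(prior))

def classify(x, mus, Sigmas, priors):
    """Assign x to the class whose discriminant score is largest."""
    scores = [gaussian_discriminant(x, m, S, p)
              for m, S, p in zip(mus, Sigmas, priors)]
    return int(np.argmax(scores))
```

For example, with two classes at means (0, 0) and (4, 0), identity covariances, and equal priors, points left of the midline go to class 0 and points right of it to class 1.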

Slide 15. Overview
- Pattern classification problem
- Feature space, feature point in space
- Classification: Bayesian decision theory; discriminant function; decision region, decision boundary

Slide 16. Example
- Write the minimum-error-rate Bayes decision rule for each of the following two cases:
- A: (1) If … (2) If …

Slides 17-19. Example
- For the two-class problem, prove that the minimum-risk Bayes decision rule can be expressed as: … (A: derivation on the slides)

Slide 20. Example
- Show that under the 0-1 loss, the minimum-risk Bayes decision rule is identical to the minimum-error-rate Bayes decision rule.
- A: minimizing the risk reduces to maximizing the posterior.

Chapter 3. Maximum-likelihood and Bayesian parameter estimation
- Introduction
- Maximum-likelihood estimation
- Bayesian parameter estimation
- Gaussian classifier

Slide 22. Introduction
- We could design an optimal classifier if we knew the prior probabilities and the class-conditional densities.
- Unfortunately, we rarely, if ever, have this kind of complete knowledge about the probabilistic structure of the problem.

Slide 23. Probability density estimation
- Three alternative approaches to density estimation: parametric methods, non-parametric methods, semi-parametric methods

Slide 24. Probability density estimation: parametric methods
- A specific functional form is assumed, with a number of parameters that are then optimized by fitting the model to the data set.
- Drawback: the chosen functional form may be incapable of representing the true density.

Slide 25. Probability density estimation: non-parametric methods
- No functional form is assumed; the density is determined entirely by the data.
- Drawbacks: the number of parameters grows with the size of the data set; evaluation is slow.

Slide 26. Probability density estimation: semi-parametric methods
- Use a very general functional form in which the number of adaptive parameters can be increased in a systematic way, building more flexible models (e.g., neural networks).

Slide 27. Probability density estimation: parametric methods
- Maximum likelihood estimation
- Bayesian estimation

Slide 28. Maximum likelihood estimation
- c data sets, each containing samples drawn independently according to the corresponding class-conditional probability law.
- We assume the density has a known parametric form and is therefore determined uniquely by the value of a parameter vector θ.

Slide 29. Maximum likelihood estimation
- Suppose D contains n samples. Because the samples are drawn independently, the likelihood factorizes over the samples.

Slides 30-31. Maximum likelihood estimation
- p(D|θ) is called the likelihood of θ; the maximum-likelihood estimate of θ is the value that maximizes p(D|θ).

Slides 32-36. Example: the Gaussian case
- Unknown μ (slides 32-33); unknown μ and Σ (slides 34-36).

Slide 42. Gaussian mixture

Slide 43. Bayes estimation
- Whereas maximum-likelihood methods view the true parameter vector θ as fixed, Bayesian methods treat θ as a random variable, and the training data allow us to convert a distribution on this variable into a posterior probability density.

Slide 44. Pattern recognition system
- Gaussian density; Gaussian classifier; estimate the mean vector and covariance matrix.

Slide 45. Gaussian classifiers
- Probability density function; classification function.

Slide 46. Gaussian classifiers
- Assuming independent features with equal variance: the nearest-distance (nearest-mean) rule, which is also a linear discriminant function (LDF).

Slide 47. Gaussian classifiers
- Assuming equal covariance matrices: linear discriminant function (LDF).

Slide 48. Gaussian classifiers
- Assuming arbitrary covariance matrices and equal prior probabilities: quadratic discriminant function (QDF); decision surface.

Slides 49-50. Gaussian classifiers
- Parameter estimation of the Gaussian density: maximum likelihood (ML).

Slide 51. Gaussian classifiers
- The shared covariance matrix case.
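The ML parameter estimates for the Gaussian case above can be sketched in a few lines; the function name is mine, but the estimators are the standard sample-mean and 1/n-scaled scatter-matrix formulas:

```python
import numpy as np

def ml_gaussian(X):
    """Maximum-likelihood estimates for a multivariate Gaussian.

    mu_hat    = (1/n) * sum_k x_k
    Sigma_hat = (1/n) * sum_k (x_k - mu_hat)(x_k - mu_hat)^T

    Note the 1/n factor: the ML covariance estimate is biased (the
    unbiased sample covariance divides by n - 1 instead).
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    mu = X.mean(axis=0)
    centered = X - mu
    Sigma = centered.T @ centered / n
    return mu, Sigma
```

Plugging these estimates into the Gaussian discriminant function yields the Gaussian classifier discussed on the following slides.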

Slide 52. Gaussian classifiers
- Are parametric classifiers impractical? In practice, the probability distributions of many classes are approximately Gaussian.
- Even when a distribution deviates considerably from Gaussian, a parametric classifier still works well when the feature dimensionality is high and the training samples are few (the curse of dimensionality); sometimes the LDF even outperforms the QDF.
- Benefits of ML estimation: low training cost (linear in the number of classes and the number of samples); in high dimensions, dimensionality reduction (feature selection or transformation) is often beneficial.

Slide 53. Gaussian classifiers
- Improving the Gaussian classifier. Problems with the QDF: too many parameters (proportional to the square of the dimensionality); with few training samples the covariance matrix becomes singular; even when it is nonsingular, the ML estimate generalizes poorly.
- Regularized discriminant analysis (RDA) overcomes the singularity by smoothing the covariance matrix and improves the generalization performance at the same time.
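One common form of the covariance smoothing that RDA performs can be sketched as below: shrink each class covariance toward the pooled (shared) covariance, then toward a scaled identity. The mixing weights `lam` and `gamma` are illustrative parameters, not values from the slides:

```python
import numpy as np

def rda_covariance(Sigma_k, Sigma_pooled, lam=0.5, gamma=0.5):
    """RDA-style smoothing of a class covariance matrix.

    Step 1 shrinks the class covariance toward the pooled covariance;
    step 2 shrinks the result toward a scaled identity. Both steps keep
    the matrix well conditioned even when the class has few samples.
    lam and gamma in [0, 1] are the (here hypothetical) mixing weights.
    """
    d = Sigma_k.shape[0]
    S = (1.0 - lam) * Sigma_k + lam * Sigma_pooled
    S = (1.0 - gamma) * S + gamma * (np.trace(S) / d) * np.eye(d)
    return S
```

For example, a rank-deficient class covariance (singular, so the QDF cannot invert it) becomes nonsingular after smoothing, which is exactly the failure mode slide 53 describes.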
