




Chapter

The Bayesian Posterior Estimators under Six Loss Functions for Unrestricted and Restricted Parameter Spaces

Ying-Ying Zhang

Abstract

In this chapter, we investigate six loss functions. In particular, the squared error loss function and the weighted squared error loss function, which penalize overestimation and underestimation equally, are recommended for the unrestricted parameter space $(-\infty, \infty)$; Stein's loss function and the power-power loss function, which penalize gross overestimation and gross underestimation equally, are recommended for the positive restricted parameter space $(0, \infty)$; and the power-log loss function and Zhang's loss function, which penalize gross overestimation and gross underestimation equally, are recommended for $(0, 1)$. Among the six Bayesian estimators that minimize the corresponding posterior expected losses (PELs), there exist three strings of inequalities. However, a string of inequalities among the six smallest PELs does not exist. Moreover, we summarize three hierarchical models where the unknown parameter of interest belongs to $(0, \infty)$, namely the hierarchical normal and inverse gamma model, the hierarchical Poisson and gamma model, and the hierarchical normal and normal-inverse-gamma model. In addition, we summarize two hierarchical models where the unknown parameter of interest belongs to $(0, 1)$, namely the beta-binomial model and the beta-negative binomial model. For empirical Bayesian analysis of the unknown parameter of interest of a hierarchical model, we use two common methods to obtain estimators of the hyperparameters: the moment method and the maximum likelihood estimator (MLE) method.

Keywords: Bayesian estimators, power-log loss function, power-power loss function, restricted parameter spaces, Stein's loss function, Zhang's loss function

1. Introduction

In Bayesian analysis, there are four basic elements: the data, the model, the prior, and the loss function. A Bayesian estimator minimizes some posterior expected loss (PEL) function. We confine our interest to six loss functions in this chapter: the squared error loss function (well known), the weighted squared error loss function [1, p. 78], Stein's loss function [2-10], the power-power loss function [11], the power-log loss function [12], and Zhang's loss function [13].

It is worth noting that, among the six loss functions, the first and second are defined on $(-\infty, \infty)$ and penalize overestimation and underestimation equally. The third and fourth are defined on $(0, \infty)$ and penalize gross overestimation and gross underestimation equally; that is, an action $a$ suffers an infinite loss as it tends to $0$ or $\infty$. The fifth and sixth are defined on $(0, 1)$ and penalize gross overestimation and gross underestimation equally; that is, an action $a$ suffers an infinite loss as it tends to $0$ or $1$.
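Since four of the six loss functions appear with explicit formulas later in this section, a minimal sketch can make their boundary behavior concrete. The Python below is ours, not the chapter's (the function names are invented); the power-log and Zhang's loss functions are omitted because their closed forms are not reproduced in this excerpt.

```python
import numpy as np

# Sketch of the four loss functions whose closed forms appear in this
# section; theta is the true parameter, a is the action (estimator).

def squared_error_loss(theta, a):           # L2, theta in (-inf, inf)
    return (theta - a) ** 2

def weighted_squared_error_loss(theta, a):  # Lw2, theta in (-inf, inf)
    return (theta - a) ** 2 / theta ** 2

def stein_loss(theta, a):                   # Ls, theta in (0, inf)
    return a / theta - 1.0 - np.log(a / theta)

def power_power_loss(theta, a):             # Lp, theta in (0, inf)
    return a / theta + theta / a - 2.0

# Stein's loss and the power-power loss blow up as a -> 0 or a -> inf,
# i.e., they penalize gross under- and overestimation equally.
theta = 2.0
for a in (1e-6, 2.0, 1e6):
    print(a, stein_loss(theta, a), power_power_loss(theta, a))
```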
The squared error loss function and the weighted squared error loss function have been used by many authors for the problem of estimating the variance, $\sigma^2$, based on a random sample from a normal distribution with unknown mean (see, for instance, [14, 15]). As pointed out by [16], the two loss functions penalize overestimation and underestimation equally, which is fine for the unrestricted parameter space $(-\infty, \infty)$. For $(0, \infty)$, the positive restricted parameter space, where $0$ is a natural lower bound and the estimation problem is not symmetric, we should not choose the squared error loss function or the weighted squared error loss function, but rather a loss function which penalizes gross overestimation and gross underestimation equally; that is, an action $a$ should suffer an infinite loss as it tends to $0$ or $\infty$. Stein's loss function has this property, and thus it is recommended for $(0, \infty)$ by many researchers (e.g., see [2-10]). Moreover, [11] proposes the power-power loss function, which not only penalizes gross overestimation and gross underestimation equally but also has balanced convergence rates, or penalties, when its argument is too large or too small. Therefore, Stein's loss function and the power-power loss function are recommended for $(0, \infty)$.

Analogously, for the restricted parameter space $(0, 1)$, where $0$ and $1$ are two natural bounds and the estimation problem is not symmetric, we should not select the squared error loss function or the weighted squared error loss function, but rather a loss function which penalizes gross overestimation and gross underestimation equally; that is, an action $a$ should suffer an infinite loss as it tends to $0$ or $1$. It is worth noting that Stein's loss function and the power-power loss function are also not appropriate in this case. The power-log loss function proposed by [12] has this property. Moreover, [12] proposes six properties for a good loss function on $(0, 1)$: in particular, the power-log loss function is convex in its argument, attains its global minimum at the true unknown parameter, and penalizes gross overestimation and gross underestimation equally. Apart from the six properties, [13] proposes a seventh property for a good loss function on $(0, 1)$, namely balanced convergence rates, or penalties, when the argument is too large or too small. Therefore, the power-log loss function and Zhang's loss function are recommended for $(0, 1)$.

The rest of the chapter is organized as follows. In Section 2, we obtain two Bayesian estimators for $(-\infty, \infty)$ under the squared error loss function and the weighted squared error loss function. In Section 3, we obtain two Bayesian estimators for $(0, \infty)$ under Stein's loss function and the power-power loss function. In Section 4, we obtain two Bayesian estimators for $(0, 1)$ under the power-log loss function and Zhang's loss function. In Section 5, we summarize three strings of inequalities in a theorem. Some conclusions and discussions are provided in Section 6.

2. Bayesian estimation for $(-\infty, \infty)$

There are two loss functions which are defined on $(-\infty, \infty)$ and penalize overestimation and underestimation equally: the squared error loss function (well known) and the weighted squared error loss function (see [1, p. 78]).

2.1 Squared error loss function

The Bayesian estimator under the squared error loss function (well known), $\delta_2^{\pi}(x)$, minimizes the posterior expected squared error loss (PESEL), $E[L_2(\theta, a) \mid x]$, that is,

$$\delta_2^{\pi}(x) = \operatorname*{argmin}_{a \in \mathcal{A}} E[L_2(\theta, a) \mid x], \tag{1}$$

where $\mathcal{A} = \{a(x) : a(x) \in (-\infty, \infty)\}$ is the action space, $a = a(x) \in (-\infty, \infty)$ is an action (estimator),

$$L_2(\theta, a) = (\theta - a)^2 \tag{2}$$

is the squared error loss function, and $\theta \in (-\infty, \infty)$ is the unknown parameter of interest. The PESEL is easy to obtain (see [16]):

$$\mathrm{PESEL}(\pi, a \mid x) = E[L_2(\theta, a) \mid x] = a^2 - 2aE(\theta \mid x) + E(\theta^2 \mid x). \tag{3}$$

It is found in [16] that

$$\delta_2^{\pi}(x) = E(\theta \mid x) \tag{4}$$

by taking the partial derivative of the PESEL with respect to $a$ and setting it to $0$.
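As a quick numerical illustration of (1)-(4), one can check that the posterior mean minimizes a Monte Carlo estimate of the PESEL. A minimal sketch follows; the gamma posterior is only a stand-in of our choosing (any posterior with a finite second moment would do).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
post = rng.gamma(shape=3.0, scale=1.5, size=100_000)  # stand-in posterior draws

def pesel(a):
    return np.mean((post - a) ** 2)  # Monte Carlo estimate of (3)

a_star = minimize_scalar(pesel, bounds=(0.01, 20.0), method="bounded").x

# The numerical minimizer agrees with delta_2(x) = E(theta | x) from (4).
print(a_star, post.mean())
```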
2.2 Weighted squared error loss function

The Bayesian estimator under the weighted squared error loss function, $\delta_{w2}^{\pi}(x)$, minimizes the posterior expected weighted squared error loss (PEWSEL) (see [1]), $E[L_{w2}(\theta, a) \mid x]$, that is,

$$\delta_{w2}^{\pi}(x) = \operatorname*{argmin}_{a \in \mathcal{A}} E[L_{w2}(\theta, a) \mid x], \tag{5}$$

where $\mathcal{A} = \{a(x) : a(x) \in (-\infty, \infty)\}$ is the action space, $a = a(x) \in (-\infty, \infty)$ is an action (estimator),

$$L_{w2}(\theta, a) = \frac{(\theta - a)^2}{\theta^2} \tag{6}$$

is the weighted squared error loss function, and $\theta \in (-\infty, \infty)$ is the unknown parameter of interest. The PEWSEL is easy to obtain (see [1]):

$$\mathrm{PEWSEL}(\pi, a \mid x) = E[L_{w2}(\theta, a) \mid x] = a^2 E\!\left(\frac{1}{\theta^2} \,\Big|\, x\right) - 2aE\!\left(\frac{1}{\theta} \,\Big|\, x\right) + 1. \tag{7}$$

It is found in [1] that

$$\delta_{w2}^{\pi}(x) = \frac{E(1/\theta \mid x)}{E(1/\theta^2 \mid x)} \tag{8}$$

by taking the partial derivative of the PEWSEL with respect to $a$ and setting it to $0$.

3. Bayesian estimation for $(0, \infty)$

There are many hierarchical models where the parameter of interest is $\theta \in (0, \infty)$. As pointed out in the introduction, we should calculate and use the Bayesian estimator of the parameter under Stein's loss function or the power-power loss function, because they penalize gross overestimation and gross underestimation equally. We list several such hierarchical models as follows.

Model (a) (hierarchical normal and inverse gamma model). This hierarchical model has been investigated by [17, 16, 10]. Suppose that we observe $X_1, X_2, \ldots, X_n$ from the hierarchical normal and inverse gamma model:

$$X_i \mid \theta \overset{\mathrm{iid}}{\sim} N(\mu, \theta), \quad i = 1, 2, \ldots, n, \qquad \theta \sim IG(\alpha, \beta), \tag{9}$$

where $\mu \in (-\infty, \infty)$ and $\alpha, \beta > 0$ are known constants, $\theta > 0$ is the unknown parameter of interest, $N(\mu, \theta)$ is the normal distribution, and $IG(\alpha, \beta)$ is the inverse gamma distribution. It is worth noting that the problem of finding the Bayesian rule under a conjugate prior is a standard one, treated in almost every text on mathematical statistics. The idea of selecting an appropriate prior from the conjugate family was put forward by [18]. Specifically, Bayesian estimation of $\theta$ under the prior $IG(\alpha, \beta)$ is studied in Example 4.2.5 (p. 236) of [17] and in Exercise 7.23 (p. 359) of [16]. However, they only calculate the Bayesian estimator with respect to the $IG(\alpha, \beta)$ prior under the squared error loss, $\delta_2^{\pi}(x) = E(\theta \mid x)$.

Model (b) (hierarchical Poisson and gamma model). This hierarchical model has been investigated by [19, 16, 1, 20]. Suppose that $X_1, X_2, \ldots, X_n$ are observed from the hierarchical Poisson and gamma model:

$$X_i \mid \lambda \overset{\mathrm{iid}}{\sim} P(\lambda), \quad i = 1, 2, \ldots, n, \qquad \lambda \sim G(\alpha, \beta), \tag{10}$$

where $\alpha > 0$ and $\beta > 0$ are hyperparameters to be determined, $P(\lambda)$ is the Poisson distribution with an unknown mean $\lambda > 0$, and $G(\alpha, \beta)$ is the gamma distribution with an unknown shape parameter $\alpha$ and an unknown rate parameter $\beta$. The gamma prior $G(\alpha, \beta)$ is a conjugate prior for the Poisson model, so the posterior distribution of $\lambda$ is also a gamma distribution. The hierarchical Poisson and gamma model (10) has been considered in Exercise 4.32 (p. 196) of [16]. It has been shown that the marginal distribution of $X$ is a negative binomial distribution if $\alpha$ is a positive integer. The Bayesian estimation of $\lambda$ under the gamma prior is studied in [19] and in Tables 3.3.1 (p. 121) and 4.2.1 (p. 176) of [1]. However, they only calculated the Bayesian posterior estimator of $\lambda$ under the squared error loss function.
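To make Model (b) concrete, a brief sketch of the conjugate update and the resulting Bayesian estimators under the squared error loss and Stein's loss follows. It assumes the rate parameterization of $G(\alpha, \beta)$, with density proportional to $\lambda^{\alpha-1}e^{-\beta\lambda}$, which may differ from the chapter's convention.

```python
import numpy as np

# Sketch of Model (b): Poisson data with a conjugate gamma prior, under
# the rate parameterization G(alpha, beta) (an assumption of this sketch).
rng = np.random.default_rng(1)
alpha, beta, n = 2.0, 1.0, 50
lam_true = rng.gamma(alpha, 1.0 / beta)      # one draw from the prior
x = rng.poisson(lam_true, size=n)

# Conjugacy: lambda | x ~ G(alpha + sum(x), beta + n).
a_post, b_post = alpha + x.sum(), beta + n

delta_2 = a_post / b_post            # posterior mean, squared error loss
delta_s = (a_post - 1.0) / b_post    # 1 / E(1/lambda | x), Stein's loss

# delta_s < delta_2 always (when a_post > 1), one of the inequalities
# illustrated for this model later in the section.
print(lam_true, delta_2, delta_s)
```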
Model (c) (hierarchical normal and normal-inverse-gamma model). This hierarchical model has been investigated by [2, 21, 22]. Let the observations $X_1, X_2, \ldots, X_n$ be from the hierarchical normal and normal-inverse-gamma model:

$$X_i \mid \mu, \sigma^2 \overset{\mathrm{iid}}{\sim} N(\mu, \sigma^2), \quad i = 1, 2, \ldots, n, \qquad \mu \mid \sigma^2 \sim N(\mu_0, \sigma^2/\kappa_0), \qquad \sigma^2 \sim IG(\nu_0/2, \nu_0\sigma_0^2/2), \tag{11}$$

where $\mu_0 \in (-\infty, \infty)$ and $\kappa_0, \nu_0, \sigma_0^2 > 0$ are known hyperparameters, $N(\mu, \sigma^2)$ is a normal distribution with an unknown mean $\mu$ and an unknown variance $\sigma^2$, $\mu \mid \sigma^2$ follows the normal distribution $N(\mu_0, \sigma^2/\kappa_0)$, and $\sigma^2$ follows the inverse gamma distribution $IG(\nu_0/2, \nu_0\sigma_0^2/2)$. More specifically, with the joint conjugate prior $(\mu, \sigma^2) \sim N\text{-}IG(\mu_0, \kappa_0, \nu_0, \sigma_0^2)$, which is the normal-inverse-gamma distribution, the posterior distribution of $\sigma^2$ was studied in Example 1.5.1 (p. 20) of [21] and Part I (pp. 69-70) of [22]. However, they did not provide any Bayesian posterior estimator of $\sigma^2$. Moreover, the normal distribution with a normal-inverse-gamma prior, which assumes that $\mu$ is unknown, is more realistic than the normal distribution with an inverse gamma prior investigated by [10], which assumes that $\mu$ is known.

3.1 Stein's loss function

3.1.1 One-dimensional case

The Bayesian estimator under Stein's loss function, $\delta_s^{\pi}(x)$, minimizes the posterior expected Stein's loss (PESL) (see [1, 10, 16]), $E[L_s(\theta, a) \mid x]$, that is,

$$\delta_s^{\pi}(x) = \operatorname*{argmin}_{a \in \mathcal{A}} E[L_s(\theta, a) \mid x], \tag{12}$$

where $\mathcal{A} = \{a(x) : a(x) > 0\}$ is the action space, $a = a(x) > 0$ is an action (estimator),

$$L_s(\theta, a) = \frac{a}{\theta} - 1 - \log\frac{a}{\theta} \tag{13}$$

is Stein's loss function, and $\theta > 0$ is the unknown parameter of interest. The PESL is easy to obtain (see [10]):

$$\mathrm{PESL}(\pi, a \mid x) = E[L_s(\theta, a) \mid x] = aE\!\left(\frac{1}{\theta} \,\Big|\, x\right) - 1 - \log a + E(\log\theta \mid x). \tag{14}$$

It is found in [10] that

$$\delta_s^{\pi}(x) = \frac{1}{E(1/\theta \mid x)} \tag{15}$$

by taking the partial derivative of the PESL with respect to $a$ and setting it to $0$. The PESLs evaluated at the Bayesian estimators are (see [10])

$$\mathrm{PESL}_s(x) = E[L_s(\theta, a) \mid x]\big|_{a = \delta_s^{\pi}(x)}, \qquad \mathrm{PESL}_2(x) = E[L_s(\theta, a) \mid x]\big|_{a = \delta_2^{\pi}(x)}, \tag{16}$$

where $\delta_2^{\pi}(x) = E(\theta \mid x)$ is the Bayesian estimator under the squared error loss function.

For the variance parameter $\theta$ of the hierarchical normal and inverse gamma model (9), [10] recommends and analytically calculates the Bayesian estimator

$$\delta_s^{\pi}(x) = \frac{1}{\alpha^* \beta^*}, \tag{17}$$

where

$$\alpha^* = \alpha + \frac{n}{2} \quad \text{and} \quad \beta^* = \left[\frac{1}{\beta} + \frac{1}{2}\sum_{i=1}^n (x_i - \mu)^2\right]^{-1}, \tag{18}$$

with respect to the $IG(\alpha, \beta)$ prior under Stein's loss function. This estimator minimizes the PESL. [10] also analytically calculates the Bayesian estimator

$$\delta_2^{\pi}(x) = E(\theta \mid x) = \frac{1}{(\alpha^* - 1)\beta^*}, \tag{19}$$

with respect to the $IG(\alpha, \beta)$ prior under the squared error loss, and the corresponding PESL. [10] notes that

$$E(\log\theta \mid x) = -\log\beta^* - \psi(\alpha^*), \tag{20}$$

which is essential for the calculation of

$$\mathrm{PESL}_s(x) = \log\alpha^* - \psi(\alpha^*) \tag{21}$$

and

$$\mathrm{PESL}_2(x) = \frac{1}{\alpha^* - 1} + \log(\alpha^* - 1) - \psi(\alpha^*), \tag{22}$$

and depends on the digamma function $\psi(\cdot)$. Finally, the numerical simulations exemplify that $\mathrm{PESL}_s(x)$ and $\mathrm{PESL}_2(x)$ depend only on $\alpha$ and $n$ and do not depend on $\beta$, $\mu$, and $x$; that the estimators $\delta_s^{\pi}(x)$ are unanimously smaller than the estimators $\delta_2^{\pi}(x)$; and that $\mathrm{PESL}_s(x)$ are unanimously smaller than $\mathrm{PESL}_2(x)$.
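The closed forms (17)-(22) are straightforward to evaluate numerically. The sketch below assumes the chapter's $IG(\alpha, \beta)$ convention, under which $E(1/\theta \mid x) = \alpha^*\beta^*$; the chosen hyperparameter values are ours.

```python
import numpy as np
from scipy.special import digamma

# Closed forms (17)-(22) for Model (a), assuming the chapter's IG(alpha,
# beta) convention in which 1/theta ~ Gamma(shape=alpha, scale=beta).
rng = np.random.default_rng(2)
alpha, beta, mu, n = 3.0, 2.0, 0.0, 30
theta_true = 1.0 / rng.gamma(alpha, beta)        # one draw from IG(alpha, beta)
x = rng.normal(mu, np.sqrt(theta_true), size=n)  # theta is the variance

alpha_star = alpha + n / 2.0                                   # (18)
beta_star = 1.0 / (1.0 / beta + 0.5 * np.sum((x - mu) ** 2))   # (18)

delta_s = 1.0 / (alpha_star * beta_star)                       # (17)
delta_2 = 1.0 / ((alpha_star - 1.0) * beta_star)               # (19)

pesl_s = np.log(alpha_star) - digamma(alpha_star)              # (21)
pesl_2 = (1.0 / (alpha_star - 1.0) + np.log(alpha_star - 1.0)
          - digamma(alpha_star))                               # (22)

# The two inequalities hold for any data, and the PESLs depend on the
# data only through alpha* = alpha + n/2, i.e., on alpha and n alone.
print(delta_s < delta_2, pesl_s < pesl_2)
```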
For the hierarchical Poisson and gamma model (10), [20] first calculates the posterior distribution of $\lambda$, $\pi(\lambda \mid x)$, and the marginal pmf of $x$, $m(x)$, in Theorem 1 of their paper. [20] then calculates the Bayesian posterior estimators $\delta_s^{\pi}(x)$ and $\delta_2^{\pi}(x)$, and the PESLs $\mathrm{PESL}_s(x)$ and $\mathrm{PESL}_2(x)$, which satisfy two inequalities. After that, the estimators of the hyperparameters of the model (10) by the moment method, $\hat{\alpha}_{1n}$ and $\hat{\beta}_{1n}$, are summarized in Theorem 2 of their paper. Moreover, the estimators of the hyperparameters of the model (10) by the maximum likelihood estimator (MLE) method, $\hat{\alpha}_{2n}$ and $\hat{\beta}_{2n}$, are summarized in Theorem 3 of their paper. Finally, the empirical Bayesian estimators of the parameter $\lambda$ of the model (10) under Stein's loss function by the moment method and the MLE method are summarized in Theorem 4 of their paper. In numerical simulations, [20] illustrates the two inequalities of the Bayesian posterior estimators and the PESLs, the consistency of the moment estimators and the MLEs as estimators of the hyperparameters, and the goodness of fit of the model to the simulated data. The numerical results indicate that the MLEs are better than the moment estimators when estimating the hyperparameters. Finally, [20] exploits the attendance data on 314 high school juniors from two urban high schools to illustrate their theoretical studies.

For the variance parameter $\sigma^2$ of the normal distribution with a normal-inverse-gamma prior (11), [23] recommends and analytically calculates the Bayesian posterior estimator, $\delta_s^{\pi}(x)$, with respect to the conjugate prior $\mu \mid \sigma^2 \sim N(\mu_0, \sigma^2/\kappa_0)$ and $\sigma^2 \sim IG(\nu_0/2, \nu_0\sigma_0^2/2)$ under Stein's loss function, which penalizes gross overestimation and gross underestimation equally. This estimator minimizes the PESL. As comparisons, the Bayesian posterior estimator, $\delta_2^{\pi}(x) = E(\sigma^2 \mid x)$, with respect to the same conjugate prior under the squared error loss function, and the PESL at $\delta_2^{\pi}(x)$, are calculated. The calculations of $\delta_s^{\pi}(x)$, $\delta_2^{\pi}(x)$, $\mathrm{PESL}_s(x)$, and $\mathrm{PESL}_2(x)$ depend only on $E(\sigma^2 \mid x)$, $E(\sigma^{-2} \mid x)$, and $E(\log\sigma^2 \mid x)$. The numerical simulations exemplify their theoretical studies: the PESLs depend only on $\nu_0$ and $n$, but do not depend on $\kappa_0$, $\mu_0$, $\sigma_0^2$, and especially $x$; the estimators $\delta_2^{\pi}(x)$ are unanimously larger than the estimators $\delta_s^{\pi}(x)$; and $\mathrm{PESL}_2(x)$ are unanimously larger than $\mathrm{PESL}_s(x)$. Finally, [23] calculates the Bayesian posterior estimators and the PESLs for the monthly simple returns of the Shanghai Stock Exchange (SSE) Composite Index, which also exemplify the theoretical studies of the two inequalities of the Bayesian posterior estimators and the PESLs.

3.1.2 Multidimensional case

For estimating a covariance matrix $\Sigma$, which is assumed to be positive definite, many researchers exploit the multidimensional Stein's loss function (e.g., see [2, 8, 24-31]). The multidimensional Stein's loss function (see [2]) is originally defined for estimating the $p \times p$ unknown covariance matrix $\Sigma$ by $\hat{\Sigma}$ with the loss function

$$L(\Sigma, \hat{\Sigma}) = \operatorname{tr}(\hat{\Sigma}\Sigma^{-1}) - \log\det(\hat{\Sigma}\Sigma^{-1}) - p. \tag{23}$$

When $p = 1$, the multidimensional Stein's loss function reduces to

$$L_s(\sigma^2, a) = \frac{a}{\sigma^2} - \log\frac{a}{\sigma^2} - 1, \tag{24}$$

which is in the form of (13), the one-dimensional Stein's loss function.

3.2 Power-power loss function

The Bayesian estimator under the power-power loss function, $\delta_p^{\pi}(x)$, minimizes the posterior expected power-power loss (PEPL) (see [11]), $E[L_p(\theta, a) \mid x]$, that is,

$$\delta_p^{\pi}(x) = \operatorname*{argmin}_{a \in \mathcal{A}} E[L_p(\theta, a) \mid x], \tag{25}$$

where $\mathcal{A} = \{a(x) : a(x) > 0\}$ is the action space, $a = a(x) > 0$ is an action (estimator),

$$L_p(\theta, a) = \frac{a}{\theta} + \frac{\theta}{a} - 2 \tag{26}$$

is the power-power loss function, and $\theta > 0$ is the unknown parameter of interest. The PEPL is easy to obtain (see [11]):

$$\mathrm{PEPL}(\pi, a \mid x) = E[L_p(\theta, a) \mid x] = aE\!\left(\frac{1}{\theta} \,\Big|\, x\right) + \frac{1}{a}E(\theta \mid x) - 2. \tag{27}$$

It is found in [11] that

$$\delta_p^{\pi}(x) = \sqrt{\frac{E(\theta \mid x)}{E(1/\theta \mid x)}} \tag{28}$$

by taking the partial derivative of the PEPL with respect to $a$ and setting it to $0$.
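Given the posterior moments, the power-power estimator has the closed form (28) and can be cross-checked by direct minimization of (27). A sketch follows, again with a gamma posterior of our choosing as a stand-in:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# The power-power estimator from posterior draws, plus a numerical check
# that it minimizes the Monte Carlo PEPL (27).
rng = np.random.default_rng(3)
post = rng.gamma(shape=4.0, scale=0.5, size=100_000)  # draws of theta | x

e_theta, e_inv = post.mean(), (1.0 / post).mean()
delta_p = np.sqrt(e_theta / e_inv)                    # closed form (28)

def pepl(a):
    return a * e_inv + e_theta / a - 2.0              # PEPL (27)

a_star = minimize_scalar(pepl, bounds=(1e-6, 50.0), method="bounded").x

print(delta_p, a_star)                # agree up to optimizer tolerance
print(1.0 / e_inv, delta_p, e_theta)  # delta_s <= delta_p <= delta_2
```

Note that $\delta_p^{\pi}(x) = \sqrt{\delta_2^{\pi}(x)\,\delta_s^{\pi}(x)}$, the geometric mean of the squared-error-loss and Stein's-loss estimators, so it always lies between them; this is consistent with the strings of inequalities summarized in Section 5.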