This chapter uses James P. LeSage's econometrics toolbox for MATLAB (download address: /html/recent.html). The manual he wrote for it, Applied Econometrics using MATLAB, is very good.

Section 8  Identification and Estimation of Linear Simultaneous Equations
(For this chapter see also Yu Junnian's econometrics textbook.)

7.1 Identification of linear simultaneous equations

7.1.1 The order condition (a necessary condition)
A necessary condition for any equation of the model to be identified is that the number of predetermined variables excluded from that equation be at least the number of endogenous variables it includes minus 1. (The number of excluded predetermined variables equals the total number of predetermined variables in the model minus the number included in the equation.) With equality the equation is exactly identified; with strict inequality it is overidentified.

7.1.2 The rank condition (a necessary and sufficient condition)
An equation is identified if and only if the coefficient matrix of the variables (endogenous and predetermined) that this equation excludes but the other equations include has rank equal to the total number of equations minus 1. Identities (definitional and balance equations) need not be checked for identification. In practice, first use the rank condition to decide whether an equation is identified at all, then use the order condition to decide whether an identified equation is exactly identified or overidentified.

An example borrowed from the help files of the Chinese package 马克威 (Markway):

Example: a Chinese macroeconomic model of three equations, with three endogenous variables, gross domestic product Y, total household consumption Ct, and total investment I, and two predetermined variables: government consumption G (net exports are folded into G so that the data balance) and lagged household consumption Ct-1. The complete structural model is:
[the structural equations appeared here as an image and did not survive extraction]

1. First apply the rank condition. Write down the coefficient matrix of all variables (endogenous and predetermined). For the first equation:

r = rank([-1 0; 1 1])
r = 2

which equals 3 - 1 = 2, so the equation is identified. For the second equation:

syms a2                          % treat a2 as a symbol
r = rank([-1 0 a2; 1 1 0])
r = 2

or, equivalently,

r = rank(sym([-1 0 a2; 1 1 0]))
r = 2

again equal to 3 - 1 = 2, so this equation is identified too. Indeed

det(sym([-1 a2; 1 0]))
ans = -a2

2. Then apply the order condition. The first equation excludes 1 predetermined variable and includes 2 endogenous variables; since 1 = 2 - 1, it is exactly identified. The second equation excludes 2 predetermined variables and includes 2 endogenous variables; since 2 > 2 - 1, it is overidentified. The model as a whole is therefore identifiable.

7.2 Estimation of linear simultaneous equations

7.2.1 Estimation methods for linear simultaneous equations

1. Single-equation methods:

(1) Ordinary least squares (OLS). When OLS is applied to a simultaneous structural equation, the parameter estimates are biased and inconsistent. A recursive model, however, may be estimated by OLS, and there the estimates are unbiased and consistent.

(2) Indirect least squares (ILS). Conditions for use: (a) the structural equation must be exactly identified, since only then does the reduced form imply a unique set of structural parameters; (b) the disturbance of every reduced-form equation must satisfy the classical least-squares assumptions; (c) the predetermined variables must not be highly multicollinear. The ILS estimator is biased but consistent.

(3) Instrumental variables (IV). The main steps are as follows. First, choose suitable instruments. In a simultaneous-equations model an instrument should: (1) be strongly correlated with the endogenous explanatory variable in question; (2) be genuinely predetermined, and hence uncorrelated with the structural disturbance; (3) have little correlation with the other predetermined variables of the equation, to avoid multicollinearity; and (4) when more than one instrument is used in the same structural equation, not be highly collinear with the other instruments. The model's predetermined variables generally satisfy these conditions, so every predetermined variable is a candidate instrument for an endogenous explanatory variable. The number of instruments must equal the number of endogenous explanatory variables; a predetermined variable that already appears in the structural equation serves as its own instrument. See Gujarati, p. 637, equation (17.9.1). Second, multiply the structural equation through by each instrument, sum over the sample observations to obtain as many linear equations as there are unknown parameters, and solve that system for the structural parameter estimates. The IV estimator is biased but consistent. IV applies only to exactly identified equations.

(4) Two-stage least squares (2SLS). 2SLS applies to overidentified as well as exactly identified equations; in the exactly identified case it produces the same estimates as indirect least squares. The 2SLS estimator is biased but consistent. Conditions for use: (1) the structural equation is identified; (2) the structural disturbances satisfy the classical regression assumptions; since each reduced-form disturbance is a linear combination of the structural disturbances, the reduced-form disturbances then satisfy them as well; (3) all predetermined variables are correctly specified, with no high multicollinearity among them.

(5) Limited-information maximum likelihood (LIML). The LIML estimator is biased but consistent.

2. System methods:

(1) Three-stage least squares (3SLS). 3SLS is a generalized least squares method; its estimator is biased but consistent, and it is more efficient than 2SLS because it uses more information in the estimation. Conditions for use: (1) every equation of the model is correctly specified and identified; (2) the disturbances satisfy the classical assumptions, and disturbances of different equations in different periods are uncorrelated; (3) all identities (definitional and balance equations) are removed from the simultaneous system.

(2) Full-information maximum likelihood (FIML). The FIML estimator is biased but consistent.
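Before turning to the toolbox routines, it may help to see the two stages of 2SLS written out directly. The following is a minimal sketch, not toolbox code; the names y, Y1, X1 and Xall are placeholders for the dependent variable, the included endogenous regressors, the included exogenous regressors, and all predetermined variables of the system.

```matlab
% Minimal 2SLS by hand for one structural equation  y = [Y1 X1]*beta + u.
% Stage 1: regress each endogenous regressor on all predetermined
% variables of the system and keep the fitted values.
Y1hat = Xall*(Xall\Y1);      % projection of Y1 on the column space of Xall
% Stage 2: ordinary least squares of y on the fitted values and X1.
beta = [Y1hat X1]\y;         % the 2SLS estimates
% Note: a proper routine (such as tsls) computes residuals as
% y - [Y1 X1]*beta, using the original Y1, so its sigma^2 and t-statistics
% differ from those of a naive second-stage OLS printout.
```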
Command: two-stage least squares

function results=tsls(y,y1,x1,xall)
% PURPOSE: computes Two-Stage Least-squares Regression
% ---------------------------------------------------
% USAGE: results = tsls(y,yendog,xexog,xall)
% where: y      = dependent variable vector (nobs x 1)
%        yendog = endogenous variables matrix (nobs x g)
%        xexog  = exogenous variables matrix for this equation
%        xall   = all exogenous and lagged endogenous variables
%                 in the system
% ---------------------------------------------------
% RETURNS: a structure
%        results.meth   = 'tsls'
%        results.bhat   = bhat estimates
%        results.tstat  = t-statistics
%        results.yhat   = yhat predicted values
%        results.resid  = residuals
%        results.sige   = e'*e/(n-k)
%        results.rsqr   = rsquared
%        results.rbar   = rbar-squared
%        results.dw     = Durbin-Watson Statistic
%        results.nobs   = nobs
%        results.nendog = # of endogenous
%        results.nexog  = # of exogenous
%        results.nvar   = results.nendog + results.nexog
%        results.y      = y data vector
% ---------------------------------------------------
% NOTE: you need to put a constant term in the x1 and xall matrices
% ---------------------------------------------------
% SEE ALSO: prt_reg(results), plt_reg(results)
% ---------------------------------------------------
% written by:
% James P. LeSage, Dept of Economics
% University of Toledo
% 2801 W. Bancroft St,
% Toledo, OH 43606

if (nargin ~= 4); error('Wrong # of arguments to tsls'); end;
results.meth = 'tsls';
[nobs1 g] = size(y1);
[nobs2 k] = size(x1);
[nobs3 l] = size(xall);
results.nendog = g; results.nexog = k; results.nvar = k+g;
if nobs1 == nobs2;
   if nobs2 == nobs3, nobs = nobs1; end;
else
   error('tsls: # of observations in yendog, xexog, xall not the same');
end;
results.y = y; results.nobs = nobs;
% xall contains all explanatory variables
% x1 contains exogenous variables
% y1 contains endogenous variables
xapxa = inv(xall'*xall);
% form xpx and xpy
xpx = [y1'*xall*xapxa*xall'*y1   y1'*x1
       x1'*y1                    x1'*x1];
xpy = [y1'*xall*xapxa*xall'*y
       x1'*y];
xpxi = inv(xpx);
results.beta = xpxi*xpy;              % bhat
results.yhat = [y1 x1]*results.beta;  % yhat
results.resid = y - results.yhat;     % residuals
sigu = results.resid'*results.resid;
results.sige = sigu/(nobs-k-g);       % sige
tmp = results.sige*(diag(xpxi));
results.tstat = results.beta./(sqrt(tmp));
ym = y - ones(nobs,1)*mean(y);
rsqr1 = sigu;
rsqr2 = ym'*ym;
results.rsqr = 1.0 - rsqr1/rsqr2;     % r-squared
rsqr1 = rsqr1/(nobs-k-g);
rsqr2 = rsqr2/(nobs-1.0);
results.rbar = 1 - (rsqr1/rsqr2);     % rbar-squared
ediff = results.resid(2:nobs) - results.resid(1:nobs-1);
results.dw = (ediff'*ediff)/sigu;     % durbin-watson

From Yu Junnian's econometrics book, p. 356. Example: a supply-and-demand model, with data on Q, P and Y:

Z=[230 23.6 800; 260 25.4 812.6; 300 30 823.4; 340 32.6 830; 351 33 850; 382 34 859.6; 405 35.2 900.4; 435 38.4 950; 512 40.2 963.8; 623 42.6 1000; 712 42.3 1026; 802 45.4 1340.6; 888 48.2 1350; 936 50 1432.4; 1023 50.8 1500; 1134 52 1502.8];
Q=Z(:,1); P=Z(:,2); Y=Z(:,3);
result1=tsls(Q,P,[ones(16,1) Y],[ones(16,1) Y]);
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 9.269204e-020.
In tsls at 62

result1 =
      meth: 'tsls'
    nendog: 1
     nexog: 2    (the constant term is included in the count)
      nvar: 3
         y: [16x1 double]
      nobs: 16
      beta: [3x1 double]
      yhat: [16x1 double]
     resid: [16x1 double]
      sige: 2.5612e+003
     tstat: [3x1 double]
      rsqr: 0.9744
      rbar: 0.9705
        dw: 1.3746

vname1 = strvcat('Q','P variable','constant','Y');
prt(result1,vname1)

Two Stage Least-squares Regression Estimates
Dependent Variable =        Q
R-squared     =    0.9744
Rbar-squared  =    0.9705
sigma2        = 2561.2036
Durbin-Watson =    1.3746
Nobs, Nvars   =     16,     3
***************************************************************
Variable        Coefficient     t-statistic     t-probability
P variable        16.500000        0.000003          0.999998
constant        -664.000000       -0.000019          0.999985
Y                  0.562500        0.000003          0.999997

Here sigma2 is

norm(result1.yhat-Q)^2/(16-3)
ans = 2.5612e+003

Thus Q = -664 + 16.5P + 0.5625Y.

result2=tsls(Q,P,ones(16,1),[ones(16,1) Y])

result2 =
      meth: 'tsls'
    nendog: 1
     nexog: 1
      nvar: 2
         y: [16x1 double]
      nobs: 16
      beta: [2x1 double]
      yhat: [16x1 double]
     resid: [16x1 double]
      sige: 7.7680e+003
     tstat: [2x1 double]
      rsqr: 0.9165
      rbar: 0.9105
        dw: 0.4122

vname2 = strvcat('Q','P variable','constant');
prt(result2,vname2)

Two Stage Least-squares Regression Estimates
Dependent Variable =        Q
R-squared     =    0.9165
Rbar-squared  =    0.9105
sigma2        = 7767.9983
Durbin-Watson =    0.4122
Nobs, Nvars   =     16,     2
***************************************************************
Variable        Coefficient     t-statistic     t-probability
P variable        34.796702       12.614743          0.000000
constant        -773.106431       -7.043554          0.000006

Thus Q = -773.106431 + 34.796702P, close to the result on p. 358 of the book. When the model is exactly identified, indirect least squares and two-stage least squares give the same estimates.

Another example, from p. 365 of Yu Junnian's book, equation (12.4.7), with data on Q, P, Y and W:

Z=[11 20 8.1 42;16 18 8.4 58;11 12 8.5 35;14 21 8.5 46;13 27 8.8 41;17 26 9 56;14 25 8.9 48;15 27 9.4 50;12 30 9.5 39;18 28 9.9 52];
Q=Z(:,1); P=Z(:,2); Y=Z(:,3); W=Z(:,4);
result1=tsls(Q,P,[ones(10,1) Y],[ones(10,1) Y W])

result1 =
      meth: 'tsls'
    nendog: 1
     nexog: 2
      nvar: 3
         y: [10x1 double]
      nobs: 10
      beta: [3x1 double]
      yhat: [10x1 double]
     resid: [10x1 double]
      sige: 329.5857
     tstat: [3x1 double]
      rsqr: -42.6125
      rbar: -55.0732
        dw: 1.7052

vname1=strvcat('Q','P variable','constant','Y variable');
prt(result1,vname1)

Two Stage Least-squares Regression Estimates
Dependent Variable =        Q
R-squared     =  -42.6125
Rbar-squared  =  -55.0732
sigma2        =  329.5857
Durbin-Watson =    1.7052
Nobs, Nvars   =     10,     3
***************************************************************
Variable        Coefficient     t-statistic     t-probability
P variable         4.377929        0.323694          0.755626
constant         182.768309        0.309809          0.765725
Y variable       -30.462006       -0.300019          0.772876

Thus Q = 182.768309 + 4.377929P - 30.462006Y (note the negative R-squared: this specification fits very poorly).

result2=tsls(Q,P,[ones(10,1) W],[ones(10,1) Y W])

result2 =
      meth: 'tsls'
    nendog: 1
     nexog: 2
      nvar: 3
         y: [10x1 double]
      nobs: 10
      beta: [3x1 double]
      yhat: [10x1 double]
     resid: [10x1 double]
      sige: 1.9060
     tstat: [3x1 double]
      rsqr: 0.7478
      rbar: 0.6757
        dw: 1.6296

vname2=strvcat('Q','P variable','constant','W variable');
prt(result2,vname2)

Two Stage Least-squares Regression Estimates
Dependent Variable =        Q
R-squared     =    0.7478
Rbar-squared  =    0.6757
sigma2        =    1.9060
Durbin-Watson =    1.6296
Nobs, Nvars   =     10,     3
***************************************************************
Variable        Coefficient     t-statistic     t-probability
P variable         0.214952        1.887887          0.100988
constant          -2.726769       -0.797706          0.451248
W variable         0.252610        3.945183          0.005566

Thus Q = -2.726769 + 0.214952P + 0.252610W.
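The claim above that indirect least squares reproduces 2SLS in the exactly identified case can be checked by hand on the first (supply-and-demand) example, where the equation Q = a + b*P + u has the single excluded predetermined variable Y. This is a sketch, not toolbox code:

```matlab
% Indirect least squares for the exactly identified equation Q = a + b*P + u.
Z = [230 23.6 800; 260 25.4 812.6; 300 30 823.4; 340 32.6 830;
     351 33 850; 382 34 859.6; 405 35.2 900.4; 435 38.4 950;
     512 40.2 963.8; 623 42.6 1000; 712 42.3 1026; 802 45.4 1340.6;
     888 48.2 1350; 936 50 1432.4; 1023 50.8 1500; 1134 52 1502.8];
Q = Z(:,1); P = Z(:,2); Y = Z(:,3);
X = [ones(16,1) Y];
piQ = X\Q;                % reduced form: Q regressed on [1 Y]
piP = X\P;                % reduced form: P regressed on [1 Y]
b = piQ(2)/piP(2);        % structural slope, pi_Q1/pi_P1
a = piQ(1) - b*piP(1);    % structural intercept
% [a b] should reproduce the 2SLS estimates -773.106431 and 34.796702 above.
```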
Monte Carlo experiment: a comparison of ols() and tsls()

% ----- Example 8.2 Monte Carlo study of ols() vs. tsls()
nobs = 200;
x1 = randn(nobs,1); x2 = randn(nobs,1);
b1 = 1.0; b2 = 1.0; iota = ones(nobs,1);
y1 = zeros(nobs,1); y2 = zeros(nobs,1);
evec = randn(nobs,1);
% create simultaneously determined variables y1,y2
for i=1:nobs;
 y1(i,1) = iota(i,1) + x1(i,1)*b1 + evec(i,1);
 y2(i,1) = iota(i,1) + y1(i,1) + x2(i,1)*b2 + evec(i,1);
end;
% use all exogenous variables in the system as instruments
xall = [iota x1 x2];
niter = 100;              % number of Monte Carlo loops
bols = zeros(niter,3);    % storage for ols results
b2sls = zeros(niter,3);   % storage for 2sls results
disp('patience - doing 100 2sls regressions');
for iter=1:niter;         % do Monte Carlo looping
 y1 = zeros(nobs,1); y2 = zeros(nobs,1); evec = randn(nobs,1);
 % create simultaneously determined variables y1,y2
 for i=1:nobs;
  y1(i,1) = iota(i,1)*1.0 + x1(i,1)*b1 + evec(i,1);
  y2(i,1) = iota(i,1)*1.0 + y1(i,1)*1.0 + x2(i,1)*b2 + evec(i,1);
 end;
 result1 = ols(y2,[y1 iota x2]);       % do ols regression
 result2 = tsls(y2,y1,[iota x2],xall); % do tsls regression
 bols(iter,:) = result1.beta';
 b2sls(iter,:) = result2.beta';
end;                      % end Monte Carlo looping
% find means and std deviations over the niter runs
bolsm = mean(bols); b2slsm = mean(b2sls);
bolss = std(bols); b2slss = std(b2sls);
% print results
fprintf(['OLS results over ',num2str(niter),' runs\n']);
in.rnames = strvcat('Coefficients','b1','b2','b3');
in.cnames = strvcat('Mean','std dev');
mprint([bolsm' bolss'],in);
fprintf(['TSLS results over ',num2str(niter),' runs\n']);
mprint([b2slsm' b2slss'],in);

patience - doing 100 2sls regressions
OLS results over 100 runs
Coefficients      Mean   std dev
b1              1.4965    0.0241
b2              0.5402    0.0390
b3              1.0293    0.0388
TSLS results over 100 runs
Coefficients      Mean   std dev
b1              0.9972    0.0685
b2              1.0108    0.0974
b3              0.9978    0.0798

All three true coefficients equal 1: OLS is clearly biased (the coefficient on the endogenous regressor y1 is pulled towards 1.5), while 2SLS is centered on the true values at the cost of larger standard deviations.

Command: three-stage least squares

PURPOSE: computes Three-Stage Least-squares Regression
         for a model with neqs-equations
---------------------------------------------------
USAGE: results = thsls(neqs,y,Y,X)
where: neqs = # of equations
       y    = an 'eq' structure containing dependent variables
              e.g. y(1).eq = y1; y(2).eq = y2; y(3).eq = y3;
       Y    = an 'eq' structure containing RHS endogenous
              e.g. Y(1).eq = []; Y(2).eq = [y1 y3]; Y(3).eq = [y2];
       X    = an 'eq' structure containing exogenous/lagged endogenous
              e.g. X(1).eq = [iota x1 x2];
                   X(2).eq = [iota x1];
                   X(3).eq = [iota x1 x2 x3];
---------------------------------------------------
NOTE: X(i), i=1,...,G should include a constant vector
      if you want one in the equation
---------------------------------------------------
RETURNS a structure:
 result.meth      = 'thsls'
 result(eq).beta  = bhat for each equation
 result(eq).tstat = tstat for each equation
 result(eq).tprob = tprobs for each equation
 result(eq).resid = residuals for each equation
 result(eq).yhat  = yhats for each equation
 result(eq).y     = y for each equation
 result(eq).rsqr  = r-squared for each equation
 result(eq).rbar  = r-squared adj for each equation
 result(eq).nvar  = nvar in each equation
 result(eq).sige  = e'e/nobs for each equation
 result(eq).dw    = Durbin-Watson
 result.nobs      = nobs
 result.neqs      = neqs
 result.sigma     = sig(i,j) across equations
 result.ccor      = correlation of residuals across equations
---------------------------------------------------
SEE ALSO: prt, prt_eqs, plt
---------------------------------------------------

% ----- Example 8.3 Three-stage least-squares
nobs = 100; neqs = 3;
x1 = randn(nobs,1); x2 = randn(nobs,1); x3 = randn(nobs,1);
b1 = 1.0; b2 = 1.0; b3 = 1.0; iota = ones(nobs,1);
y1 = zeros(nobs,1); y2 = zeros(nobs,1); y3 = zeros(nobs,1);
e = randn(nobs,3);
e(:,2) = e(:,3) + randn(nobs,1); % create cross-eqs corr
% create simultaneously determined variables y1,y2,y3
for i=1:nobs;
 y1(i,1) = iota(i,1)*10.0 + x1(i,1)*b1 + e(i,1);
 y2(i,1) = iota(i,1)*10.0 + y1(i,1)*1.0 + x2(i,1)*b2 + e(i,2);
 y3(i,1) = iota(i,1)*10.0 + y2(i,1)*1.0 + x2(i,1)*b2 + x3(i,1)*b3 + e(i,3);
end;
vname1 = strvcat('y1-LHS','constant','x1 var');
vname2 = strvcat('y2-LHS','y1-RHS','constant','x2 var');
vname3 = strvcat('y3-LHS','y2-RHS','constant','x2 var','x3 var');
% set up a structure for y containing y's for each eqn
y(1).eq = y1; y(2).eq = y2; y(3).eq = y3;
% set up a structure for Y (RHS endogenous) for each eqn
Y(1).eq = []; Y(2).eq = [y1]; Y(3).eq = [y2];
% set up a structure for X (exogenous) in each eqn
X(1).eq = [iota x1]; X(2).eq = [iota x2]; X(3).eq = [iota x2 x3];
result = thsls(neqs,y,Y,X); % do thsls regression
vname = strvcat(vname1,vname2,vname3);
prt(result,vname);

Three Stage Least-squares Estimates - Equation 1
Dependent Variable =   y1-LHS
R-squared     =    0.4808
Rbar-squared  =    0.4755
sigma2        =    0.9779
Durbin-Watson =    1.8904
Nobs, Nvars   =    100,     2
***************************************************************
Variable        Coefficient     t-statistic     t-probability
constant           9.985283      100.821086          0.000000
x1 var             1.101060        9.621838          0.000000

Three Stage Least-squares Estimates - Equation 2
Dependent Variable =   y2-LHS
R-squared     =    0.6650
Rbar-squared  =    0.6581
sigma2        =    1.4384
Durbin-Watson =    1.8810
Nobs, Nvars   =    100,     3
***************************************************************
Variable        Coefficient     t-statistic     t-probability
y1-RHS             0.959248        7.615010          0.000000
constant          10.389051        8.177007          0.000000
x2 var             0.925196        7.326562          0.000000

Three Stage Least-squares Estimates - Equation 3
Dependent Variable =   y3-LHS
R-squared     =    0.9156
Rbar-squared  =    0.9129
sigma2        =    0.9098
Durbin-Watson =    1.6829
Nobs, Nvars   =    100,     4
***************************************************************
Variable        Coefficient     t-statistic     t-probability
y2-RHS             1.005874        9.667727          0.000000
constant           9.819170        4.705299          0.000008
x2 var             0.969571        6.898767          0.000000
x3 var             1.097547       14.974708          0.000000

Cross-equation sig(i,j) estimates
equation   y1-LHS   y2-LHS   y3-LHS
y1-LHS     0.9779   0.1707  -0.1108
y2-LHS     0.1707   1.4383   0.7660
y3-LHS    -0.1108   0.7660   0.9005

Cross-equation correlations
equation   y1-LHS   y2-LHS   y3-LHS
y1-LHS     1.0000   0.1439  -0.1181
y2-LHS     0.1439   1.0000   0.6731
y3-LHS    -0.1181   0.6731   1.0000

To compute the cross-equation sig(i,j) estimates, i.e. the covariance matrix of the two-stage least-squares residuals, by hand:

ans1=ols(y1,[iota x1]);                       % equation 1 contains no stochastic explanatory variables, so OLS is used
ans2=tsls(y2,y1,[iota x2],[iota x1 x2 x3]);
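A sketch of where those cross-equation sig(i,j) estimates come from. Here e1, e2, e3 stand for the per-equation residual vectors (e.g. ans1.resid and ans2.resid from the calls above); that naming is an assumption about how the by-hand computation would continue, not toolbox code.

```matlab
% Cross-equation covariance and correlation of residuals, the quantities
% thsls reports as result.sigma and result.ccor.
E = [e1 e2 e3];           % nobs x neqs matrix of stacked residual vectors
nobs = size(E,1);
sigma = (E'*E)/nobs;      % sig(i,j) = ei'*ej/nobs
d = sqrt(diag(sigma));
ccor = sigma./(d*d');     % correlation matrix with unit diagonal
```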