
Mathematical Foundations of Optimization Theory

Main content:
- quadratic forms and positive definite matrices
- directional derivatives and the gradient
- Hessian matrix and Taylor's theorem
- concavity and convexity of functions
- conditions for identifying extreme values

Quadratic Forms and Positive Definite Matrices

1. Quadratic forms (二次型)

    $f(x_1, x_2, \dots, x_n) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j$
    $= a_{11}x_1^2 + a_{12}x_1x_2 + \dots + a_{1n}x_1x_n + a_{21}x_2x_1 + a_{22}x_2^2 + \dots + a_{2n}x_2x_n + \dots + a_{n1}x_nx_1 + \dots + a_{nn}x_n^2$

In matrix form, with $x = (x_1, x_2, \dots, x_n)^T$ and $A = (a_{ij})$,

    $f(x_1, x_2, \dots, x_n) = x^T A x.$

2. Positive definite matrix (正定矩阵)

A symmetric matrix $A$ is positive definite if for every $x = (x_1, x_2, \dots, x_n)^T$ with $x_1, x_2, \dots, x_n$ not all equal to zero, we have $f(x_1, \dots, x_n) = x^T A x > 0$. The related classes are the positive semidefinite, negative definite, negative semidefinite, and indefinite matrices.

How can we determine whether a matrix is positive definite or not?
1. by the definition;
2. by the leading principal minors (顺序主子式): $\Delta_1 = a_{11} > 0$, $\Delta_2 = \det\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} > 0$, ..., $\Delta_n = \det A > 0$;
3. by the eigenvalues (特征值).

Summary of the necessary and sufficient conditions:
- positive definite: all eigenvalues $> 0$; equivalently $\Delta_i > 0$ ($i = 1, 2, \dots, n$);
- positive semidefinite: all eigenvalues $\ge 0$ (so $\det A = 0$ when $A$ is not definite); every principal minor $\ge 0$ (leading principal minors alone do not suffice here);
- negative definite: all eigenvalues $< 0$; equivalently $(-1)^i \Delta_i > 0$, i.e. $\Delta_i < 0$ for odd $i$ and $\Delta_i > 0$ for even $i$;
- negative semidefinite: all eigenvalues $\le 0$ and $\det A = 0$ when not definite; $(-1)^i \Delta_i \ge 0$;
- indefinite: some eigenvalues are positive and others are negative.
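The leading-principal-minor tests above can be checked mechanically. A minimal Python sketch (the helper names are mine, not from the slides), using a Laplace-expansion determinant since the matrices here are small:

```python
def det(m):
    # Determinant by Laplace expansion along the first row;
    # fine for the small matrices used in these notes.
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def leading_minors(a):
    # Delta_i = determinant of the top-left i-by-i block, i = 1..n.
    return [det([row[:k] for row in a[:k]]) for k in range(1, len(a) + 1)]

def is_positive_definite(a):
    # Symmetric A is positive definite iff all leading principal minors are > 0.
    return all(d > 0 for d in leading_minors(a))

def is_negative_definite(a):
    # Negative definite iff the minors alternate in sign: (-1)^i * Delta_i > 0.
    return all((-1) ** (i + 1) * d > 0 for i, d in enumerate(leading_minors(a)))

# A is positive definite; B = -A is therefore negative definite.
A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
B = [[-v for v in row] for row in A]
print(leading_minors(A))        # [2, 3, 4]
print(is_positive_definite(A))  # True
print(is_negative_definite(B))  # True
```

Note the criterion applies to symmetric matrices; the semidefinite cases need all principal minors, not just the leading ones.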

Directional Derivatives and the Gradient

1. Directional derivatives (方向导数)

Suppose $f: R^n \to R^1$ is differentiable at the point $x_0$, $p$ is a fixed nonzero vector, and $e$ is the unit vector (单位向量) in the direction of $p$. Then the directional derivative of $f(x)$ at $x_0$ in the direction $p$ is given by

    $\dfrac{\partial f(x_0)}{\partial p} = \lim_{t \to 0^+} \dfrac{f(x_0 + t e) - f(x_0)}{t}.$

Descent and ascent directions. Suppose $f: R^n \to R^1$ is continuous and $p \ne 0$. If there exists $t_0 > 0$ such that $f(x_0 + t p) < f(x_0)$ whenever $0 < t \le t_0$, then $p$ is a descent direction (下降方向) of $f$ at the point $x_0$; if instead $f(x_0 + t p) > f(x_0)$, then $p$ is an ascent direction (上升方向).

We can draw the following conclusions from the two definitions above:
- if $\dfrac{\partial f(x_0)}{\partial p} < 0$, then $f(x)$ is descending along the direction $p$ near $x_0$;
- if $\dfrac{\partial f(x_0)}{\partial p} > 0$, then $f(x)$ is ascending along the direction $p$ near $x_0$.
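These conclusions can be probed numerically with a one-sided difference quotient that mimics the limit in the definition. A small sketch of mine; the sample function $f(x_1, x_2) = x_1^2 + x_2^2 + 1$ and the test point are taken from the worked example later in these notes:

```python
def f(x1, x2):
    # Smooth sample function with its minimizer at (0, 0).
    return x1 ** 2 + x2 ** 2 + 1

def directional_derivative(f, x, p, h=1e-6):
    # Forward-difference estimate of the one-sided limit,
    # taken along the unit vector e = p / ||p||.
    norm = (p[0] ** 2 + p[1] ** 2) ** 0.5
    e = (p[0] / norm, p[1] / norm)
    return (f(x[0] + h * e[0], x[1] + h * e[1]) - f(x[0], x[1])) / h

x0 = (0.0, 3.0)
p_down = (0.0, -1.0)  # points against the gradient (0, 6): a descent direction
p_up = (0.0, 1.0)     # points along the gradient: an ascent direction

print(directional_derivative(f, x0, p_down))  # negative: f decreases along p_down
print(directional_derivative(f, x0, p_up))    # positive: f increases along p_up
```

The two signs match the conclusions: a negative directional derivative signals descent, a positive one signals ascent.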

2. The gradient (梯度)

The vector of partial derivatives (偏导数) of the function $f$ is called the gradient of $f$ and is written

    $\nabla f(x) = \left( \dfrac{\partial f(x)}{\partial x_1}, \dfrac{\partial f(x)}{\partial x_2}, \dots, \dfrac{\partial f(x)}{\partial x_n} \right)^T.$

Basic rules:
(1) if $f(x) = c$ (a constant), then $\nabla f(x) = 0$;
(2) $\nabla (b^T x) = b$;
(3) $\nabla (x^T x) = 2x$;
(4) if $A$ is a symmetric matrix (对称矩阵), then $\nabla (x^T A x) = 2Ax$.

We may use these formulas in the future:
(1) $\nabla c = 0$, where every component of $c$ is constant and $0$ is the zero matrix;
(2) $\nabla x = I_n$, where $x$ is an $n$-dimensional vector and $I_n$ is the identity matrix (单位矩阵);
(3) $\nabla (Ax) = A^T$, where $A$ is an $m \times n$ matrix;
(4) if $\varphi(t) = f(x_0 + t p)$, where $f: R^n \to R^1$ and $\varphi: R^1 \to R^1$, then $\varphi'(t) = \nabla f(x_0 + t p)^T p$ and, in particular, $\varphi'(0) = \nabla f(x_0)^T p$.
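Rule (4) of the basic rules, $\nabla(x^T A x) = 2Ax$ for symmetric $A$, can be sanity-checked by central differences. A sketch of mine with an arbitrary symmetric $A$ of my own choosing:

```python
def quad_form(a, x):
    # f(x) = x^T A x
    n = len(x)
    return sum(a[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def numeric_gradient(f, x, h=1e-6):
    # Central differences, component by component.
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

A = [[2.0, -1.0], [-1.0, 3.0]]  # symmetric
x = [1.0, 2.0]

analytic = [2 * sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]  # 2Ax
numeric = numeric_gradient(lambda v: quad_form(A, v), x)
print(analytic)  # [0.0, 10.0]
print(numeric)   # approximately the same
```

Because $x^T A x$ is quadratic, the central-difference estimate agrees with $2Ax$ up to floating-point rounding.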

3. Contour (等高线)

Example: plot the contours of $f(x_1, x_2) = 2x_1^2 - 2x_1x_2 + x_2^2 - 3x_1 + x_2$.

To make the contour plot in MATLAB:

    [x1, x2] = meshgrid(-6:0.1:6, -6:0.1:6);
    z = 2*x1.^2 - 2*x1.*x2 + x2.^2 - 3*x1 + x2;
    contour(x1, x2, z, 10);

From the plot we can see that the function has only a global minimizer.

4. The relationship between directional derivatives and the gradient

(1) If $\nabla f(x_0)^T p < 0$, then $p$ is a descent direction for the function $f$ at the point $x_0$;
(2) if $\nabla f(x_0)^T p > 0$, then $p$ is an ascent direction for the function $f$ at the point $x_0$.

We can draw the following conclusions:
(1) the gradient direction is the steepest ascent direction;
(2) the rate of change in any direction orthogonal (正交方向) to the gradient is zero;
(3) the function increases along any direction making an acute angle with the gradient, and decreases along any direction making an obtuse angle with it;
(4) the direction opposite to the gradient is the steepest descent direction of the function value.

Example. Consider the function $f(x) = x_1^2 + x_2^2 + 1$. Find the steepest descent direction at the point $x_0 = (0, 3)^T$, and the value of the function at the new point obtained after moving one unit along this direction.

Solution. Since $\dfrac{\partial f}{\partial x_1} = 2x_1$ and $\dfrac{\partial f}{\partial x_2} = 2x_2$, the steepest descent direction is $-\nabla f(x_0) = (0, -6)^T$. The unit vector in this direction is $e = -\nabla f(x_0) / \|\nabla f(x_0)\| = (0, -1)^T$, so the new point is $\bar{x} = x_0 + e = (0, 2)^T$, and the corresponding function value is $f(\bar{x}) = 0 + 4 + 1 = 5$.
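The arithmetic in this example can be replayed in a few lines of Python (a check of mine, not from the slides, using the same $f$):

```python
def f(x1, x2):
    return x1 ** 2 + x2 ** 2 + 1

def grad_f(x1, x2):
    return (2 * x1, 2 * x2)

x0 = (0.0, 3.0)
g = grad_f(*x0)                       # gradient (0, 6); steepest descent is -g
norm = (g[0] ** 2 + g[1] ** 2) ** 0.5
e = (-g[0] / norm, -g[1] / norm)      # unit steepest-descent direction (0, -1)
x_new = (x0[0] + e[0], x0[1] + e[1])  # one unit step lands at (0, 2)
print(f(*x_new))                      # 5.0
```

Moving one unit against the gradient drops the function value from $f(x_0) = 10$ to $5$.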

Hessian Matrix and Taylor's Theorem

Warm-up: please write down (a) the conditions for identifying the extreme values of a two-dimensional function $z = f(x, y)$, and (b) the Taylor series of a one-dimensional function $y = f(x)$.

Hessian matrix (海色矩阵): the matrix of second-order derivatives.

Attention: the vector of first-order partial derivatives of a function $f$ is called the gradient of $f$; the matrix of its second-order partial derivatives is called the Hessian matrix:

    $\nabla^2 f(x) = \left( \dfrac{\partial^2 f(x)}{\partial x_i \partial x_j} \right)_{n \times n} = \begin{pmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial^2 f}{\partial x_n \partial x_1} & \dfrac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{pmatrix}.$

Example: compute the Hessian matrix $H(x)$ of the function

    $f(x) = x_1^2 + x_2^2 + x_3^2 - 2x_1x_2 - 2x_2x_3 + 3x_3.$

Answer:

    $H(x) = \begin{pmatrix} 2 & -2 & 0 \\ -2 & 2 & -2 \\ 0 & -2 & 2 \end{pmatrix}.$
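The analytic Hessian in this example can be cross-checked against central second differences. A self-contained sketch of mine (the difference formula is standard; step size and test point are my own choices):

```python
def f(x):
    x1, x2, x3 = x
    return x1**2 + x2**2 + x3**2 - 2*x1*x2 - 2*x2*x3 + 3*x3

def numeric_hessian(f, x, h=1e-4):
    # H[i][j] ~ (f(x+he_i+he_j) - f(x+he_i-he_j) - f(x-he_i+he_j) + f(x-he_i-he_j)) / (4h^2)
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            vals = []
            for si, sj in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
                v = list(x)
                v[i] += si * h
                v[j] += sj * h
                vals.append(f(v))
            H[i][j] = (vals[0] - vals[1] - vals[2] + vals[3]) / (4 * h * h)
    return H

H = numeric_hessian(f, [0.5, -1.0, 2.0])
expected = [[2, -2, 0], [-2, 2, -2], [0, -2, 2]]
print([[round(v, 3) for v in row] for row in H])  # matches the analytic Hessian
```

Since $f$ is quadratic, its Hessian is constant, so the numerical estimate agrees at any test point.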

Solving the Hessian matrix with MATLAB (via the Maple interface):

Pattern 1:

    maple('hessian(x^2+y^2+z^2-2*x*y-2*y*z+3*z, [x,y,z]);')

Pattern 2:

    maple('hessian', 'x^2+y^2+z^2-2*x*y-2*y*z+3*z', '[x,y,z]')

We can transform the result above into symbolic form: fh = sym(fh2).

Exercise: try to find the gradient and Hessian matrix of these functions:
(1) $f(x) = x_1^4 + x_2^3 + x_3^2 - 2x_1x_2 + 3x_2x_3 - 4x_1x_3$;
(2) $f(x) = \frac{1}{2} x^T x$, $x \in R^n$;
(3) $f(x) = \frac{1}{2} x^T A x + b^T x + c$, where $A$ is an $n \times n$ symmetric matrix, $b \in R^n$, $c \in R$.

Solution:
(1) $\nabla f(x) = \left( 4x_1^3 - 2x_2 - 4x_3,\; 3x_2^2 - 2x_1 + 3x_3,\; 2x_3 + 3x_2 - 4x_1 \right)^T$, $\quad \nabla^2 f(x) = \begin{pmatrix} 12x_1^2 & -2 & -4 \\ -2 & 6x_2 & 3 \\ -4 & 3 & 2 \end{pmatrix}$;
(2) $\nabla f(x) = x$, $\nabla^2 f(x) = I_n$;
(3) $\nabla f(x) = Ax + b$, $\nabla^2 f(x) = A$.

Taylor's theorem for a function of $n$ variables (多元函数): if $f$ is twice differentiable near $x^{(0)}$, then

    $f(x) = f(x^{(0)}) + \nabla f(x^{(0)})^T (x - x^{(0)}) + \dfrac{1}{2} (x - x^{(0)})^T \nabla^2 f(x^{(0)} + \theta p)\, (x - x^{(0)}),$

where $p = x - x^{(0)}$ and $0 < \theta < 1$.

Using MATLAB to compute a multivariate Taylor expansion:

    maple('readlib(mtaylor);');
    tl2 = maple('mtaylor(sin(x^2+y^2), [x=0,y=0], 8);')

which returns

    tl2 = x^2 + y^2 - 1/6*x^6 - 1/2*y^2*x^4 - 1/2*y^4*x^2 - 1/6*y^6.
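The degree-8 expansion returned by mtaylor can be verified numerically: substituting $u = x^2 + y^2$ into $\sin u = u - u^3/6 + O(u^5)$ reproduces exactly those six terms, so the truncation error is $O(u^5)$. A quick check of mine at a point of my own choosing:

```python
import math

def taylor8(x, y):
    # The degree-8 mtaylor output quoted in the notes.
    return (x**2 + y**2
            - x**6 / 6 - y**2 * x**4 / 2 - y**4 * x**2 / 2 - y**6 / 6)

x, y = 0.3, 0.2
exact = math.sin(x**2 + y**2)
approx = taylor8(x, y)
print(abs(exact - approx))  # tiny: bounded by (x^2+y^2)^5 / 120
```

With $u = 0.13$ the bound $u^5/120 \approx 3 \times 10^{-7}$ is easily met.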

Convexity and Concavity of Functions

For points $x^{(1)}, x^{(2)}$ in a convex set and $\alpha \in [0, 1]$:
- $f$ is a convex function if $f(\alpha x^{(1)} + (1-\alpha) x^{(2)}) \le \alpha f(x^{(1)}) + (1-\alpha) f(x^{(2)})$;
- $f$ is a concave function if the reverse inequality holds.

Property 1. Suppose $f(x)$ is a convex function on a convex set. Then for $\alpha_i \ge 0$ with $\sum_{i=1}^{k} \alpha_i = 1$,

    $f\left( \sum_{i=1}^{k} \alpha_i x^{(i)} \right) \le \sum_{i=1}^{k} \alpha_i f(x^{(i)}).$

Property 2. Suppose $f_1(x)$ and $f_2(x)$ are both convex functions on a convex set. Then their sum $f_1(x) + f_2(x)$ is also a convex function.

Property 3. Suppose $f(x)$ is a convex function. Then for any real number $\alpha \ge 0$, $\alpha f(x)$ is a convex function on the convex set.

The following can be obtained from Property 2 and Property 3: if $f_1(x), f_2(x), \dots, f_m(x)$ are a finite number of convex functions on a convex set and $\alpha_1, \alpha_2, \dots, \alpha_m \ge 0$, then their linear combination $\alpha_1 f_1(x) + \alpha_2 f_2(x) + \dots + \alpha_m f_m(x)$ is also a convex function.

Property 4. If $f(x)$ is a convex function on a convex set $S$, then for every real number $\beta$ the level set (水平集) $S_\beta = \{ x \mid x \in S,\ f(x) \le \beta \}$ is a convex set.

However, the converse of Property 4 does not hold. For example:

when restricted to $\{ x \mid x \ge 0 \} \subset R$, the function $f(x) = \sqrt{x}$ is strictly concave; however, every level set $S_\beta = \{ x \mid f(x) \le \beta \}$ is a convex set.

First-order condition for judging convex functions.

Theorem 1. Suppose $f(x)$ has continuous first partial derivatives on a convex set $S$. Then $f(x)$ is a convex function on $S$ if and only if, for any $x^{(1)}, x^{(2)} \in S$,

    $f(x^{(2)}) \ge f(x^{(1)}) + \nabla f(x^{(1)})^T (x^{(2)} - x^{(1)});$

when the inequality is strict, $f$ is strictly convex. Geometrically: a differentiable function is convex if and only if the tangent plane at any point on the graph of the function lies below the surface.

Second-order condition for judging convex functions.

Theorem 2. Suppose $f(x)$ has continuous second partial derivatives. Then $f(x)$ is a convex function on a convex set $W$ if and only if the Hessian matrix $H(x)$ of $f(x)$ is positive semidefinite on $W$. If $H(x)$ is positive definite on $W$, then $f(x)$ is a strictly convex function; on the contrary, the converse does not hold. The results for concave functions are similar.
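Theorem 1's inequality is easy to probe numerically: every tangent line of a convex function lies below its graph. A small sketch of mine using the convex one-variable function $f(x) = x^2$ on a grid of my own choosing:

```python
def f(x):
    # A convex function of one variable.
    return x * x

def df(x):
    # Its derivative, f'(x) = 2x.
    return 2 * x

# First-order condition: f(y) >= f(x) + f'(x) * (y - x) for all x, y.
points = [i / 2 for i in range(-10, 11)]
ok = all(f(y) >= f(x) + df(x) * (y - x) for x in points for y in points)
print(ok)  # True: every tangent line lies below the graph
```

Replacing $f$ with a non-convex function such as $x^3$ on the same grid makes the check fail, which is the content of the "only if" direction.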

Example: is $f(x_1, x_2) = -x_1^2 - x_2^2$ convex or concave?

Proof: $\dfrac{\partial f}{\partial x_1} = -2x_1$, $\dfrac{\partial f}{\partial x_2} = -2x_2$, so

    $H(x) = \begin{pmatrix} -2 & 0 \\ 0 & -2 \end{pmatrix}$

is a negative definite matrix, and therefore $f(x)$ is a strictly concave function.

For any convex function on a convex set, all minimizers of the function are located in a convex set; that is, all minimizers make up a convex set, and the local minimizers are also global minimizers.

Theorem. Suppose $f(x)$ is a convex function on a convex set $S$. Then the set of points at which $f(x)$ attains its minimum is a convex set, and any local minimizer of $f(x)$ is also a global minimizer on $S$.

Conditions of Identifying Extreme Values

First-order necessary condition. Let $f(x)$ be a function on a set $E \subseteq R^n$ and let $x^*$ be an interior point of $E$. If $f(x)$ is differentiable at $x^*$ and has a local extreme value there, then $\nabla f(x^*) = 0$; such a point is called a stationary point (驻点). An extreme point in the interior of the region must be a stationary point, but whether a stationary point is an extreme point is otherwise uncertain.

For example, the point $(0, 0)$ is a stationary point of $f(x_1, x_2) = x_1^3 + x_2^3$, since $\nabla f(x) = (3x_1^2, 3x_2^2)^T = 0$ at $(0, 0)$, but it is not an extreme point.

Second-order conditions of identifying extreme values.

Theorem (second-order necessary condition). Suppose $f(x)$ is twice differentiable (二阶可微) at $x^*$. If $x^*$ is a local minimizer, then $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*)$ is positive semidefinite.

Problem: what does the extra condition effect?

Example. Consider $\min f(x) = x_1^3 - 12x_1x_2 + 4x_2^2$.
Solution: setting $\nabla f(x) = (3x_1^2 - 12x_2,\ -12x_1 + 8x_2)^T = 0$ gives the stationary points $(0, 0)$ and $(6, 9)$. Since

    $H(x) = \begin{pmatrix} 6x_1 & -12 \\ -12 & 8 \end{pmatrix}, \qquad H(0, 0) = \begin{pmatrix} 0 & -12 \\ -12 & 8 \end{pmatrix}$

is indefinite, $(0, 0)$ is not a local minimizer.

Example: $\min f(x) = x_1^2 + x_2^3$.
Solution: note that $\nabla f(x) = (2x_1, 3x_2^2)^T$, and setting $\nabla f(x) = 0$ gives $x = (0, 0)^T$. Then

    $H(x) = \begin{pmatrix} 2 & 0 \\ 0 & 6x_2 \end{pmatrix}, \qquad H(0, 0) = \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}$

is a positive semidefinite matrix. However, in any neighborhood of $(0, 0)^T$ we can find a point $\bar{x} = (0, -\varepsilon)^T$ with $f(\bar{x}) = -\varepsilon^3 < f(0, 0) = 0$; thus we get $f(\bar{x}) < f(0, 0)$, and $(0, 0)^T$ is not a local minimizer.

Example: find the extreme points of $f(x) = x_1^2 - x_2^2$.
Proof: $\dfrac{\partial f}{\partial x_1} = 2x_1$, $\dfrac{\partial f}{\partial x_2} = -2x_2$, so $\nabla f(0, 0) = 0$, and

    $H(x) = \begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix}$

is indefinite, so $(0, 0)$ is not an extreme point.

Second-order sufficient conditions.

Theorem (second-order sufficient condition). Suppose $f(x)$ is twice differentiable at $x^*$, $\nabla f(x^*) = 0$, and $\nabla^2 f(x^*)$ is positive definite. Then $x^*$ is a strict local minimizer.

Example. Consider $\min f(x) = 5x_1^2 + 6x_1x_2 + 5x_2^2$. The stationary point is $x^* = (0, 0)^T$, and

    $H(x^*) = \begin{pmatrix} 10 & 6 \\ 6 & 10 \end{pmatrix}$

is positive definite, so we can say $x^*$ is a strict local minimizer.

Problem: what is the difference between the following condition and the second-order necessary condition?

Let $f(x)$ be a twice differentiable function on the Euclidean space $E^n$. If $\nabla f(x^*) = 0$ and there exists $\varepsilon > 0$ such that $\nabla^2 f(x)$ is positive semidefinite at every point $x$ of the neighborhood $N(x^*, \varepsilon)$ of $x^*$, then we can call $x^*$ a local optimum.

Example: $f(x) = (x_1^2 + x_2^2)^2$. Here

    $H(x) = \begin{pmatrix} 12x_1^2 + 4x_2^2 & 8x_1x_2 \\ 8x_1x_2 & 4x_1^2 + 12x_2^2 \end{pmatrix}, \qquad H(0, 0) = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},$

so $H(0, 0)$ is not a positive definite matrix; but $f(x)$ attains its minimum at $x = 0$, that is, $x = 0$ is a strict local minimizer of $f(x)$.
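As a wrap-up, the $2 \times 2$ Hessian tests used in the examples of this section can be mechanized through the determinant and the leading entry (a sketch of mine; the classification helper and its labels are not from the slides):

```python
def classify_2x2(h):
    # Classify a symmetric 2x2 Hessian [[a, b], [b, c]] at a stationary point.
    # det > 0 means both eigenvalues share the sign of a; det < 0 means mixed signs.
    (a, b), (_, c) = h
    det = a * c - b * b
    if det > 0:
        return "local minimizer" if a > 0 else "local maximizer"
    if det < 0:
        return "saddle (not an extreme point)"
    return "inconclusive (semidefinite)"

print(classify_2x2([[10, 6], [6, 10]]))    # minimizer: 5x1^2 + 6x1x2 + 5x2^2 at (0,0)
print(classify_2x2([[2, 0], [0, -2]]))     # saddle: x1^2 - x2^2 at (0,0)
print(classify_2x2([[2, 0], [0, 0]]))      # inconclusive: x1^2 + x2^3 at (0,0)
print(classify_2x2([[0, -12], [-12, 8]]))  # saddle: indefinite Hessian at (0,0)
```

The "inconclusive" branch is exactly the gap between the necessary and sufficient second-order conditions: a semidefinite Hessian at one point decides nothing by itself.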
