Optimization Theory and Methods, Lecture 10: Penalization Methods

Introduction

This part discusses a group of methods, referred to as penalization methods, which solve a constrained optimization problem by solving a sequence of unconstrained optimization problems. The hope is that, in the limit, the solutions of the unconstrained problems converge to the solution of the constrained problem.

The general class of penalization methods includes two groups:
(i) penalty methods, which impose a penalty for violating a constraint;
(ii) barrier methods, which impose a penalty for reaching the boundary of an inequality constraint.
Suppose that our constrained problem is given in the form: minimize f(x) subject to x in S, where S is the feasible region. Define an ideal penalty that is zero on S and +infinity outside S, and add it to f. Hence the constrained problem can be transformed into an equivalent unconstrained problem. Conceptually, if we could solve this unconstrained minimization problem we would be done. Unfortunately this is not a practical idea, since the resulting objective is infinite outside the feasible region, so standard unconstrained minimization techniques cannot be applied.

Barrier and penalty methods instead solve a sequence of unconstrained subproblems that are more "manageable": barrier methods add a barrier term to the objective, while penalty methods add a penalty term.
Barrier methods generate a sequence of strictly feasible iterates that converge to a solution of the problem from the interior of the feasible region. They are also called interior-point methods (or interior penalty methods). Since they require the interior of the feasible region to be nonempty, they are not appropriate for problems with equality constraints.

Penalty methods generate a sequence of points that converges to a solution of the problem from the exterior of the feasible region. They are usually more convenient on problems with equality constraints, and are also called exterior penalty methods.

Despite their apparent differences, barrier and penalty methods have much in common. Their convergence theories are similar, and the underlying structure of their unconstrained problems is similar; much of the theory for barrier methods can be replicated for penalty methods and vice versa. It is common to use the generic name "penalty methods" to describe both classes.
Barrier Methods

Consider the nonlinear inequality-constrained problem

    minimize f(x)   subject to   g_i(x) >= 0,   i = 1, ..., m,

where f and the g_i are assumed to be twice continuously differentiable.

Barrier Functions

A barrier function is continuous on the interior of the feasible region and tends to +infinity as the boundary is approached from the interior; its effect can be visualized on a one-dimensional problem with bound constraints. The best-known barrier function is the logarithmic barrier function

    phi(x) = - sum_{i=1}^{m} log g_i(x),

but the inverse barrier function is also widely used:

    phi(x) = sum_{i=1}^{m} 1 / g_i(x).

Barrier methods solve a sequence of unconstrained minimization problems of the form

    minimize_x   beta(x; mu) = f(x) + mu * phi(x),   mu > 0.
As the barrier parameter mu is decreased, the effect of the barrier term is diminished, so that the iterates can gradually approach the boundary of the feasible region.
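To make the scheme concrete, the following is a minimal sketch (not taken from the slides) of such an outer loop in Python. The test problem, the initial barrier parameter, and the reduction factor are illustrative assumptions, and scipy.optimize.minimize is used as the unconstrained solver for each subproblem.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical test problem (not the slides' example):
#   minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2
#   subject to g1(x) = x1 >= 0, g2(x) = x2 >= 0, g3(x) = 1 - x1 - x2 >= 0
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
g = lambda x: np.array([x[0], x[1], 1.0 - x[0] - x[1]])

def barrier_objective(x, mu):
    gx = g(x)
    if np.any(gx <= 0):          # the log barrier is undefined outside the interior
        return np.inf
    return f(x) + mu * (-np.sum(np.log(gx)))

def barrier_method(x0, mu0=1.0, shrink=0.1, n_outer=8):
    x, mu = np.asarray(x0, float), mu0
    for _ in range(n_outer):
        # Solve the unconstrained barrier subproblem, warm-started at the
        # previous iterate (Nelder-Mead avoids needing derivatives here).
        res = minimize(barrier_objective, x, args=(mu,), method="Nelder-Mead",
                       options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 5000})
        x = res.x
        print(f"mu = {mu:8.1e}   x = {x}   f(x) = {f(x):.6f}")
        mu *= shrink             # diminish the effect of the barrier term
    return x

barrier_method(x0=[0.25, 0.25])  # strictly feasible starting point
```

Each subproblem is warm-started from the previous minimizer, which is the usual practice for barrier methods.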

Example 1. Consider a nonlinear program with inequality constraints; the logarithmic barrier function gives the corresponding unconstrained subproblem. If the constraints are strictly satisfied, the denominators are positive. For this example the unconstrained objective is strictly convex, hence its stationary point is the unique local minimizer in the feasible region.

Some Remarks

From Example 1 we see that, as the barrier parameter mu decreases, the minimizers of the barrier subproblems approach the solution of the constrained problem. Indeed, it is possible to prove convergence for barrier methods under mild conditions. The minimizers x(mu), viewed as a function of mu > 0, trace out a path known as the barrier trajectory. (A regular point is a point that satisfies a constraint qualification such as LICQ.)

Setting the gradient of the barrier function to zero we obtain

    grad f(x) - mu * sum_i grad g_i(x) / g_i(x) = 0,

which suggests the Lagrange multiplier estimates

    lambda_i(mu) = mu / g_i(x(mu)).

The above results show that the points on the barrier trajectory, together with their associated Lagrange multiplier estimates, are the solutions of a perturbation of the first-order optimality conditions, in which the complementarity condition lambda_i * g_i(x) = 0 is replaced by lambda_i * g_i(x) = mu.
Example 2. Consider a simple inequality-constrained problem whose optimum is obvious by inspection, and write down the first-order necessary conditions for optimality. Suppose the problem is solved via a logarithmic barrier method; the method then solves the corresponding unconstrained minimization problem, and the Lagrange multiplier estimates at its minimizer are given by lambda_i(mu) = mu / g_i(x(mu)).
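Since the slides' Example 2 is not reproduced in this excerpt, here is a hypothetical one-dimensional analogue worked in the same spirit: minimize x subject to x - 1 >= 0, whose optimum is x* = 1 with multiplier lambda* = 1. The barrier subproblem has a closed-form minimizer, and the estimate mu / g(x(mu)) recovers the true multiplier, while the product lambda * g equals mu, which is exactly the perturbed complementarity condition mentioned above.

```python
# Hypothetical toy problem (not the slides' Example 2):
#   minimize f(x) = x   subject to   g(x) = x - 1 >= 0
# Optimum: x* = 1, Lagrange multiplier lambda* = 1.
#
# Logarithmic barrier subproblem:  minimize  x - mu*log(x - 1)
# Setting the derivative to zero:  1 - mu/(x - 1) = 0  =>  x(mu) = 1 + mu.

for mu in [1.0, 0.1, 0.01, 0.001]:
    x_mu = 1.0 + mu                  # barrier trajectory (closed form)
    lam_mu = mu / (x_mu - 1.0)       # multiplier estimate lambda = mu / g(x(mu))
    # Perturbed complementarity: lambda * g(x) = mu instead of 0.
    print(f"mu={mu:7.3f}  x(mu)={x_mu:7.4f}  lambda(mu)={lam_mu:.4f}  "
          f"lambda*g = {lam_mu*(x_mu - 1.0):.4f}")
```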

Another desirable property shared by the logarithmic barrier function and the inverse barrier function is that the barrier function is convex whenever the constrained problem is a convex program.

Barrier methods also have potential difficulties. The property for which barrier methods have drawn the most severe criticism is that the unconstrained subproblems become increasingly difficult to solve as the barrier parameter decreases. The reason is that (with the exception of some special cases) the condition number of the Hessian matrix of the barrier function at its minimum point becomes increasingly large, tending to infinity as the barrier parameter tends to zero.

Example 3. Consider the problem of Example 2 and compute the Hessian matrix of the logarithmic barrier function at its minimizer. As the barrier parameter tends to zero, this Hessian matrix becomes ill conditioned.
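The ill conditioning can be checked numerically. The following sketch uses a hypothetical two-variable problem (not the slides' example) in which only one constraint is active at the solution, so the barrier Hessian has one eigenvalue of order 1/mu and one of order 1, and its condition number blows up as mu tends to zero.

```python
import numpy as np

# Hypothetical illustration (not the slides' Example 3):
#   minimize f(x) = x1 + x2^2   subject to   g(x) = x1 >= 0
# Solution: x* = (0, 0), with the constraint active.
#
# Logarithmic barrier function:  beta(x; mu) = x1 + x2^2 - mu*log(x1)
# Its minimizer is x(mu) = (mu, 0), and its Hessian there is
#   diag(mu / x1^2, 2) = diag(1/mu, 2),
# so the condition number grows like 1/(2*mu) as mu -> 0.

for mu in [1e-1, 1e-2, 1e-4, 1e-6]:
    x = np.array([mu, 0.0])                    # minimizer of the barrier subproblem
    hessian = np.diag([mu / x[0]**2, 2.0])     # Hessian of beta at x(mu)
    print(f"mu = {mu:8.1e}   cond(Hessian) = {np.linalg.cond(hessian):10.3e}")
```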

Barrier methods require that the initial guess of the solution be strictly feasible. In our examples such an initial guess has been provided, but for general problems a strictly feasible point may not be known. It is sometimes possible to find an initial point by solving an auxiliary optimization problem; this is analogous to the use of a two-phase method in linear programming.

Penalty Methods

In contrast to barrier methods, penalty methods solve a sequence of unconstrained optimization problems whose solutions are usually infeasible for the original constrained problem. A penalty for violation of the constraints is incurred, and as this penalty is increased, the iterates are forced towards the feasible region. An advantage of penalty methods is that they do not require the iterates to be strictly feasible; thus, unlike barrier methods, they are suitable for problems with equality constraints.

Consider the equality-constrained problem

    minimize f(x)   subject to   g_i(x) = 0,   i = 1, ..., m,

where all functions are assumed to be twice continuously differentiable. A penalty term is a continuous function that is zero at feasible points and positive at infeasible points. The best-known such penalty is the quadratic-loss function

    psi(x) = (1/2) * sum_{i=1}^{m} g_i(x)^2,

although penalties of other forms are also possible. The penalty method consists of solving a sequence of unconstrained minimization problems of the form

    minimize_x   pi(x; rho) = f(x) + rho * psi(x),

for an increasing sequence of penalty parameters rho > 0.
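A minimal sketch of this outer loop, under the same caveats as before (the equality-constrained test problem, the initial penalty parameter, and the growth factor are illustrative assumptions; scipy.optimize.minimize solves each unconstrained subproblem):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical test problem (not the slides' example):
#   minimize f(x) = x1^2 + 2*x2^2   subject to   g(x) = x1 + x2 - 1 = 0
# Exact solution: x* = (2/3, 1/3), lambda* = 4/3.
f = lambda x: x[0]**2 + 2.0*x[1]**2
g = lambda x: np.array([x[0] + x[1] - 1.0])

def penalty_objective(x, rho):
    # Quadratic-loss penalty:  f(x) + (rho/2) * sum_i g_i(x)^2
    return f(x) + 0.5*rho*np.sum(g(x)**2)

x, rho = np.zeros(2), 1.0
for _ in range(6):
    res = minimize(penalty_objective, x, args=(rho,), method="BFGS")
    x = res.x
    print(f"rho = {rho:8.1e}   x = {x}   ||g(x)|| = {np.linalg.norm(g(x)):.2e}")
    rho *= 10.0    # increasing the penalty forces the iterates towards feasibility
```

Note that the iterates approach the solution from outside the feasible set: the constraint violation ||g(x)|| is nonzero but shrinks as rho grows.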

Penalty methods share many of the properties of barrier methods: under mild conditions it is possible to guarantee convergence; under appropriate conditions the sequence of penalty-function minimizers defines a continuous trajectory; and it is possible to obtain estimates of the Lagrange multipliers at the solution.

For example, consider the quadratic-loss penalty function. Setting the gradient of pi(x; rho) to zero gives

    grad f(x) + rho * sum_i g_i(x) * grad g_i(x) = 0,

which suggests the Lagrange multiplier estimates lambda_i(rho) = -rho * g_i(x_rho), where x_rho denotes the minimizer of the penalty subproblem.

Example 3 (penalty method). Suppose that a small equality-constrained problem is solved via a penalty method using the quadratic-loss penalty function. Writing down the necessary conditions for optimality of the unconstrained subproblem and defining the Lagrange multiplier estimate as above gives the multiplier estimates directly from the penalty minimizers. Penalty functions suffer from the same problems of ill conditioning as do barrier functions.
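To see the multiplier estimate at work, consider a hypothetical one-variable problem with a closed-form penalty minimizer (not the slides' Example 3): minimize x^2 subject to x - 1 = 0, whose true multiplier is lambda* = 2. The quadratic-loss subproblem has minimizer x_rho = rho/(rho + 2), and the estimate -rho*g(x_rho) tends to 2 as rho grows.

```python
# Hypothetical toy problem (not the slides' Example 3):
#   minimize f(x) = x^2   subject to   g(x) = x - 1 = 0
# Optimum: x* = 1, Lagrange multiplier lambda* = 2.
#
# Quadratic-loss subproblem:  minimize  x^2 + (rho/2)*(x - 1)^2
# Stationarity: 2x + rho*(x - 1) = 0  =>  x_rho = rho / (rho + 2).

for rho in [1.0, 10.0, 100.0, 1000.0]:
    x_rho = rho / (rho + 2.0)
    lam = -rho * (x_rho - 1.0)     # multiplier estimate lambda = -rho * g(x_rho)
    print(f"rho={rho:8.1f}  x_rho={x_rho:.5f}  lambda estimate={lam:.5f}  (true lambda = 2)")
```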

It is also possible to apply penalty methods to problems with inequality constraints g_i(x) >= 0. The quadratic-loss penalty in this case is

    psi(x) = (1/2) * sum_{i=1}^{m} ( min(g_i(x), 0) )^2.

This function has continuous first derivatives, but its second derivatives are discontinuous at points where a constraint value g_i(x) is zero. The same observation holds for other simple forms of the penalty function. Thus, one cannot safely use Newton's method to minimize the function. For this reason, straightforward penalty methods have not been widely used for solving general inequality-constrained problems.
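The smoothness issue is easy to check on a scalar example (an illustration, not from the slides): for the constraint g(x) = x >= 0, the penalty 0.5*min(x, 0)^2 has the continuous first derivative min(x, 0), but its second derivative jumps from 1 to 0 at x = 0, so the Hessian information that Newton's method relies on is discontinuous precisely where the solution typically lies.

```python
# For g(x) = x >= 0, the quadratic-loss penalty is psi(x) = 0.5*min(x, 0)^2.
def psi(x):   return 0.5 * min(x, 0.0)**2
def dpsi(x):  return min(x, 0.0)               # continuous first derivative
def d2psi(x): return 1.0 if x < 0.0 else 0.0   # second derivative jumps at x = 0

for x in [-0.1, -1e-6, 1e-6, 0.1]:
    print(f"x={x:9.1e}   psi={psi(x):.3e}   psi'={dpsi(x):.3e}   psi''={d2psi(x):.1f}")
```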

Multiplier-Based Methods

The ill conditioning of penalty methods can be ameliorated by including multipliers explicitly in the penalty function. Of course, multipliers already appear in the context of the classical penalty method, but in that case they are a by-product of the method: for the quadratic-loss penalty, the multiplier estimate is lambda = -rho * g(x), where g is the vector of constraint functions. These multiplier estimates are used in termination tests, in sensitivity analysis, and, in a more active way, to derive an optimization algorithm.

Examining penalty problems in which the quadratic penalty is added to the Lagrangian f(x) - lambda^T g(x) rather than to f alone leads to the augmented Lagrangian method, based on the function

    L_A(x, lambda; rho) = f(x) - lambda^T g(x) + (rho/2) * ||g(x)||^2.

A simple augmented-Lagrangian method has the following form: given initial estimates lambda_0 and rho > 0, repeatedly (i) minimize L_A(x, lambda_k; rho) with respect to x to obtain x_k, and (ii) update the multiplier estimate, lambda_{k+1} = lambda_k - rho * g(x_k). The algorithm terminates when the current iterate (approximately) satisfies the optimality conditions of the original constrained problem.
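A rough sketch of such a method, under illustrative assumptions (the test problem, the fixed penalty parameter, and the number of outer iterations are mine; the multiplier update lambda <- lambda - rho*g(x) is the standard first-order update):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical test problem:
#   minimize f(x) = x1^2 + 2*x2^2   subject to   g(x) = x1 + x2 - 1 = 0
# Exact solution: x* = (2/3, 1/3), lambda* = 4/3.
f = lambda x: x[0]**2 + 2.0*x[1]**2
g = lambda x: np.array([x[0] + x[1] - 1.0])

def aug_lagrangian(x, lam, rho):
    gx = g(x)
    # L_A(x, lambda; rho) = f(x) - lambda^T g(x) + (rho/2)*||g(x)||^2
    return f(x) - lam @ gx + 0.5*rho*np.sum(gx**2)

x, lam, rho = np.zeros(2), np.zeros(1), 10.0   # rho is held fixed here
for k in range(6):
    res = minimize(aug_lagrangian, x, args=(lam, rho), method="BFGS")
    x = res.x
    lam = lam - rho*g(x)                       # multiplier update
    print(f"k={k}  x={x}  lambda={lam}  ||g(x)||={np.linalg.norm(g(x)):.2e}")
```

Unlike the pure penalty method, the penalty parameter can be kept moderate, which avoids the worst of the ill conditioning while the multiplier updates drive the constraint violation to zero.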
