fmincon: Find minimum of constrained nonlinear multivariable function

Equation

Finds the minimum of a problem specified by

    min_x f(x)   subject to:   c(x) ≤ 0,  ceq(x) = 0,
                               A*x ≤ b,  Aeq*x = beq,
                               lb ≤ x ≤ ub

b and beq are vectors, A and Aeq are matrices, c(x) and ceq(x) are functions that return vectors, and f(x) is a function that returns a scalar. f(x), c(x), and ceq(x) can be nonlinear functions.

x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.

Syntax

    x = fmincon(fun,x0,A,b)
    x = fmincon(fun,x0,A,b,Aeq,beq)
    x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
    x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
    x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
    x = fmincon(problem)
    [x,fval] = fmincon(...)
    [x,fval,exitflag] = fmincon(...)
    [x,fval,exitflag,output] = fmincon(...)
    [x,fval,exitflag,output,lambda] = fmincon(...)
    [x,fval,exitflag,output,lambda,grad] = fmincon(...)
    [x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...)

Description

fmincon attempts to find a constrained minimum of a scalar function of several variables starting at an initial estimate. This is generally referred to as constrained nonlinear optimization or nonlinear programming.

Note: Passing Extra Parameters explains how to pass extra parameters to the objective function and nonlinear constraint functions, if necessary.

x = fmincon(fun,x0,A,b) starts at x0 and attempts to find a minimizer x of the function described in fun subject to the linear inequalities A*x ≤ b. x0 can be a scalar, vector, or matrix.

x = fmincon(fun,x0,A,b,Aeq,beq) minimizes fun subject to the linear equalities Aeq*x = beq and A*x ≤ b. If no inequalities exist, set A = [] and b = [].

x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub. If no equalities exist, set Aeq = [] and beq = [].
If x(i) is unbounded below, set lb(i) = -Inf, and if x(i) is unbounded above, set ub(i) = Inf.

Note: If the specified input bounds for a problem are inconsistent, the output x is x0 and the output fval is []. Components of x0 that violate the bounds lb ≤ x ≤ ub are reset to the interior of the box defined by the bounds. Components that respect the bounds are not changed. See Iterations Can Violate Constraints.

x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon) subjects the minimization to the nonlinear inequalities c(x) or equalities ceq(x) defined in nonlcon. fmincon optimizes such that c(x) ≤ 0 and ceq(x) = 0. If no bounds exist, set lb = [] and/or ub = [].

x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the optimization options specified in options. Use optimoptions to set these options. If there are no nonlinear inequality or equality constraints, set nonlcon = [].

x = fmincon(problem) finds the minimum for problem, where problem is a structure described in Input Arguments. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.

[x,fval] = fmincon(...) returns the value of the objective function fun at the solution x.

[x,fval,exitflag] = fmincon(...) returns a value exitflag that describes the exit condition of fmincon.

[x,fval,exitflag,output] = fmincon(...) returns a structure output with information about the optimization.

[x,fval,exitflag,output,lambda] = fmincon(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.

[x,fval,exitflag,output,lambda,grad] = fmincon(...) returns the value of the gradient of fun at the solution x.

[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...) returns the value of the Hessian at the solution x. See fmincon Hessian.

Input Arguments

Function Arguments describes the arguments passed to fmincon. Options provides the function-specific details for the options values. This section provides function-specific details for fun, nonlcon, and problem.

fun

The function to be minimized.
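To tie the calling forms above together, here is a minimal sketch of a complete call. The quadratic objective and the specific constraint values are illustrative, not taken from the original text:

```matlab
% Minimal sketch: minimize (x1-2)^2 + (x2-1)^2 subject to
% one linear inequality x1 + 2*x2 <= 1 and bounds 0 <= x <= 10.
fun = @(x) (x(1)-2)^2 + (x(2)-1)^2;   % scalar objective
x0  = [0.5; 0.5];                     % initial estimate
A   = [1 2];  b = 1;                  % linear inequality A*x <= b
Aeq = [];     beq = [];               % no linear equalities
lb  = [0; 0]; ub = [10; 10];          % bounds lb <= x <= ub
[x,fval,exitflag] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub);
```

Empty matrices [] stand in for every unused constraint type, so the argument positions stay fixed regardless of which constraints the problem actually has.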
fun is a function that accepts a vector x and returns a scalar f, the objective function evaluated at x. fun can be specified as a function handle for a file:

    x = fmincon(@myfun,x0,A,b)

where myfun is a MATLAB function such as

    function f = myfun(x)
    f = ...   % Compute function value at x

fun can also be a function handle for an anonymous function:

    x = fmincon(@(x)norm(x)^2,x0,A,b);

If the gradient of fun can also be computed and the GradObj option is 'on', as set by

    options = optimoptions('fmincon','GradObj','on')

then fun must return the gradient vector g(x) in the second output argument.

If the Hessian matrix can also be computed and the Hessian option is set via options = optimoptions('fmincon','Hessian','user-supplied') and the Algorithm option is 'trust-region-reflective', fun must return the Hessian value H(x), a symmetric matrix, in a third output argument. fun can give a sparse Hessian. See Writing Objective Functions for details.

If the Hessian matrix can be computed and the Algorithm option is 'interior-point', there are several ways to pass the Hessian to fmincon. For more information, see Hessian.

A, b, Aeq, beq

Linear constraint matrices A and Aeq, and their corresponding vectors b and beq, can be sparse or dense. The trust-region-reflective and interior-point algorithms use sparse linear algebra. If A or Aeq is large, with relatively few nonzero entries, save running time and memory in the trust-region-reflective or interior-point algorithms by using sparse matrices.

nonlcon

The function that computes the nonlinear inequality constraints c(x) ≤ 0 and the nonlinear equality constraints ceq(x) = 0. nonlcon accepts a vector x and returns the two vectors c and ceq. c is a vector that contains the nonlinear inequalities evaluated at x, and ceq is a vector that contains the nonlinear equalities evaluated at x.
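As a sketch of the gradient-supplying pattern just described, the following file-based objective returns the gradient only when the solver asks for it. The Rosenbrock function and the file name rosenboth.m are illustrative choices, not from the original text:

```matlab
% Sketch: objective that also returns its gradient when requested.
% Save as rosenboth.m (illustrative name).
function [f,g] = rosenboth(x)
f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;        % Rosenbrock function
if nargout > 1                                    % gradient requested
    g = [-400*(x(2) - x(1)^2)*x(1) - 2*(1 - x(1));
          200*(x(2) - x(1)^2)];
end
```

With GradObj set to 'on', a call such as fmincon(@rosenboth,[-1;2],[],[],[],[],[],[],[],options) then uses the analytic gradient instead of finite differences. Guarding the gradient computation with nargout avoids paying for it on calls where the solver needs only f.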
nonlcon should be specified as a function handle to a file or to an anonymous function, such as mycon:

    x = fmincon(@myfun,x0,A,b,Aeq,beq,lb,ub,@mycon)

where mycon is a MATLAB function such as

    function [c,ceq] = mycon(x)
    c = ...     % Compute nonlinear inequalities at x.
    ceq = ...   % Compute nonlinear equalities at x.

If the gradients of the constraints can also be computed and the GradConstr option is 'on', as set by

    options = optimoptions('fmincon','GradConstr','on')

then nonlcon must also return, in the third and fourth output arguments, GC, the gradient of c(x), and GCeq, the gradient of ceq(x). GC and GCeq can be sparse or dense. If GC or GCeq is large, with relatively few nonzero entries, save running time and memory in the interior-point algorithm by representing them as sparse matrices. For more information, see Nonlinear Constraints.

Note: Because Optimization Toolbox functions only accept inputs of type double, user-supplied objective and nonlinear constraint functions must return outputs of type double.

problem

    objective   Objective function
    x0          Initial point for x
    Aineq       Matrix for linear inequality constraints
    bineq       Vector for linear inequality constraints
    Aeq         Matrix for linear equality constraints
    beq         Vector for linear equality constraints
    lb          Vector of lower bounds
    ub          Vector of upper bounds
    nonlcon     Nonlinear constraint function
    solver      'fmincon'
    options     Options created with optimoptions

Output Arguments

Function Arguments describes arguments returned by fmincon. This section provides function-specific details for exitflag, lambda, and output:

exitflag

Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding reasons the algorithm terminated.
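A concrete constraint file following the pattern above might confine x to the unit disk. The constraint itself is an illustrative example, not from the original text:

```matlab
% Sketch: nonlinear constraint keeping x inside the unit disk.
% Save as unitdisk.m (illustrative name).
function [c,ceq] = unitdisk(x)
c   = x(1)^2 + x(2)^2 - 1;   % c(x) <= 0 means x is inside the disk
ceq = [];                    % no nonlinear equality constraints
```

Note the sign convention: fmincon treats c(x) ≤ 0 as feasible, so a constraint written as g(x) ≤ r must be returned as c = g(x) - r. Return [] for whichever of c or ceq the problem lacks.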
All algorithms:

     1  First-order optimality measure was less than options.TolFun, and maximum constraint violation was less than options.TolCon.
     0  Number of iterations exceeded options.MaxIter or number of function evaluations exceeded options.MaxFunEvals.
    -1  Stopped by an output function or plot function.
    -2  No feasible point was found.

trust-region-reflective, interior-point, and sqp algorithms:

     2  Change in x was less than options.TolX and maximum constraint violation was less than options.TolCon.

trust-region-reflective algorithm only:

     3  Change in the objective function value was less than options.TolFun and maximum constraint violation was less than options.TolCon.

active-set algorithm only:

     4  Magnitude of the search direction was less than 2*options.TolX and maximum constraint violation was less than options.TolCon.
     5  Magnitude of directional derivative in search direction was less than 2*options.TolFun and maximum constraint violation was less than options.TolCon.

interior-point and sqp algorithms:

    -3  Objective function at current iteration went below options.ObjectiveLimit and maximum constraint violation was less than options.TolCon.

grad

Gradient at x

hessian

Hessian at x

lambda

Structure containing the Lagrange multipliers at the solution x (separated by constraint type). The fields of the structure are:

    lower       Lower bounds lb
    upper       Upper bounds ub
    ineqlin     Linear inequalities
    eqlin       Linear equalities
    ineqnonlin  Nonlinear inequalities
    eqnonlin    Nonlinear equalities

output

Structure containing information about the optimization.
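In practice these exit codes are checked programmatically. A minimal sketch (the objective and bounds are illustrative):

```matlab
% Sketch: inspect exitflag and the output structure after a solve.
fun = @(x) x(1)^2 + x(2)^2;
[x,fval,exitflag,output] = fmincon(fun,[1;1],[],[],[],[],[-2;-2],[2;2]);
if exitflag > 0
    fprintf('Converged in %d iterations using %s\n', ...
            output.iterations, output.algorithm);
else
    disp(output.message)   % diagnose why the solver stopped early
end
```

Any positive exitflag indicates a converged solution; 0 means an iteration or evaluation budget ran out, and negative values indicate failure or early termination.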
The fields of the structure are:

    iterations       Number of iterations taken
    funcCount        Number of function evaluations
    lssteplength     Size of line search step relative to search direction (active-set algorithm only)
    constrviolation  Maximum of constraint functions
    stepsize         Length of last displacement in x (active-set and interior-point algorithms)
    algorithm        Optimization algorithm used
    cgiterations     Total number of PCG iterations (trust-region-reflective and interior-point algorithms)
    firstorderopt    Measure of first-order optimality
    message          Exit message

Hessian

fmincon uses a Hessian as an optional input. This Hessian is the matrix of second derivatives of the Lagrangian (see Equation 3-1), namely,

    ∇²L(x,λ) = ∇²f(x) + Σ λ_i ∇²c_i(x) + Σ λ_i ∇²ceq_i(x)

The various fmincon algorithms handle input Hessians differently:

The active-set and sqp algorithms do not accept a user-supplied Hessian. They compute a quasi-Newton approximation to the Hessian of the Lagrangian.

The trust-region-reflective algorithm can accept a user-supplied Hessian as the final output of the objective function. Since this algorithm has only bounds or linear constraints, the Hessian of the Lagrangian is the same as the Hessian of the objective function. See Writing Scalar Objective Functions for details on how to pass the Hessian to fmincon. Indicate that you are supplying a Hessian by

    options = optimoptions('fmincon','Algorithm','trust-region-reflective','Hessian','user-supplied');

If you do not pass a Hessian, the algorithm computes a finite-difference approximation.

The interior-point algorithm can accept a user-supplied Hessian as a separately defined function; it is not computed in the objective function. The syntax is

    hessian = hessianfcn(x, lambda)

hessian is an n-by-n matrix, sparse or dense, where n is the number of variables. If hessian is large and has relatively few nonzero entries, save running time and memory by representing hessian as a sparse matrix.
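A sketch of such a separately defined Hessian function, for an illustrative problem (quadratic objective f(x) = x(1)^2 + 2*x(2)^2 with one nonlinear inequality c(x) = x(1)^2 + x(2)^2 - 1; neither is from the original text):

```matlab
% Sketch: Hessian of the Lagrangian for the interior-point algorithm.
% For f(x) = x1^2 + 2*x2^2 and c(x) = x1^2 + x2^2 - 1 (illustrative).
function H = hessianfcn(x, lambda)
Hf = [2 0; 0 4];                      % Hessian of the objective
Hc = [2 0; 0 2];                      % Hessian of the inequality constraint
H  = Hf + lambda.ineqnonlin(1)*Hc;    % Hessian of the Lagrangian
```

The function must form the full Lagrangian sum itself, weighting each constraint Hessian by the multiplier that fmincon supplies in lambda.ineqnonlin and lambda.eqnonlin.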
lambda is a structure with the Lagrange multiplier vectors associated with the nonlinear constraints:

    lambda.ineqnonlin
    lambda.eqnonlin

fmincon computes the structure lambda. hessianfcn must calculate the sums in Equation 10-1. Indicate that you are supplying a Hessian by

    options = optimoptions('fmincon','Algorithm','interior-point', ...
        'Hessian','user-supplied','HessFcn',@hessianfcn);

For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.

The interior-point algorithm has several more options for Hessians; see Choose Input Hessian for interior-point fmincon:

    options = optimoptions('fmincon','Hessian','bfgs');

fmincon calculates the Hessian by a dense quasi-Newton approximation. This is the default.

    options = optimoptions('fmincon','Hessian','lbfgs');

fmincon calculates the Hessian by a limited-memory, large-scale quasi-Newton approximation. The default memory, 10 iterations, is used.

    options = optimoptions('fmincon','Hessian',{'lbfgs',positiveinteger});

fmincon calculates the Hessian by a limited-memory, large-scale quasi-Newton approximation. The positive integer specifies how many past iterations should be remembered.

    options = optimoptions('fmincon','Hessian','fin-diff-grads', ...
        'SubproblemAlgorithm','cg','GradObj','on', ...
        'GradConstr','on');

fmincon calculates a Hessian-times-vector product by finite differences of the gradient(s). You must supply the gradient of the objective function, and also gradients of nonlinear constraints if they exist.

Hessian Multiply Function

The interior-point and trust-region-reflective algorithms allow you to supply a Hessian multiply function. This function gives the result of a Hessian-times-vector product, without computing the Hessian directly. This can save memory.
The syntax for the two algorithms differs:

For the interior-point algorithm, the syntax is

    W = HessMultFcn(x,lambda,v);

The result W should be the product H*v, where H is the Hessian of the Lagrangian at x (see Equation 10-1), lambda is the Lagrange multiplier structure (computed by fmincon), and v is a vector of size n-by-1.

Set options as follows:

    options = optimoptions('fmincon','Algorithm','interior-point','Hessian','user-supplied', ...
        'SubproblemAlgorithm','cg','HessMult',@HessMultFcn);

Supply the function HessMultFcn, which returns an n-by-1 vector, where n is the number of dimensions of x. The HessMult option enables you to pass the result of multiplying the Hessian by a vector without calculating the Hessian.

The trust-region-reflective algorithm does not involve lambda:

    W = HessMultFcn(H,v);

The result W = H*v. fmincon passes H as the value returned in the third output of the objective function (see Writing Scalar Objective Functions). fmincon also passes v, a vector or matrix with n rows. The number of columns in v can vary, so write HessMultFcn to accept an arbitrary number of columns. H does not have to be the Hessian; rather, it can be anything that enables you to calculate W = H*v.

Set options as follows:

    options = optimoptions('fmincon','Algorithm','trust-region-reflective', ...
        'Hessian','user-supplied','HessMult',@HessMultFcn);

For an example using a Hessian multiply function with the trust-region-reflective algorithm, see Minimization with Dense Structured Hessian, Linear Equalities.

Options

Optimization options used by fmincon. Some options apply to all algorithms, and others are relevant for particular algorithms. Use optimoptions to set or change the values in options. See Optimization Options Reference for detailed information.

All Algorithms

All four algorithms use these options:

Algorithm

Choose the optimization algorithm:

    'interior-point' (default)
    'trust-region-reflective'
    'sqp'
    'active-set'

For information on choosing the algorithm, see Choosing the Algorithm.
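A sketch of the trust-region-reflective form, where the third objective output is a structure rather than a literal Hessian. The diagonal-plus-low-rank structure assumed here (fields d and U) is purely illustrative:

```matlab
% Sketch: Hessian multiply function for trust-region-reflective.
% Hinfo is whatever the objective returned as its third output;
% here we assume (illustratively) H = diag(Hinfo.d) + Hinfo.U*Hinfo.U',
% so H*v is applied without ever forming the n-by-n matrix H.
function W = HessMultFcn(Hinfo, v)
% v may have several columns, so use bsxfun for the diagonal scaling.
W = bsxfun(@times, Hinfo.d, v) + Hinfo.U * (Hinfo.U' * v);
```

The point of the option is exactly this: for a structured H, the product H*v costs O(n) or O(nk) memory instead of the O(n^2) needed to store H explicitly.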
The trust-region-reflective algorithm requires:

    - A gradient to be supplied in the objective function
    - GradObj to be set to 'on'
    - Either bound constraints or linear equality constraints, but not both

If you select the trust-region-reflective algorithm and these conditions are not all satisfied, fmincon throws an error.

The active-set and sqp algorithms are not large-scale. See Large-Scale vs. Medium-Scale Algorithms.

DerivativeCheck

Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. The choices are 'on' or the default, 'off'.

Diagnostics

Display diagnostic information about the function to be minimized or solved. The choices are 'on' or the default, 'off'.

DiffMaxChange

Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.

DiffMinChange

Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.

Display

Level of display:

    'off' or 'none' displays no output.
    'iter' displays output at each iteration, and gives the default exit message.
    'iter-detailed' displays output at each iteration, and gives the technical exit message.
    'notify' displays output only if the function does not converge, and gives the default exit message.
    'notify-detailed' displays output only if the function does not converge, and gives the technical exit message.
    'final' (default) displays just the final output, and gives the default exit message.
    'final-detailed' displays just the final output, and gives the technical exit message.

FinDiffRelStep

Scalar or vector step size factor. When you set FinDiffRelStep to a vector v, forward finite differences delta are

    delta = v.*sign(x).*max(abs(x),TypicalX);

and central finite differences are

    delta = v.*max(abs(x),TypicalX);

A scalar FinDiffRelStep expands to a vector.
The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.

FinDiffType

Finite differences, used to estimate gradients, are either 'forward' (default) or 'central' (centered). 'central' takes twice as many function evaluations but should be more accurate.

fmincon is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds. However, for the interior-point algorithm, 'central' differences might violate bounds during their evaluation if the AlwaysHonorConstraints option is set to 'none'.

FunValCheck

Check whether objective function and constraints values are valid. 'on' displays an error when the objective function or constraints return a value that is complex, Inf, or NaN. The default, 'off', displays no error.

GradConstr

Gradient for nonlinear constraint functions defined by the user. When set to 'on', fmincon expects the constraint function to have four outputs, as described in nonlcon in the Input Arguments section. When set to the default, 'off', gradients of the nonlinear constraints are estimated by finite differences. The trust-region-reflective
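Pulling several of these options together, here is a sketch of a typical options setup. The specific objective and constraint values are illustrative:

```matlab
% Sketch: request central finite differences and cross-check any
% user-supplied gradients before trusting a solve.
options = optimoptions('fmincon', ...
    'Algorithm','interior-point', ...
    'FinDiffType','central', ...      % more accurate, twice the evaluations
    'DerivativeCheck','on', ...       % compare supplied vs. finite-diff gradients
    'Display','iter');                % show progress at each iteration
x = fmincon(@(x)norm(x)^2, [1;1], [1 1], 1, [],[],[],[],[], options);
```

Turning DerivativeCheck off again once the gradients are verified avoids the extra finite-difference cost on production runs.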