Original article: Adaptive Dynamic Programming: An Introduction

Abstract: In this article, we introduce some recent research trends within the field of adaptive/approximate dynamic programming (ADP), including the variations on the structure of ADP schemes, the development of ADP algorithms, and applications of ADP schemes. For ADP algorithms, the point of focus is that iterative algorithms of ADP can be sorted into two classes: one class is the iterative algorithm with an initial stable policy; the other is the one without the requirement of an initial stable policy. It is generally believed that the latter requires less computation, at the cost of missing the guarantee of system stability during the iteration process. In addition, many recent papers have provided convergence analysis associated with the algorithms developed. Furthermore, we point out some topics for future studies.

Introduction

As is well known, there are many
methods for designing stable control for nonlinear systems. However, stability is only a bare minimum requirement in a system design. Ensuring optimality guarantees the stability of the nonlinear system. Dynamic programming is a very useful tool in solving optimization and optimal control problems by employing the principle of optimality. In [16], the principle of optimality is expressed as: "An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision." There are several spectrums about the dynamic programming. One can consider discrete-time systems or continuous-time systems, linear systems or nonlinear systems, time-invariant systems or time-varying systems, deterministic systems or stochastic systems, etc.

We first take a look at nonlinear discrete-time (time-varying) dynamical (deterministic) systems. Time-varying nonlinear systems cover most of the application areas and discrete time is the basic consideration for digital computation. Suppose that one is given a discrete-time nonlinear (time-varying) dynamical system

x(k+1) = F[x(k), u(k), k]   (1)

where x(k) represents the state vector of the system, u(k) denotes the control action, and F is the system function. Suppose that one associates with this system the performance index (or cost)

J[x(i), i] = \sum_{k=i}^{\infty} \gamma^{k-i} U[x(k), u(k), k]   (2)

where U is called the utility function and \gamma is the discount factor with 0 < \gamma \le 1. Note that the function J is dependent on the initial time i and the initial state x(i), and it is referred to as the cost-to-go of state x(i). The objective of the dynamic programming problem is to choose a control sequence u(k), k = i, i+1, ..., so that the function J (i.e., the cost) in (2) is minimized. According to Bellman, the optimal cost from time k is equal to

J*[x(k)] = \min_{u(k)} \{ U[x(k), u(k), k] + \gamma J*[x(k+1)] \}   (3)

The optimal control u*(k) at time k is the u(k) which achieves this minimum, i.e.,

u*(k) = \arg\min_{u(k)} \{ U[x(k), u(k), k] + \gamma J*[x(k+1)] \}   (4)

Equation (3) is the principle of optimality for discrete-time systems. Its importance lies in the fact that it allows one to optimize over only one control vector at a time by working backward in time.
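As a concrete illustration of (3) and (4), the backward recursion can be carried out exactly when the state and control spaces are small and finite. The following is a minimal sketch (not from the original article) of finite-horizon value iteration for a hypothetical discrete problem; the transition function `step`, the utility `utility`, and the grid sizes are illustrative assumptions only.

```python
import numpy as np

# Hypothetical finite problem: states 0..N_STATES-1, controls 0..N_CONTROLS-1.
N_STATES, N_CONTROLS, HORIZON = 10, 3, 20
GAMMA = 0.95  # discount factor, 0 < gamma <= 1

def step(x, u):
    """Assumed system function F[x(k), u(k)]: next state (illustrative)."""
    return (x + u) % N_STATES

def utility(x, u):
    """Assumed utility U[x(k), u(k)]: penalize distance from state 0 and control effort."""
    return float(x * x + u)

# J[k][x] approximates the optimal cost-to-go from state x at stage k.
J = np.zeros((HORIZON + 1, N_STATES))
policy = np.zeros((HORIZON, N_STATES), dtype=int)

# Work backward in time, optimizing over one control vector at a time (eq. (3)).
for k in reversed(range(HORIZON)):
    for x in range(N_STATES):
        costs = [utility(x, u) + GAMMA * J[k + 1][step(x, u)]
                 for u in range(N_CONTROLS)]
        policy[k][x] = int(np.argmin(costs))   # eq. (4)
        J[k][x] = min(costs)                   # eq. (3)

print("Optimal cost-to-go from state 5 at stage 0:", J[0][5])
```

For continuous or high-dimensional state spaces, this table-based backward recursion becomes intractable, which is exactly the difficulty that motivates the approximation structures discussed later in the article.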
In the nonlinear continuous-time case, the system can be described by

\dot{x}(t) = F[x(t), u(t), t],  t \ge t_0   (5)

The cost in this case is defined as

J[x(t)] = \int_{t}^{\infty} U[x(\tau), u(\tau)] d\tau   (6)

For continuous-time systems, Bellman's principle of optimality can be applied, too. The optimal cost J*(x_0) = \min_{u(t)} J(x_0, u(t)) will satisfy the Hamilton-Jacobi-Bellman equation

-\frac{\partial J^*(x(t))}{\partial t} = \min_{u(t)} \left\{ U(x(t), u(t), t) + \left( \frac{\partial J^*(x(t))}{\partial x(t)} \right)^{T} F(x(t), u(t), t) \right\}   (7)

Equations (3) and (7) are called the optimality equations of dynamic programming, which are the basis for the implementation of dynamic programming. In the above, if the function F in (1) or (5) and the cost function J in (2) or (6) are known, the solution for u(k) becomes a simple optimization problem. If the system is modeled by linear dynamics and the cost function to be minimized is quadratic in the state and control, then the optimal control is a linear feedback of the states, where the gains are obtained by solving a standard Riccati equation [47].
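For completeness, the linear-quadratic special case mentioned above can be written out explicitly. This is the standard discrete-time LQR result, stated here (it is not spelled out in the excerpt) for the undiscounted case \gamma = 1:

```latex
% Discrete-time LQR special case of (1)-(3) with gamma = 1 (standard result):
% x(k+1) = A x(k) + B u(k),  U[x(k),u(k)] = x(k)^T Q x(k) + u(k)^T R u(k).
\begin{aligned}
J^*[x(k)] &= x(k)^{T} P\, x(k), \\
u^*(k)    &= -K x(k), \qquad K = (R + B^{T} P B)^{-1} B^{T} P A, \\
P &= Q + A^{T} P A - A^{T} P B (R + B^{T} P B)^{-1} B^{T} P A
   \quad \text{(discrete algebraic Riccati equation).}
\end{aligned}
```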
On the other hand, if the system is modeled by nonlinear dynamics or the cost function is nonquadratic, the optimal state feedback control will depend upon solutions to the Hamilton-Jacobi-Bellman (HJB) equation [48], which is generally a nonlinear partial differential equation or difference equation. However, it is often computationally untenable to run true dynamic programming due to the backward numerical process required for its solutions, i.e., as a result of the well-known "curse of dimensionality" [16], [28]. In [69], three curses are displayed in resource management and control problems to show that the cost function J, which is the theoretical solution of the Hamilton-Jacobi-Bellman equation, is very difficult to obtain, except for systems satisfying some very good conditions. Over the years, progress has been made to circumvent the "curse of dimensionality" by building a system, called a "critic", to approximate the cost function in dynamic programming (cf. [10], [60], [61], [63], [70], [78], [92], [94], [95]). The idea is to approximate dynamic programming solutions by using a function approximation structure such as neural networks to approximate the cost function.

The Basic Structures of ADP
In recent years, adaptive/approximate dynamic programming (ADP) has gained much attention from many researchers in order to obtain approximate solutions of the HJB equation, cf. [2], [3], [5], [8], [11]-[13], [21], [22], [25], [30], [31], [34], [35], [40], [46], [49], [52], [54], [55], [63], [70], [76], [80], [83], [95], [96], [99], [100]. In 1977, Werbos [91] introduced an approach for ADP that was later called adaptive critic designs (ACDs). ACDs were proposed in [91], [94], [97] as a way for solving dynamic programming problems forward in time. In the literature, there are several synonyms used for "Adaptive Critic Designs" [10], [24], [39], [43], [54], [70], [71], [87], including "Approximate Dynamic Programming" [69], [82], [95], "Asymptotic Dynamic Programming" [75], "Adaptive Dynamic Programming" [63], [64], "Heuristic Dynamic Programming" [46], [93], "Neuro-Dynamic Programming" [17], "Neural Dynamic Programming" [82], [101], and "Reinforcement Learning" [84].

Bertsekas and Tsitsiklis gave an overview of neuro-dynamic programming in their book [17]. They provided the background, gave a detailed introduction to dynamic programming, discussed neural network architectures and methods for training them, and developed general convergence theorems for stochastic approximation methods as the foundation for analysis of various neuro-dynamic programming algorithms. They provided the core neuro-dynamic programming methodology, including many mathematical results and methodological insights. They suggested many useful methodologies for applications of neuro-dynamic programming, such as Monte Carlo simulation, on-line and off-line temporal difference methods, the Q-learning algorithm, optimistic policy iteration methods, Bellman error methods, approximate linear programming, approximate dynamic programming with a cost-to-go function, etc. A particularly impressive success that greatly motivated subsequent research was the development of a backgammon playing program by Tesauro [85]. Here a neural network was trained to approximate the optimal cost-to-go function of the game of backgammon by using simulation, that is, by letting the program play against itself. Unlike chess programs, this program did not use lookahead of many steps, so its success can be attributed primarily to the use of a properly trained approximation of the optimal cost-to-go function.
To implement the ADP algorithm, Werbos [95] proposed a means to get around this numerical complexity by using "approximate dynamic programming" formulations. His methods approximate the original problem with a discrete formulation. The solution to the ADP formulation is obtained through a neural network based adaptive critic approach. The main idea of ADP is shown in Fig. 1. He proposed two basic versions, which are heuristic dynamic programming (HDP) and dual heuristic programming (DHP).

HDP is the most basic and widely applied structure of ADP [13], [38], [72], [79], [90], [93], [104], [106]. The structure of HDP is shown in Fig. 2. HDP is a method for estimating the cost function. Estimating the cost function for a given policy only requires samples from the instantaneous utility function U, while models of the environment and the instantaneous reward are needed to find the cost function corresponding to the optimal policy. In HDP, the output of the critic network is \hat{J}, which is the estimate of J in equation (2). This is done by minimizing the following error measure over time

\|E_h\| = \sum_{k} E_h(k) = \sum_{k} \left[ \hat{J}(k) - U(k) - \gamma \hat{J}(k+1) \right]^2   (8)

where \hat{J}(k) = \hat{J}[x(k), u(k), k, W_C] and W_C represents the parameters of the critic network. When E_h = 0 for all k, (8) implies that

\hat{J}(k) = \sum_{i=k}^{\infty} \gamma^{i-k} U(i)   (9)

which is the same as the cost-to-go defined in equation (2).
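A minimal sketch of how the critic update implied by (8) might look in code is given below. It is not the original authors' implementation; the linear-in-parameters critic, the feature map, and the learning rate are illustrative assumptions.

```python
import numpy as np

GAMMA = 0.95          # discount factor
LEARNING_RATE = 0.01

def features(x, u):
    """Assumed feature map for a linear-in-parameters critic (illustrative)."""
    x, u = np.atleast_1d(x), np.atleast_1d(u)
    return np.concatenate([x, u, x * x, [1.0]])

def critic(w, x, u):
    """Critic output J_hat(k) = J_hat[x(k), u(k), W_C] with parameters w."""
    return float(w @ features(x, u))

def hdp_critic_update(w, x_k, u_k, utility_k, x_k1, u_k1):
    """One gradient step reducing the error in (8):
    e(k) = J_hat(k) - [U(k) + gamma * J_hat(k+1)], with the target held fixed."""
    target = utility_k + GAMMA * critic(w, x_k1, u_k1)
    error = critic(w, x_k, u_k) - target
    return w - LEARNING_RATE * error * features(x_k, u_k), error

# Example usage with made-up one-step data (scalar state and control):
w = np.zeros(4)
w, e = hdp_critic_update(w, x_k=0.5, u_k=-0.2, utility_k=0.3, x_k1=0.45, u_k1=-0.1)
print("TD-style critic error:", e)
```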
Dual heuristic programming is a method for estimating the gradient of the cost function, rather than J itself. To do this, a function is needed to describe the gradient of the instantaneous cost function with respect to the state of the system. In the DHP structure, the action network remains the same as the one for HDP, but the second network, which is called the critic network, has the costate as its output and the state variables as its inputs. The critic network's training is more complicated than that in HDP, since we need to take into account all relevant pathways of backpropagation. This is done by minimizing the following error measure over time

\|E_D\| = \sum_{k} E_D(k) = \sum_{k} \left[ \frac{\partial \hat{J}(k)}{\partial x(k)} - \frac{\partial U(k)}{\partial x(k)} - \gamma \frac{\partial \hat{J}(k+1)}{\partial x(k)} \right]^2   (10)

where \partial \hat{J}(k)/\partial x(k) = \partial \hat{J}[x(k), u(k), k, W_C]/\partial x(k) and W_C represents the parameters of the critic network. When E_D = 0 for all k, (10) implies that

\frac{\partial \hat{J}(k)}{\partial x(k)} = \sum_{i=k}^{\infty} \gamma^{i-k} \frac{\partial U(i)}{\partial x(k)}   (11)
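To make "all relevant pathways of backpropagation" concrete, the target gradient in (10) can be expanded by the chain rule through the action and model networks. The expansion below is one standard way to write it (added here for clarity, with column-gradient convention assumed and \lambda(k) denoting the critic output \partial \hat{J}(k)/\partial x(k)):

```latex
% Chain-rule expansion of the DHP target, lambda(k) = dJ_hat(k)/dx(k):
\lambda(k) = \frac{\partial U(k)}{\partial x(k)}
 + \left(\frac{\partial u(k)}{\partial x(k)}\right)^{T} \frac{\partial U(k)}{\partial u(k)}
 + \gamma \left(\frac{\partial x(k+1)}{\partial x(k)}\right)^{T} \lambda(k+1)
 + \gamma \left(\frac{\partial u(k)}{\partial x(k)}\right)^{T}
          \left(\frac{\partial x(k+1)}{\partial u(k)}\right)^{T} \lambda(k+1)
```

The Jacobians of the model network (\partial x(k+1)/\partial x(k), \partial x(k+1)/\partial u(k)) and of the action network (\partial u(k)/\partial x(k)) are the additional backpropagation pathways that HDP does not require.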
Theoretical Developments

In [82], Si et al. summarize the cross-disciplinary theoretical developments of ADP, overview DP and ADP, and discuss their relations to artificial intelligence, approximation theory, control theory, operations research, and statistics. In [69], Powell shows how ADP, when coupled with mathematical programming, can solve (approximately) deterministic or stochastic optimization problems that are far larger than anything that could be solved using existing techniques, and shows the improvement directions of ADP.

In [95], Werbos further gave two other versions called "action-dependent critics," namely, ADHDP (also known as Q-learning [89]) and ADDHP. In these two ADP structures, the control is also an input of the critic networks.
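The distinguishing feature of the action-dependent structures can be written compactly. The relation below is the standard Q-function form of the optimality equation (3); it is added here for clarity rather than quoted from the excerpt:

```latex
% Action-dependent critic: Q(x(k), u(k)) is the cost of applying u(k) in x(k)
% and acting optimally afterwards; compare with (3).
Q^{*}[x(k), u(k)] = U[x(k), u(k), k] + \gamma \min_{u(k+1)} Q^{*}[x(k+1), u(k+1)],
\qquad
J^{*}[x(k)] = \min_{u(k)} Q^{*}[x(k), u(k)].
```

Because the minimization on the right-hand side is over the critic's own input, a greedy action can be obtained from the critic alone, without differentiating through a model network.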
In 1997, Prokhorov and Wunsch [70] presented more algorithms for ACDs. They discussed the design families of HDP, DHP, and globalized dual heuristic programming (GDHP). They suggested some new improvements to the original GDHP design, which promise to be useful for many engineering applications in the areas of optimization and optimal control. Based on one of these modifications, they presented a unified approach to all ACDs, which leads to a generalized training procedure for ACDs. In [26], a realization of ADHDP was suggested: a least squares support vector machine (SVM) regressor is used for generating the control actions, while an SVM-based tree-type neural network (NN) is used as the critic. The GDHP or ADGDHP structure minimizes the error with respect to both the cost and its derivatives. While it is more complex to do this simultaneously, the resulting behavior is expected to be superior. Thus, in [102], GDHP serves as a reconfigurable controller to deal with both abrupt and incipient changes in the plant dynamics due to faults. A novel fault tolerant control (FTC) supervisor is combined with GDHP for the purpose of improving the performance of GDHP for fault tolerant control. When the plant is affected by a known abrupt fault, the new initial conditions of GDHP are loaded from a dynamic model bank (DMB). On the other hand, if the fault is incipient, the reconfigurable controller maintains performance by continuously modifying itself without supervisor intervention. It is noted that the three networks used to implement the GDHP are trained in an online fashion by utilizing two distinct networks to implement the critic. The first critic network is trained at every iteration, while the second one is updated with a copy of the first one at a given period of iterations.

All the ADP structures can realize the same function, that is, to obtain the optimal control policy, while the computation precision and running time differ from each other. Generally speaking, the computation burden of HDP is low but the computation precision is also low, while GDHP has better precision but its computation process takes a longer time; a detailed comparison can be seen in [70].

In [30], [33] and [83], the schematic of direct heuristic dynamic programming is developed. Using the approach of [83], the model network in Fig. 1 is not needed anymore.
42、to model-free adaptive criticdesigns. Several practical examples areincluded in 101 for demonstrationwhich include single inverted pendulumand triple inverted pendulum. A reinforcementlearning-based controllerdesign for nonlinear discrete-time systemswith input constraints is presented by 36,where t
43、he nonlinear tracking control isimplemented with filtered tracking errorusing direct HDP designs. Similar worksalso see 37. Reference 54 is also aboutmodel-free adaptive critic designs. Twoapproaches for the training of critic networkare provided in 54: A forward-in-timeapproach and a backward-in-ti
44、me approach.Fig. 4 shows the diagram of forward-intimeapproach. In this approach, we viewJ(k) in (8) as the output of the critic networkto be trained and chooseU(k)1gJ(k11) as the training target.Note that J(k) and J(k11) are obtainedusing state variables at different time instances. Fig. 5 showsthe
45、 diagram of backward-in-time approach. In this approach,we view J(k11) in (8) as the output of the critic network tobe trained and choose ( J(k)2U(k)/g as the training target.The training ap proach of 101 can be considered as a backward-in-time ap proach. In Fig. 4 and Fig. 5, x(k11) is theoutput of
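Side by side, the two critic-training schemes distribute the same temporal-difference relation from (8) differently between the network output and the training target (a restatement for clarity, not an addition to the original):

```latex
% Forward-in-time: train the critic output at time k toward a target built from time k+1.
\text{output } \hat{J}(k), \qquad \text{target } d(k) = U(k) + \gamma\, \hat{J}(k+1).
% Backward-in-time: train the critic output at time k+1 toward a target built from time k.
\text{output } \hat{J}(k+1), \qquad \text{target } d(k+1) = \frac{\hat{J}(k) - U(k)}{\gamma}.
```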
An improvement and modification to the two-network architecture, called the "single network adaptive critic (SNAC)", was presented in [65], [66]. This approach eliminates the action network. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load (about half of that of the dual-network algorithms), and no approximation error from the action network, since the action network is eliminated. The SNAC approach is applicable to a wide class of nonlinear systems where the optimal control (stationarity) equation can be explicitly expressed in terms of the state and the costate variables. Most of the problems in aerospace, automobile, robotics, and other engineering disciplines can be characterized by nonlinear control-affine equations that yield such a relation.
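For a control-affine system, the stationarity equation referred to above can be written down explicitly. The form below assumes a quadratic control cost u(k)^T R u(k) and writes the costate as \lambda(k+1) = \partial J^*/\partial x(k+1); it is an illustrative special case showing why the optimal control follows directly from the state and the costate, making a separate action network redundant:

```latex
% Control-affine system x(k+1) = f(x(k)) + g(x(k)) u(k),
% utility U = l(x(k)) + u(k)^T R u(k); first-order optimality condition:
\frac{\partial U(k)}{\partial u(k)}
 + \gamma \left( \frac{\partial x(k+1)}{\partial u(k)} \right)^{T} \lambda(k+1)
 = 2 R\, u(k) + \gamma\, g(x(k))^{T} \lambda(k+1) = 0
\;\;\Longrightarrow\;\;
u^{*}(k) = -\tfrac{\gamma}{2}\, R^{-1} g(x(k))^{T} \lambda(k+1).
```

Given x(k), the SNAC critic outputs \lambda(k+1), after which u^*(k) follows in closed form; this is the sense in which the optimal control is expressed explicitly in terms of the state and the costate.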
SNAC-based controllers yield excellent tracking performance in applications to microelectromechanical systems, chemical reactors, and high-speed reentry problems. Padhi et al. [65] have proved that for linear systems (where the mapping between the costate at stage k+1 and the state at stage k is linear), the solution obtained by the algorithm based on the SNAC structure converges to the solution of the discrete Riccati equation.

Translation: A Survey of Adaptive Dynamic Programming

Abstract: Adaptive dynamic programming (ADP) is an approximate optimal method that has recently emerged in the field of optimal control and is currently a research hotspot in the international optimization community. The ADP method uses a function approximation structure to approximate the solution of the Hamilton-Jacobi-Bellman (HJB) equation and obtains an approximately optimal control policy for the system through offline iteration or online updating, so that it can effectively solve optimal control problems for nonlinear systems. This paper introduces the ADP method from three aspects: changes in the structure of ADP, the development of its algorithms, and its applications.
It summarizes the current research results on ADP methods and offers a further outlook on the problems that remain to be solved in this research area and on future directions of development.

Keywords: adaptive dynamic programming, neural networks, nonlinear systems, stability

Introduction

Dynamic systems are ubiquitous in nature, and the stability analysis of dynamic systems has long been a research focus for which a series of methods has been proposed. However, control researchers and engineers usually also require optimality on top of guaranteeing the stability of the control system. In the 1950s and 1960s, driven by the development of space technology and the practical availability of digital computers, the optimization theory of dynamic systems developed rapidly and formed an important branch of the discipline: optimal control. It has found increasingly broad applications in many fields such as space technology, systems engineering, economic management and decision making, population control, and the optimization of multistage process equipment. In 1957, Bellman proposed an effective tool for solving optimal control problems: the dynamic programming (DP) method [1]. The core of this method is Bellman's principle of optimality, namely: an optimal policy of a multistage decision process has the property that, whatever the initial state and initial decision are, the remaining decisions must also constitute an optimal policy with regard to the state resulting from the initial decision. This principle can be reduced to a basic recursive formula; to solve a multistage decision problem, one recurses backward from the final stage to the initial stage. The principle applies to a very wide range of problems, for example discrete systems, continuous systems, linear systems, nonlinear systems, deterministic systems, and stochastic systems. Below, the basic principle of the DP method is explained for the discrete case and the continuous case separately.

First, consider a discrete nonlinear system. Suppose that the dynamic equation of the system is as in (1), where x(k) is the state vector of the system and u(k) is the control input vector. The corresponding cost function (or performance index) of the system takes the form of (2), where the initial state x(i) is given, U(x(k), u(k), k) is the utility function, and \gamma is the discount factor satisfying 0 < \gamma \le 1. The control objective is to find an admissible decision (control) sequence u(k), k = i, i+1, ..., such that the cost function (2) is minimized. According to Bellman's principle of optimality, the minimum cost starting from any state at time k consists of two parts: one part is the minimum cost incurred at time k, and the other is the accumulated minimum cost from time k+1 to infinity, as expressed in (3). The control policy u(k) at time k that attains this optimum is given by (4).
Next, consider the optimal control problem for a continuous nonlinear (time-varying) dynamic (deterministic) system. Consider the continuous-time system in (5), where F(x, u, t) is an arbitrary continuous function, and seek an admissible control policy u(t) that minimizes the cost function (performance index) in (6). The continuous problem can be converted into a discrete one by discretization, and the optimal control can then be obtained by discrete dynamic programming; as the discretization interval tends to zero, the two solutions must coincide. By applying Bellman's principle of optimality, the continuous form of DP is obtained as the equation in (7). It can be seen that this is a first-order nonlinear partial differential equation in J*(x(t), t) with x(t) and t as independent variables, known mathematically as the Hamilton-Jacobi-Bellman (HJB) equation. If the system is linear and the cost function is quadratic in the state and control input, then the optimal control policy takes the form of state feedback and can be obtained by solving a standard Riccati equation. If the system is nonlinear or the cost function is not quadratic in the state and control input, then the optimal control policy has to be obtained by solving the HJB equation. However, solving a partial differential equation such as the HJB equation is very difficult. In addition, the DP method has an obvious weakness: as the dimensions of x and u grow, the amounts of computation and storage grow dramatically, which is the well-known "curse of dimensionality" problem [1-2]. To overcome these weaknesses, Werbos first proposed the framework of the adaptive dynamic programming (ADP) method [3], whose main idea is to use a function approximation structure (for example, neural networks, fuzzy models, or polynomials) to estimate the cost function and thereby solve the DP problem forward in time. In recent years, the ADP method has received wide attention, and a series of synonyms have arisen, for example: adaptive critic designs [4-7], heuristic dynamic programming [8-9], neuro-dynamic programming [10-11], adaptive dynamic programming [12], and reinforcement learning [13]. At the "2006 NSF Workshop and Outreach Tutorials on Approximate Dynamic Programming" organized by the US National Science Foundation in 2006, it was suggested that the method be referred to uniformly as "adaptive/approximate dynamic programming".
Bertsekas et al. summarized neuro-dynamic programming in [10-11], introduced dynamic programming and the structure and training algorithms of neural networks in detail, and proposed many effective methods for applying neuro-dynamic programming. Si et al. summarized the cross-disciplinary development of ADP methods and discussed the relations of DP and ADP to artificial intelligence, approximation theory, control theory, operations research, and statistics [14]. In [15], Powell showed how the ADP method can be used to solve deterministic or stochastic optimization problems and pointed out directions for the development of ADP. In [16], Balakrishnan et al. surveyed earlier methods for designing feedback controllers for dynamic systems with ADP, considering both the model-based and the model-free case. Reference [17] introduces ADP methods from the viewpoints of requiring and not requiring an initial stabilizing policy. Based on our own research results and building on previous studies, this paper surveys the latest progress of ADP methods.

Structural Developments of ADP

To implement the ADP method, Werbos proposed two basic structures: heuristic dynamic programming (HDP) and dual heuristic programming (DHP), whose structures are shown in Fig. 1 and Fig. 2 [4]. HDP is the most basic and most widely used structure of ADP; its purpose is to estimate the cost function of the system, and it generally employs three networks: a critic network, an action network, and a model network. The output of the critic network is used to estimate the cost function J(x(k)); the action network maps the relationship between the state variables and the control input; and the model network estimates the system state at the next time step. The DHP method, by contrast, estimates the gradient of the cost function of the system. The action network and the model network in DHP are defined in the same way as in HDP, while the output of its critic network is the gradient of the cost function.
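To tie the three networks together, the following sketch shows one time step of an HDP-style loop. It is a toy illustration only: the scalar dynamics, the linear policy, the quadratic critic, and the simple finite-difference policy update are all assumptions for readability, not the structure from any specific reference above.

```python
import numpy as np

GAMMA = 0.95

def model_net(x, u):
    """Placeholder model network: predicts x(k+1) from x(k) and u(k)."""
    return 0.9 * x + 0.1 * u          # assumed linear toy dynamics

def action_net(x, theta_a):
    """Placeholder action network: maps the state to a control."""
    return theta_a * x                 # assumed linear policy

def critic_net(x, w_c):
    """Placeholder critic network: estimates the cost-to-go J_hat(x)."""
    return w_c * x * x                 # assumed quadratic critic

def utility(x, u):
    return x * x + u * u

def hdp_step(x, w_c, theta_a, lr=0.01):
    """One HDP iteration: the critic moves toward U(k) + gamma*J_hat(k+1),
    and the action parameter is nudged to reduce that same target."""
    u = action_net(x, theta_a)
    x_next = model_net(x, u)                                # model network output
    target = utility(x, u) + GAMMA * critic_net(x_next, w_c)
    critic_error = critic_net(x, w_c) - target              # HDP error, cf. (8)
    w_c -= lr * critic_error * x * x                        # gradient of w_c*x^2 w.r.t. w_c
    # Finite-difference nudge of the policy parameter (illustrative only):
    eps = 1e-4
    u_pert = action_net(x, theta_a + eps)
    cost_pert = utility(x, u_pert) + GAMMA * critic_net(model_net(x, u_pert), w_c)
    theta_a -= lr * (cost_pert - target) / eps
    return x_next, w_c, theta_a

x, w_c, theta_a = 1.0, 0.0, -0.1
for _ in range(200):
    x, w_c, theta_a = hdp_step(x, w_c, theta_a)
print("critic weight:", w_c, "policy gain:", theta_a)
```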