Research on Stochastic Optimization Techniques

English Translation — Department: Automation; Major: Automation; Class: 191003; Student: Wang Shijun (汪士骏); Student No.: 103645; Supervisor: Xing Chao (邢超)

Differential Evolution Algorithm With Strategy Adaptation for Global Numerical Optimization
A. K. Qin, V. L. Huang, and P. N. Suganthan

Abstract—Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous spaces, and it has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both the trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process/evolution. The performance of the SaDE algorithm is extensively evaluated (using codes available from P. N. Suganthan) on a suite of 26 bound-constrained numerical optimization problems, and it compares favorably with the conventional DE and several state-of-the-art parameter-adaptive DE variants.

Index Terms—Differential evolution (DE), global numerical optimization, parameter adaptation, self-adaptation, strategy adaptation.

I. INTRODUCTION

Evolutionary algorithms (EAs), inspired by the natural evolution of species, have been successfully applied to solve numerous optimization problems in diverse fields.
However, when implementing EAs, users not only need to determine appropriate encoding schemes and evolutionary operators, but also need to choose suitable parameter settings to ensure the success of the algorithm, which may incur demanding computational costs due to the time-consuming trial-and-error tuning of parameters and operators. To overcome this inconvenience, researchers have actively investigated the adaptation of parameters and operators in EAs [1]–[3]. Different categorizations of parameter adaptation methods have been presented in [4]–[6]. Angeline [4] summarized two types of parameter updating rules in an adaptive EA, namely, absolute and empirical rules. Absolute updating rules usually prespecify how the parameter modifications will be made, while empirical updating rules adapt parameters according to the competition inherent in EAs. Reference [5] divided parameter adaptation techniques into three categories: deterministic, adaptive, and self-adaptive control rules. Deterministic rules modify the parameters according to certain predetermined rationales, without utilizing any feedback from the search process. Adaptive rules incorporate some form of feedback from the search procedure to guide the parameter adaptation. Self-adaptive rules directly encode parameters into the individuals and evolve them together with the encoded solutions; parameter values carried by individuals with better fitness values survive, which fully utilizes the feedback from the search process. Generally speaking, self-adaptive rules can also refer to rules that mainly utilize feedback from the search process, such as fitness values, to guide the updating of parameters.

The differential evolution (DE) algorithm, proposed by Storn and Price [7], is a simple yet powerful population-based stochastic search technique, and it is an efficient and effective global optimizer in the continuous search domain.
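To make the machinery concrete before the discussion of strategies and parameters that follows, here is a minimal sketch of the most common DE variant, DE/rand/1/bin (mutation, binomial crossover, greedy selection). The function name, the sphere test function, and the default settings below are illustrative choices, not taken from the paper:

```python
import random

def de_rand_1_bin(f, bounds, NP=50, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin loop: mutation, binomial crossover, greedy selection."""
    rng = random.Random(seed)
    D = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(NP):
            # DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
            # mutually distinct and different from the target index i
            r1, r2, r3 = rng.sample([j for j in range(NP) if j != i], 3)
            v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(D)]
            # binomial crossover: take each mutant component with probability CR,
            # forcing at least one component (index jrand) to come from v
            jrand = rng.randrange(D)
            u = [v[d] if (d == jrand or rng.random() < CR) else pop[i][d]
                 for d in range(D)]
            # clamp the trial vector to the bound constraints
            u = [min(max(u[d], lo), hi) for d, (lo, hi) in enumerate(bounds)]
            # greedy selection: replace the target if the trial is no worse
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(NP), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)  # a standard separable test function
x_best, f_best = de_rand_1_bin(sphere, [(-5.0, 5.0)] * 5)
```

The three control parameters discussed throughout the paper appear here directly: NP sets the population size, F scales the difference vector in the mutation step, and CR controls how many components the trial vector inherits from the mutant.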
DE has been successfully applied in diverse fields such as mechanical engineering [13], [14], communication [11], and pattern recognition [10]. In DE, there exist many trial vector generation strategies, of which only a few may be suitable for solving a particular problem. Moreover, three crucial control parameters involved in DE, i.e., the population size NP, the scaling factor F, and the crossover rate CR, may significantly influence the optimization performance of DE. Therefore, to successfully solve a specific optimization problem at hand, it is generally required to perform a time-consuming trial-and-error search for the most appropriate strategy and to tune its associated parameter values. Moreover, as evolution proceeds, the population of DE may move through different regions of the search space, within which certain strategies associated with specific parameter settings may be more effective than others. It is therefore desirable to adaptively determine an appropriate strategy and its associated parameter values at different stages of the evolution/search process.

In this paper, we propose a self-adaptive DE (SaDE) algorithm to avoid the expensive computational cost of searching for the most appropriate trial vector generation strategy and its associated parameter values by a trial-and-error procedure. Instead, both the strategies and their associated parameters are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different search/evolution phases.
Specifically, at each generation, a set of trial vector generation strategies together with their associated parameter values is separately assigned to different individuals in the current population, according to selection probabilities learned from the previous generations.

The remainder of this paper is organized as follows. The conventional DE and related work are reviewed in Sections II and III, respectively. Section IV describes the proposed SaDE. Experimental results demonstrating the performance of SaDE in comparison with the conventional DE and several state-of-the-art adaptive DE variants over a suite of 26 bound-constrained numerical optimization problems are presented in Section V. Section VI concludes this paper.

II. PREVIOUS WORK RELATED TO DE

The performance of the conventional DE algorithm highly depends on the chosen trial vector generation strategy and the associated parameter values. An inappropriate choice of strategies and parameters may lead to premature convergence or stagnation, which has been extensively demonstrated in [8], [16], [17], [24], and [29]. In the past decade, DE researchers have suggested many empirical guidelines for choosing trial vector generation strategies and their associated control parameter settings. Storn and Price [15] suggested that a reasonable value for NP should be between 5D and 10D (where D is the problem dimension), and that a good initial choice of F is 0.5; the effective range of F values was suggested to be between 0.4 and 1. A first reasonable attempt at choosing the CR value can be 0.1. However, because a large CR value can speed up convergence, a value of 0.9 for CR may also be a good initial choice if the problem is near unimodal or fast convergence is desired. Moreover, if the population converges prematurely, either F or NP can be increased. It was recommended in [20] to use the trial vector generation strategy DE/current-to-rand/1 with the parameter setting NP = 20D, K = 0.5, and F = 0.8.
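The DE/current-to-rand/1 strategy recommended in [20] forms the trial vector as u_i = x_i + K(x_r1 − x_i) + F(x_r2 − x_r3). A minimal sketch of just that trial vector construction (the function name and the toy population are hypothetical; K = 0.5 and F = 0.8 are the settings quoted above):

```python
import random

def current_to_rand_1(pop, i, K=0.5, F=0.8, rng=random):
    """DE/current-to-rand/1: u_i = x_i + K*(x_r1 - x_i) + F*(x_r2 - x_r3),
    with r1, r2, r3 mutually distinct and different from the target index i."""
    r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
    return [pop[i][d]
            + K * (pop[r1][d] - pop[i][d])
            + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]

# a tiny illustrative population of 2-D vectors
pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.0, 2.0], [1.0, 2.0]]
u = current_to_rand_1(pop, 0)
```

Note that, unlike DE/rand/1/bin, this rotation-invariant strategy blends the target vector itself into the trial vector via the coefficient K, which is why the tuning advice below adjusts K alongside NP and F.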
If DE converges prematurely, one should increase the value of NP and F, or decrease the value of K. If the population stagnates, one should increase the value of NP or F, or randomly choose K within the range [0, 1]. If none of the above configurations works, one may try the strategy DE/rand/1/bin along with a small CR value. Gämperle et al. [17] examined different parameter settings for DE on the Sphere, Rosenbrock, and Rastrigin functions. Their experimental results showed that the searching capability and convergence speed are very sensitive to the choice of the control parameters NP, F, and CR. They recommended that the population size NP be between 3D and 8D, the scaling factor F equal 0.6, and the crossover rate CR lie between 0.3 and 0.9. Recently, Rönkkönen et al. [21] suggested using F values in [0.4, 0.95], with F = 0.9 being a good initial choice. The CR values should lie in [0, 0.2] when the function is separable, and in [0.9, 1] when the function's parameters are dependent.

However, when solving a real engineering problem, the characteristics of the problem are usually unknown, so it is difficult to choose the most appropriate CR value in advance. In the DE literature, various conflicting conclusions have been drawn with regard to the rules for manually choosing the strategy and control parameters, which undesirably confuses scientists and engineers who are about to utilize DE to solve scientific and engineering problems. In fact, most of these conclusions lack sufficient justification, as their validity is possibly restricted to the problems, strategies, and parameter values considered in the investigations. Therefore, researchers have developed techniques to avoid manual tuning of the control parameters. For example, Das et al. [29] linearly reduced the scaling factor F from a maximum to a minimum value with increasing generation count, or randomly varied F in the range (0.5, 1).
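The two F schedules attributed to Das et al. [29] above can be sketched as follows. The concrete F_MAX and F_MIN limits are illustrative assumptions, since the text only states that F is reduced from a maximum to a minimum value:

```python
import random

F_MAX, F_MIN = 1.0, 0.4  # illustrative limits, not specified in the text above

def f_linear(gen, max_gen, f_max=F_MAX, f_min=F_MIN):
    """Linearly reduce F from f_max down to f_min as the generation count grows."""
    return f_max - (f_max - f_min) * gen / max_gen

def f_random(rng=random):
    """Randomly vary F in the range (0.5, 1) for each use."""
    return 0.5 + 0.5 * rng.random()
```

Both schemes remove F from the user's tuning burden entirely: the first trades exploration for exploitation deterministically over the run, while the second keeps the expected step size fixed at 0.75 regardless of the search phase.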
They also employed a uniform distribution between 0.5 and 1.5 (with a mean value of 1) to obtain a new hybrid DE variant [30]. In addition, several researchers [18], [24]–[26] focused on the adaptation of the control parameters F and CR. Liu and Lampinen introduced fuzzy adaptive differential evolution (FADE), using fuzzy logic controllers whose inputs incorporate the relative function values and individuals of successive generations to adapt the parameters for the mutation and crossover operations [18]. Based on experimental results on test functions, the FADE algorithm outperformed the conventional DE on higher dimensional problems. Zaharie proposed a parameter adaptation for DE (ADE) based on the idea of controlling population diversity, and implemented a multipopulation approach [24]. Following the same ideas, Zaharie and Petcu designed an adaptive Pareto DE algorithm for multiobjective optimization and analyzed its parallel implementation [25]. Abbass [26] self-adapted the crossover rate of DE for multiobjective optimization problems by encoding the crossover rate into each individual, to evolve simultaneously with the other parameters; the scaling factor F is generated for each variable from a Gaussian distribution N(0, 1).

Since our preliminary self-adaptive DE work presented in [19], new research papers focusing on the adaptation of control parameters in DE have been published. Omran et al. [27] introduced a self-adapted scaling factor F, analogous to the adaptation of the crossover rate CR in [26]; the crossover rate CR in [27] is generated for each individual from a normal distribution N(0.5, 0.15). This approach (called SDE) was tested on four benchmark functions and performed better than other versions of DE. Besides adapting the control parameters F or CR, Teo proposed differential evolution with self-adapting populations (DESAP) [22], based on Abbass's self-adaptive Pareto DE. Recently, Brest et al.
[28] encoded the control parameters F and CR into the individuals and adjusted them by introducing two new parameters τ1 and τ2. In their algorithm (called jDE), a set of F and CR values was assigned to each individual in the population. Then a random number rand was uniformly generated in the range [0, 1]. If rand < τ1, F was reinitialized to a new random value in the range [0.1, 1.0]; otherwise it was kept unchanged. CR was adapted in the same manner, but with a different reinitialization range of [0, 1]. Brest et al. further compared the performance of several self-adaptive and adaptive DE algorithms in [42]. Recently, the self-adaptive neighborhood search DE algorithm was adopted into a novel cooperative coevolution framework [39]. Moreover, researchers have improved the performance of DE by incorporating opposition-based learning [40] or local search [41].
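The jDE update rule just described can be sketched as follows. The function name is hypothetical; τ1 = τ2 = 0.1 is the setting reported for jDE [28], and the reinitialization ranges are the ones stated above:

```python
import random

TAU1, TAU2 = 0.1, 0.1  # the two new parameters introduced by jDE [28]

def jde_update(F, CR, rng=random):
    """Per-individual jDE rule: with probability tau1, reinitialize F uniformly
    in [0.1, 1.0]; with probability tau2, reinitialize CR uniformly in [0, 1];
    otherwise each parameter is carried over unchanged to the next generation."""
    if rng.random() < TAU1:
        F = 0.1 + 0.9 * rng.random()
    if rng.random() < TAU2:
        CR = rng.random()
    return F, CR
```

Because each individual carries its own (F, CR) pair and successful pairs survive selection together with their solutions, good parameter values propagate through the population without any external feedback mechanism.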