




华北水利水电学院 (North China University of Water Resources and Electric Power) Graduation Thesis

A Dynamic Inventory Policy Problem

Abstract: To earn a larger profit from its production-and-sales or purchase-and-sales operations, an enterprise must study dynamic inventory control, and dynamic programming is a modeling method well suited to the task. Taking the purchase-and-sales business of the Yongan company in a Chinese medicinal herb as an example, this paper uses the company's monthly purchase and selling prices for one year. Retaining the main factors and neglecting minor ones, a dynamic programming model of the dynamic inventory policy problem is formulated, and backward recursion is used to obtain the optimal order quantity, sales quantity, and inventory level for each month of the year.

Keywords: dynamic programming; backward recursion; ordering and sales

Contents
Abstract
Keywords
1 Introduction
2 Preliminaries
2.1 Basic concepts of dynamic programming
2.2 Basic idea of the dynamic programming method
2.3 Solution methods of dynamic programming
3 The dynamic inventory policy problem
3.1 Problem statement
3.2 Model identification and formulation
3.3 Model solution
4 Conclusion
5 Acknowledgements
References
Appendix

1 Introduction

Ordering and selling are problems faced by many companies today. To increase its profit, a company has to weigh its inventory levels, order quantities, and sales quantities, and decide how much to order and to sell in each month. Taking the Yongan company's herb business as an example, the company must determine its monthly orders, sales, and inventory so as to maximize profit. To this end, a mathematical model is built on the dynamic programming of operations research and solved by the backward method, yielding the maximum profit.

2 Preliminaries

2.1 Basic concepts of dynamic programming

1. Multistage decision problems

A multistage decision problem is a special kind of activity process that can be divided, by time or by space, into a number of interrelated stages; at each stage a decision (a choice among alternatives) must be made. The decisions over the whole process form a decision sequence, and the collection of the decisions taken at all stages is called a policy. Clearly, the optimization goal of a multistage decision problem is to make the overall effect of the entire process as good as possible. The decision at each stage should therefore not be chosen merely for its effect within that stage: the decisions of the different stages are organically linked, and the decision executed at one stage influences the decisions made and carried out at later stages, and hence the overall result. Only by taking these interactions into account can a decision that is optimal for the whole process be obtained. Dynamic programming is a decision method that meets this requirement: as time advances, decisions are selected stage by stage to form a decision sequence that optimizes the whole process; this is the meaning of "dynamic." Moreover, for some static problems unrelated to time, one can artificially introduce "time," turn the problem into a multistage decision process, and still solve it by dynamic programming.

2. Basic concepts

(1) Stage. The whole process of the given problem is decomposed, according to its temporal or spatial features, into a number of interrelated parts that are solved one after another. The stage variable is usually denoted by k.

(2) State. The state is the natural situation or objective condition at the beginning of each stage. In general a stage has several possible states, each described by one variable or a group of variables, called the state variable; the state variable of stage k is denoted by s_k, and the set of its possible values is the state set, denoted S_k. The state variables of a dynamic program satisfy the following important property, the no-aftereffect (Markov) property: once the state of some stage is given, the development of the process from that stage onward is not affected by the states of the earlier stages. That is, the subsequent process starting from state s_k (called the k-subprocess) is unaffected by the preceding process; the k-subprocess starting from the state of stage k may be regarded as an independent process with initial state s_k, and the past can influence the future only through the present state.

(3) Decision. A decision is the choice, once a state is fixed, of a state of the next stage. The variable describing the decision is the decision variable. Because the states satisfy the no-aftereffect property, only the current state need be considered when making the decision, not the past history. The decision variable of stage k when the state is s_k is written u_k(s_k). In practical problems the decision variable is usually restricted to a certain range; the set of all admissible decisions is the allowed decision set, denoted D_k(s_k) for stage k and state s_k, so that u_k(s_k) ∈ D_k(s_k).

(4) Policy. A policy for the k-subprocess (the subprocess beginning at stage k), called a subpolicy for short, is the sequence of decision functions

    p_{k,n}(s_k) = { u_k(s_k), u_{k+1}(s_{k+1}), ..., u_n(s_n) }.

When k = 1, this sequence is the full-process policy, or simply the policy,

    p_{1,n}(s_1) = { u_1(s_1), u_2(s_2), ..., u_n(s_n) }.

For every practical problem there are several policies to choose from; the set of all of them is the allowed policy set P, and a policy that achieves the best effect for the problem under study is an optimal policy.

(5) State transition equation. In dynamic programming the state variable of stage k+1 is determined by the state variable and the decision variable of stage k, i.e. it varies with s_k and u_k(s_k). The relation can be written

    s_{k+1} = T_k(s_k, u_k(s_k)).

It expresses the law of transition from stage k to stage k+1 and is called the state transition equation.

(6) Objective function. Every decision process needs a criterion for judging how good a chosen policy is. This criterion can usually be expressed as a numerical index, called the objective (index) function, defined on the whole process. For an n-stage decision process, stages 1 through n form the original process; for any given k (1 ≤ k ≤ n), stages k through n form a rear subprocess of the original process. Let V_{1,n}(s_1, p_{1,n}) denote the value of the original process with initial state s_1 under policy p_{1,n}, and V_{k,n}(s_k, p_{k,n}) the value of the rear subprocess when stage k is in state s_k and subpolicy p_{k,n} is used. The optimal value function is

    f_k(s_k) = opt_{p_{k,n}} V_{k,n}(s_k, p_{k,n}),

the best value obtainable from state s_k of stage k to the end of the process, where opt (optimization) stands for max or min as appropriate.

2.2 Basic idea of the dynamic programming method

1. Divide the multistage decision process into stages, choose suitable state and decision variables, define the optimal value function, and thereby turn the original problem into a family of subproblems of the same form, to be solved one by one.

2. Find a recursive relation and, proceeding in the backward direction, seek the optimal route stage by stage. Solving each subproblem uses the optimal solutions of the subproblems already solved; the optimal solution of the last subproblem solved is the optimal solution of the whole problem.

3. While seeking the optimum stage by stage, the current return of each stage must be weighed together with the overall return; no part may be considered in isolation. The whole process must be kept in view so that the overall optimum is attained.

2.3 Solution methods of dynamic programming

Dynamic programming has two basic solution methods: backward recursion and forward recursion.

Backward recursion searches for the optimum in the direction opposite to the actual operation of the multistage decision process: the computation starts from the last stage and moves forward stage by stage until the optimal policy of the whole process is obtained.

Forward recursion searches in the same direction as the process advances: the computation starts from the first stage and recurses stage by stage toward the last; the computation at each stage uses the optimization result of the preceding stage, and the result of the last stage is the optimal result of the whole process.

1. Backward recursion

The basic equation and boundary condition of backward recursion are

    f_k(s_k) = opt_{u_k ∈ D_k(s_k)} { d_k(s_k, u_k) + f_{k+1}(s_{k+1}) },  k = n, n-1, ..., 1,
    f_{n+1}(s_{n+1}) = 0  (boundary condition),

where f_k(s_k) is the optimal value of the rear subprocess from state s_k of stage k to the terminal point, f_1(s_1) is the overall optimal value, and d_k(s_k, u_k) is the stage return (in a routing problem, the distance from s_k to the next-stage point reached under decision u_k). The output state of stage k, determined by its input state s_k and decision u_k, is given by the state transition equation s_{k+1} = T_k(s_k, u_k).

2. Forward recursion

The basic equation and boundary condition of forward recursion are

    f_k(s_{k+1}) = opt_{u_k ∈ D_k(s_k)} { d_k(s_k, u_k) + f_{k-1}(s_k) },  k = 1, 2, ..., n,
    f_0(s_1) = 0  (boundary condition),

where f_k(s_{k+1}) is the optimal value of the front subprocess from the starting point to state s_{k+1}, and f_n(s_{n+1}) is the overall optimal value; the input state and decision of stage k determine its output state s_{k+1} through the state transition equation.

3 The dynamic inventory policy problem

3.1 Problem statement

In production and operations management one frequently has to arrange purchasing (or production) and storage rationally, so as to meet demand while lowering cost as much as possible or increasing benefit. Correctly setting the purchasing (or production) policy, i.e. determining the purchase (or production) quantities and inventory levels of the different periods so that total sales revenue net of inventory cost is maximized, is the goal of the sales-and-inventory planning problem.
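As a concrete illustration of the backward recursion above, the following sketch solves the ordering-and-sales problem treated in Section 3 (warehouse capacity 500 t, initial stock 200 t, required final stock 300 t, monthly prices as tabulated there). The discretization to 100 t steps, the price-timing convention (stock sold in month k fetches the "next-month selling price" quoted for month k-1, December's entry rolling over to January), and all identifiers are assumptions of this sketch, not part of the original solution:

```python
# Illustrative backward-recursion solver for the Section 3 inventory model.
# Quantities are discretized to 100 t steps (an assumption of this sketch).

CAP = 500          # warehouse capacity (t)
STEP = 100         # discretization step (t)
S0, SF = 200, 300  # initial stock and required year-end stock (t)

# Price data from the thesis (price units per t, months 1..12):
# buy[k] = purchase price on the 15th of month k+1 (goods arrive on the
#          1st of the following month); nxt[k] = "next-month selling price".
buy = [150, 155, 165, 160, 160, 160, 155, 150, 155, 155, 150, 150]
nxt = [165, 165, 185, 175, 170, 155, 155, 155, 160, 170, 175, 170]
# Assumed timing: stock sold during month k+1 fetches nxt[k], with
# December's entry rolling over to January.
sell = [nxt[-1]] + nxt[:-1]

def solve():
    states = range(0, CAP + STEP, STEP)
    # f[s] = best profit from the current month onward, given stock s.
    # Assigning -inf to every terminal stock other than SF forces s_13 = SF.
    f = {s: (0 if s == SF else float("-inf")) for s in states}
    best = {}                                  # best[(k, s)] = (sold, ordered)
    for k in reversed(range(12)):
        g = {}
        for s in states:
            g[s] = float("-inf")
            for x in range(0, s + STEP, STEP):                 # amount sold
                for y in range(0, CAP - s + x + STEP, STEP):   # amount ordered
                    v = sell[k] * x - buy[k] * y + f[s - x + y]
                    if v > g[s]:
                        g[s], best[(k, s)] = v, (x, y)
        f = g
    # Recover one optimal plan forward from the initial stock.
    plan, s = [], S0
    for k in range(12):
        x, y = best[(k, s)]
        plan.append((k + 1, s, x, y))
        s = s - x + y
    return f[S0], plan
```

Under these assumptions the recursion enforces the year-end constraint through the boundary condition rather than through explicit constraints on the last decision, mirroring f_13(s_13) = 0 with s_13 = 300 in the basic equation.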
The Yongan company deals in the purchase and sale of a Chinese medicinal herb and owns a warehouse that can hold at most 500 t of it. Orders are placed on the 15th of each month, and the goods are received on the 1st of the following month. The monthly purchase and selling prices per ton of the herb are given in the table below. The company has 200 t in stock at the beginning of the year and must have 300 t in stock at the end of the year. Determine the company's monthly orders, sales, and inventory so that the profit for the whole year is maximized.

Month                             1    2    3    4    5    6    7    8    9   10   11   12
Purchase price (15th of month)  150  155  165  160  160  160  155  150  155  155  150  150
Selling price (next month)      165  165  185  175  170  155  155  155  160  170  175  170

3.2 Model identification and formulation

The Yongan company's herb business is a purchase-and-sales model with an evident stage structure, so it can be solved by dynamic programming. The model is built as follows.

(1) Stage variable k. Each month is one stage: k = 1, 2, ..., 12.

(2) State variable s_k. Take the inventory at the beginning of month k as the state variable; this choice satisfies the no-aftereffect property. Here 0 ≤ s_k ≤ 500, with s_1 = 200 and s_13 = 300.

(3) Decision variables. Let x_k be the quantity sold in month k and y_k the quantity ordered in month k.

(4) State transition equation: s_{k+1} = s_k - x_k + y_k, with 0 ≤ x_k ≤ s_k and 0 ≤ s_{k+1} ≤ 500.

(5) Basic equation (backward recursion). Let the optimal value function f_k(s_k) be the maximum total profit obtainable from state s_k at the beginning of month k to the end of the year. Then

    f_k(s_k) = max { p_{k-1} x_k - c_k y_k + f_{k+1}(s_k - x_k + y_k) },  k = 12, 11, ..., 1,
    f_13(s_13) = 0,  s_13 = 300,

where c_k is the purchase price of month k, p_{k-1} is the selling price (the "next-month selling price" quoted for month k-1; for k = 1, take k-1 = 12), and the maximization is subject to 0 ≤ x_k ≤ s_k and 0 ≤ s_k - x_k + y_k ≤ 500.

3.3 Model solution

Starting from k = 12 and working backward to k = 1, the recursion at each stage reduces to a small linear-programming problem in x_k and y_k; since the objective is linear, a maximum is attained at an extreme point of the feasible region, which determines the optimal x_k and y_k as functions of s_k. Carrying the recursion back to k = 1 and substituting s_1 = 200 t gives the maximum total profit

    f_1(200) = 39,000 (hundred yuan).

One optimal policy is shown in the following table:

Month   Opening stock s_k (t)   Sales x_k (t)   Orders y_k (t)
  1             200                  200              500
  2             500                  500              500
  3             500                  500              500
  4             500                  500              500
  5             500                  500              500
  6             500                    0                0
  7             500                    0                0
  8             500                    0                0
  9             500                  500              500
 10             500                  500              500
 11             500                  500              500
 12             500                  500              300

This table gives only one of the optimal policies; other choices of the decision values can attain the same optimum. At the end of December the stock is 300 t, as required, and the maximum profit is 39,000 hundred yuan, i.e. 3.9 million yuan.

4 Conclusion

This paper used dynamic programming to build a mathematical model of the dynamic inventory problem and obtained the maximum profit by the backward solution method. The model applies to most inventory problems. Since a real problem involves many considerations, the model neglects a number of external factors, such as the supply-demand relationship and storage cost; these limitations may affect the final result in practice.

Through writing this thesis I learned a great deal. Building on the undergraduate theory of operations research, I was able to apply it to the preliminary solution of a practical problem, which improved my ability to use mathematical knowledge on real problems, deepened my understanding of operations research, and will be of great help to me in the future.

5 Acknowledgements

From the opening report to the completion of the thesis, I learned much over these months. Going from knowing nothing about writing a thesis to finishing one is inseparable from my own effort, but also from the help of my teachers and classmates. I am most grateful to my supervisor, 李亦芳 (Li Yifang), under whose guidance this thesis was completed: many of its ideas and methods benefited from her advice, and she devoted great care to everything from the study of the topic to reviewing and annotating the text. I thank the classmates and teachers who helped me, and express my highest respect and most sincere gratitude to my supervisor.

References

1. 胡运权 (ed.), 运筹学教程, 2nd ed., Tsinghua University Press.
2. 姜启源, 谢金星, 叶俊, 数学建模, 3rd ed., Higher Education Press.
3. 胡运权 (ed.), 运筹学基础及应用, Higher Education Press.
4. 何坚勇, 运筹学基础, Tsinghua University Press.
5. 么焕民 et al. (eds.), 数学建模, Harbin Institute of Technology Press.
6. Hillier, F.S. and G.J. Lieberman, Introduction to Operations Research, 8th ed., Tsinghua University Press.
7. Beasley, J.E. and B. Cao, "A Dynamic Programming Based Algorithm for the Crew Scheduling Problem," Computers & Operations Research, Vol. 25, No. 7/8, pp. 567-582, 1998.
8. Chatwin, R.E., "Multiperiod Airline Overbooking with a Single Fare Class," Operations Research, Vol. 46, No.
6, pp. 805-819, 1998.

Appendix: English Source Text

Dynamic Programming Models

Many planning and control problems in manufacturing, telecommunications and capital budgeting call for a sequence of decisions to be made at fixed points in time. The initial decision is followed by a second, the second by a third, and so on, perhaps infinitely. Because the word dynamic describes situations that occur over time and programming is a synonym for planning, the original definition of dynamic programming was "planning over time." In a limited sense, our concern is with decisions that relate to and affect phenomena that are functions of time. This is in contrast to other forms of mathematical programming that often, but not always, describe static decision problems. As is true in many fields, the original definition has been broadened somewhat over the years to connote an analytic approach to problems involving decisions that are not necessarily sequential but can be viewed as such. In this expanded sense, dynamic programming (DP) has come to embrace a solution methodology in addition to a class of planning problems. It is put to the best advantage when the decision set is bounded and discrete, and the objective function is nonlinear.

This chapter is primarily concerned with modeling of deterministic, discrete systems. Although it is possible to handle certain problems with continuous variables, either directly or indirectly by superimposing a grid on the decision space, such problems will not be pursued here because they are better suited for other methods. In any case, modeling requires definitions of states and decisions, as well as the specification of a measure of effectiveness. For the usual reasons, a reduction in complexity of the real problem is also necessary. From a practical point of view, it is rarely possible to identify and evaluate all the factors that are relevant to a realistic decision problem.
Thus the analyst will inevitably leave out some more or less important descriptors of the situation. From a computational point of view, only problems with relatively simple state descriptions will be solvable by dynamic programming. Thus abstraction is necessary to arrive at a formulation that is computationally tractable. Often a particular problem may have several representations in terms of the state and decision variables. It is important that the analyst realize that the choice of formulation can greatly affect his or her ability to find solutions.

Dynamic programming has been described as the most general of the optimization approaches because conceivably it can solve the broadest class of problems. In many instances, this promise is unfulfilled because of the attendant computational requirements. Certain problems, however, are particularly suited to the model structure and lend themselves to efficient computational procedures; in cases involving discontinuous functions or discrete variables, dynamic programming may be the only practical solution method.

In the next section, we present an investment example to introduce general concepts and notation. The solution approach common to all dynamic programming is then outlined to motivate the need for the new notation. In the remainder of the chapter we describe several problem classes and their individual model characteristics. Solution procedures are left to the DP Methods chapter, as are situations with stochastic elements.

Model Components

The language of dynamic programming is quite different from that used in other areas of mathematical programming. Although it is common to have an objective to be optimized and a set of constraints that limits the decisions, a DP model represents a sequential decision process rather than an algebraic statement of a problem. The two principal components of the dynamic programming model are the states and decisions.
A state is like a snapshot of the situation at some point in time. It describes the developments in sufficient detail so that alternative courses of action, starting from the current state, can be evaluated. A decision is an action that causes the state to change in some predefined way. Thus a decision causes a movement from one state to another. The state-transition equations govern the movement. A sequential decision process starts in some initial state and advances forward, continuing until some final state is reached. The alternating sequence of states and decisions describes a path through the state space.

Although many situations can be modeled in this way, the principal difficulty is to define the state space so that sufficient information is provided to evaluate alternative choices. For a chess game, the state must describe the arrangement of pieces on the board at any point in the game. Enumerating the states is a well-defined task, but not practical because the number of possible board arrangements is unmanageably large. The same is true for many combinatorial optimization problems such as the traveling salesman problem (TSP). The state space of the TSP grows exponentially with the number of cities.

Another aspect of the model that requires careful consideration is the measure of effectiveness used to evaluate alternative paths through the state space. The optimal path is the one that maximizes or minimizes this measure. A dynamic programming algorithm aims at finding at least one such path or sequence. There are a number of ways of doing this, but for the moment it is sufficient to mention that solution methods are closely linked to modeling conventions. This follows from the desire to make the computational procedures as universally applicable as possible. If a procedure is to solve a wide variety of problems, a standard form must be established for model input.
In this section, we define the notation more carefully using the investment problem as an example.

General Format

As we have seen, the components of a DP model consist of the state vector, the decision vector, the feasible state space, the feasible decision set for each state, the initial states, the final states, the transition function, the form of the path objective, and the final value function. Although several of these terms are similar to those used in describing the mathematical programming models discussed up until now, the differences are what stand out. Table 4 defines the individual components of a dynamic program in such a way that allows for a broad range of applications.

Table 4. Components of the general dynamic programming model

Component              Description
State                  s = (s1, ..., sm), where si is the value of state variable i and m is the number of state variables
Initial state set      I
Final state set        F
State space            S
Decision               d = (d1, ..., dp), where dj is the value of the jth decision variable and p is the number of decision variables
Feasible decision set  D(s)
Transition function    s' = T(s, d), a function that determines the next state, s', reached when decision d is taken from state s
Decision objective     z(s, d), the measure of effectiveness associated with decision d taken in state s
Path objective         Z(P), the measure of effectiveness defined for path P. This function describes how the objective terms for each state on the path and the final value function are combined to obtain a measure for the entire path.
Final value function   f(s)

Sequential Decision Problem

To formulate a problem as a DP, it must be stated in terms of a sequential set of decisions. As presented, the investment problem does not have this characteristic. In particular, the solution is a statement of investment allocations presumably all to be made at the same time rather than serially over a finite time horizon. To accommodate the sequential nature of dynamic programming, the numbers assigned to the investments were used to provide an artificial order.
Thus we first decide on the amount to invest in opportunity 1, then in opportunity 2, and finally in opportunity n.

States

The problem must be described in a manner such that a solution is a sequence of alternating states and decisions. The state represents the current alternative under consideration and the amount of the budget used to this point. Thus two pieces of information are described by the state, requiring the introduction of two state variables. In general, we call the m-dimensional vector s = (s1, ..., sm) the state, and its components si the state variables. For the investment problem, m = 2 and

    s1 = index of the current alternative being considered
    s2 = amount of the budget used prior to this investment opportunity

In some textbook expositions on dynamic programming, the term "stage" is used to identify the sequence of decisions. In our exposition, we will not use the concept of a stage but rather include the stage information as the first component of the state vector. Although this approach may seem awkward at first to those already familiar with the stage terminology, it allows a more general class of problems to be modeled as dynamic programs.

Decisions

The decision at any particular state is the amount to invest in the opportunity identified by s1. In general, the set of all possible decisions is denoted by D, whereas a particular decision as a function of state s is denoted by d(s). To accommodate cases in which the decision has more than one dimension, d is specified as a vector d = (d1, ..., dp), with each component identified by a lowercase subscripted letter dj. In the present instance the decision is just the amount to invest, so d has only one dimension.

    d1 = amount to invest in opportunity s1

Solution

A solution is an alternating sequence of states and decisions that have indices indicating their place in the sequence. The process starts in state s(1), called the initial state.
For the investment problem, the initial state is

    s(1) = (1, 0),

indicating that this is the first opportunity and none of the budget has been allocated. The first decision is d(1), the investment in opportunity 1. The new state must be equal to s(2) = (2, d1(1)), since the value of the decision variable is precisely the amount invested in the first alternative and the next opportunity is 2. The decision d(2) moves the process from state s(2) to s(3). The value of s(3) must be

    s(3) = (3, d1(1) + d1(2)).

The first component, s1(3) = 3, of the vector indicates the index of the next alternative to be considered, and the second component gives the total investment associated with the first two decisions.

Transition Function

As each decision is made, the state changes in a predictable way. The function that gives the value of the next state vector in terms of the current state and decision is called the transition function. Let the state at the kth step of the process be s(k), and let the decision taken at this step be d(k). The next state is s(k+1) and is determined by the transition function as follows.

    s(k+1) = T(s(k), d(k))    (1)

Very often T does not depend on the sequence number of the decision, or, as in our example, the sequence number is included as the first component of the state vector. In such cases, one can simplify the notation by denoting the current state by s, the current decision by d, and the next state by s'. Now the transition function can be written s' = T(s, d).
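The state, decision, and transition components just described can be collected into a minimal runnable sketch. The three opportunities, their return table r, and the budget below are invented toy data for illustration, not taken from the chapter:

```python
# Minimal sketch of the general DP format for the investment example.
# State s = (i, b): index i of the next opportunity and budget b already used.
# All data here are hypothetical.

BUDGET = 5                      # total budget (assumed units)
# r[i][d] = return from investing d units in opportunity i (assumed data)
r = [{0: 0, 1: 5, 2: 9},
     {0: 0, 1: 4, 2: 11},
     {0: 0, 1: 3, 2: 7}]

def T(s, d):
    """Transition function s' = T(s, d): advance to the next opportunity."""
    i, b = s
    return (i + 1, b + d)

def best_value(s):
    """Backward recursion over the state space from state s."""
    i, b = s
    if i == len(r):             # final state: every opportunity decided
        return 0
    # d = 0 is always feasible, so the max is never over an empty set.
    return max(r[i][d] + best_value(T(s, d))
               for d in r[i] if b + d <= BUDGET)

print(best_value((0, 0)))       # prints 23 for this toy data
```

Because the opportunity index is carried inside the state, no separate "stage" variable is needed, exactly as the text advocates; the recursion terminates when the first state component passes the last opportunity.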