
Graduation Project (Thesis) Literature Translation

Original title: Studies on prediction of separation percent in electrodialysis process via BP neural networks and improved BP algorithms
Translated title (Chinese): 用BP神经网络算法和改进的BP神经网络算法研究预测电渗析过程的分离百分比
Major: Information and Computing Science
Name:        Student ID:        Supervisor:

Studies on prediction of separation percent in electrodialysis process via BP neural networks and improved BP algorithms

Abstract

In the electrodialysis process, the separation percent (SP) has nonlinear relationships with a number of influencing factors (feed concentration (C), flow rate of the dilute compartment (Q), reaction temperature (T) and applied voltage (V)), and these relationships are hard to express by a simple formula. The four influencing factors have remarkable effects on SP. In this paper, the four factors were studied in electrodialysis experiments. Back-propagation (BP) neural networks and improved BP algorithms were applied to the prediction of SP, and their prediction capabilities reflect their generalization and adaptive abilities on complex data with nonlinear internal relationships. With different neural network structures, neuron transfer functions and learning rates, the optimum training parameters were obtained. Comparing BP neural networks with the improved BP algorithms, the improved BP algorithms were better than the basic BP algorithm, because they adjust the increasing ratios of the learning rates and weights appropriately. Under high temperatures and voltages the improved BP algorithms performed better, because they generalize better for large values.

Keywords: BP neural networks; Improved BP algorithms; Electrodialysis; Separation percent; Flexible BP algorithm; Adaptive learning rate method

1. Introduction

Electrodialysis (ED) is an electro-membrane process for the separation of ions across charged membranes from one solution to another with the aid of an electrical potential difference used as a driving force. This process has been widely used for the production of drinking and process water from brackish water and seawater, the treatment of industrial effluents, the recovery of useful materials from effluents, and salt production. The basic principles and applications of ED were reviewed in the literature [1-6]. Numerous versatile industrial applications of ED using ion-exchange membranes have been developed and commercialized because of their high chemical stability, flexibility and high ionic conductivity due to their strong ionic characteristics [7-10]. Two different types of ion-exchange membranes are used in conventional electrodialysis: cation-exchange (CEM) and anion-exchange (AEM) membranes, which are permeable to cationic and anionic species, respectively [11].

However, in operating an electrodialyzer, the current density should be kept below the limiting current density, because water dissociation gives rise to scale formation and membrane breakage [12]. The limiting current density and potential of the system are therefore also determined. The limiting current density is the maximum current density (current per unit membrane area) that can be used without causing negative effects such as higher electrical resistance and lower current efficiency. At the limiting current density, the concentration of the cation or anion at the surface of the cation-exchange or anion-exchange membrane, as appropriate, in the cells with the depleted solution is zero [12-14]. At and beyond the limiting current density, H+ and OH− generated by the dissociation of water transport a part of the electric current [15].

Artificial neural networks (ANNs) utilize interconnected mathematical nodes, or neurons, to form a network that can model complex functional relationships [16]. Their development started in the 1940s to help cognitive scientists understand the complexity of the nervous system. They have evolved steadily and have been adopted in many areas of science. Basically, ANNs are numerical structures inspired by the learning process of the human brain. They are constructed and used as alternative mathematical tools to solve a diversity of problems in system identification, forecasting, pattern recognition, classification, process control and many other fields [17]. ANNs have been used in a wide range of membrane process applications (reverse osmosis, nanofiltration, ultrafiltration, microfiltration, membrane filtration, gas separation, membrane bioreactors and fuel cells) [18]. However, there are only a few records in the literature applying artificial neural networks to the prediction of SP in the electrodialysis process.

The ANN that has received the most attention is the back-propagation network (BPN) [19]. BPNs have a hierarchical feed-forward network frame. In the classical BPN structure, the outputs of each layer are sent directly to each neuron of the next layer. There may be many layers, but at least three are considered: an input layer that receives and distributes the inputs, a middle or hidden layer that captures the nonlinear relationships between inputs and outputs, and an output layer that produces the calculated data. BPNs may also contain a bias neuron that produces constant outputs but receives no inputs [20,21]. The BP algorithm is based on the minimization of the network errors, which are described as the differences between the desired outputs and the actual ones [22]. Training is completed when the required training precision is met (Fig. 1).

BPNs offer good generalization and fault tolerance; however, they have some shortcomings: (1) slow convergence can lead to long training times; (2) local extremum points may emerge in the training process. In this study, improved BP algorithms were therefore used to improve the prediction of the separation percent in the electrodialysis process. In this paper, the adaptive learning rate method and the flexible BP algorithm are the improved BP algorithms applied to the electrodialysis process.
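The excerpt states only that the error is the difference between the desired and actual outputs and that training stops when the required precision is met; the exact criterion is not written out. A minimal statement of the usual choice, and the one assumed in the code sketches later in this document, is the squared-error criterion with gradient-descent weight updates:

E = (1/2) Σ_k (d_k − y_k)²,        Δw_ij = −η · ∂E/∂w_ij

where d_k are the desired outputs, y_k the actual network outputs, w_ij the weights and η the learning rate.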
2. Materials and methods

2.1. Experimental instruments and materials

The experimental instruments and materials used in the electrodialysis process are shown in Table 1. Other experimental materials included measuring cylinders, beakers, deionized water and so on. The purpose of these experiments was to study the effects of feed concentration, temperature, voltage and flow rate on the performance of the electrodialysis cell.

2.2. Cell and membranes

A plate-and-frame electrodialysis cell made from polymethyl methacrylate (PMMA) was used to conduct the electrodialysis experiments (Fig. 2). The cell consisted of three compartments and was packed with a pair of membranes, a CEM (cation-exchange membrane) and an AEM (anion-exchange membrane), and a pair of electrodes. The overall length, width and height of the cell were 0.191 m, 0.021 m and 0.181 m, respectively. The effective areas of the CEM and AEM were both 0.11 × 0.09 m². Both electrodes were made of pure platinum, and the surface area of each electrode was 0.115 × 0.09 m². The volumes of the dilute and concentrate compartments were 0.12 × 0.1 × 0.003 m³ and 0.12 × 0.1 × 0.006 m³, respectively.

NaCl (sodium chloride) solution was fed into the three compartments of the cell. The cation-exchange and anion-exchange membranes are permeable to cationic and anionic species, respectively. The two membranes were immersed in parallel, and an electric current was passed through the solution. The cations migrated toward the cathode and the anions migrated toward the anode. The feed solution was divided into two streams: one was the diluted water and the other was the concentrated water. At a given flow rate, the electrodialysis cell could thus operate to remove electrolytic ions. The ions were electrolyzed in the cell, and the reactions at the anode and cathode were as follows:

Anode reactions:
2Cl− − 2e → Cl2↑
H2O − 2e → 1/2 O2↑ + 2H+
Cathode reaction:
2H2O + 2e → H2↑ + 2OH−

Cl2 and O2 were produced in the anode reactions and H2 was produced in the cathode reaction. These gases could increase the resistance of the electrodialysis cell, so two small holes in the cell plate were used to release them, and the concentrated streams were disposed of to prevent the gases from accumulating. The physical and chemical characteristics of the membranes are listed in Table 2.

2.3. Electrodialysis setup

The electrodialysis setup consisted of a feed solution tank (TK-01), a DC power supply and two valves (GB-01, GB-02) for controlling the flow rates of the feed solution (Fig. 3). The total height of the setup was 0.5 m. No cyclic regime was used for the two concentrated streams and the diluted stream; the diluted stream was collected and its conductivity was measured for prediction and analysis.

2.4. Experimental principle

The electrodialysis process is one of the membrane separation technologies. Under a direct electric field, electrodialysis uses the selective permeability of ion-exchange membranes to separate electrolytes from the solution, in order to dilute, concentrate or purify the solution (Fig. 4).
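The excerpt never writes out the formula behind the separation percent that the networks are later trained to predict; it only states that the dilute-stream conductivity is collected for analysis. The helper below is therefore a hypothetical working definition (the conductivity- or concentration-based form and the function name are assumptions, not taken from the paper), shown only to make the prediction target concrete.

```python
def separation_percent(feed_value: float, dilute_value: float) -> float:
    """Assumed working definition of separation percent (SP): the relative
    drop between the feed stream and the dilute product stream, in percent.
    feed_value / dilute_value can be conductivities or concentrations; the
    paper's own formula is not given in this excerpt."""
    return 100.0 * (feed_value - dilute_value) / feed_value


# Hypothetical example: feed at 1.0 g/L, dilute product at 0.22 g/L -> SP = 78.0 %
print(separation_percent(1.0, 0.22))
```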
2.5. Determination of the limiting current densities

The limiting current density (LCD) is an important parameter in the electrodialysis process, since it determines the electrical resistance and the current utilization. Usually, LCDs depend on the membrane and solution properties as well as on the construction of the electrodialysis stack and on various operational parameters such as the flow velocity of the diluted solution [23]. Methods for measuring the limiting current density include the voltage-current method, the pH-current method and so on. In this paper the voltage-current method was used to determine the LCDs; the specific steps were as follows.

At constant temperature, concentration and flow rate, the applied voltage was adjusted step by step and each pair of voltage and current values was recorded. When the voltage was low, the current varied linearly with the voltage; as the voltage increased further, the current changed only slightly. The limiting current was taken as the inflection point of this curve, from which the limiting current density was obtained. For example, at T = 35 °C, C = 0.5 g/L and Q = 0.5 mL/s, the current increased linearly with the voltage until it reached 0.51 A, after which the voltage had only a slight influence on the current, so the limiting current, namely the inflection point, was 0.51 A (Fig. 5). The effective area of each membrane was 0.11 × 0.09 m², so the LCD was 51.515 A/m².

In the experiments the operating current could not exceed the limiting current; otherwise polarization would occur in the electrodialysis process and the prediction of the separation percent would be meaningless. Using the voltage-current method, the LCDs were obtained for all experimental conditions (Table 3). To prevent polarization of the electrodialysis cell, all operating currents were kept below the limiting currents. In the electrodialysis experiments the maximum limiting current was 0.80 A and the effective area of the ion-exchange membranes was 0.11 × 0.09 m², so the maximum limiting current density was 80.808 A/m².

2.6. Experimental data

The experiments were carried out below the limiting current densities (Table 4). Four factors were studied: feed concentration (0.5 g/L, 1 g/L and 1.5 g/L), flow rate of the dilute compartment (0.05 mL/s, 0.25 mL/s, 0.5 mL/s, 0.75 mL/s and 1 mL/s), reaction temperature (20 °C, 25 °C, 30 °C, 35 °C, 40 °C and 45 °C), and applied voltage (2 V, 5 V and 8 V).
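The limiting-current-density arithmetic of Section 2.5 is easy to reproduce; the short sketch below simply divides the inflection-point current by the effective membrane area and reproduces the two values quoted above.

```python
def limiting_current_density(i_lim_amps: float, area_m2: float) -> float:
    """Limiting current density (A/m^2) = limiting current / effective membrane area."""
    return i_lim_amps / area_m2


area = 0.11 * 0.09  # effective area of one ion-exchange membrane, m^2
print(limiting_current_density(0.51, area))  # ~51.5 A/m^2, the example of Section 2.5
print(limiting_current_density(0.80, area))  # ~80.8 A/m^2, the maximum over all runs
```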
3. BP neural networks and improved BP algorithms

3.1. BP neural networks

A typical BP neural network is a fully connected network consisting of an input layer, a hidden layer and an output layer [24,25]. The goal of the training process is to adjust the weights, and network training is an unconstrained nonlinear minimization problem [26]. Some researchers have claimed that a network with a single hidden layer can approximate any continuous function to any desired accuracy [27-29]. A BP neural network operates in two phases: forward propagation and error back-propagation. In forward propagation the inputs spread from the input layer, are processed by the hidden layer(s) and reach the output layer; the predicted values at the output layer are compared with the actual outputs, and the differences between them are aggregated into the error. In error back-propagation, when the error is outside the acceptable range, it is propagated backwards and the weights are adjusted. The learning process continues until the error converges to the target value (Fig. 6).

3.2. Construction of BP neural networks

Some precautions apply to BP neural networks:

(1) Pretreat the samples. Samples are usually not used directly for network training; the raw data are preprocessed first. The experimental data contain uncertain factors for BPN training, so preprocessing is necessary to prepare the training and testing data and improve their reliability.

(2) Optimize the initial weights. The initial weights of the network influence the final training result and determine whether the network can reach an acceptable accuracy.

(3) Select the number of hidden layers and of neurons in each hidden layer. This selection directly affects the mapping capability for complex problems and is the most critical step. A reliable procedure is to start with a few hidden layers and a small number of neurons, train and test, and then increase their numbers; by comparing the results on different training and test samples, more appropriate numbers of hidden layers and neurons are selected. In this study, three-layer neural networks with different numbers of hidden-layer neurons were applied to the electrodialysis experiments (a minimal code sketch of such a network follows at the end of this subsection).

(4) Choose the training samples. The number of samples required depends on the complexity of the mapping relationship; in general, the more complicated the mapping, the more training samples are needed. When choosing samples from all the data, BPNs should obey the following principles: the samples should be sufficient in number, representative, and well distributed.
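As a concrete illustration of the three-layer architecture described above (four inputs, a sigmoid hidden layer, a single linear output, trained by gradient descent on the mean squared error), here is a minimal from-scratch sketch in Python/NumPy. It is not the authors' MATLAB implementation; the weight initialization, the scaling of inputs to [0, 1] and the synthetic placeholder data are assumptions.

```python
import numpy as np

class BPNet:
    """Minimal 4-H-1 back-propagation network: sigmoid hidden layer,
    linear (purelin-like) output, batch gradient descent on the MSE."""

    def __init__(self, n_in=4, n_hidden=10, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, X):
        self.h = self._sigmoid(X @ self.W1 + self.b1)  # hidden activations
        return self.h @ self.W2 + self.b2              # linear output layer

    def train_step(self, X, y, lr=0.1):
        """One batch gradient-descent pass; returns the MSE before the update."""
        y = y.reshape(-1, 1)
        out = self.forward(X)
        err = out - y
        n = X.shape[0]
        # Back-propagate the error through the output and hidden layers.
        dW2 = self.h.T @ err / n
        db2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * self.h * (1.0 - self.h)  # sigmoid derivative
        dW1 = X.T @ dh / n
        db1 = dh.mean(axis=0)
        # Fixed-learning-rate weight update (the basic BP rule).
        self.W2 -= lr * dW2; self.b2 -= lr * db2
        self.W1 -= lr * dW1; self.b1 -= lr * db1
        return float(np.mean(err ** 2))

# Placeholder usage: inputs are [C, Q, T, V] scaled to [0, 1], target is SP scaled to [0, 1].
rng = np.random.default_rng(1)
X = rng.random((30, 4))                          # synthetic data, not the paper's
y = 0.3 + X @ np.array([-0.2, -0.3, 0.25, 0.4])  # synthetic placeholder target
net = BPNet(n_hidden=10)
for epoch in range(2000):
    mse = net.train_step(X, y, lr=0.1)
```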
3.3. Improved BP algorithms

BPNs are based on solid theory and rigorous derivation; nevertheless, their training process has shortcomings, including slow convergence and the occasional emergence of local extrema. In practical applications the basic BP algorithm is often not adequate, so improved BP algorithms have been proposed to enhance the prediction capability. Several improvements exist, such as the additional momentum method, the adaptive learning rate method and the flexible BP algorithm. In this paper the adaptive learning rate method and the flexible BP algorithm were used to predict the separation percent in the electrodialysis process, and their prediction capability was compared with that of plain BPNs.

3.3.1. Adaptive learning rate method

The slow convergence of BP training is aggravated by an inappropriate learning rate. In the basic BP algorithm the weight adjustments depend on the learning rate and the gradients, and the learning rate is constant. When the learning rate is too low, the training time becomes long and convergence is slow; when it is too high, oscillation and divergence appear and the system becomes unstable. The adaptive learning rate is shown in Fig. 7.

The basic principle of the adaptive learning rate is as follows: increasing the learning rate (η) shortens the learning time, but the higher the learning rate, the harder the convergence, in which case the learning rate should be decreased until the training process converges. The learning rate can be adjusted according to the changes of the error and the gradients, or according to the gradient of the error function with respect to the learning rate. It can also be adjusted heuristically by watching the change of the total error, with the following rules (a code sketch follows below):

(1) If the total error (E) decreases, the learning rate is increased.

(2) If the total error (E) increases, the learning rate is decreased. When the ratio of the new error to the previous error exceeds a certain value (e.g. 1.04), the learning rate is reduced sharply.
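A stand-alone sketch of the heuristic rules above. The 1.04 error-ratio threshold is the value quoted in the text and the 1.05 increase factor is the best increasing ratio reported in the conclusions; the 0.7 decrease factor is an assumed typical value, not taken from the paper.

```python
def adapt_learning_rate(lr: float, new_error: float, old_error: float,
                        inc: float = 1.05, dec: float = 0.7,
                        max_ratio: float = 1.04) -> float:
    """Heuristic learning-rate adaptation of Section 3.3.1:
    - total error decreased                        -> grow the learning rate by `inc`
    - error grew past the `max_ratio` threshold
      (1.04 in the text)                           -> shrink the learning rate by `dec`
    - otherwise                                    -> leave the learning rate unchanged."""
    if new_error < old_error:
        return lr * inc
    if new_error / old_error > max_ratio:
        return lr * dec
    return lr

# Hypothetical use inside the training loop of the BPNet sketch above:
# lr = adapt_learning_rate(lr, mse_this_epoch, mse_last_epoch)
```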
3.3.2. Flexible BP algorithm

Generally, a sigmoid function is used to transfer data from the input layer to the hidden layer; it maps an unbounded input onto a bounded output range. When the input variables are large, the slope of the sigmoid function approaches zero. The gradient then changes only slightly, so the resulting weight updates become very small even while the weights are still far from their optimal values, and the adjustment of the network weights may even stall during the correction process.

When the training process oscillates, the amount by which the weights change is decreased; when the direction of the weight change stays the same over several iterations, the amount is increased. Therefore, the flexible BP algorithm has an advantage in convergence speed over the other improved BP algorithms.
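The behaviour described above (shrink the weight step when training oscillates, grow it while the update direction stays the same) is the idea behind resilient back-propagation (Rprop), which is the usual reading of "flexible BP". Below is a minimal per-weight sketch; driving the update by the sign of the gradient only, and the 1.2/0.5 factors and step bounds, are conventional Rprop choices assumed here, not values reported in the paper.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               inc=1.2, dec=0.5, step_min=1e-6, step_max=50.0):
    """One element-wise resilient-BP (Rprop-style) update.

    Each weight keeps its own step size: the step grows (by `inc`) while the
    sign of that weight's gradient is unchanged between iterations, and shrinks
    (by `dec`) when the sign flips, i.e. when training starts to oscillate.
    Only the sign of the current gradient decides the direction of the move."""
    sign_change = np.sign(grad) * np.sign(prev_grad)
    step = np.where(sign_change > 0, np.minimum(step * inc, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * dec, step_min), step)
    return w - np.sign(grad) * step, step

# Hypothetical use: keep `prev_grad` and `step` arrays alongside each weight matrix
# of the BPNet sketch and call rprop_step() instead of the fixed-learning-rate update.
```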
4. Results and discussion

Four important aspects needed to be determined in the design of the BP neural networks and the improved BP algorithms: 1) the choice of the neuron transfer functions; 2) the choice of the number of hidden-layer nodes; 3) the choice of the increasing ratio of the learning rate; 4) testing the generalization of the BP neural networks and the improved BP algorithms.

4.1. Choice of transfer functions of neurons

Transfer functions pass signals between the neurons of two adjacent layers, and they must be differentiable. BP neural networks have one or more hidden layers whose neurons use sigmoid transfer functions. The output-layer neurons use the purelin (pure linear) transfer function, so the outputs can take arbitrary values; if the output-layer transfer function were a sigmoid, the network outputs would be restricted to the range (−1, 1).

The sigmoid transfer functions comprise the log-sigmoid and tan-sigmoid functions, which map the inputs to (0, 1) and (−1, +1), respectively. The log-sigmoid function is a unipolar transfer function; the tan-sigmoid function is a bipolar transfer function, also called the hyperbolic tangent function (Fig. 8).

In practice, the transfer function is chosen according to the relationship between the inputs and the outputs: if the inputs contain no negative values, the log-sigmoid function is adopted; if negative values are included, the tan-sigmoid function is used. In this paper the hidden-layer neurons used sigmoid transfer functions and the output-layer neurons used the purelin transfer function.

4.2. Choice of nodes of hidden layers

Many scholars have studied the optimum number of hidden-layer nodes. The Kolmogorov theorem proves that, provided a hidden layer has enough nodes, a neural network with one hidden layer can approximate a nonlinear function with arbitrary precision. However, for a finite mapping from inputs to outputs an unlimited number of hidden nodes is not necessary, and how to choose the number of hidden-layer nodes remains an unsolved problem; it is usually determined by experience and experimental design. Generally, provided the input-output relationship is reflected accurately, a small number of hidden nodes is chosen to keep the network structure simple; but the fewer the nodes, the worse the generalization capability of the network. Conversely, if the hidden layer has more nodes, the complexity of the training process increases and over-fitting may occur. Many factors must therefore be balanced in the design. In a specific design, one hidden layer is chosen first; if increasing the nodes of this hidden layer does not yield a better network, the number of layers and of hidden nodes is increased further.

In this paper the four influencing factors (voltage, concentration, temperature, flow rate) were the nodes of the input layer, and the separation percent was the single node of the output layer, so the number of hidden-layer nodes should lie between 4 and 12. The network structures 4:4:1, 4:10:1 and 4:12:1 were trained with the MATLAB software, and the performance charts are shown in Fig. 9.

In Fig. 9(A) the total number of training epochs was 74, the best validation performance was above 10⁻³, the MSE (mean squared error) of the training data was below 10⁻³, and the MSE of the test and validation data was about 0.005. In Fig. 9(B) the total number of training epochs was 33, the best validation performance was above 10⁻³, the MSE of the training data was below 10⁻³, and the MSE of the test and validation data was only slightly above 10⁻³. In Fig. 9(C) the total number of training epochs was 101, the best validation performance was above 10⁻³, and the MSEs of the training, test and validation data were all above 10⁻³. Together, the three charts show that 4:10:1 is the best network structure, because it gives the lowest MSE values and the shortest training time.
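The structure comparison behind Fig. 9, picking the hidden-layer size with the lowest validation MSE, can be scripted as below. The sketch reuses the BPNet class from the Section 3 example; the train/validation split, epoch budget and learning rate are assumptions, not the paper's MATLAB settings.

```python
import numpy as np

def best_hidden_size(X, y, sizes=(4, 10, 12), epochs=2000, lr=0.1,
                     val_frac=0.25, seed=0):
    """Train one BPNet per candidate hidden-layer size and return the size
    with the lowest validation MSE, mirroring the 4:4:1 / 4:10:1 / 4:12:1
    comparison of Fig. 9."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(len(X) * val_frac)
    val, tr = idx[:n_val], idx[n_val:]
    scores = {}
    for h in sizes:
        net = BPNet(n_hidden=h, seed=seed)   # BPNet: see the Section 3 sketch
        for _ in range(epochs):
            net.train_step(X[tr], y[tr], lr=lr)
        pred = net.forward(X[val]).ravel()
        scores[h] = float(np.mean((pred - y[val]) ** 2))  # validation MSE
    return min(scores, key=scores.get), scores
```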
5. Conclusions

BP neural networks and improved BP algorithms were used as two methods to predict the separation percent in the electrodialysis experiments on sodium chloride solution, and the improved BP algorithms proved superior to the BP neural networks. The improved BP algorithms compensate for the defects caused by inappropriate learning rates and weights during BP training by adjusting the increasing ratios of the learning rates and weights. The flexible BP algorithm, one of the improved BP methods, clearly predicted better than the BP neural networks. The prediction capabilities of the BP neural networks and the improved BP algorithms were examined under different training parameters (neuron transfer functions, number of hidden-layer neurons and learning rates), and the optimum training parameters were obtained. In this paper the hidden-layer neurons used sigmoid transfer functions and the output-layer neurons used the purelin transfer function. The 4:10:1 network was the best network structure, so the optimum number of hidden-layer nodes was 10, and 1.05 was the best increasing ratio of the learning rate on the training data. However, the optimum values of the training parameters were limited by the configuration of the experimental apparatus and by polarization. Concentration, flow rate, temperature and voltage all had nonlinear relationships with the separation percent: temperature and voltage were positively correlated with it, while concentration and flow rate were negatively correlated with it. For these nonlinear relationships the improved BP algorithms gave better predictions. The experimental results show that the improved BP algorithms have generalization, efficiency and adaptability for complex data sets, which makes them an attractive choice for modeling complex systems such as water treatment processes and membrane technologies.
