Artificial Neural Networks

Abstract: The Artificial Neural Network (ANN) is a functional imitation of a simplified model of biological neurons. Its goal is to build useful computing systems for real-world problems and to reproduce intelligent data-evaluation techniques such as pattern recognition, classification and generalization, using simple, distributed and robust processing units called artificial neurons. This paper presents a simple application of the artificial neural network: its working process, design and performance analysis.

1. Working process of Artificial Neural Networks

An artificial neuron models the dendrites of a biological neuron by receiving one or more inputs, applying appropriate weights (positive or negative multipliers) to those signals, and passing the weighted inputs on to the soma. The soma, or body, of an artificial neuron receives the weighted inputs and computes a single output signal (a discrete or continuous function of the sum of the weighted inputs) by applying a threshold or transfer function (Yoon, 1989). The last section of each artificial neuron is analogous to the axon of a biological neuron; it passes the output signal to one or more subsequent artificial neurons or to the outside world. Every non-trivial artificial neural network contains an input layer and an output layer; most also contain one or more intermediate processing layers.

Types of Neural Networks: Both feed-forward and recurrent networks are examples of supervised learning. In unsupervised learning, no external teacher is required; the system self-organizes the input data, discovering for itself the regularities and collective properties of the data. Feed-forward networks have the ability to learn. To do so, an artificial neural network must learn to produce a desired output by modifying the weights applied to its inputs. The process is simple; a minimal sketch of a single artificial neuron is given below.
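To make the working process above concrete, here is a minimal MATLAB sketch of a single artificial neuron with a logistic transfer function; the variable names (inputs, weights, bias) are illustrative and do not appear in the program developed in Section 3.

% One artificial neuron: weighted inputs, a threshold, and a transfer function (illustrative only)
inputs  = [0.5 -1.2 0.8];          % signals arriving at the "dendrites"
weights = [0.4  0.1 -0.7];         % positive or negative multipliers applied to the signals
bias    = 0.2;                     % threshold subtracted in the "soma"
net     = inputs*weights' - bias;  % weighted sum of the inputs
output  = 1/(1+exp(-net));         % logistic transfer function, the same one used in Section 3
disp(output)                       % the signal passed along the "axon" to subsequent neurons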
2. Problems

A. 9 training samples, 361 testing samples.
B. 9 training samples, 361 testing samples.
C. 11*11 training samples, 41*41 testing samples.

3. Designing

The design is built from three elements: a weighted sum, an activation function and an error function.

Step 1: initialize the weights and the other parameters.

defaultpoints=50;                % number of hidden-layer nodes
inputpoints=2;                   % number of input-layer nodes
outputpoints=2;                  % number of output-layer nodes
Testerror=zeros(1,100);          % error recorded at each test point
a=zeros(1,inputpoints);          % input-layer node values
y=zeros(1,outputpoints);         % sample output node values
w=zeros(inputpoints,defaultpoints);   % weights between the input layer and the hidden layer
% Weight initialization matters: initializing with rand gives very unpredictable results, zeros works better
v=zeros(defaultpoints,outputpoints);  % weights between the hidden layer and the output layer
bin=rand(1,defaultpoints);       % hidden-layer inputs
bout=rand(1,defaultpoints);      % hidden-layer outputs
base1=0*ones(1,defaultpoints);   % hidden-layer thresholds, initialized to 0
cin=rand(1,outputpoints);        % output-layer inputs
cout=rand(1,outputpoints);       % output-layer outputs
base2=0*rand(1,outputpoints);    % output-layer thresholds
error=zeros(1,outputpoints);     % fitting error
errors=0;error_sum=0;            % accumulated error sum
error_rate_cin=rand(defaultpoints,outputpoints); % derivative of the error w.r.t. the output-layer weights
error_rate_bin=rand(inputpoints,defaultpoints);  % derivative of the error w.r.t. the input-layer weights
alfa=1;    % alfa: coefficient on the hidden-to-output weight updates; strong influence
belt=0.5;  % belt: coefficient on the input-to-hidden weight updates; weaker influence
gama=3;    % gama: error amplification factor; affects the tracking speed and fitting accuracy
trainingROUND=5; % number of training rounds; a few dozen rounds sometimes work better than hundreds or thousands
sampleNUM=100;   % number of sample points
x1=zeros(sampleNUM,inputpoints);   % training input matrix
y1=zeros(sampleNUM,outputpoints);  % training output matrix
x2=zeros(sampleNUM,inputpoints);   % testing input matrix
y2=zeros(sampleNUM,outputpoints);  % testing output matrix
observeOUT=zeros(sampleNUM,outputpoints); % fitted outputs at the monitored points
i=0;j=0;k=0;   % j is the sample index within one training round; do not reuse it elsewhere
i=0;h=0;o=0;   % input-layer, hidden-layer and output-layer indices
x=0:0.1:50;    % step grid

Step 2: select the sample inputs and outputs.

for j=1:sampleNUM  % assign the sample inputs and outputs here; set them according to the specific application
    x1(j,1)=x(j);
    x2(j,1)=0.3245*x(2*j)*x(j);
    temp=rand(1,1);
    x1(j,2)=x(j);
    x2(j,2)=0.3*x(j);
    y1(j,1)=sin(x1(j,1));
    y1(j,2)=cos(x1(j,2))*cos(x1(j,2));
    y2(j,1)=sin(x2(j,1));
    y2(j,2)=cos(x2(j,2))*cos(x2(j,2));
end
for o=1:outputpoints
    y1(:,o)=(y1(:,o)-min(y1(:,o)))/(max(y1(:,o))-min(y1(:,o))); % normalize the outputs into [0,1]; appropriate when the activation function is the log-sigmoid
    y2(:,o)=(y2(:,o)-min(y2(:,o)))/(max(y2(:,o))-min(y2(:,o)));
end
for i=1:inputpoints
    x1(:,i)=(x1(:,i)-min(x1(:,i)))/(max(x1(:,i))-min(x1(:,i))); % the inputs must be normalized to the same range as the outputs, [0,1]
    x2(:,i)=(x2(:,i)-min(x2(:,i)))/(max(x2(:,i))-min(x2(:,i)));
end
for i=1:inputpoints   % load the input-layer values for the current sample
    a(i)=x1(j,i);
end
for o=1:outputpoints  % load the output-layer target values for the current sample
    y(o)=y1(j,o);
end

Step 3: compute the input and output of the hidden layer.

for h=1:defaultpoints
    bin(h)=0;
    for i=1:inputpoints
        bin(h)=bin(h)+a(i)*w(i,h);
    end
    bin(h)=bin(h)-base1(h);
    bout(h)=1/(1+exp(-bin(h)));  % the hidden-layer activation function is the log-sigmoid
end

Step 4: compute the input and output of the output layer, and the partial derivative of the error function for each neuron of the output layer.

temp_error=0;
for o=1:outputpoints
    cin(o)=0;
    for h=1:defaultpoints
        cin(o)=cin(o)+bout(h)*v(h,o);
    end
    cin(o)=cin(o)-base2(o);
    cout(o)=1/(1+exp(-cin(o)));  % the output-layer activation function is the log-sigmoid
    observeOUT(j,o)=cout(o);
    error(o)=y(o)-cout(o);
    temp_error=temp_error+error(o)*error(o); % record the actual error, which should not be multiplied by gama
    error(o)=gama*error(o);
end
Testerror(j)=temp_error;
error_sum=error_sum+Testerror(j);
for o=1:outputpoints
    error_rate_cin(o)=error(o)*cout(o)*(1-cout(o));
end

Step 5: compute the partial derivative of the error function for each neuron of the hidden layer, using the errors and the weights.

for h=1:defaultpoints
    error_rate_bin(h)=0;
    for o=1:outputpoints
        error_rate_bin(h)=error_rate_bin(h)+error_rate_cin(o)*v(h,o);
    end
    error_rate_bin(h)=error_rate_bin(h)*bout(h)*(1-bout(h));
end
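The factors cout(o)*(1-cout(o)) in Step 4 and bout(h)*(1-bout(h)) in Step 5 come from the derivative of the log-sigmoid: for f(x)=1/(1+exp(-x)), f'(x)=f(x)*(1-f(x)). The short check below is not part of the paper's program; it simply verifies this identity numerically with a central finite difference.

% Numerical check of the log-sigmoid derivative identity used in Steps 4 and 5
f  = @(x) 1./(1+exp(-x));                    % log-sigmoid, as used for both layers
x0 = 0.7;                                    % arbitrary test point
analytic = f(x0)*(1-f(x0));                  % f'(x0) via the identity f' = f*(1-f)
numeric  = (f(x0+1e-6)-f(x0-1e-6))/(2e-6);   % central finite difference approximation
disp([analytic numeric])                     % the two values agree to about 1e-10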
Step 6: modify the link weights between the hidden layer and the output layer, and between the hidden layer and the input layer.

for h=1:defaultpoints
    base1(h)=base1(h)-5*error_rate_bin(h)*bin(h);
    for o=1:outputpoints
        v(h,o)=v(h,o)+alfa*error_rate_cin(o)*bout(h);
        % base1(i)=base1(i)+0.01*alfa*error(i);
    end
    for i=1:inputpoints
        w(i,h)=w(i,h)+belt*error_rate_bin(h)*a(i);
        % base2=base2+0.01*belt*out_error;
    end
end

Step 7: compute the overall error sum.

temp_error=temp_error+error(o)*error(o);
Testerror(j)=temp_error;
error_sum=error_sum+Testerror(j);

Step 8: judge whether the error satisfies the required precision; if it does, return, otherwise go back to Step 3 until the iteration limit is exceeded.

Parameter designing
1. With 50 hidden nodes, the error sum is smallest when alfa*gama = 3; belt has only a small influence.
2. With 100 hidden nodes, the error sum is smallest when alfa*gama = 1.5; belt has only a small influence, and the minimum error sum is approximately the same as with 50 hidden nodes.
3. With 200 hidden nodes, the error sum is smallest when alfa*gama = 0.7; belt has only a small influence, and the minimum error sum is approximately the same as with 50 hidden nodes.
4. base1 influences the minimum error sum very little, but it does help stabilize the system.

4. Performance analysis

In each experiment the trained network is tested on the testing samples and the accumulated error sum is recorded (a sketch of this test computation is given at the end of this section).

Problem A: With 50 hidden points and 200 training rounds, testing the network with the testing samples gives basically the expected shape, and the error sum is 1.49. With 10 hidden points and 200 training rounds, the shape is basically as expected and the error sum is 0.89416. With 20 hidden points and 200 training rounds, the shape is basically as expected and the error sum is 0.89416.

Problem B: With 20 hidden points and 200 training rounds, testing the network with the testing samples gives basically the expected shape, and the error sum is 2.3833 (alfa=0.5; belt=0.5; gama=3; learning rate alfa*gama=1.5). From the first and second figures we conclude that an ANN with more hidden points performs better; from figures 2-4 we conclude that this ANN performs best at a learning rate of 1.5.

Problem C: From the two figures we can see that, with all the other parameters the same, the ANN with 50 hidden nodes has an error sum of only 0.89102, while the ANN with 20 hidden nodes has an error sum of 1.8654. Thus we can conclude that an ANN with more hidden nodes performs better. From the two figures, the ANN with a training time of 10 produces the same e
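As noted at the start of the performance analysis, the paper quotes error sums measured on the testing samples but does not reproduce the test code itself. The following is a minimal sketch under the assumption that the trained w, v, base1 and base2 from the steps above are reused and that the reported figure is the accumulated squared error over x2 and y2; the variable test_error_sum is introduced here for illustration only.

% Sketch of evaluating the trained network on the testing samples (assumed, not from the paper)
test_error_sum=0;
for j=1:sampleNUM
    for i=1:inputpoints, a(i)=x2(j,i); end         % load one testing input
    for h=1:defaultpoints                          % forward pass through the hidden layer
        bin(h)=a*w(:,h)-base1(h);
        bout(h)=1/(1+exp(-bin(h)));
    end
    for o=1:outputpoints                           % forward pass through the output layer
        cin(o)=bout*v(:,o)-base2(o);
        cout(o)=1/(1+exp(-cin(o)));
        test_error_sum=test_error_sum+(y2(j,o)-cout(o))^2;  % squared error against the testing target
    end
end
disp(test_error_sum)   % the kind of "error sum" quoted in the performance analysis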
