Artificial Neural Networks (人工神经网络)

Outline:
I. Supervised ANNs: examples, applications, further topics
II. Unsupervised ANNs: examples, applications, further topics

The biological neuron:
- billions of synapses in the human brain
- chemical transmission and modulation of signals
- inhibitory synapses
- excitatory synapses

Biological neuron dynamics:
- action potential: ~100 mV
- activation threshold: 20-30 mV
- rest potential: -65 mV
- spike time: 1-2 ms
- refractory time: 10-20 ms

The generic artificial neuron (neuron i):

Stimulus:  $u_i(t) = \sum_j w_{ij}\, x_j(t)$

Response:  $y_i(t) = f\big(u_i(t) + u_{rest}\big)$

where
- $u_{rest}$ = resting potential
- $x_j(t)$ = output of neuron j at time t
- $w_{ij}$ = connection strength between neuron i and neuron j
- $u_i(t)$ = total stimulus at time t

[Figure: neuron i with inputs $x_1(t), \dots, x_5(t)$, weights $w_{i1}, \dots, w_{i5}$, and output $y_i(t)$]
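As a minimal sketch of this stimulus/response model (the activation function f is passed in as a parameter, since the following slides consider several choices for it):

```python
import numpy as np

def neuron_response(x, w, u_rest, f):
    """Generic neuron i: stimulus u_i = sum_j w_ij * x_j,
    response y_i = f(u_i + u_rest)."""
    u = np.dot(w, x)        # total stimulus u_i(t)
    return f(u + u_rest)    # response y_i(t)
```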

"Hard" threshold (θ = activation threshold):

$f(z) = \begin{cases} \text{ON} & z \ge \theta \\ \text{OFF} & \text{else} \end{cases}$

Examples: perceptrons, Hopfield NNs, Boltzmann machines.
Main drawbacks: can only map binary functions; biologically implausible.

"Soft" threshold:

$f(z) = \frac{2}{1 + e^{-z}} - 1$

Examples: MLPs, recurrent NNs, RBF NNs.
Main drawbacks: difficult to process time patterns; biologically implausible.
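Minimal sketches of the two activation functions, assuming ON/OFF are encoded as 1/0 (the slides do not fix an encoding):

```python
import numpy as np

def hard_threshold(z, theta=0.0, on=1.0, off=0.0):
    # f(z) = ON if z >= theta, OFF otherwise
    return np.where(z >= theta, on, off)

def soft_threshold(z):
    # bipolar sigmoid f(z) = 2/(1 + e^{-z}) - 1, with values in (-1, 1)
    return 2.0 / (1.0 + np.exp(-z)) - 1.0
```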

Spiking neuron model:

Stimulus:  $u_i(t) = \sum_j w_{ij}\, x_j(t)$

Response:  $y_i(t) = f\Big(u_{rest} + \eta(t - t_f) + \int_0^t \varepsilon(t, u(\tau))\, d\tau\Big)$

with the threshold function

$f(z) = \begin{cases} \text{ON} & z \ge \theta \ \text{and}\ \frac{dz}{dt} > 0 \\ \text{OFF} & \text{else} \end{cases}$

where
- η = spike and after-spike potential
- $u_{rest}$ = resting potential
- $\varepsilon(t, u(\tau))$ = trace at time t of the input at time τ
- θ = threshold
- $x_j(t)$ = output of neuron j at time t
- $w_{ij}$ = efficacy of the synapse from neuron i to neuron j
- $u(t)$ = input stimulus at time t

[Figure: response y(t) relaxing back to $u_{rest} + \eta(t - t_f)$ after a spike at time $t_f$]
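A toy discrete-time sketch of this spiking behaviour. The exponential form of the after-spike potential η and the direct use of the stimulus in place of the input trace ε are assumptions for illustration; the slides do not specify them:

```python
import numpy as np

def simulate_spiking(stimulus, u_rest=-65.0, theta=-30.0,
                     eta_amp=-90.0, tau_eta=10.0):
    """Toy spike-response simulation (forms of eta and the input
    trace are assumed, not taken from the slides)."""
    t_f = -np.inf                  # time of the last spike
    u_prev = u_rest
    spikes = []
    for t, s in enumerate(stimulus):
        eta = eta_amp * np.exp(-(t - t_f) / tau_eta)  # after-spike potential
        u = u_rest + eta + s                          # membrane potential
        # fire only on an upward crossing of the threshold (dz/dt > 0)
        if u >= theta and u > u_prev:
            spikes.append(t)
            t_f = t
        u_prev = u
    return spikes
```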

Network architecture:
- input layer, hidden layers, output layer
- connectivity: fully connected vs. sparsely connected

Linear Separability in Perceptrons
(Presented by Martin Ho, Eddy Li, Eric Wong and Kitty Wong - Copyright 2000)

A single perceptron can represent only functions whose classes are linearly separable, i.e. separable by a hyperplane in input space.

Learning Linearly Separable Functions

What can these functions learn?
Bad news: there are not many linearly separable functions.
Good news: there is a perceptron algorithm that will learn any linearly separable function, given enough training examples.

Perceptron learning (delta rule):

$\Delta w_{ij} = \eta\, e_i\, x_j, \qquad e_i = d_i - y_i$

where
- η = learning coefficient
- $w_{ij}$ = connection from neuron $x_j$ to $y_i$
- $x = (x_1, x_2, \dots, x_n)$ = ANN input
- $y = (y_1, y_2, \dots, y_n)$ = ANN output
- $d = (d_1, d_2, \dots, d_n)$ = desired output
- $(x, d)$ = training example
- $e$ = ANN error

[Figure: single-layer network with inputs $x_1, \dots, x_4$, weights $w_{11}, \dots, w_{14}$, and outputs $y_1, y_2, y_3$]
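A minimal sketch of one delta-rule update, assuming linear output neurons and a weight matrix of shape (n_out, n_in):

```python
import numpy as np

def delta_rule_step(W, x, d, eta=0.1):
    """One delta-rule update for a single-layer network."""
    y = W @ x                  # ANN output (linear neurons assumed here)
    e = d - y                  # error e_i = d_i - y_i
    W += eta * np.outer(e, x)  # delta rule: dw_ij = eta * e_i * x_j
    return W, e
```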

Hebb postulate (1949):
- correlation-based learning
- connections between concurrently firing neurons are strengthened
- experimentally verified (1973)

[Figure: inputs $x_1, x_2$ connected to $y_1$ by weights $w_{11}, w_{12}$]

General formulation:

$\frac{dw_{ij}}{dt} = F(w_{ij};\, y_i, x_j)$

where η = learning coefficient and $w_{ij}$ = connection from neuron $x_j$ to $y_i$.

Hebb postulate:  $F = \eta\, y_i\, x_j \;\Rightarrow\; \frac{dw_{ij}}{dt} = \eta\, y_i\, x_j$

Kohonen & Grossberg (ART):  $F = \eta\, y_i\, (x_j - w_{ij}) \;\Rightarrow\; \frac{dw_{ij}}{dt} = \eta\, y_i\, (x_j - w_{ij})$
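Euler-discretized sketches of these two differential update rules (the discretization step dt is an assumption; W has shape (n_out, n_in)):

```python
import numpy as np

def hebb_step(W, x, y, eta=0.01, dt=1.0):
    # Hebb: dw_ij/dt = eta * y_i * x_j
    return W + dt * eta * np.outer(y, x)

def kohonen_step(W, x, y, eta=0.01, dt=1.0):
    # Kohonen/Grossberg: dw_ij/dt = eta * y_i * (x_j - w_ij)
    return W + dt * eta * y[:, None] * (x[None, :] - W)
```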

ENERGY MINIMIZATION

We need an appropriate definition of energy for artificial neural networks; having that, we can use mathematical optimisation techniques to find how to change the weights of the synaptic connections between neurons.

ENERGY = measure of task performance error

Example: forward computation through a layered network (inputs $x_1, \dots, x_4$, output $y_{out}$):

$y_k^1 = f_k(x_k, w_k^1), \quad k = 1, \dots, 4; \qquad y^1 = (y_1^1, y_2^1, y_3^1, y_4^1)$

$y_k^2 = f_k(y^1, w_k^2), \quad k = 1, 2, 3; \qquad y^2 = (y_1^2, y_2^2, y_3^2)$

$y_{out} = f(y^2, w^3)$

Neural network: input / output transformation

$y_{out} = F(x, W)$

where W is the matrix of all weight vectors.

MLP = multi-layer perceptron

Perceptron:  $y_{out} = w^T x$

MLP neural network:

$y_k^1 = \frac{1}{1 + e^{-(w_k^{1T} x + a_k^1)}}, \; k = 1, 2, 3; \qquad y^1 = (y_1^1, y_2^1, y_3^1)^T$

$y_k^2 = \frac{1}{1 + e^{-(w_k^{2T} y^1 + a_k^2)}}, \; k = 1, 2; \qquad y^2 = (y_1^2, y_2^2)^T$

$y_{out} = w^{3T} y^2$
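A minimal sketch of this forward pass. With the layer sizes on the slide, W1 would be 3 x n, W2 would be 2 x 3, and w3 a length-2 vector; the weight values themselves would come from training:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, W1, a1, W2, a2, w3):
    """Forward pass of the MLP above: two sigmoid layers,
    then a linear output neuron."""
    y1 = sigmoid(W1 @ x + a1)   # y^1_k = 1/(1 + exp(-(w^1T_k x + a^1_k)))
    y2 = sigmoid(W2 @ y1 + a2)  # y^2_k = 1/(1 + exp(-(w^2T_k y^1 + a^2_k)))
    return w3 @ y2              # y_out = w^3T y^2
```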

RBF = radial basis function

$f(x) = r(|x - c|)$, i.e. the output depends only on the distance of x from a centre c.

Example, Gaussian RBF:

$f(x) = e^{-\frac{|x - w|^2}{2 a^2}}$

RBF neural network (one hidden layer of four Gaussian units, linear output):

$y_{out} = \sum_{k=1}^{4} w_{2,k}\, e^{-\frac{|x - w_k^1|^2}{2 a_k^2}}$
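A minimal sketch of this RBF network output, assuming `centers` has shape (M, dim) and `widths` and `w2` have shape (M,):

```python
import numpy as np

def rbf_forward(x, centers, widths, w2):
    """y_out = sum_k w2_k * exp(-|x - c_k|^2 / (2 a_k^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances |x - w^1_k|^2
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian basis activations
    return w2 @ phi                           # linear output layer
```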

18、PROXIMATION tasks.Approximation: given a set of values of a function g(x) build a neural network that approximates the g(x) values for any input x.第38页/共69页5/1/2022Artificial Neural Networks - I40Task specification:Data: set of value pairs: (xt, yt), yt=g(xt) + zt; zt is random measurement noise.Obj

19、ective: find a neural network that represents the input / output transformation (a function) F(x,W) such thatF(x,W) approximates g(x) for every x第39页/共69页5/1/2022Artificial Neural Networks - I41Error measure:NtttyWxFNE12);(1Rule for changing the synaptic weights:jijinewjijijiwwwWwEcw,)(c is the lear
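As an illustration of this generic rule, here is a sketch that estimates the gradient by finite differences; it stands in for the analytic derivatives derived on the following slides, and the callable F and its weight array are placeholders:

```python
import numpy as np

def mse(F, W, xs, ys):
    # E = (1/N) * sum_t (F(x_t; W) - y_t)^2
    return np.mean([(F(x, W) - y) ** 2 for x, y in zip(xs, ys)])

def gradient_step(F, W, xs, ys, c=0.01, h=1e-6):
    """w_new = w - c * dE/dw, with dE/dw estimated numerically."""
    grad = np.zeros_like(W)
    base = mse(F, W, xs, ys)
    for idx in np.ndindex(W.shape):
        W_h = W.copy()
        W_h[idx] += h
        grad[idx] = (mse(F, W_h, xs, ys) - base) / h
    return W - c * grad
```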

Learning with a perceptron.

Perceptron:  $y_{out} = w^T x$

Data:  $(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)$

Error:  $E(t) = \big(y(t) - y_{out}(t)\big)^2 = \big(y_t - w(t)^T x_t\big)^2$

Learning:

$w_i(t+1) = w_i(t) - c\, \frac{\partial E(t)}{\partial w_i} = w_i(t) + c \cdot 2 \big(y_t - w(t)^T x_t\big)\, x_{t,i}$

with $w(t)^T x_t = \sum_{j=1}^{m} w_j(t)\, x_{t,j}$.

A perceptron is able to learn a linear function.
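A minimal training loop for this rule, assuming `xs` has shape (N, m) and `ys` shape (N,):

```python
import numpy as np

def train_perceptron(xs, ys, c=0.01, epochs=100):
    """Stochastic gradient descent for y_out = w^T x on data (x_t, y_t)."""
    w = np.zeros(xs.shape[1])
    for _ in range(epochs):
        for x_t, y_t in zip(xs, ys):
            e = y_t - w @ x_t        # prediction error
            w += 2 * c * e * x_t     # w(t+1) = w(t) + 2c (y_t - w^T x_t) x_t
    return w
```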

Learning with an RBF neural network.

RBF neural network:

$y_{out} = F(x, W) = \sum_{k=1}^{M} w_{2,k}\, e^{-\frac{|x - w_k^1|^2}{2 a_k^2}}$

Data:  $(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)$

Error:  $E(t) = \big(y(t) - y_{out}(t)\big)^2 = \Big(y_t - \sum_{k=1}^{M} w_{2,k}(t)\, e^{-\frac{|x_t - w_k^1|^2}{2 a_k^2}}\Big)^2$

Learning:

$w_{2,i}(t+1) = w_{2,i}(t) - c\, \frac{\partial E(t)}{\partial w_{2,i}}, \qquad \frac{\partial E(t)}{\partial w_{2,i}} = 2 \big(F(x_t, W(t)) - y_t\big)\, e^{-\frac{|x_t - w_i^1|^2}{2 a_i^2}}$

Only the synaptic weights of the output neuron are modified. An RBF neural network learns a nonlinear function.
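A sketch of this output-weight learning rule, with the centres and widths held fixed as on the slide (how they are chosen is not specified there):

```python
import numpy as np

def train_rbf_output(xs, ys, centers, widths, c=0.05, epochs=100):
    """Gradient learning of the RBF output weights only."""
    w2 = np.zeros(len(centers))
    for _ in range(epochs):
        for x_t, y_t in zip(xs, ys):
            d2 = np.sum((centers - x_t) ** 2, axis=1)
            phi = np.exp(-d2 / (2.0 * widths ** 2))  # basis activations
            err = w2 @ phi - y_t                     # F(x_t, W) - y_t
            w2 -= c * 2 * err * phi                  # dE/dw2_i = 2 err phi_i
    return w2
```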

Learning with a general MLP neural network with p layers:

$y_k^1 = \frac{1}{1 + e^{-(w_k^{1T} x + a_k^1)}}, \; k = 1, \dots, M_1; \qquad y^1 = (y_1^1, \dots, y_{M_1}^1)$

$y_k^2 = \frac{1}{1 + e^{-(w_k^{2T} y^1 + a_k^2)}}, \; k = 1, \dots, M_2; \qquad y^2 = (y_1^2, \dots, y_{M_2}^2)$

...

$y_{out} = F(x; W) = w^{pT} y^{p-1}$

[Figure: layers 1, 2, ..., p-1, p from input x to output $y_{out}$]

Data:  $(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)$

Error:  $E(t) = \big(y(t) - y_{out}(t)\big)^2 = \big(y_t - F(x_t; W)\big)^2$

Learning: apply the chain rule for differentiation:
- calculate first the changes for the synaptic weights of the output neuron;
- then calculate the changes backward, starting from layer p-1, and propagate the local error terms backward.

The method is still relatively complicated, but it is much simpler than the original optimisation problem.

In general it is enough to have a single layer of nonlinear neurons in a neural network in order to learn to approximate a nonlinear function. In such a case, general optimisation may be applied without too much difficulty.

Example: an MLP neural network with a single hidden layer:

$y_{out} = F(x; W) = \sum_{k=1}^{M} w_{2,k}\, \frac{1}{1 + e^{-(w_k^{1T} x + a_k^1)}}$

Synaptic weight change rule for the output neuron:

$w_{2,i}(t+1) = w_{2,i}(t) - c\, \frac{\partial E(t)}{\partial w_{2,i}}, \qquad \frac{\partial E(t)}{\partial w_{2,i}} = 2 \big(F(x_t, W(t)) - y_t\big)\, \frac{1}{1 + e^{-(w_i^{1T} x_t + a_i^1)}}$

Synaptic weight change rule for the neurons of the hidden layer:

$w_{1,ij}(t+1) = w_{1,ij}(t) - c\, \frac{\partial E(t)}{\partial w_{1,ij}}$

$\frac{\partial E(t)}{\partial w_{1,ij}} = 2 \big(F(x_t, W(t)) - y_t\big)\, w_{2,i}\, \frac{e^{-(w_i^{1T} x_t + a_i^1)}}{\big(1 + e^{-(w_i^{1T} x_t + a_i^1)}\big)^2}\, x_{t,j}$
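A minimal sketch of both rules in one training loop. It uses the identity that the sigmoid derivative $\frac{e^{-z}}{(1+e^{-z})^2}$ equals $h(1-h)$ for $h = \sigma(z)$; the hidden-layer width M, the random initialization, and the epoch count are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp_1hidden(xs, ys, M=10, c=0.01, epochs=200, seed=0):
    """Gradient learning for y_out = sum_k w2_k * sigmoid(w1_k^T x + a1_k)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(M, xs.shape[1]))
    a1 = np.zeros(M)
    w2 = rng.normal(scale=0.5, size=M)
    for _ in range(epochs):
        for x_t, y_t in zip(xs, ys):
            h = sigmoid(W1 @ x_t + a1)              # hidden activations
            err = w2 @ h - y_t                      # F(x_t, W) - y_t
            grad_w2 = 2 * err * h                   # output-neuron rule
            delta = 2 * err * w2 * h * (1 - h)      # local error terms
            W1 -= c * np.outer(delta, x_t)          # hidden-layer rule
            a1 -= c * delta
            w2 -= c * grad_w2
    return W1, a1, w2
```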

Further learning paradigms:

Bayesian learning: the distribution of the neural network parameters is learnt.

Support vector learning: the minimal representative subset of the available data is used to calculate the synaptic weights of the neurons.

Taxonomy of artificial neural networks:

- Feedforward
  - Unsupervised: Kohonen
  - Supervised: MLP, RBF
- Recurrent
  - Unsupervised: ART
  - Supervised: Elman, Jordan, Hopfield

Neural Network Approaches
(Presented by Martin Ho, Eddy Li, Eric Wong and Kitty Wong - Copyright 2000)

ALVINN - Autonomous Land Vehicle In a Neural Network
