Neural_Network_1 (English-language self-study material on neural networks)
1  Introduction

2  Course Objectives
This course gives an introduction to basic neural network architectures and learning rules. Emphasis is placed on the mathematical analysis of these networks, on methods of training them, and on their application to practical engineering problems in such areas as pattern recognition, signal processing and control systems.

3  What Will Not Be Covered
- Review of all architectures and learning rules
- Implementation (VLSI, optical, parallel computers)
- Biology
- Psychology

4  Historical Sketch
- Pre-1940: von Helmholtz, Mach, Pavlov, etc.
  General theories of learning, vision and conditioning; no specific mathematical models of neuron operation.
- 1940s: Hebb, McCulloch and Pitts
  Mechanism for learning in biological neurons; neural-like networks can compute any arithmetic function.
- 1950s: Rosenblatt, Widrow and Hoff
  First practical networks and learning rules.
- 1960s: Minsky and Papert
  Demonstrated limitations of existing neural networks; new learning algorithms were not forthcoming, and some research was suspended.
- 1970s: Amari, Anderson, Fukushima, Grossberg, Kohonen
  Progress continues, although at a slower pace.
- 1980s: Grossberg, Hopfield, Kohonen, Rumelhart, etc.
  Important new developments cause a resurgence in the field.

5  Applications
- Aerospace: high-performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors
- Automotive: automobile automatic guidance systems, warranty activity analyzers
- Banking: check and other document readers, credit application evaluators
- Defense: weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification
- Electronics: code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

6  Applications
- Financial: real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading program, corporate financial analysis, currency price prediction
- Manufacturing: manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems
- Medical: breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement

7  Applications
- Robotics: trajectory control, forklift robot, manipulator controllers, vision systems
- Speech: speech recognition, speech compression, vowel classification, text-to-speech synthesis
- Securities: market analysis, automatic bond rating, stock trading advisory systems
- Telecommunications: image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems
- Transportation: truck brake diagnosis systems, vehicle scheduling, routing systems

8  Biology
- Neurons respond slowly: $10^{-3}$ s, compared to $10^{-9}$ s for electrical circuits
- The brain uses massively parallel computation: $10^{11}$ neurons in the brain, $10^4$ connections per neuron

9  Neuron Model and Network Architectures

10  Single-Input Neuron

11  Transfer Functions

12  Transfer Functions

13  Multiple-Input Neuron; Abbreviated Notation

14  Layer of Neurons

15  Abbreviated Notation (see the code sketch after slide 18 below)
$W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,R} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,R} \\ \vdots & \vdots & & \vdots \\ w_{S,1} & w_{S,2} & \cdots & w_{S,R} \end{bmatrix}, \quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_S \end{bmatrix}, \quad p = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_R \end{bmatrix}, \quad a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_S \end{bmatrix}$

16  Multilayer Network

17  Abbreviated Notation: Hidden Layers, Output Layer

18  Delays and Integrators
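The single-input neuron, the transfer functions, and the layer notation above all reduce to one computation: a weighted sum passed through a transfer function, $a = f(Wp + b)$. The sketch below is not from the slides; it assumes standard definitions for the transfer-function names used in this material (hardlim, hardlims, purelin, poslin, satlins) and purely illustrative values for W, b and p.

```python
import numpy as np

# Transfer functions named as in this material; the definitions follow the
# standard conventions for these names (an assumption, the slides only list them).
def hardlim(n):  return np.where(n >= 0, 1.0, 0.0)    # step: 0 or 1
def hardlims(n): return np.where(n >= 0, 1.0, -1.0)   # symmetric step: -1 or 1
def purelin(n):  return n                             # linear
def poslin(n):   return np.maximum(0.0, n)            # positive linear
def satlins(n):  return np.clip(n, -1.0, 1.0)         # symmetric saturating linear

def layer(W, b, p, f):
    """One layer of S neurons with R inputs: a = f(W p + b)."""
    return f(W @ p + b)

# Illustrative numbers: S = 2 neurons, R = 3 inputs.
W = np.array([[1.0, 2.0, -1.0],
              [0.5, 0.0,  1.0]])
b = np.array([0.0, -1.0])
p = np.array([1.0, -1.0, 1.0])
print(layer(W, b, p, hardlims))   # -> [-1.  1.]
```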

19  Recurrent Network
$a(1) = \mathrm{satlins}(W a(0) + b) = \mathrm{satlins}(W p + b)$
$a(2) = \mathrm{satlins}(W a(1) + b)$

20  An Illustrative Example

21  Apple/Banana Sorter

22  Prototype Vectors
Measurement vector: $p = \begin{bmatrix} \text{shape} \\ \text{texture} \\ \text{weight} \end{bmatrix}$
- Shape: 1 = round; -1 = elliptical
- Texture: 1 = smooth; -1 = rough
- Weight: 1 = > 1 lb.; -1 = < 1 lb.
Prototype banana: $p_1 = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}$; prototype apple: $p_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}$

23  Perceptron

24  Two-Input Case
$a = \mathrm{hardlims}(n) = \mathrm{hardlims}\left(\begin{bmatrix} 1 & 2 \end{bmatrix} p + b\right)$, with $w_{1,1} = 1$, $w_{1,2} = 2$
Decision boundary: $Wp + b = 0$, i.e. $\begin{bmatrix} 1 & 2 \end{bmatrix} p + b = 0$
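For the two-input case, which side of the decision boundary an input falls on is just the sign of $Wp + b$. The sketch below uses the slide's weights $w_{1,1} = 1$, $w_{1,2} = 2$; the bias value $b = -2$ is an assumption made here so the boundary $p_1 + 2p_2 - 2 = 0$ has a concrete position, since the slide's bias is not recoverable from this text.

```python
import numpy as np

def hardlims(n):
    return np.where(n >= 0, 1.0, -1.0)

# Weights from the two-input case: w11 = 1, w12 = 2.
# The bias is an assumed value; with b = -2 the boundary is p1 + 2*p2 - 2 = 0.
W = np.array([[1.0, 2.0]])
b = np.array([-2.0])

for p in (np.array([2.0, 2.0]),    # on the positive side -> a = +1
          np.array([0.0, 0.0])):   # on the negative side -> a = -1
    print(p, "->", hardlims(W @ p + b))
```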

25  Apple/Banana Example
$a = \mathrm{hardlims}\left(\begin{bmatrix} w_{1,1} & w_{1,2} & w_{1,3} \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix} + b\right)$
The decision boundary should separate the prototype vectors: $p_1 = 0$, i.e. $\begin{bmatrix} 1 & 0 & 0 \end{bmatrix} p + 0 = 0$.
The weight vector should be orthogonal to the decision boundary, and should point in the direction of the vector which should produce an output of 1. The bias determines the position of the boundary.

26  Testing the Network
Banana: $a = \mathrm{hardlims}\left(\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix} + 0\right) = -1$ (banana)
Apple: $a = \mathrm{hardlims}\left(\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + 0\right) = 1$ (apple)
"Rough" banana: $a = \mathrm{hardlims}\left(\begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} + 0\right) = -1$ (banana)

27  Hamming Network

28  Feedforward Layer
For banana/apple recognition ($S = 2$):
$W^1 = \begin{bmatrix} p_1^T \\ p_2^T \end{bmatrix} = \begin{bmatrix} -1 & 1 & -1 \\ 1 & 1 & -1 \end{bmatrix}, \quad b^1 = \begin{bmatrix} R \\ R \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix}$
$a^1 = W^1 p + b^1 = \begin{bmatrix} p_1^T p + 3 \\ p_2^T p + 3 \end{bmatrix}$

29  Recurrent Layer
$W^2 = \begin{bmatrix} 1 & -\varepsilon \\ -\varepsilon & 1 \end{bmatrix}, \quad \varepsilon < \dfrac{1}{S-1}$
$a^2(t+1) = \mathrm{poslin}(W^2 a^2(t)) = \mathrm{poslin}\left(\begin{bmatrix} a_1^2(t) - \varepsilon a_2^2(t) \\ a_2^2(t) - \varepsilon a_1^2(t) \end{bmatrix}\right)$

30  Hamming Operation: First Layer
Input ("rough" banana): $p = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}$
$a^1 = \begin{bmatrix} -1 & 1 & -1 \\ 1 & 1 & -1 \end{bmatrix}\begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix} + \begin{bmatrix} 3 \\ 3 \end{bmatrix} = \begin{bmatrix} 1 + 3 \\ -1 + 3 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}$
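The two layers of the Hamming network above can be coded directly: the feedforward layer scores the input against each prototype by inner product, and the recurrent layer iterates the competition until only one neuron stays positive. This is a sketch, assuming $\varepsilon = 0.5$ (consistent with the worked numbers in this material) and an "iterate until a single winner" stopping rule of my own.

```python
import numpy as np

def poslin(n):
    return np.maximum(0.0, n)

# Prototype vectors from this material: banana p1, apple p2.
p1 = np.array([-1.0, 1.0, -1.0])   # banana
p2 = np.array([ 1.0, 1.0, -1.0])   # apple
R = 3                              # elements per prototype

# Feedforward layer: rows of W1 are the prototypes, bias = R.
W1 = np.vstack([p1, p2])
b1 = np.array([R, R], dtype=float)

# Recurrent (competition) layer; eps = 0.5 satisfies eps < 1/(S-1) for S = 2.
eps = 0.5
W2 = np.array([[1.0, -eps],
               [-eps, 1.0]])

p = np.array([-1.0, -1.0, -1.0])    # "rough" banana input
a = W1 @ p + b1                     # first layer: [4, 2]
while np.count_nonzero(a > 0) > 1:  # compete until one winner remains
    a = poslin(W2 @ a)
print(a)                            # -> [3. 0.]  (neuron 1 wins: banana)
```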

31  Hamming Operation: Second Layer
$a^2(1) = \mathrm{poslin}(W^2 a^2(0)) = \mathrm{poslin}\left(\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}\begin{bmatrix} 4 \\ 2 \end{bmatrix}\right) = \mathrm{poslin}\left(\begin{bmatrix} 3 \\ 0 \end{bmatrix}\right) = \begin{bmatrix} 3 \\ 0 \end{bmatrix}$
$a^2(2) = \mathrm{poslin}(W^2 a^2(1)) = \mathrm{poslin}\left(\begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}\begin{bmatrix} 3 \\ 0 \end{bmatrix}\right) = \mathrm{poslin}\left(\begin{bmatrix} 3 \\ -1.5 \end{bmatrix}\right) = \begin{bmatrix} 3 \\ 0 \end{bmatrix}$

32  Hopfield Network

33  Apple/Banana Problem (see the code sketch after the summary below)
$W = \begin{bmatrix} 1.2 & 0 & 0 \\ 0 & 0.2 & 0 \\ 0 & 0 & 0.2 \end{bmatrix}, \quad b = \begin{bmatrix} 0 \\ 0.9 \\ -0.9 \end{bmatrix}$
$a_1(t+1) = \mathrm{satlins}(1.2\, a_1(t))$
$a_2(t+1) = \mathrm{satlins}(0.2\, a_2(t) + 0.9)$
$a_3(t+1) = \mathrm{satlins}(0.2\, a_3(t) - 0.9)$
Test, "rough" banana: $a(0) = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}, \; a(1) = \begin{bmatrix} -1 \\ 0.7 \\ -1 \end{bmatrix}, \; a(2) = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}, \; a(3) = \begin{bmatrix} -1 \\ 1 \\ -1 \end{bmatrix}$ (banana)

34  Summary
- Perceptron
  - Feedforward network
  - Linear decision boundary
  - One neuron for each decision
- Hamming Network
  - Competitive network
  - First layer: pattern matching (inner product)
  - Second layer: competition (winner-take-all)
  - # neurons = # prototype patterns
- Hopfield Network
  - Dynamic associative memory network
  - Network output converges to a prototype pattern
  - # neurons = # elements in each prototype pattern
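The Hopfield recurrence in the apple/banana problem above is a single satlins layer fed back on itself. A minimal sketch, using the W and b given there, that reproduces the "rough" banana trajectory from a(0) to a(3):

```python
import numpy as np

def satlins(n):
    return np.clip(n, -1.0, 1.0)

# Hopfield weights and biases for the apple/banana problem above.
W = np.diag([1.2, 0.2, 0.2])
b = np.array([0.0, 0.9, -0.9])

a = np.array([-1.0, -1.0, -1.0])   # a(0): the "rough" banana measurement
for t in range(1, 4):
    a = satlins(W @ a + b)         # a(t+1) = satlins(W a(t) + b)
    print(t, a)
# t=1: [-1.   0.7 -1. ]
# t=2: [-1.   1.  -1. ]
# t=3: [-1.   1.  -1. ]  -> converged to the banana prototype
```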

35  Perceptron Learning Rule

36  Learning Rules
Training set: $\{p_1, t_1\}, \{p_2, t_2\}, \ldots, \{p_Q, t_Q\}$
- Supervised learning: the network is provided with a set of examples of proper network behavior (inputs/targets).
- Reinforcement learning: the network is only provided with a grade, or score, which indicates network performance.
- Unsupervised learning: only network inputs are available to the learning algorithm; the network learns to categorize (cluster) the inputs.

37  Perceptron Architecture
$W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,R} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,R} \\ \vdots & \vdots & & \vdots \\ w_{S,1} & w_{S,2} & \cdots & w_{S,R} \end{bmatrix}, \quad {}_i w = \begin{bmatrix} w_{i,1} \\ w_{i,2} \\ \vdots \\ w_{i,R} \end{bmatrix}, \quad W = \begin{bmatrix} {}_1 w^T \\ {}_2 w^T \\ \vdots \\ {}_S w^T \end{bmatrix}$
$a_i = \mathrm{hardlim}(n_i) = \mathrm{hardlim}({}_i w^T p + b_i)$

38  Single-Neuron Perceptron
$a = \mathrm{hardlim}({}_1 w^T p + b) = \mathrm{hardlim}(w_{1,1} p_1 + w_{1,2} p_2 + b)$, with $w_{1,1} = 1$, $w_{1,2} = 1$, $b = -1$

39  Decision Boundary
$ {}_1 w^T p + b = 0, \qquad {}_1 w^T p = -b$
All points on the decision boundary have the same inner product with the weight vector. Therefore they have the same projection onto the weight vector, and they must lie on a line orthogonal to the weight vector.

40  Example: OR
$\left\{ p_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, t_1 = 0 \right\}, \;
\left\{ p_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, t_2 = 1 \right\}, \;
\left\{ p_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, t_3 = 1 \right\}, \;
\left\{ p_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, t_4 = 1 \right\}$

41  OR Solution
The weight vector should be orthogonal to the decision boundary: ${}_1 w = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}$
Pick a point on the decision boundary to find the bias:
$ {}_1 w^T p + b = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix} \begin{bmatrix} 0 \\ 0.5 \end{bmatrix} + b = 0.25 + b = 0 \;\Rightarrow\; b = -0.25$
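The OR solution can be checked directly: with ${}_1 w = [0.5, 0.5]$ and $b = -0.25$, hardlim should reproduce all four targets. A short sketch of that check:

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1.0, 0.0)

# OR training pairs from the slide.
P = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 1, 1, 1], dtype=float)

# Solution from the slide: weight vector orthogonal to the boundary,
# bias found from the boundary point [0, 0.5].
w = np.array([0.5, 0.5])
b = -0.25

for p, t in zip(P, T):
    a = hardlim(w @ p + b)
    print(p, "target", t, "output", a)   # every output matches its target
```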

42  Multiple-Neuron Perceptron
Each neuron will have its own decision boundary: ${}_i w^T p + b_i = 0$
A single neuron can classify input vectors into two categories. A multi-neuron perceptron with S neurons can classify input vectors into $2^S$ categories.

43  Learning Rule Test Problem
$\left\{ p_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, t_1 = 1 \right\}, \;
\left\{ p_2 = \begin{bmatrix} -1 \\ 2 \end{bmatrix}, t_2 = 0 \right\}, \;
\left\{ p_3 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, t_3 = 0 \right\}$

44  Starting Point
Random initial weight: ${}_1 w = \begin{bmatrix} 1.0 \\ -0.8 \end{bmatrix}$
Present $p_1$ to the network:
$a = \mathrm{hardlim}({}_1 w^T p_1) = \mathrm{hardlim}\left(\begin{bmatrix} 1.0 & -0.8 \end{bmatrix}\begin{bmatrix} 1 \\ 2 \end{bmatrix}\right) = \mathrm{hardlim}(-0.6) = 0$
Incorrect classification.

45  Tentative Learning Rule
Setting ${}_1 w$ to $p_1$ is not stable; instead, add $p_1$ to ${}_1 w$.
Tentative rule: if $t = 1$ and $a = 0$, then ${}_1 w^{new} = {}_1 w^{old} + p$
$ {}_1 w^{new} = {}_1 w^{old} + p_1 = \begin{bmatrix} 1.0 \\ -0.8 \end{bmatrix} + \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2.0 \\ 1.2 \end{bmatrix}$

46  Second Input Vector
$a = \mathrm{hardlim}({}_1 w^T p_2) = \mathrm{hardlim}\left(\begin{bmatrix} 2.0 & 1.2 \end{bmatrix}\begin{bmatrix} -1 \\ 2 \end{bmatrix}\right) = \mathrm{hardlim}(0.4) = 1$ (incorrect classification)
Modification to the rule: if $t = 0$ and $a = 1$, then ${}_1 w^{new} = {}_1 w^{old} - p$
$ {}_1 w^{new} = \begin{bmatrix} 2.0 \\ 1.2 \end{bmatrix} - \begin{bmatrix} -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3.0 \\ -0.8 \end{bmatrix}$

47  Third Input Vector
$a = \mathrm{hardlim}({}_1 w^T p_3) = \mathrm{hardlim}\left(\begin{bmatrix} 3.0 & -0.8 \end{bmatrix}\begin{bmatrix} 0 \\ -1 \end{bmatrix}\right) = \mathrm{hardlim}(0.8) = 1$ (incorrect classification)
$ {}_1 w^{new} = {}_1 w^{old} - p_3 = \begin{bmatrix} 3.0 \\ -0.8 \end{bmatrix} - \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 3.0 \\ 0.2 \end{bmatrix}$
If $t = a$, then ${}_1 w^{new} = {}_1 w^{old}$. The patterns are now correctly classified.

48  Unified Learning Rule
If $t = 1$ and $a = 0$, then ${}_1 w^{new} = {}_1 w^{old} + p$
If $t = 0$ and $a = 1$, then ${}_1 w^{new} = {}_1 w^{old} - p$
If $t = a$, then ${}_1 w^{new} = {}_1 w^{old}$
Define $e = t - a$:
If $e = 1$, then ${}_1 w^{new} = {}_1 w^{old} + p$
If $e = -1$, then ${}_1 w^{new} = {}_1 w^{old} - p$
If $e = 0$, then ${}_1 w^{new} = {}_1 w^{old}$
All three cases: ${}_1 w^{new} = {}_1 w^{old} + e\,p = {}_1 w^{old} + (t - a)\,p, \qquad b^{new} = b^{old} + e$
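The unified rule turns into a very small training loop. The sketch below applies $w^{new} = w^{old} + e\,p$ and $b^{new} = b^{old} + e$ to the three-pattern test problem above; the initial weight $[1.0, -0.8]$ is the slide's, while starting the bias at 0 and cycling until an error-free pass are choices made here.

```python
import numpy as np

def hardlim(n):
    return 1.0 if n >= 0 else 0.0

# Test problem from the slides: three input/target pairs.
P = [np.array([1.0, 2.0]), np.array([-1.0, 2.0]), np.array([0.0, -1.0])]
T = [1.0, 0.0, 0.0]

w = np.array([1.0, -0.8])   # initial weight from the slides
b = 0.0                     # assumed initial bias (not given in the slides)

# Cycle through the patterns, applying the unified perceptron rule,
# until a full pass produces no errors.
while True:
    errors = 0
    for p, t in zip(P, T):
        a = hardlim(w @ p + b)
        e = t - a
        w = w + e * p           # w_new = w_old + e p
        b = b + e               # b_new = b_old + e
        errors += int(e != 0)
    if errors == 0:
        break

print(w, b)                     # converges to w = [3.0, 0.2]
for p, t in zip(P, T):
    print(p, "target", t, "output", hardlim(w @ p + b))
```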
