AI & Data Mining teaching slides, lect-7-13

2020/10/7, AI & DM

Perceptron Learning (cont.)
- Output: Y = 0 if the weighted sum of the inputs is below the threshold; otherwise Y = 1
- Error: Delta = Z - Y
- Weight update: Wi(final) = Wi(initial) + Alpha * Delta * Xi
- Initial parameters: learning rate Alpha = 0.2; threshold = 0.5; initial weights 0.1, 0.3
- Notes: weights are initially random; the learning rate Alpha is set low at first

Processing Information in an Artificial Neuron
- Inputs x1, x2 enter neuron j through weights w1j, w2j
- Summation: the net input is the weighted sum of the inputs, over i of wij * xi
- Transfer function: converts the net input into the output Yj

Content
- What & why ANN (8.1 Feed-forward Neural Network)
- How ANN works: the working principle (8.2.1 Supervised Learning)
- The most popular ANN: the Backpropagation Network (8.5.1 The Backpropagation Algorithm: An Example)
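The threshold rule and delta-rule update above can be sketched in a few lines of Python. Alpha = 0.2, threshold = 0.5, and the initial weights 0.1, 0.3 come from the slide; the training instance and target are hypothetical.

```python
# Perceptron sketch: Y = 0 if the weighted sum is below the threshold,
# otherwise Y = 1; each weight moves by Alpha * Delta * Xi.

ALPHA = 0.2       # learning rate (from the slide)
THRESHOLD = 0.5   # firing threshold (from the slide)

def output(weights, x):
    """Threshold activation: 1 if the weighted sum reaches the threshold."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= THRESHOLD else 0

def train_step(weights, x, z):
    """One delta-rule update: Wi(final) = Wi(initial) + Alpha * Delta * Xi."""
    y = output(weights, x)
    delta = z - y                      # Delta = Z - Y
    return [w + ALPHA * delta * xi for w, xi in zip(weights, x)]

weights = [0.1, 0.3]                   # initial weights from the slide
x, z = [1, 1], 1                       # hypothetical instance and target
# weighted sum = 0.1 + 0.3 = 0.4 < 0.5, so Y = 0 and Delta = 1
weights = train_step(weights, x, z)
print([round(w, 2) for w in weights])  # [0.3, 0.5]
```

Because the neuron fires 0 while the target is 1, both weights grow by Alpha * Xi, nudging the weighted sum toward the threshold.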
3. Back-propagation Network
Network topology:
- Multi-layer: input layer, hidden layer(s), output layer
- Fully connected
- Feed-forward
- Error back-propagation
- Weights initialized with random values

Back-propagation Network (diagram)
- Input nodes take the input vector xi; weights wij lead to the hidden nodes; the hidden nodes feed the output nodes, which produce the output vector
3. Back-propagation Network
For each node:
1. Compute the net input to the unit using the summation function
2. Compute the output value using the activation function (i.e. the sigmoid function)
3. Compute the error
4. Update the weights (and the bias) based on the error
5. Check the terminating conditions:
- all weight changes wij in the previous epoch were so small as to fall below some specified threshold, or
- the percentage of samples misclassified in the previous epoch is below some threshold, or
- a pre-specified number of epochs has expired
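Steps 1 to 4 for a single training instance can be sketched as follows. The tiny 2-2-1 network, its initial weights, and the instance are hypothetical; the sigmoid activation and the error-driven update follow the steps above.

```python
import math

def sigmoid(net):
    """Step 2: activation function f(net) = 1 / (1 + e^-net)."""
    return 1.0 / (1.0 + math.exp(-net))

def backprop_step(x, z, w_hidden, w_out, rate):
    """One forward/backward pass for a 2-2-1 network (steps 1-4)."""
    # Steps 1 + 2: net input and output for each hidden node
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    # Steps 1 + 2 for the single output node
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    # Step 3: error terms (the sigmoid derivative is y * (1 - y))
    d_out = y * (1 - y) * (z - y)
    d_hid = [hi * (1 - hi) * w * d_out for hi, w in zip(h, w_out)]
    # Step 4: update the weights based on the error
    w_out = [w + rate * d_out * hi for w, hi in zip(w_out, h)]
    w_hidden = [[w + rate * d * xi for w, xi in zip(row, x)]
                for row, d in zip(w_hidden, d_hid)]
    return w_hidden, w_out, y

x, z = [1.0, 0.0], 1.0                   # hypothetical instance and target
w_hidden = [[0.1, 0.2], [0.3, 0.4]]      # hypothetical initial weights
w_out = [0.5, 0.6]
w_hidden, w_out, y = backprop_step(x, z, w_hidden, w_out, rate=0.2)
```

Note that the hidden-layer error terms are computed with the old output weights before those weights are updated, which is the order the algorithm requires.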
Backpropagation Error: Output Layer
Backpropagation Error: Hidden Layer
The Delta Rule
Root Mean Squared Error
(four formula slides; the equations did not survive extraction)
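The equations on the four formula slides above were lost in extraction. For a sigmoid network with target Z, output Y, and learning rate r, the standard textbook forms, consistent with the algorithm steps above, are:

```latex
% Output-layer error term for node j (the sigmoid derivative is Y(1 - Y)):
\delta_j = Y_j (1 - Y_j)(Z_j - Y_j)

% Hidden-layer error term: fold back the errors of the nodes k fed by j:
\delta_j = Y_j (1 - Y_j) \sum_k \delta_k \, w_{jk}

% Delta rule: update each incoming weight with learning rate r:
w_{ij} \leftarrow w_{ij} + r \, \delta_j \, x_i

% Root mean squared error over n outputs:
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (Z_i - Y_i)^2}
```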
3. Back-propagation (cont.)
To increase network accuracy and training speed, adjust:
- Network topology: the number of nodes in the input layer; the number of hidden layers (usually one, no more than two); the number of nodes in each hidden layer; the number of nodes in the output layer
- The initial weights, the learning parameter, and the terminating condition
Training process:
- Feed the training instances
- Determine the output error
- Update the weights
- Repeat until the terminating condition is met
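The training process above, together with the three terminating conditions from the algorithm slide, can be sketched as an epoch loop. The single-layer threshold model and the three training instances are hypothetical stand-ins for the real network.

```python
# Epoch loop sketch: feed instances, determine the error, update the
# weights, and repeat until a terminating condition is met.

RATE = 0.2
MAX_EPOCHS = 500          # condition 3: pre-specified number of epochs
WEIGHT_EPS = 1e-4         # condition 1: all weight changes below this
MISCLASS_EPS = 0.0        # condition 2: misclassification rate at or below this

data = [([0.0, 1.0], 1), ([1.0, 0.0], 0), ([1.0, 1.0], 1)]  # hypothetical
weights = [0.1, 0.3]

def predict(w, x):
    """Threshold unit: fire 1 when the weighted sum reaches 0.5."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0.5 else 0

for epoch in range(MAX_EPOCHS):
    max_change, errors = 0.0, 0
    for x, z in data:
        delta = z - predict(weights, x)
        if delta != 0:
            errors += 1
        for i, xi in enumerate(x):
            change = RATE * delta * xi
            weights[i] += change
            max_change = max(max_change, abs(change))
    # terminating conditions from the slides
    if max_change < WEIGHT_EPS or errors / len(data) <= MISCLASS_EPS:
        break
```

On this toy data the loop stops after two epochs: the first epoch corrects one misclassified instance, and the second passes through with no errors and no weight changes.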
Supervised Learning with Feed-Forward Networks
- Backpropagation learning

Summary: Decisions the Builder Must Make
- Network topology: number of hidden layers, number of nodes in each layer, and feedback
- Learning algorithm
- Parameters: initial weights, learning rate
- Size of the training and test data
The structure and parameters determine the length of training time and the accuracy of the network.
Neural Network Input Format (normalization: categorical to numerical)
- All inputs and outputs must be numerical and lie between 0 and 1
- Categorical attributes, e.g. an attribute with 4 possible values:
  - Ordinal: set to 0, 0.33, 0.66, 1
  - Nominal: set to two binary inputs: 0,0 / 0,1 / 1,0 / 1,1
- Numerical attributes: scale into [0, 1] as (value - Min) / (Max - Min), the inverse of the output formula below

Neural Network Output Format
- Categorical attributes (numerical back to categorical): e.g. types coded 0 and 1; an output such as 0.45 is mapped to the nearer type
- Numerical attributes ([0, 1] back to the ordinary value): Min + X * (Max - Min)
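The input and output conversions above can be sketched as follows; the attribute names and ranges are hypothetical, while the formulas follow the slides.

```python
# Input/output conversion sketch for a neural network whose values
# must all lie in [0, 1].

def normalize(x, lo, hi):
    """Numerical attribute -> [0, 1]: (X - Min) / (Max - Min)."""
    return (x - lo) / (hi - lo)

def denormalize(x, lo, hi):
    """[0, 1] network output -> ordinary value: Min + X * (Max - Min)."""
    return lo + x * (hi - lo)

# Ordinal attribute with 4 possible values -> 0, 0.33, 0.66, 1
ORDINAL = {"low": 0.0, "medium": 0.33, "high": 0.66, "extreme": 1.0}

# Nominal attribute with 4 possible values -> two binary inputs
NOMINAL = {"red": (0, 0), "green": (0, 1), "blue": (1, 0), "grey": (1, 1)}

print(normalize(75, 0, 100))      # 0.75
print(denormalize(0.75, 0, 100))  # 75.0
```

The two numerical formulas are exact inverses of each other, so a value can round-trip through the network's [0, 1] representation without loss.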
Homework
- P264, Computational Questions, Question 2
- r = 0.5, Tk = 0.65; adjust all the weights for one epoch

Case Study
Example: Bankruptcy Prediction with Neural Networks
- Structure: three-layer network, back-propagation
- Training data: a small set of well-known financial ratios
- Data available on bankruptcy outcomes
- Supervised network
Architecture of the Bankruptcy Prediction Neural Network (diagram)
- Inputs X1 to X5; output classes: bankrupt = 0, not bankrupt = 1

Bankruptcy Prediction: Network Architecture
Five input nodes:
- X1: working capital / total assets
- X2: retained earnings / total assets
- X3: earnings before interest and taxes / total assets
- X4: market value of equity / total debt
- X5: sales / total assets
Single output node: the final classification for each firm, bankruptcy or nonbankruptcy
Development tool: NeuroShell

Development
- Three-layer network with back-error propagation (Turban, figure 17.12, p. 669)
- Continuous-valued input
- Single output node: 0 = bankrupt, 1 = not bankrupt (nonbankruptcy)
- Training data set: 129 firms
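The slides built this network in NeuroShell; purely as an illustration, a feed-forward pass through a three-layer network with the five ratio inputs and one sigmoid output might look like the sketch below. The hidden-layer size, the random weights, and the firm's ratio values are all assumptions, not from the slides.

```python
import math
import random

RATIOS = ["WC/TA", "RE/TA", "EBIT/TA", "MVE/TD", "Sales/TA"]  # X1..X5

def sigmoid(net):
    """Squash the net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def forward(x, w_hidden, w_out):
    """Feed-forward pass: 5 inputs -> hidden layer -> one output in (0, 1)."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sigmoid(sum(w * hi for w, hi in zip(w_out, h)))

random.seed(0)
n_hidden = 3  # hidden-layer size: an assumption, not stated on the slides
w_hidden = [[random.uniform(-1, 1) for _ in RATIOS] for _ in range(n_hidden)]
w_out = [random.uniform(-1, 1) for _ in range(n_hidden)]

firm = [0.12, 0.08, 0.05, 1.4, 0.9]   # hypothetical ratio values
score = forward(firm, w_hidden, w_out)
label = "not bankrupt (1)" if score >= 0.5 else "bankrupt (0)"
```

An untrained network like this produces an arbitrary score; the slides' point is that after supervised training on the 129 firms, the output approaches 0 for bankrupt firms and 1 for healthy ones.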
