Harbin Institute of Technology (Shenzhen), 2008 Machine Learning Exam with Answers
1 Give the definitions or your comprehensions of the following terms. (12)
1.1 The inductive learning hypothesis (P17)
1.2 Overfitting (P49)
1.4 Consistent learner (P148)

2 Give brief answers to the following questions. (15)
2.2 If the size of a version space is |VS|, in general what is the smallest number of queries that may be required by a concept learner using an optimal query strategy to perfectly learn the target concept? (P27)
2.3 In general, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. What expression does the following decision tree correspond to?

[Decision tree: OutLook is the root, with branches Sunny, Overcast and Rain. The Sunny branch leads to a Humidity test (High -> No, Normal -> Yes); the Overcast branch leads directly to Yes; the Rain branch leads to a Wind test (Strong -> No, Weak -> Yes).]

3 Give the explanation of inductive bias, and list the inductive bias of the CANDIDATE-ELIMINATION algorithm, of decision tree learning (ID3), and of the BACKPROPAGATION algorithm. (10)

4 How to solve overfitting in decision trees and neural networks? (10)
Solution:
- Decision tree: stop growing the tree earlier; post-pruning.
- Neural network: weight decay; use of a validation set.

5 Prove that the LMS weight update rule performs a gradient descent to minimize the squared error. In particular, define the squared error E as in the text. Now calculate the derivative of E with respect to the weight w_i, assuming that V̂(b) is a linear function as defined in the text. Gradient descent is achieved by updating each weight in proportion to -∂E/∂w_i. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters. (8)
Solution:
As V_train(b) ← V̂(Successor(b)), we can write
E = Σ_⟨b, V_train(b)⟩ (V_train(b) - V̂(b))².
Since V̂(b) = w_0 + w_1 x_1 + ... + w_6 x_6 is linear in the weights, ∂V̂(b)/∂w_i = x_i, and therefore
∂E/∂w_i = Σ 2 (V_train(b) - V̂(b)) (-x_i) = -2 Σ (V_train(b) - V̂(b)) x_i.
As mentioned for LMS, each weight is updated by w_i ← w_i + η (V_train(b) - V̂(b)) x_i, which is proportional to -∂E/∂w_i (the constant factor 2 is absorbed into the learning rate η). Therefore gradient descent is achieved by updating each weight in proportion to -∂E/∂w_i; the LMS rule alters weights in this proportion for each training example it encounters.

6 True or false: if decision tree D2 is an elaboration of tree D1, then D1 is more-general-than D2.
Assume D1 and D2 are decision trees representing arbitrary boolean functions, and that D2 is an elaboration of D1 if ID3 could extend D1 to D2. If true, give a proof; if false, a counterexample.
(Definition: let h_j and h_k be boolean-valued functions defined over X. Then h_j is more_general_than_or_equal_to h_k (written h_j ≥_g h_k) if and only if (∀x ∈ X)[(h_k(x) = 1) → (h_j(x) = 1)].) (10)
Solution:
The hypothesis is false. One counterexample uses the target concept A XOR B: the training examples are all positive when A != B and all negative when A = B. Let D1 be the tree that tests only A (A = 1 -> Yes, A = 0 -> No). Using ID3 to extend D1 on these examples, the new tree D2 represents A XOR B. D2 then classifies the instance (A = 0, B = 1) as positive while D1 classifies it as negative, so D1 is not more-general-than D2.

7 Design a two-input perceptron that implements the boolean function … . Design a two-layer network of perceptrons that implements … . (10)

8 Suppose a hypothesis space contains three hypotheses h1, h2 and h3, and the posterior probabilities of these hypotheses given the training data are 0.4, 0.3 and 0.3 respectively. If a new instance x is encountered, which is classified positive by h1 but negative by h2 and h3, give the result and the detailed classification course of the Bayes optimal classifier. (10) (P125)

9 Suppose S is a collection of training-example days described by attributes including Humidity, which can have the values High or Normal. Assume S contains 10 examples, [7+, 3-]. Of these 10 examples, suppose 3 of the positive and 2 of the negative examples have Humidity = High, and the remainder have Humidity = Normal. Calculate the information gain due to sorting the original 10 examples by the attribute Humidity. (log2 1 = 0, log2 2 = 1, log2 3 = 1.58, log2 4 = 2, log2 5 = 2.32, log2 6 = 2.58, log2 7 = 2.8, log2 8 = 3, log2 9 = 3.16, log2 10 = 3.32) (5)
Solution:
(a) Denote S = [7+, 3-]. Then
Entropy([7+, 3-]) = -(7/10) log2(7/10) - (3/10) log2(3/10) = 0.7 (3.32 - 2.8) + 0.3 (3.32 - 1.58) = 0.886.
(b) Gain(S, Humidity): Values(Humidity) = {High, Normal}; S_High = [3+, 2-] with |S_High| = 5, and S_Normal = [4+, 1-] with |S_Normal| = 5, giving Entropy(S_High) = 0.6 (2.32 - 1.58) + 0.4 (2.32 - 1) = 0.972 and Entropy(S_Normal) = 0.8 (2.32 - 2) + 0.2 (2.32 - 0) = 0.72. Thus
Gain(S, Humidity) = 0.886 - (5/10) x 0.972 - (5/10) x 0.72 = 0.04.

10 Finish the following algorithm. (10)
(1) GRADIENT-DESCENT(training_examples, η)
Each training example is a pair of the form ⟨x, t⟩, where x is the vector of input values and t is the target output value.
η is the learning rate (e.g., 0.05).
- Initialize each w_i to some small random value
- Until the termination condition is met, Do
  - Initialize each Δw_i to zero.
  - For each ⟨x, t⟩ in training_examples, Do
    - Input the instance x to the unit and compute the output o
    - For each linear unit weight w_i, Do ____
  - For each linear unit weight w_i, Do ____

(2) FIND-S algorithm
- Initialize h to the most specific hypothesis in H
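The two boolean functions in question 7 are lost in this copy; in the matching exercise of Mitchell's text (Exercise 4.2) they are A AND (NOT B) for the single perceptron and A XOR B for the two-layer network. Assuming those targets, one possible weight assignment is sketched below; the particular weights and thresholds are my own choice, one of many valid solutions.

```python
# Threshold (step) perceptrons; weights/thresholds are one valid choice, not unique.

def perceptron(weights, bias):
    """Return a unit that fires (outputs 1) when bias + w . x > 0."""
    return lambda *xs: int(bias + sum(w * x for w, x in zip(weights, xs)) > 0)

# Single perceptron for A AND (NOT B)  (assumed target, per Mitchell Ex. 4.2).
and_not_b = perceptron([1.0, -1.0], -0.5)

# Two-layer network for A XOR B: XOR = (A OR B) AND (A NAND B).
or_gate   = perceptron([1.0, 1.0], -0.5)
nand_gate = perceptron([-1.0, -1.0], 1.5)
and_gate  = perceptron([1.0, 1.0], -1.5)
xor       = lambda a, b: and_gate(or_gate(a, b), nand_gate(a, b))

print([and_not_b(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 1, 0]
print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])        # [0, 1, 1, 0]
```

XOR is not linearly separable, which is why the second design needs a hidden layer of two units feeding a third.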
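For question 8, the Bayes optimal classifier sums posterior weight over the hypotheses voting for each class and picks the class with the larger total. A minimal numeric check with the exam's values (variable names are mine):

```python
# Bayes optimal classification for question 8 (posteriors from the exam).
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
votes = {"h1": "+", "h2": "-", "h3": "-"}   # h1 votes positive; h2 and h3 vote negative

p_pos = sum(p for h, p in posteriors.items() if votes[h] == "+")   # P(+|D) = 0.4
p_neg = sum(p for h, p in posteriors.items() if votes[h] == "-")   # P(-|D) = 0.6
prediction = "+" if p_pos > p_neg else "-"
print(prediction)   # prints "-": the Bayes optimal classification is negative
```

Note the optimal classification (negative, weight 0.6) disagrees with the MAP hypothesis h1 (posterior 0.4), which is the point of the exercise.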
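The information-gain arithmetic of question 9 can be checked with exact logarithms instead of the exam's two-decimal log table (function and variable names here are mine): the exact entropy of [7+, 3-] is about 0.881 and the gain about 0.035, which the rounded table values turn into 0.886 and 0.04.

```python
from math import log2

def entropy(pos, neg):
    """Entropy (bits) of a boolean-labelled sample with pos positive, neg negative examples."""
    total = pos + neg
    return -sum(c / total * log2(c / total) for c in (pos, neg) if c)

# S = [7+, 3-]; Humidity = High covers [3+, 2-], Humidity = Normal covers [4+, 1-].
e_s = entropy(7, 3)
gain = e_s - (5 / 10) * entropy(3, 2) - (5 / 10) * entropy(4, 1)
print(round(e_s, 3), round(gain, 3))   # prints 0.881 0.035
```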
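One way to fill the two blanks in the GRADIENT-DESCENT skeleton of question 10 is the batch rule of Mitchell's Table 4.1: Δw_i ← Δw_i + η(t - o)x_i inside the example loop, then w_i ← w_i + Δw_i after it. A runnable sketch under that assumption (the toy dataset and zero initialization, used here for reproducibility instead of small random values, are mine):

```python
# Batch gradient descent for a linear unit, in the style of Mitchell's Table 4.1.

def gradient_descent(training_examples, eta=0.05, epochs=2000):
    """training_examples: list of (x, t) pairs; each x includes x[0] = 1 as the bias input."""
    n = len(training_examples[0][0])
    w = [0.0] * n                          # zeros instead of small random values, for reproducibility
    for _ in range(epochs):
        delta_w = [0.0] * n                # initialize each Delta w_i to zero
        for x, t in training_examples:
            o = sum(wi * xi for wi, xi in zip(w, x))   # linear unit output
            for i in range(n):
                delta_w[i] += eta * (t - o) * x[i]     # Delta w_i <- Delta w_i + eta (t - o) x_i
        for i in range(n):
            w[i] += delta_w[i]                         # w_i <- w_i + Delta w_i
    return w

# Learn t = 1 + 2*x from noiseless examples (x[0] = 1 is the bias input).
data = [([1.0, x], 1.0 + 2.0 * x) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w = gradient_descent(data)
print(w)   # converges to approximately [1.0, 2.0]
```

With eta = 0.05 this converges; a much larger learning rate would make the batch updates diverge on the same data.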
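The FIND-S listing is cut off in this copy after its first step. The remaining steps, as given in chapter 2 of Mitchell's text, generalize h only on positive examples; a sketch (the dataset and attribute values below are illustrative, not from the exam):

```python
# FIND-S for conjunctive hypotheses over discrete attributes (Mitchell, ch. 2).
# "0" marks the maximally specific constraint, "?" the maximally general one.

def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs, label True for positive."""
    n = len(examples[0][0])
    h = ["0"] * n                      # initialize h to the most specific hypothesis in H
    for x, positive in examples:
        if not positive:
            continue                   # FIND-S ignores negative examples
        for i, a in enumerate(x):
            if h[i] == "0":
                h[i] = a               # first positive example: adopt its attribute values
            elif h[i] != a:
                h[i] = "?"             # generalize each constraint the example violates
    return h

# EnjoySport-style toy data (attribute values illustrative).
data = [
    (("Sunny", "Warm", "Normal", "Strong"), True),
    (("Sunny", "Warm", "High", "Strong"), True),
    (("Rainy", "Cold", "High", "Strong"), False),
]
print(find_s(data))   # prints ['Sunny', 'Warm', '?', 'Strong']
```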
