
Stanford University Machine Learning: Complete Collection of Problems and Answers (CS 229)


Document Summary

CS 229 Machine Learning (Problems and Answers), Stanford University

Table of Contents
(1) Assignment 1 (Supervised Learning)
(2) Assignment 1 Solutions (Supervised Learning)
(3) Assignment 2 (Kernels, SVMs, and Theory)
(4) Assignment 2 Solutions (Kernels, SVMs, and Theory)
(5) Assignment 3 (Learning Theory and Unsupervised Learning)
(6) Assignment 3 Solutions (Learning Theory and Unsupervised Learning)
(7) Assignment 4 (Unsupervised Learning and Reinforcement Learning)
(8) Assignment 4 Solutions (Unsupervised Learning and Reinforcement Learning)
(9) Problem Set #1: Supervised Learning
(10) Problem Set #1 Answers
(11) Problem Set #2: Naive Bayes, SVMs, and Theory
(12) Problem Set #2 Answers

CS 229, Public Course
Problem Set #1: Supervised Learning

1. Newton's method for computing least squares

In this problem, we will prove that if we use Newton's method to solve the least squares optimization problem, then we only need one iteration to converge to $\theta^\star$.

(a) Find the Hessian of the cost function $J(\theta) = \frac{1}{2} \sum_{i=1}^{m} (\theta^T x^{(i)} - y^{(i)})^2$.

(b) Show that the first iteration of Newton's method gives us $\theta^\star = (X^T X)^{-1} X^T \vec{y}$, the solution to our least squares problem. (A numerical sketch of this one-step convergence follows the problem statements below.)

2. Locally-weighted logistic regression

In this problem you will implement a locally-weighted version of logistic regression, where we weight different training examples differently according to the query point. The locally-weighted logistic regression problem is to maximize

$$\ell(\theta) = -\frac{\lambda}{2} \theta^T \theta + \sum_{i=1}^{m} w^{(i)} \left[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right].$$

The $-\frac{\lambda}{2} \theta^T \theta$ here is a regularization term, with regularization parameter $\lambda$, which will be discussed in a future lecture, but which we include here because it is needed for Newton's method to perform well on this task. For the entirety of this problem you can use the value $\lambda = 0.0001$.

Using this definition, the gradient of $\ell(\theta)$ is given by

$$\nabla_\theta \ell(\theta) = X^T z - \lambda \theta,$$

where $z \in \mathbb{R}^m$ is defined by $z_i = w^{(i)} \left( y^{(i)} - h_\theta(x^{(i)}) \right)$, and the Hessian is given by

$$H = X^T D X - \lambda I,$$

where $D \in \mathbb{R}^{m \times m}$ is a diagonal matrix with $D_{ii} = -w^{(i)} h_\theta(x^{(i)}) \left( 1 - h_\theta(x^{(i)}) \right)$.

For the sake of this problem you can just use the above formulas, but you should try to derive these results for yourself as well.
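Problem 1 admits a quick numerical check: since $J(\theta)$ is quadratic, its Hessian $X^T X$ is constant in $\theta$, so one Newton step from any starting point lands exactly on the normal-equations solution. The sketch below (not part of the original problem set; the data and variable names are illustrative) verifies this with NumPy.

```python
import numpy as np

# One Newton step on J(theta) = 0.5 * ||X theta - y||^2.
# Gradient: X^T (X theta - y); Hessian: X^T X (constant in theta).
rng = np.random.default_rng(0)
m, n = 50, 3
X = rng.normal(size=(m, n))
y = rng.normal(size=m)

theta0 = rng.normal(size=n)                 # any starting point works
grad = X.T @ (X @ theta0 - y)               # gradient of J at theta0
H = X.T @ X                                 # Hessian of J
theta1 = theta0 - np.linalg.solve(H, grad)  # single Newton update

theta_ls = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations
print(np.allclose(theta1, theta_ls))        # True: one iteration converges
```

This is exactly part (b): substituting the gradient and Hessian into the update gives $\theta_1 = \theta_0 - (X^T X)^{-1}(X^T X \theta_0 - X^T \vec{y}) = (X^T X)^{-1} X^T \vec{y}$, independent of $\theta_0$.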

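For Problem 2, here is a minimal sketch of the Newton iteration assembled from the quoted gradient and Hessian. The Gaussian weighting $w^{(i)} = \exp(-\|x - x^{(i)}\|^2 / (2\tau^2))$ is our assumption (this extract is cut off before the weights are defined; that form is the standard one in the course notes), and the function name lwlr_predict and the defaults tau=0.8 and n_iter=20 are illustrative, not from the original handout.

```python
import numpy as np

def lwlr_predict(X, y, x_query, tau=0.8, lam=1e-4, n_iter=20):
    """Locally-weighted logistic regression at a single query point.

    Maximizes l(theta) by Newton's method using the formulas quoted
    in the problem:
        grad = X^T z - lam * theta,  z_i = w_i (y_i - h_i)
        H    = X^T D X - lam * I,    D_ii = -w_i h_i (1 - h_i)
    The Gaussian weights w_i are an assumed form (see lead-in).
    """
    n = X.shape[1]
    # Weight each training example by its closeness to the query point.
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))

    theta = np.zeros(n)
    for _ in range(n_iter):
        margin = np.clip(X @ theta, -30, 30)   # avoid overflow in exp
        h = 1.0 / (1.0 + np.exp(-margin))      # h_theta(x_i)
        grad = X.T @ (w * (y - h)) - lam * theta
        H = X.T @ np.diag(-w * h * (1.0 - h)) @ X - lam * np.eye(n)
        theta = theta - np.linalg.solve(H, grad)  # Newton update

    # Label the query point 1 when h_theta(x_query) > 0.5.
    return float(x_query @ theta > 0)

# Toy usage: points labeled by the sign of x1 + x2.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
print(lwlr_predict(X, y, np.array([0.5, 0.5])))  # expected: 1.0
```

Because $D$ has non-positive diagonal, $H = X^T D X - \lambda I$ is negative definite, so $-H^{-1} \nabla_\theta \ell(\theta)$ is an ascent direction and $H$ stays invertible even when the query point is far from every training example; this is one practical reason the problem insists on $\lambda > 0$.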