NTU, Hung-yi Lee, Machine Learning (Bilibili video) lecture slides: Neural Networks and Deep Learning, SVM v6
Support Vector Machine

Outline: hinge loss (linear SVM), then the kernel method (dual representation, kernel trick); combining the two gives the SVM.

Binary Classification
• Step 1: Function set (model): $g(x)$ outputs $+1$ when $f(x) > 0$ and $-1$ when $f(x) < 0$; the labels are $\hat{y}^n \in \{+1, -1\}$.
• Step 2: Loss function: $L(f) = \sum_n \delta\big(g(x^n) \neq \hat{y}^n\big)$, the number of times $g$ gets incorrect results on the training data.
• Step 3: Training by gradient descent is difficult, because this ideal loss is a step function whose gradient is zero almost everywhere.

Step 2: Loss function
• Plot each candidate loss $l(f(x^n), \hat{y}^n)$ against $\hat{y}^n f(x^n)$: the larger this value, the smaller the loss should be. Since the ideal loss cannot be optimized by gradient descent, we minimize an approximation $L(f) = \sum_n l(f(x^n), \hat{y}^n)$ instead.
• Square loss: $l = \big(\hat{y}^n f(x^n) - 1\big)^2$. If $\hat{y}^n = 1$, $f(x^n)$ should be close to 1, giving $(f(x^n) - 1)^2$; if $\hat{y}^n = -1$, $f(x^n)$ should be close to $-1$, giving $(f(x^n) + 1)^2$. Unreasonable (不合理): a very large, confidently correct $\hat{y}^n f(x^n)$ is heavily penalized.
• Sigmoid + square loss: $l = \big(\sigma(\hat{y}^n f(x^n)) - 1\big)^2$. If $\hat{y}^n = 1$, $\sigma(f(x^n))$ should be close to 1, giving $(\sigma(f(x^n)) - 1)^2$; if $\hat{y}^n = -1$, $\sigma(f(x^n))$ should be close to 0, giving $\sigma(f(x^n))^2$.
• Sigmoid + cross entropy (logistic regression): $l = \ln\big(1 + \exp(-\hat{y}^n f(x^n))\big)$, the cross entropy between the ground-truth distribution and $\{\sigma(f(x^n)),\, 1 - \sigma(f(x^n))\}$. Divided by $\ln 2$ here, it becomes an upper bound of the ideal loss. Comparison: with cross entropy, effort is rewarded (努力可以有回報), since the loss keeps decreasing as $\hat{y}^n f(x^n)$ grows; with sigmoid + square loss, the gradient saturates for very wrong examples, so there is no reward and no motivation to improve (沒有回報不想努力).
• Hinge loss: $l = \max\big(0, 1 - \hat{y}^n f(x^n)\big)$. If $\hat{y}^n = 1$ this is $\max(0, 1 - f(x^n))$, zero once $f(x^n) \geq 1$; if $\hat{y}^n = -1$ it is $\max(0, 1 + f(x^n))$, zero once $f(x^n) \leq -1$. There is still a penalty for $0 < \hat{y}^n f(x^n) < 1$ (correct, but inside the margin); past the margin the example is good enough and the loss is exactly zero: "passing is enough" (及格就好), versus cross entropy's "good should be even better" (好還要更好). Hinge loss is also an upper bound of the ideal loss.

Linear SVM
• Step 1: Function (model): $f(x) = \sum_i w_i x_i + b = \begin{bmatrix} w \\ b \end{bmatrix} \cdot \begin{bmatrix} x \\ 1 \end{bmatrix}$. Absorbing the bias (new $w$ appends $b$, new $x$ appends 1) lets us write $f(x) = w^T x$, linear in $w$.
• Step 2: Loss function: $L(f) = \sum_n \max\big(0, 1 - \hat{y}^n f(x^n)\big) + \lambda \lVert w \rVert_2^2$, which is convex.
• Step 3: Gradient descent? Yes. Compared with logistic regression, linear SVM simply has a different loss function; the $\max(0, \cdot)$ is the same piecewise-linear operation as in ReLU and maxout networks (recall them). Deep version: Yichuan Tang, "Deep Learning using Linear Support Vector Machines", ICML 2013 Challenges in Representation Learning Workshop.

Linear SVM: gradient descent
• With $f(x^n) = w \cdot x^n$, the chain rule gives $\frac{\partial l}{\partial w_i} = \frac{\partial l}{\partial f(x^n)} \frac{\partial f(x^n)}{\partial w_i}$ with $\frac{\partial f(x^n)}{\partial w_i} = x_i^n$, and $\frac{\partial \max(0,\, 1 - \hat{y}^n f(x^n))}{\partial f(x^n)} = \begin{cases} -\hat{y}^n & \text{if } \hat{y}^n f(x^n) < 1 \\ 0 & \text{otherwise} \end{cases}$
• Update rule: $w_i \leftarrow w_i - \eta \sum_n c^n(w)\, x_i^n$, where $c^n(w) = -\hat{y}^n$ if $\hat{y}^n\, w \cdot x^n < 1$ and $0$ otherwise.
• Reference: Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM", ICML 2007.

Linear SVM: another formulation
• Minimizing $L(f) = \sum_n \epsilon^n + \lambda \lVert w \rVert_2^2$ with $\epsilon^n = \max\big(0, 1 - \hat{y}^n f(x^n)\big)$ is equivalent (at the minimum) to imposing the constraints $\epsilon^n \geq 0$ and $\epsilon^n \geq 1 - \hat{y}^n f(x^n)$, i.e. $\hat{y}^n f(x^n) \geq 1 - \epsilon^n$, where $\epsilon^n$ is a slack variable. This is the familiar Quadratic Programming (QP) formulation of the SVM.

Dual Representation
• The minimizer is a linear combination of the data points: $w^* = \sum_n \alpha_n^* x^n$. If $w$ is initialized as 0, every gradient-descent step $w \leftarrow w - \eta \sum_n c^n(w)\, x^n$ keeps $w$ a linear combination of the data points. With hinge loss, $c^n(w)$ is usually zero, so the $\alpha_n^*$ may be sparse; the data points with non-zero $\alpha_n^*$ are the support vectors. (Cf. logistic regression, where the corresponding coefficient is always non-zero.)
• Step 1: write $w = \sum_n \alpha_n x^n = X\alpha$; then $f(x) = w^T x = \alpha^T X^T x = \sum_n \alpha_n (x^n \cdot x) = \sum_n \alpha_n K(x^n, x)$.
• Steps 2 and 3: find $\{\alpha_1^*, \dots, \alpha_N^*\}$ minimizing $L(f) = \sum_n l\Big(\sum_{n'} \alpha_{n'} K(x^{n'}, x^n),\, \hat{y}^n\Big)$. We do not really need to know the vector $x$ itself; we only need the inner product $K(x, z)$ between a pair of vectors $x$ and $z$. This is the kernel trick.

Kernel Trick
• Directly computing $K(x, z)$ can sometimes be faster than "feature transformation + inner product".
• Example: for $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ and $\phi(x) = \begin{bmatrix} x_1^2 \\ \sqrt{2}\, x_1 x_2 \\ x_2^2 \end{bmatrix}$: $\phi(x) \cdot \phi(z) = x_1^2 z_1^2 + 2 x_1 x_2 z_1 z_2 + x_2^2 z_2^2 = (x_1 z_1 + x_2 z_2)^2 = (x \cdot z)^2 = K(x, z)$.
• The same holds in higher dimensions, e.g. $x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$ with $\phi(x) = \begin{bmatrix} x_1^2 \\ \sqrt{2} x_1 x_2 \\ \sqrt{2} x_1 x_3 \\ x_2^2 \\ \sqrt{2} x_2 x_3 \\ x_3^2 \end{bmatrix}$: expanding $(x \cdot z)^2 = x_1^2 z_1^2 + x_2^2 z_2^2 + x_3^2 z_3^2 + 2 x_1 x_2 z_1 z_2 + 2 x_1 x_3 z_1 z_3 + 2 x_2 x_3 z_2 z_3$ recovers all pairwise feature products without ever computing $\phi$ explicitly. The kernel trick is especially useful when the transformed $\phi(x)$ is high-dimensional.

Radial Basis Function (RBF) Kernel
• $K(x, z) = \exp\big(-\tfrac{1}{2} \lVert x - z \rVert_2^2\big) = \exp\big(-\tfrac{1}{2}\lVert x \rVert_2^2\big)\, \exp\big(-\tfrac{1}{2}\lVert z \rVert_2^2\big)\, \exp(x \cdot z) = C_x\, C_z \sum_{k=0}^{\infty} \frac{(x \cdot z)^k}{k!}$. Expanding the series term by term expresses $K(x, z)$ as $\phi(x) \cdot \phi(z)$ where $\phi(x)$ has infinite dimension!

Sigmoid Kernel
• $K(x, z) = \tanh(x \cdot z)$. When using the sigmoid kernel, $f(x) = \sum_n \alpha_n \tanh(x^n \cdot x)$ is a 1-hidden-layer network: the number of support vectors is the number of neurons, and the weight vector of each neuron is a data point $x^n$.

Designing the Kernel Directly
• You can directly design $K(x, x')$ instead of considering $\phi(x)$. When $x$ is a structured object such as a sequence, $\phi(x)$ is hard to design; $K(x, x')$ is something like a similarity measure between objects (Mercer's theory can check whether a candidate $K$ is valid). For example, evaluate the similarity between audio segments (variable-length sequences of vectors) directly as $K(x, x')$:
  • Hiroshi Shimodaira, Ken-ichi Noma, Mitsuru Nakai, Shigeki Sagayama, "Dynamic Time-Alignment Kernel in Support Vector Machine", NIPS 2002.
  • Marco Cuturi, Jean-Philippe Vert, Oystein Birkenes, Tomoko Matsui, "A kernel for time series based on global alignments", ICASSP 2007.
• More about kernel design: Bishop, Chapter 6.2.

SVM-related Methods
• Support Vector Regression (SVR): Bishop, Chapter 7.1.4
• Ranking SVM: Alpaydin, Chapter 13.11
• One-class SVM: Alpaydin, Chapter 13.11
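The sketches below make several of the steps above concrete. First, the surrogate losses: a minimal sketch, assuming NumPy (the sampled margin values are arbitrary), that evaluates each loss from the notes as a function of $\hat{y}^n f(x^n)$:

import numpy as np

def ideal_loss(m):            # delta(g(x) != y_hat): 1 exactly when the sign is wrong
    return (m < 0).astype(float)

def square_loss(m):           # (y_hat f(x) - 1)^2: penalizes large correct margins too
    return (m - 1.0) ** 2

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sigmoid_square_loss(m):   # (sigma(y_hat f(x)) - 1)^2: gradient saturates
    return (sigmoid(m) - 1.0) ** 2

def cross_entropy_loss(m):    # ln(1 + exp(-y_hat f(x))) / ln 2: upper bound of ideal
    return np.log1p(np.exp(-m)) / np.log(2.0)

def hinge_loss(m):            # max(0, 1 - y_hat f(x)): exactly zero past the margin
    return np.maximum(0.0, 1.0 - m)

m = np.linspace(-3.0, 3.0, 7)   # values of y_hat * f(x)
for name, fn in [("ideal", ideal_loss), ("square", square_loss),
                 ("sigmoid+square", sigmoid_square_loss),
                 ("sigmoid+CE", cross_entropy_loss), ("hinge", hinge_loss)]:
    print(f"{name:>15}: {np.round(fn(m), 3)}")

Printed side by side, the values show the behaviors discussed above: square loss grows again for large positive margins, sigmoid + square flattens on both sides, and cross entropy and hinge both upper-bound the ideal loss.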
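Next, a linear SVM trained by exactly the update rule $w_i \leftarrow w_i - \eta \sum_n c^n(w)\, x_i^n$. This is a minimal sketch, assuming NumPy; the toy data, learning rate $\eta$, and regularization weight $\lambda$ are illustrative assumptions, not values from the slides:

import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 2
X = rng.normal(size=(N, D))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)   # linearly separable toy labels
X = np.hstack([X, np.ones((N, 1))])              # absorb the bias: new x = [x; 1]

w = np.zeros(D + 1)                              # new w = [w; b], initialized to 0
eta, lam = 0.01, 0.01
for step in range(500):
    margins = y * (X @ w)                        # y_hat^n * f(x^n)
    c = np.where(margins < 1, -y, 0.0)           # c^n(w) from the notes
    grad = X.T @ c + 2 * lam * w                 # hinge subgradient + L2 regularizer
    w -= eta * grad

print("w:", np.round(w, 3), " training accuracy:", np.mean(np.sign(X @ w) == y))

Pegasos (cited above) refines this same subgradient idea with a decaying step size and per-example stochastic updates; the fixed $\eta$ here is only for simplicity.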
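The polynomial-kernel identity and the RBF series expansion can both be checked numerically. A small sketch, assuming NumPy (the particular vectors are arbitrary):

import math
import numpy as np

# phi(x) = [x1^2, sqrt(2) x1 x2, x2^2], so phi(x) . phi(z) == (x . z)^2
def phi(v):
    x1, x2 = v
    return np.array([x1 ** 2, math.sqrt(2) * x1 * x2, x2 ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(phi(x) @ phi(z))   # feature transformation + inner product
print((x @ z) ** 2)      # direct kernel evaluation: same value, less work

# RBF kernel K(x, z) = exp(-||x - z||^2 / 2).  Its expansion
# C_x C_z sum_k (x . z)^k / k! corresponds to an infinite-dimensional phi,
# yet the kernel itself is a single finite expression:
K_rbf = np.exp(-0.5 * np.sum((x - z) ** 2))
series = (np.exp(-0.5 * (x @ x)) * np.exp(-0.5 * (z @ z))
          * sum((x @ z) ** k / math.factorial(k) for k in range(30)))
print(K_rbf, series)     # the truncated series matches K_rbf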
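The dual representation can be trained directly in $\alpha$-space. Because $w$ starts at 0 and each step adds $-\eta\, c^n(w)\, x^n$, updating $\alpha_n$ by $-\eta\, c^n$ is the same algorithm, and it only ever touches kernel values $K(x^n, x^m)$. A minimal sketch, assuming NumPy, an RBF kernel, and a synthetic non-linearly-separable toy set (all assumptions for illustration):

import numpy as np

rng = np.random.default_rng(1)
N = 100
X = rng.normal(size=(N, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0, 1.0, -1.0)  # circular boundary

def rbf(A, B):                          # K(a, b) = exp(-||a - b||^2 / 2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

K = rbf(X, X)      # all the algorithm ever needs: pairwise inner products in phi-space
alpha = np.zeros(N)
eta = 0.01
for step in range(1000):
    f = K @ alpha                       # f(x^m) = sum_n alpha_n K(x^n, x^m)
    c = np.where(y * f < 1, -y, 0.0)    # same c^n as in the primal update
    alpha -= eta * c                    # dual form of w <- w - eta sum_n c^n x^n

f = K @ alpha
print("still inside the margin:", int(np.sum(y * f < 1)), "of", N)
print("training accuracy:", np.mean(np.sign(f) == y))

Only the points still inside the margin keep receiving updates; at the exact optimum of the QP, the examples with non-zero $\alpha_n^*$ (the support vectors) are sparse, which this plain subgradient sketch only approximates.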
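Finally, the sigmoid-kernel remark: with $K(x, z) = \tanh(x \cdot z)$, evaluating $f(x) = \sum_n \alpha_n \tanh(x^n \cdot x)$ is literally a forward pass through a 1-hidden-layer network whose hidden weights are the stored data points. A few lines, assuming NumPy (the data and coefficients are random placeholders):

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))    # 5 stored data points (support vectors), 3 features
alpha = rng.normal(size=5)     # dual coefficients
x_new = rng.normal(size=3)     # a test point

# kernel view: sum over support vectors of alpha_n * tanh(x^n . x_new)
f_kernel = sum(a * np.tanh(xn @ x_new) for a, xn in zip(alpha, X))

# network view: hidden weights = X (one neuron per support vector),
# tanh activation, output weights = alpha
f_network = alpha @ np.tanh(X @ x_new)

print(f_kernel, f_network)     # identical by construction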
