Deep Learning: Techniques and Applications

The Challenge to AI

The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally: the problems we solve intuitively, that feel automatic, like recognizing spoken words or faces in images.

(Figure: flowcharts showing how the different parts of an AI system relate to each other within different AI disciplines; shaded boxes indicate components that are able to learn from data.)

(Figure: two of the three historical waves of artificial neural nets research, as measured by the frequency of the phrases "cybernetics" and "connectionism" or "neural networks" according to Google Books.)

Key Trends about Deep Learning

• Deep learning has become more useful as the amount of available training data has grown.
• Deep learning models have grown in size over time as the computer hardware and software infrastructure for deep learning has improved.
• Deep learning has solved increasingly complicated applications with increasing accuracy over time.

Deep Learning in Speech Recognition

In 2009, Microsoft speech recognition researchers Li Deng and Dong Yu began collaborating with Geoffrey Hinton. By 2011 Microsoft's speech recognition research had achieved a breakthrough: the acoustic features of several consecutive frames are concatenated into a single input, information features are extracted from it layer by layer, and the result plugs seamlessly into traditional speech recognition technology, substantially improving recognition accuracy without any extra system overhead. (A sketch of this frame splicing appears at the end of this section.)

Deep Learning in Image Recognition

LeCun: Convolutional Neural Networks. In October 2012, Hinton's team obtained the best result to date on the ImageNet classification problem using a deeper CNN, a turning point for image recognition; still lower error rates were published in 2015.

Deep Learning in NLP

Why Neural Networks?

The human cerebral cortex is 2 to 4 millimetres thick. The different cortical layers each contain a characteristic distribution of neuronal cell types and connections with other cortical and subcortical regions. The more ancient part of the cerebral cortex, the hippocampus, has at most three cellular layers; the most recent part, the neocortex (also called isocortex), has six. Neurons in the various layers connect vertically to form small microcircuits: the cortex is organized vertically in columns and horizontally in layers.

The different regions of the somatosensory cortex receive their main inputs from different kinds of receptors: area 3b receives most of its projections from superficial skin receptors, while area 3a receives input from receptors in the muscle spindles. Input from the thalamus arrives at layer IV, where neurons distribute information up and down the layers. [Kaas et al., 1979.]

Hubel and Wiesel: Hubel, David H., and Torsten N. Wiesel. "Receptive fields of single neurones in the cat's striate cortex." The Journal of Physiology 148, no. 3 (1959): 574-591. The work was recognized with the Nobel Prize in Physiology or Medicine (1981). Further reading: Bruno Olshausen and David Field (Cornell).

Research Groups in Deep Learning

• University of Toronto - Machine Learning Group (Geoff Hinton, Rich Zemel, Ruslan Salakhutdinov, Brendan Frey, Radford Neal)
• Université de Montréal - LISA Lab (Yoshua Bengio, Pascal Vincent, Aaron Courville, Roland Memisevic)
• New York University - Yann LeCun's and Rob Fergus' groups
• Stanford University - Andrew Ng's group
• University of Oxford - deep learning group, Nando de Freitas and Phil Blunsom
• Google Research - Jeff Dean, Samy Bengio, Jason Weston, Marc'Aurelio Ranzato, Dumitru Erhan, Quoc Le et al.
• Microsoft Research - Li Deng et al.
• SUPSI - IDSIA (Jurgen Schmidhuber's group)
• UC Berkeley - Bruno Olshausen's group
• University of Washington - Pedro Domingos' group
• IDIAP Research Institute - Ronan Collobert's group
• University of California Merced - Miguel A. Carreira-Perpinan's group
• University of Helsinki - Aapo Hyvärinen's neuroinformatics group
• Université de Sherbrooke - Hugo Larochelle's group
• University of Guelph - Graham Taylor's group
• University of Michigan - Honglak Lee's group
• Technical University of Berlin - Klaus-Robert Muller's group
• Baidu - Kai Yu's group
• Aalto University - Juha Karhunen and Tapani Raiko
• U. Amsterdam - Max Welling's group
• U. California Irvine - Pierre Baldi's group
• Ghent University - Benjamin Shrauwen's group
• University of Tennessee - Itamar Arel's group
• IBM Research - Brian Kingsbury et al.
• University of Bonn - Sven Behnke's group
• Gatsby Unit @ University College London - Maneesh Sahani, Yee-Whye Teh, Peter Dayan
• Computational Cognitive Neuroscience Lab @ University of Colorado

There is an interesting history of people's changing attitudes toward deep architectures versus shallow ones (2012, "A survey on deep learning - one small step toward AI").

Historical Context of Deep Learning

Around 1960 the first generation of neural networks was born: Frank Rosenblatt's Perceptron. Its ability to classify basic shapes such as triangles and squares let people see the potential that a real intelligent machine, one that can sense, learn, remember and recognize like a human being, might be invented along this path. But its fundamental limitations soon broke people's confidence. Marvin Minsky's criticism: first, the feature layer of the Perceptron is fixed and hand-crafted by human beings, which is squarely against the definition of a real "intelligent machine"; second, its single-layer structure limits the functions it can learn, e.g. the exclusive-or (XOR) function is beyond its capability. (A sketch demonstrating this follows.)
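To make the XOR limitation concrete, here is a minimal sketch (my illustration, not code from the slides): the single-layer perceptron learning rule never fits XOR, while one hidden layer trained with back-propagation does, previewing the second-generation networks discussed next. The layer sizes, learning rates and iteration counts are illustrative assumptions.

```python
import numpy as np

# XOR truth table: not linearly separable, so a single-layer
# perceptron (one weight vector + bias) can never classify it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# --- 1st generation: single-layer perceptron ---
w, b = np.zeros(2), 0.0
for _ in range(100):                       # perceptron learning rule
    for xi, ti in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (ti - pred) * xi
        b += (ti - pred)
acc = np.mean([float(w @ xi + b > 0) == ti for xi, ti in zip(X, y)])
print("perceptron accuracy on XOR:", acc)  # stuck at 0.5-0.75, never 1.0

# --- 2nd generation: one hidden layer + back-propagation ---
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)               # hidden features, learnt from data
    out = sigmoid(h @ W2 + b2).ravel()
    grad_out = (out - y) * out * (1 - out)              # squared-error gradient
    grad_h = np.outer(grad_out, W2.ravel()) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out[:, None]; b2 -= 0.5 * grad_out.sum()
    W1 -= 0.5 * X.T @ grad_h;            b1 -= 0.5 * grad_h.sum(axis=0)
print("MLP predictions on XOR:", np.round(out, 2))      # ~ [0, 1, 1, 0]
```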
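The frame-splicing idea from the Microsoft speech work above, concatenating the features of several consecutive frames into one DNN input vector, can be sketched as follows; the context width of 5 frames, the 40-dimensional features and the helper name splice_frames are illustrative assumptions, not details from the slides.

```python
import numpy as np

def splice_frames(features, context=5):
    """Concatenate each frame with `context` frames on each side.

    features: (num_frames, feat_dim) array of per-frame acoustic
    features (e.g. filterbank or MFCC vectors).
    Returns: (num_frames, (2*context+1)*feat_dim) spliced vectors,
    the usual DNN input layout in hybrid DNN-HMM speech systems.
    """
    num_frames, feat_dim = features.shape
    # Pad by repeating the edge frames so every frame has full context.
    padded = np.pad(features, ((context, context), (0, 0)), mode="edge")
    windows = [padded[i:i + num_frames] for i in range(2 * context + 1)]
    return np.concatenate(windows, axis=1)

# 100 frames of 40-dimensional features -> 100 x 440 DNN inputs.
utterance = np.random.randn(100, 40)
dnn_input = splice_frames(utterance, context=5)
print(dnn_input.shape)  # (100, 440)
```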
In around 1985, building on the Perceptron, Geoffrey Hinton and colleagues replaced the original single fixed feature layer with several hidden layers, creating the second-generation neural network, trained via the back-propagation (BP) algorithm (proposed in 1969, made practical in the mid-1980s). But BP did not work well in deep networks:

• The correcting signal weakens as it passes back through multiple layers.
• Training often gets trapped in poor local optima when batch-mode or even stochastic gradient descent BP is used.
• The severity increases significantly as the depth of the network increases.
• Learning is too slow across multiple hidden layers.

In 1989, Yann LeCun et al. built a deep neural network for recognizing handwritten ZIP codes on mail. Despite the success of the algorithm, training the network on the dataset took roughly 3 days.

SVMs slowed down the development of neural networks. While people were trying to improve Hinton's networks in these respects (increasing the training dataset, estimating better initial weights), in 1993-1995 Vladimir N. Vapnik et al. improved on the original Perceptron and created a new family of models: Support Vector Machines.

SVM: Good or Bad?

• SVM makes learning fast and easy, thanks to its simple structure.
• It is appropriate for data with simple structure, e.g. a small number of features, or data without hierarchical features.
• But for data that itself contains complicated features, SVM tends to perform worse because of its simple architecture. One way around this is to add prior knowledge to the SVM model to obtain a better feature layer, but it is hard to find a general set of priors.

SVM takes us away from the road to a real intelligent machine. Despite the fact that SVMs work really well on many AI problems, they are not a good direction for AI because of a fatal deficiency: the shallow architecture. An SVM is still a kind of Perceptron whose features are given, not learnt from the data. Looking for an architecture that meets the requirements above, some researchers turned back to the multi-layer neural network, trying to exploit its advantages related to depth and overcome its limitations...

After 2010

• After 2010, the US Department of Defense's DARPA program funded deep learning projects for the first time; participants included Stanford University and New York University, among others.
• In 2011, speech recognition researchers at Microsoft and Google successively adopted DNN techniques, cutting speech recognition error rates by 20%-30%, the biggest breakthrough in the field in ten years.
• In 2012, Hinton's team reduced the Top-5 error rate on the ImageNet classification problem from 26% to 15%.
• In 2013, Hinton's startup DNNResearch was acquired by Google.
• In 2013, Yann LeCun joined Facebook's artificial intelligence lab.
• Baidu's CTR estimation (Click-Through-Rate Prediction) and retrieval tasks reached internationally leading levels; in 2014, Andrew Ng joined Baidu.
• In 2013, Tencent began building its deep learning platform Mariana. Mariana serves many application areas such as speech recognition, image recognition and recommendation, and provides parallel implementations of standard algorithms (DNN on GPU, DNN on CPU clusters) to reduce the cost of developing new algorithms.

From BM to RBM to DBN

BM → RBM → DBN: a DBN can be viewed as several RBMs stacked together, and it can be trained by training these RBMs layer by layer, from bottom to top:
(1) the bottom RBM is trained on the raw input data;
(2) the features extracted by the bottom RBM are used as the input for training the RBM above it.
(A rough sketch of this greedy layer-wise procedure appears at the end of this section.)

Autoencoders

The autoencoder (Hinton and colleagues), which emerged in the late 1980s, was used mainly for dimensionality reduction and later for principal component analysis. Notation:
• n: number of neurons in the input and output layers
• m: number of neurons in the hidden layer
• p, q: bias terms of the neurons in each layer
• w: weights between the input and hidden layers
• w̃: weights between the hidden and output layers
(A sketch using this notation appears at the end of this section.)

Reference: Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Proceedings of Neural Information Processing Systems (NIPS), 2006.

Landmark networks: in 2012, Hinton's deep network; in 2013, ZFNet (Zeiler and Fergus, "Visualizing and Understanding Convolutional Networks").

RNNs (Recurrent Neural Networks)

• All biological neural networks are recurrent.
• RNNs implement dynamical systems: mathematically, a hidden state evolves over time under the network's weights. (A minimal sketch follows at the end of this section.)
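As promised above, here is a rough sketch of the greedy layer-wise procedure in steps (1) and (2), with each RBM trained by single-step contrastive divergence (CD-1); the layer sizes, learning rate and Bernoulli/sigmoid units are standard assumptions rather than details given in the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, num_hidden, epochs=10, lr=0.1):
    """Train one RBM with CD-1; returns (weights, hidden biases)."""
    n, num_visible = data.shape
    W = rng.normal(0, 0.01, (num_visible, num_hidden))
    b_h, b_v = np.zeros(num_hidden), np.zeros(num_visible)
    for _ in range(epochs):
        # Positive phase: sample hidden units given the data.
        p_h = sigmoid(data @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # Negative phase: one step of Gibbs sampling (reconstruction).
        p_v = sigmoid(h @ W.T + b_v)
        p_h2 = sigmoid(p_v @ W + b_h)
        # CD-1 update: data statistics minus model statistics.
        W += lr * (data.T @ p_h - p_v.T @ p_h2) / n
        b_h += lr * (p_h - p_h2).mean(axis=0)
        b_v += lr * (data - p_v).mean(axis=0)
    return W, b_h  # visible biases are not needed for feature extraction

# Greedy layer-wise stacking, exactly as in steps (1) and (2):
raw_input = (rng.random((500, 64)) < 0.3).astype(float)  # toy binary data
W1, b1 = train_rbm(raw_input, 32)          # (1) bottom RBM on raw input
features = sigmoid(raw_input @ W1 + b1)    # features from the bottom RBM
W2, b2 = train_rbm(features, 16)           # (2) next RBM on those features
print(W1.shape, W2.shape)                  # (64, 32) (32, 16)
```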
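Using the slide's notation (n input/output units, m hidden units, biases p and q, weights w and w̃), a plausible reading of the autoencoder is an encoder h = f(wx + p) and a decoder x̂ = g(w̃h + q) trained to reconstruct the input; the sketch below assumes sigmoid activations and squared reconstruction error, which are common choices rather than details stated in the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5                     # n visible units, m hidden units (m < n)
w  = rng.normal(0, 0.1, (m, n))  # input -> hidden weights
wt = rng.normal(0, 0.1, (n, m))  # hidden -> output weights (w-tilde)
p, q = np.zeros(m), np.zeros(n)  # hidden and output biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.random((200, n))         # toy data in [0, 1]
lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ w.T + p)         # encoder:  h = f(w x + p)
    Xhat = sigmoid(H @ wt.T + q)     # decoder:  x-hat = g(w~ h + q)
    # Back-propagate the squared reconstruction error ||x - x-hat||^2.
    d_out = (Xhat - X) * Xhat * (1 - Xhat)
    d_hid = (d_out @ wt) * H * (1 - H)
    wt -= lr * d_out.T @ H / len(X);  q -= lr * d_out.mean(axis=0)
    w  -= lr * d_hid.T @ X / len(X);  p -= lr * d_hid.mean(axis=0)
print("reconstruction MSE:", np.mean((X - Xhat) ** 2))
```

Because m < n, the hidden layer is a compressed code of the input, which is why the autoencoder serves dimensionality reduction and (with linear units) recovers principal components.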
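The "RNNs implement dynamical systems" point can be made concrete with the usual Elman-style state update h_t = tanh(x_t W_xh + h_{t-1} W_hh + b); the following sketch (dimensions and weights are illustrative assumptions, not values from the slides) simply iterates that map over a sequence.

```python
import numpy as np

rng = np.random.default_rng(1)
input_dim, hidden_dim = 3, 8
W_xh = rng.normal(0, 0.5, (input_dim, hidden_dim))   # input -> state
W_hh = rng.normal(0, 0.5, (hidden_dim, hidden_dim))  # state -> state (recurrence)
b = np.zeros(hidden_dim)

def rnn_forward(xs, h0=None):
    """Run the discrete-time dynamical system h_t = tanh(x_t W_xh + h_{t-1} W_hh + b)."""
    h = np.zeros(hidden_dim) if h0 is None else h0
    states = []
    for x_t in xs:                  # the same weights are reused at every time step
        h = np.tanh(x_t @ W_xh + h @ W_hh + b)
        states.append(h)
    return np.array(states)

sequence = rng.normal(size=(20, input_dim))   # 20 time steps
states = rnn_forward(sequence)
print(states.shape)                           # (20, 8): one hidden state per step
```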
