




Introduction of Current Deep Learning Software Packages

Three popular ones:
1. Caffe
2. Theano (/pypi/Theano)
3. TensorFlow
The websites of these packages provide information about how to install and run the related deep learning software.

1. Caffe

1. Overview: Caffe (Convolutional Architecture for Fast Feature Embedding), created by Yangqing Jia (贾扬清), UC Berkeley. Written in C++, with Python and MATLAB interfaces.
2. GitHub page:
4. Install method (CUDA + Caffe): Ouxinyu.github.io/Blogs/2014723001.html

Anatomy of Caffe
- Blob: stores data and derivatives
- Layer: transforms bottom blobs to top blobs
- Net: many layers; computes gradients via forward/backward

Blob
A Blob is a wrapper over the actual data being processed and passed along by Caffe; under the hood it also provides synchronization capability between the CPU and the GPU.
The conventional blob dimensions for batches of image data are (number N) x (channel K) x (height H) x (width W).
For a convolution layer with 96 filters of 11 x 11 spatial dimension and 3 inputs, the blob is 96 x 3 x 11 x 11. For an inner-product / fully-connected layer with 1000 output channels and 1024 input channels, the parameter blob is 1000 x 1024.

Layer
The layer is the essence of a model and the fundamental unit of computation. Layers convolve filters, pool, take inner products, apply nonlinearities like rectified-linear and sigmoid and other element-wise transformations, normalize, load data, and compute losses like softmax and hinge.
Case: Convolution Layer

Net
The net jointly defines a function and its gradient by composition and auto-differentiation. The composition of every layer's output computes the function to do a given task, and the composition of every layer's backward computes the gradient from the loss to learn the task. Example (a logistic-regression net):

    name: "LogReg"
    layer {
      name: "mnist"
      type: "Data"
      top: "data"
      top: "label"
      data_param {
        source: "input_leveldb"
        batch_size: 64
      }
    }
    layer {
      name: "ip"
      type: "InnerProduct"
      bottom: "data"
      top: "ip"
      inner_product_param {
        num_output: 2
      }
    }
    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "ip"
      bottom: "label"
      top: "loss"
    }

How to use Caffe?
Just 4 steps!
1. Convert data (run a script)
2. Define the net (edit a prototxt)
3. Define the solver (edit a prototxt)
4. Train, with pretrained weights if available (run a script)
Take CIFAR-10 image classification as an example.

Step 1: Convert Data for Caffe
- A Data layer reading from LMDB is the easiest; create the LMDB using convert_imageset.
- Alternatively, provide a text file where each line is "path/to/image.jpeg label" (read with an ImageData layer).
- Or create an HDF5 file yourself using h5py (read with an HDF5Data layer); a minimal h5py sketch is given at the end of this Caffe section.
Convert Data on CIFAR-10.

Step 2: Define the Net (cifar10_quick_train_test.prototxt)
The annotated prototxt highlights: the layer names, the blob (bottom/top) names, the learning-rate multipliers of the weights and of the bias, the number of input images per iteration (batch size), the training image data source, the data type, the number of output classes, the accuracy output during testing, and the loss output during training. If you fine-tune a pre-trained model, you can set lr_mult = 0 so that a layer's weights are not updated.

Visualize the Defined Network
http://ethereon.github.io/netscope/#/editor

Step 3: Define the Solver (cifar10_quick_solver.prototxt)
The solver file points at the defined net and collects the key optimization parameters:

    # reduce the learning rate after 8 epochs (4000 iters) by a factor of 10
    # The train/test net protocol buffer definition
    net: "examples/cifar10/cifar10_quick_train_test.prototxt"
    # test_iter specifies how many forward passes the test should carry out.
    # In the case of CIFAR-10, we have test batch size 100 and 100 test iterations,
    # covering the full 10,000 testing images.
    test_iter: 100
    # Carry out testing every 500 training iterations.
    test_interval: 500
    # The base learning rate, momentum and the weight decay of the network.
    base_lr: 0.001
    momentum: 0.9
    weight_decay: 0.004
    # The learning rate policy
    lr_policy: "fixed"
    # Display every 100 iterations
    display: 100
    # The maximum number of iterations
    max_iter: 4000
    # snapshot intermediate results
    snapshot: 4000
    snapshot_prefix: "examples/cifar10/cifar10_quick"
    # solver mode: CPU or GPU
    solver_mode: GPU

Step 4: Train
Write a shell script (train_quick.sh) that launches training with the solver above, run it, and then enjoy a cup of caffe.

Model Zoo (Pre-trained Models + Fine-tuning)
We can fine-tune these models or do feature extraction based on these models.
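To make "feature extraction / classification with a trained model" concrete, here is a minimal pycaffe sketch. It assumes the stock CIFAR-10 quick example layout: the deploy file name, the snapshot name, the per-channel mean values and the blob names ('data', 'ip1', 'prob') are assumptions taken from that example and should be adapted to your own net.

    import numpy as np
    import caffe

    # Run on the CPU; switch to caffe.set_mode_gpu() if CUDA is available.
    caffe.set_mode_cpu()

    # Deploy definition and trained weights (assumed paths following the
    # examples/cifar10 layout after training with the solver above).
    net = caffe.Net('examples/cifar10/cifar10_quick.prototxt',
                    'examples/cifar10/cifar10_quick_iter_4000.caffemodel',
                    caffe.TEST)

    # Pre-processing: the four steps listed later in the slides
    # (channels first, mean subtraction, rescale [0,1] -> [0,255], RGB -> BGR).
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))
    transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))  # assumed mean
    transformer.set_raw_scale('data', 255)
    transformer.set_channel_swap('data', (2, 1, 0))

    img = caffe.io.load_image('some_image.jpg')         # float RGB in [0, 1]
    net.blobs['data'].reshape(1, 3, 32, 32)             # CIFAR-10 input size
    net.blobs['data'].data[...] = transformer.preprocess('data', img)

    out = net.forward()
    print('predicted class:', out['prob'][0].argmax())  # softmax output blob

    # Feature extraction: read an intermediate blob after the forward pass.
    features = net.blobs['ip1'].data.copy()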
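Step 1 above also mentions creating an HDF5 file yourself with h5py for the HDF5Data layer. Below is a minimal sketch with random placeholder arrays; the dataset names 'data' and 'label' are only a common convention and must match the top blob names declared in your HDF5Data layer.

    import h5py
    import numpy as np

    # Placeholder data: 100 CIFAR-10-sized images (N x C x H x W) plus labels.
    # Caffe's HDF5Data layer expects floating-point datasets.
    X = np.random.rand(100, 3, 32, 32).astype(np.float32)
    y = np.random.randint(0, 10, size=100).astype(np.float32)

    with h5py.File('cifar10_train.h5', 'w') as f:
        # Dataset names must match the "top" blob names of the HDF5Data layer.
        f.create_dataset('data', data=X)
        f.create_dataset('label', data=y)

    # The HDF5Data layer's source parameter points to a text file listing
    # one HDF5 file path per line.
    with open('cifar10_train_h5.txt', 'w') as f:
        f.write('cifar10_train.h5\n')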
Some tricks / skills about training Caffe

1. Data augmentation to enlarge the training samples
2. Image pre-processing
3. Network initializations
4. During training
5. Activation functions
6. Regularizations
More details can be found in [1], [2].
[1] Neural Networks: Tricks of the Trade.

Data Augmentation
- Enlarges the training set with transformed copies of the images; very useful for face and car recognition!
- Also helps to get rid of occlusion and scale change, as in visual tracking.

Image Pre-Processing
Step 1: subtract the dataset mean value in each channel.
Step 2: swap channels from RGB to BGR.
Step 3: move the image channels to the outermost dimension.
Step 4: rescale from [0, 1] to [0, 255].
(These four steps correspond to the caffe.io.Transformer set-up in the pycaffe sketch at the end of the Caffe section above.)

Network Initializations

During Training
Dropout [1] and batch normalization [2] help alleviate overfitting during training in Caffe.
[1] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958.
[2] S. Ioffe and C. Szegedy. "Batch normalization: accelerating deep network training by reducing internal covariate shift." arXiv preprint arXiv:1502.03167, 2015.

Overfitting

Pros and Cons of Caffe

A practical example of Caffe
Object detection: R-CNN / Fast R-CNN / Faster R-CNN (Caffe + MATLAB).
(Figure annotations: lr = 0.1 x base learning rate; lr = base learning rate.)

2. Theano

1. Overview: a Python library that lets you define, optimize and evaluate mathematical expressions. From Yoshua Bengio's group at the University of Montreal. Embraces computation graphs and symbolic computation. High-level wrappers: Keras, Lasagne.
2. GitHub:

Pros and Cons of Theano

3. TensorFlow

1. Overview: very similar to Theano - all about computation graphs. Easy visualizations (TensorBoard). Multi-GPU and multi-node training.
2. Tutorial: http://terryum.io/ml_practice/2016/05/28/TFIntroSlides/

Basic Flow of TensorFlow
1. Load data
2. Define the NN structure
3. Set optimization parameters
4. Run!
A minimal sketch of these four steps is given at the end of this section.

The Pros and Cons of TensorFlow
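To make the four-step flow concrete, here is a minimal sketch in the TensorFlow 1.x graph style that the slides assume (placeholders, an explicit optimizer and a session). The data are random stand-ins and the single softmax layer is only an illustration of the structure, not a recommended model.

    import numpy as np
    import tensorflow as tf  # TensorFlow 1.x graph-style API assumed

    # 1. Load data (random arrays stand in for a real dataset such as MNIST).
    x_train = np.random.rand(1000, 784).astype(np.float32)
    y_train = np.eye(10)[np.random.randint(0, 10, 1000)].astype(np.float32)

    # 2. Define the NN structure as a computation graph (one softmax layer).
    x = tf.placeholder(tf.float32, [None, 784])
    y_true = tf.placeholder(tf.float32, [None, 10])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(x, W) + b

    # 3. Set optimization parameters: loss, optimizer and learning rate.
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # 4. Run: execute the graph inside a session.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(100):
            _, cur_loss = sess.run([train_step, loss],
                                   feed_dict={x: x_train, y_true: y_train})
        print('final training loss:', cur_loss)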