A brief history of deep learning
Alice Gao
Lecture 10
Readings: this article by Andrew Kurenkov
Based on work by K. Leyton-Brown, K. Larson, and P. van Beek

Outline
▶ Learning goals
▶ A brief history of deep learning
▶ Lessons learned, summarized by Geoff Hinton
▶ Revisiting the learning goals

Learning Goals
By the end of the lecture, you should be able to
▶ Describe the causes and the resolutions of the two AI winters in the past.
▶ Describe the four lessons learned summarized by Geoff Hinton.

A brief history of deep learning
A brief history of deep learning, based on this article by Andrew Kurenkov.

The birth of machine learning (Frank Rosenblatt, 1957)
▶ A perceptron can be used to represent and learn logic operators (AND, OR, and NOT); see the sketch below.
▶ It was widely believed that AI would be solved if computers could perform formal logical reasoning.
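The slide states this without code. As a minimal sketch of the idea (the weights and bias values below are my illustrative choices, not from the lecture), a single threshold perceptron with hand-picked weights computes each of these operators:

    # A perceptron outputs 1 when the weighted sum of its inputs plus a
    # bias exceeds zero, and 0 otherwise. Hand-picked weights make it
    # compute AND, OR, and NOT.
    def perceptron(weights, bias, inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total + bias > 0 else 0

    def AND(a, b): return perceptron([1, 1], -1.5, [a, b])  # fires when a + b > 1.5
    def OR(a, b):  return perceptron([1, 1], -0.5, [a, b])  # fires when a + b > 0.5
    def NOT(a):    return perceptron([-1], 0.5, [a])        # fires only when a == 0

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))

Rosenblatt's contribution was that such weights need not be set by hand: the perceptron learning rule adjusts them from labeled examples.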

First AI winter (Marvin Minsky, 1969)
▶ A rigorous analysis of the limitations of perceptrons.
▶ (1) We need multi-layer perceptrons to represent simple non-linear functions such as XOR (see the sketch below).
▶ (2) At the time, no one knew a good way to train multi-layer perceptrons.
▶ Led to the first AI winter, from the 70s to the 80s.
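Point (1) can be made concrete: no single line separates the XOR-true inputs from the XOR-false ones, so no single perceptron computes XOR, but a second layer fixes that. A minimal sketch, with hand-picked weights of my own choosing, using XOR(a, b) = AND(OR(a, b), NOT(AND(a, b))):

    # The hidden layer computes OR and AND; the output unit then
    # computes OR(a, b) AND NOT(AND(a, b)), which is XOR.
    def step(total):
        return 1 if total > 0 else 0

    def XOR(a, b):
        hidden_or  = step(a + b - 0.5)              # hidden unit 1: OR
        hidden_and = step(a + b - 1.5)              # hidden unit 2: AND
        return step(hidden_or - hidden_and - 0.5)   # output: OR and not AND

    assert [XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]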

The back-propagation algorithm (Rumelhart, Hinton, and Williams, 1986)
▶ In their Nature article, they precisely and concisely explained how we can use the back-propagation algorithm to train multi-layer perceptrons (a minimal sketch follows).
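The slides do not include the algorithm itself. Below is a minimal sketch in its simplest modern form (the network size, learning rate, epoch count, and the XOR task are my illustrative choices, not from the lecture): a forward pass, a squared-error loss, and gradients propagated backwards layer by layer via the chain rule.

    import numpy as np

    # Minimal back-propagation in the spirit of Rumelhart, Hinton, and
    # Williams (1986): forward pass, squared-error loss, then gradients
    # pushed backwards through the chain rule, layer by layer.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)       # input -> hidden
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)       # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for epoch in range(5000):
        h   = sigmoid(X @ W1 + b1)                       # forward: hidden layer
        out = sigmoid(h @ W2 + b2)                       # forward: output layer
        d_out = (out - y) * out * (1 - out)              # error signal at output
        d_h   = (d_out @ W2.T) * h * (1 - h)             # error pushed back to hidden
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]

Even on a problem this small, plain back-propagation can train slowly or get stuck, which foreshadows the next slide.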

Second AI Winter
▶ Multi-layer neural networks trained with back-propagation did not work well, especially compared to simpler models such as support vector machines and random forests.
▶ Led to the second AI winter, in the mid 90s.

The rise of deep learning
▶ In 2006, Hinton, Osindero, and Teh showed that back-propagation works if the initial weights are chosen in a smart way.
▶ In 2012, Hinton's group entered the ImageNet competition using deep convolutional neural networks and did far better than the next closest entry (an error rate of 15.3%, versus 26.2% for the second-best entry).
▶ The deep learning tsunami continues today.

Lessons learned, summarized by Geoff Hinton
▶ Our labeled data sets were thousands of times too small.
▶ Our computers were millions of times too slow.
▶ We initialized the weights in a stupid way.
▶ We used the wrong type of non-linearity.
(The last two lessons are made concrete in the sketch below.)
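A small demonstration of the last two lessons. The scales and layer counts below are my illustrative choices, and the 1/sqrt(fan-in) scaling shown is the later Xavier/He-style remedy, not the unsupervised layer-wise pretraining Hinton actually used in 2006. The sketch shows how a badly scaled initialization makes signals vanish or explode across layers, and how a saturated sigmoid kills gradients where a ReLU does not.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)

    # Lesson: weight initialization. Push a signal through 50 linear
    # layers. Scaling weights by 1/sqrt(fan_in) keeps the activations
    # stable; scaling them down or up makes them vanish or explode.
    for scale in (0.5, 1.0, 2.0):
        h = x.copy()
        for _ in range(50):
            W = rng.normal(scale=scale / np.sqrt(h.size), size=(h.size, h.size))
            h = W @ h
        print(f"init scale {scale}: activation std after 50 layers = {h.std():.2e}")

    # Lesson: the non-linearity. A saturated sigmoid has a near-zero
    # derivative, so gradients vanish; ReLU's derivative is 1 wherever
    # the unit is active.
    z = 5.0
    s = 1 / (1 + np.exp(-z))
    print(f"sigmoid'(5) = {s * (1 - s):.4f}, relu'(5) = {1.0 if z > 0 else 0.0}")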

Revisiting the Learning Goals
By the end of the lecture, you should be able to
▶ Describe the causes and the resolutions of the two AI winters in the past.
▶ Describe the four lessons learned summarized by Geoff Hinton.
