
The LMS Algorithm

Dr Y.L. Fu

Another algorithm for descending on the performance surface is the LMS (least-mean-square) algorithm, in which special estimates of the gradient are used. The LMS algorithm is important because of its simplicity and ease of computation, and it is very widely used.

Derivation of the LMS algorithm

From previous chapters we have seen the general structure of the adaptive algorithms:

$$W(k+1) = W(k) - \mu\,\mathrm{error}(k)$$

where the error term may be the gradient estimate of chapters 4 and 5:

$$W(k+1) = W(k) - \mu\hat{\nabla}(k), \qquad e(k) = d(k) - X^T(k)\,W(k)$$

For the linear combiner, the LMS algorithm takes the single sample $e^2(k)$ itself as the estimate of the mean-square error, so the gradient estimate is $\hat{\nabla}(k) = -2e(k)X(k)$ and the update becomes

$$W(k+1) = W(k) + 2\mu\,e(k)X(k)$$

There is no squaring, averaging, or differentiation in the algorithm; it is really simple and effective.
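To make the update concrete, here is a minimal NumPy sketch of the LMS combiner described above; it is not code from the slides, and the function name and array layout are our own choices.

```python
import numpy as np

def lms(x, d, L, mu):
    """Minimal LMS adaptive linear combiner (an illustrative sketch).

    x  : input samples, shape (K,)
    d  : desired response, shape (K,)
    L  : filter order, i.e. L + 1 weights as in the slides
    mu : step size, 0 < mu < 1/tr(R)
    """
    K = len(x)
    W = np.zeros(L + 1)                # weight vector W(k)
    e = np.zeros(K)                    # error sequence e(k)
    for k in range(L, K):
        X = x[k - L:k + 1][::-1]       # X(k) = [x(k), x(k-1), ..., x(k-L)]
        e[k] = d[k] - X @ W            # e(k) = d(k) - X^T(k) W(k)
        W = W + 2 * mu * e[k] * X      # W(k+1) = W(k) + 2 mu e(k) X(k)
    return W, e
```

Each iteration costs only about $L+1$ multiplications for the update itself, which is what makes the algorithm attractive in practice.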

Convergence of the weight vector is considered first. The gradient estimate is unbiased:

$$E[\hat{\nabla}(k)] = -2E[e(k)X(k)] = -2E\big[d(k)X(k) - X(k)X^T(k)W\big] = 2(RW - P) = \nabla$$

So the estimate is an unbiased estimate of $\nabla$.

In chapter 2 the convergence was shown for stationary input processes: $E[W(k)]$ converges to $W^* = R^{-1}P$. Taking the expectation of the LMS update, we have

$$E[W(k+1)] = E[W(k)] + 2\mu E[e(k)X(k)] = E[W(k)] + 2\mu\big(E[d(k)X(k)] - E[X(k)X^T(k)W(k)]\big)$$

Assumption: $X(k)$ and $W(k)$ are independent. Then

$$E[W(k+1)] = E[W(k)] + 2\mu\big(P - R\,E[W(k)]\big) = (I - 2\mu R)E[W(k)] + 2\mu R W^*$$

Translating to the principal-axis coordinates ($V = W - W^*$, $V' = Q^{-1}V$), we get

$$E[V'(k)] = (I - 2\mu\Lambda)^k V'(0)$$

The convergence is guaranteed only if

$$0 < \mu < \frac{1}{\lambda_{\max}}$$

Since $\lambda_{\max} \le \mathrm{tr}(\Lambda) = \mathrm{tr}(R)$, a sufficient condition that requires no eigenvalue computation is

$$0 < \mu < \frac{1}{\mathrm{tr}(R)}$$
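The relation between the exact bound and the trace bound is easy to check numerically; the sketch below uses an arbitrary random positive-definite matrix, not one taken from the slides.

```python
import numpy as np

# The trace bound is always at least as strict as the eigenvalue bound,
# since tr(R) >= lambda_max for a positive-definite R.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
R = A @ A.T                                   # random SPD matrix

lam = np.linalg.eigvalsh(R)
print(1 / np.trace(R) <= 1 / lam.max())       # True

# Every mode decays iff |1 - 2 mu lambda_n| < 1, i.e. 0 < mu < 1/lambda_max.
mu = 0.9 / lam.max()
print(np.all(np.abs(1 - 2 * mu * lam) < 1))   # True
```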

Example

[Block diagram: a two-weight adaptive linear combiner. The input $x(k) = \sin(2\pi k/N) + r_k$ feeds weight $w_0$ directly and weight $w_1$ through a unit delay $z^{-1}$; the output $y_k$ is subtracted from the desired response $d_k = 2\cos(2\pi k/N)$ to form the error $e(k)$.]

Let $\varphi = E(r_k^2)$, and suppose the noise samples $r_k$ are independent of each other. Then

$$E[x^2(k)] = E\Big[\big(\sin\tfrac{2\pi k}{N} + r_k\big)^2\Big] = 0.5 + \varphi$$

$$E[x(k)x(k-1)] = E\Big[\big(\sin\tfrac{2\pi k}{N} + r_k\big)\big(\sin\tfrac{2\pi(k-1)}{N} + r_{k-1}\big)\Big] = 0.5\cos\tfrac{2\pi}{N}$$

so the input correlation matrix and the cross-correlation vector are

$$R = 0.5\begin{bmatrix} 1 + 2\varphi & \cos\frac{2\pi}{N} \\[2pt] \cos\frac{2\pi}{N} & 1 + 2\varphi \end{bmatrix}, \qquad P = \begin{bmatrix} 0 \\[2pt] -\sin\frac{2\pi}{N} \end{bmatrix}$$

The mean-square error as a function of the weights is

$$\xi = 2 + (0.5 + \varphi)(w_0^2 + w_1^2) + w_0 w_1 \cos\tfrac{2\pi}{N} + 2 w_1 \sin\tfrac{2\pi}{N}$$

and the optimal weight vector is

$$W^* = R^{-1}P = \frac{1}{(1+2\varphi)^2 - \cos^2\frac{2\pi}{N}} \begin{bmatrix} 2\cos\frac{2\pi}{N}\sin\frac{2\pi}{N} \\[2pt] -2(1+2\varphi)\sin\frac{2\pi}{N} \end{bmatrix}$$

With $\varphi = 0.01$, $\mathrm{tr}(R) = 2(0.5 + \varphi) = 1.02$, so the step size must satisfy $0 < \mu < 0.98$.
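These closed-form results can be checked numerically. In the sketch below, N = 16 is an assumption on our part (the slides state only φ = 0.01), chosen because it reproduces the eigenvalues 0.972 and 0.048 quoted in the next section.

```python
import numpy as np

N, phi = 16, 0.01                       # N = 16 assumed, phi from the slides
c, s = np.cos(2 * np.pi / N), np.sin(2 * np.pi / N)

R = 0.5 * np.array([[1 + 2 * phi, c],
                    [c, 1 + 2 * phi]])
P = np.array([0.0, -s])

print("W* =", np.linalg.solve(R, P))           # W* = R^{-1} P
print("tr(R) =", np.trace(R))                  # 1.02 -> 0 < mu < 0.98
print("eigenvalues =", np.linalg.eigvalsh(R))  # ~[0.048, 0.972]
```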

Learning curve

The nth mode of the weight relaxation has geometric ratio

$$r_n = 1 - 2\mu\lambda_n, \qquad n = 0, 1, \dots, L$$

with time constant $\tau_n = \dfrac{1}{2\mu\lambda_n}$, and the corresponding time constant of the learning curve (the MSE itself) is

$$(\tau_{\mathrm{mse}})_n = \frac{1}{4\mu\lambda_n}, \qquad n = 0, 1, \dots, L$$

Time constant

Calculating the eigenvalues of $R$ with $\varphi = 0.01$ gives

$$\lambda_1 = 0.972, \qquad \lambda_2 = 0.048$$

The corresponding time constants are

$$(T_{\mathrm{mse}})_1 = 5 \text{ iterations}, \qquad (T_{\mathrm{mse}})_2 = 104 \text{ iterations}$$

(these figures correspond to a step size of about $\mu = 0.05$, which the slides do not state explicitly).
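The iteration counts follow directly from the eigenvalues via $(\tau_{\mathrm{mse}})_n = 1/(4\mu\lambda_n)$; a one-line check, with μ = 0.05 assumed as noted above:

```python
import numpy as np

mu = 0.05                            # assumed step size (see note above)
lam = np.array([0.972, 0.048])       # eigenvalues from the example
print(1.0 / (4 * mu * lam))          # ~[5.1, 104.2] iterations
```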

As in the analysis of adaptive algorithms in previous chapters, we need to consider the noise in the gradient estimate:

$$\hat{\nabla}(k) = \nabla(k) + N_k$$

Near the minimum point $\nabla(k) \approx 0$, so $N_k \approx \hat{\nabla}(k) = -2e(k)X(k)$ and

$$\mathrm{cov}(N_k) = E[N_k N_k^T] = 4E\big[e^2(k)\,X(k)X^T(k)\big]$$

Near the minimum, $e^2(k)$ is approximately uncorrelated with the signal vector, since $E[e(k)X(k)] = P - P = 0$ when $W = W^*$. Thus

$$\mathrm{cov}(N_k) = 4E[e^2(k)]\,E\big[X(k)X^T(k)\big] = 4\xi_{\min}R$$
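The factorized covariance can be checked by simulation at $W = W^*$; the sketch below again assumes N = 16 and Gaussian noise, and the two printed matrices agree only approximately, which is precisely the approximation being made.

```python
import numpy as np

rng = np.random.default_rng(1)
N, phi, K = 16, 0.01, 500_000        # N = 16 and Gaussian r_k are assumed
k = np.arange(K)
x = np.sin(2 * np.pi * k / N) + rng.normal(0, np.sqrt(phi), K)
d = 2 * np.cos(2 * np.pi * k / N)

c, s = np.cos(2 * np.pi / N), np.sin(2 * np.pi / N)
R = 0.5 * np.array([[1 + 2 * phi, c], [c, 1 + 2 * phi]])
P = np.array([0.0, -s])
W_star = np.linalg.solve(R, P)
xi_min = 2.0 - P @ W_star            # xi_min = E[d^2] - P^T W*

X = np.stack([x[1:], x[:-1]], axis=1)    # rows X(k) = [x(k), x(k-1)]
eX = (d[1:] - X @ W_star)[:, None] * X   # rows e(k) X(k) at the optimum
print(4 * eX.T @ eX / len(eX))           # empirical cov(N_k)
print(4 * xi_min * R)                    # predicted  4 xi_min R
```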

Transforming into the principal-axis coordinates,

$$\mathrm{cov}(N'_k) = \mathrm{cov}(Q^{-1}N_k) = Q^{-1}\,\mathrm{cov}(N_k)\,Q = 4\xi_{\min}\Lambda$$

Using the result obtained in the last chapter,

$$\mathrm{cov}(V'(k+1)) = (I - 2\mu\Lambda)^2\,\mathrm{cov}(V'(k)) + \mu^2\,\mathrm{cov}(N'_k)$$

the steady state gives

$$\mathrm{cov}(V'(k)) = \frac{\mu}{4}\,(\Lambda - \mu\Lambda^2)^{-1}\,\mathrm{cov}(N'_k) = \mu\xi_{\min}(\Lambda - \mu\Lambda^2)^{-1}\Lambda$$

Neglecting the small term $\mu\Lambda^2$, we get an approximation

$$\mathrm{cov}(V'(k)) = \mu\xi_{\min}\Lambda^{-1}\Lambda = \mu\xi_{\min}I$$

Transforming back to the $V$ space, where $V(k) = QV'(k)$,

$$\mathrm{cov}(V(k)) = \mathrm{cov}(QV'(k)) = E\big[QV'(k)V'^T(k)Q^T\big] = Q\,\mathrm{cov}(V'(k))\,Q^{-1} = \mu\xi_{\min}I$$

Misadjustment

The excess mean-square error caused by the gradient noise is

$$\text{excess MSE} = E\big[V'^T\Lambda V'\big] = \sum_{n=0}^{L}\lambda_n E[v_n'^2] = \mu\xi_{\min}\sum_{n=0}^{L}\lambda_n = \mu\xi_{\min}\,\mathrm{tr}(R)$$

From the definition, the misadjustment is

$$M = \frac{\text{excess MSE}}{\xi_{\min}} = \mu\,\mathrm{tr}(R)$$
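The result $M = \mu\,\mathrm{tr}(R)$ can be checked by running LMS on the two-weight example and comparing the steady-state MSE with $\xi_{\min}$; as before, N = 16, Gaussian noise, and μ = 0.01 are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
N, phi, mu, K = 16, 0.01, 0.01, 400_000
kk = np.arange(K)
x = np.sin(2 * np.pi * kk / N) + rng.normal(0, np.sqrt(phi), K)
d = 2 * np.cos(2 * np.pi * kk / N)

w = np.zeros(2)
e2 = np.zeros(K)
for i in range(1, K):
    X = np.array([x[i], x[i - 1]])       # X(k) = [x(k), x(k-1)]
    e = d[i] - X @ w
    w += 2 * mu * e * X                  # LMS update
    e2[i] = e * e

c, s = np.cos(2 * np.pi / N), np.sin(2 * np.pi / N)
R = 0.5 * np.array([[1 + 2 * phi, c], [c, 1 + 2 * phi]])
P = np.array([0.0, -s])
xi_min = 2.0 - P @ np.linalg.solve(R, P)

print("predicted M =", mu * np.trace(R))                 # 0.0102
print("measured  M ~", e2[K // 2:].mean() / xi_min - 1)  # close, up to noise
```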

In order to design a system when the eigenvalues are unknown, we should establish some relationship with quantities we can measure. The time constant for the nth mode of the learning curve gives

$$(\tau_{\mathrm{mse}})_n = \frac{1}{4\mu\lambda_n} \quad\Longrightarrow\quad \lambda_n = \frac{1}{4\mu(\tau_{\mathrm{mse}})_n}$$

so that

$$\mathrm{tr}(R) = \sum_{n=0}^{L}\lambda_n = \frac{1}{4\mu}\sum_{n=0}^{L}\frac{1}{(\tau_{\mathrm{mse}})_n}$$

Then

$$M = \mu\,\mathrm{tr}(R) = \frac{1}{4}\sum_{n=0}^{L}\frac{1}{(\tau_{\mathrm{mse}})_n} = \frac{L+1}{4}\left(\frac{1}{\tau_{\mathrm{mse}}}\right)_{\mathrm{av}}$$

Specially, for equal eigenvalues,

$$M = \frac{L+1}{4\tau_{\mathrm{mse}}}$$
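As a worked example with illustrative numbers (not from the slides), consider a filter with $L + 1 = 10$ weights whose learning curve settles with $\tau_{\mathrm{mse}} = 250$ iterations:

```python
# M = (L + 1) / (4 * tau_mse) in the equal-eigenvalue case
print(10 / (4 * 250))   # 0.01, i.e. a 1% misadjustment
```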

The trace of the R-matrix is the total power of the inputs to the weights, which is generally known even when the eigenvalues are not. So we can apply it to produce a desired $M$ by choosing the value $\mu = M/\mathrm{tr}(R)$.
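A small sketch of this design rule (the 10% target is an illustrative assumption; tr(R) = 1.02 is the value from the example above):

```python
def step_size(M_target, total_input_power):
    """Rule of thumb: mu = M / tr(R), with tr(R) the total input power."""
    return M_target / total_input_power

print(step_size(0.10, 1.02))   # ~0.098 for a 10% misadjustment
```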

In the case of equal eigenvalues the learning-curve time constant is

$$\tau_{\mathrm{mse}} = \frac{L+1}{4\mu\,\mathrm{tr}(R)}$$

and this can also serve as an approximation of the time constant in the general case. In words: the misadjustment equals the number of weights divided by the settling time (taken as $4\tau_{\mathrm{mse}}$ iterations) when the eigenvalues are equal. This gives us a rule of thumb to estimate $M$.

Performance

Compare the steepest-descent method of chapter 5, which measures derivatives by perturbing the weights, with the LMS algorithm. For LMS,

$$M = \frac{L+1}{4(T_{\mathrm{mse}})_{\mathrm{av}}}$$

so for a given settling time the misadjustment grows only linearly with the number of weights. For the perturbation-based steepest-descent method the total misadjustment is $M_{\mathrm{tot}} = M + P$, where $P$ is the perturbation itself; its gradient-noise misadjustment grows as $(L+1)^2$ and inversely with $P$, so it cannot be made small without either slowing the adaptation or increasing the perturbation. For the same settling time, therefore, the LMS algorithm achieves a far smaller misadjustment; equivalently, for the same misadjustment it adapts much faster.
