
English Original

Combined Adaptive Filter with LMS-Based Algorithms

Božo Krstajić, LJubiša Stanković, and Zdravko Uskoković

Abstract: A combined adaptive filter is proposed. It consists of parallel LMS-based adaptive FIR filters and an algorithm for choosing the better among them. As a criterion for comparison of the considered algorithms in the proposed filter, we take the ratio between the bias and the variance of the weighting coefficients. Simulation results confirm the advantages of the proposed adaptive filter.

Keywords: Adaptive filter, LMS algorithm, Combined algorithm, Bias and variance trade-off

1. Introduction

Adaptive filters have been applied in signal processing and control, as well as in many practical problems [1, 2]. The performance of an adaptive filter depends mainly on the algorithm used for updating the filter weighting coefficients. The most commonly used adaptive systems are those based on the Least Mean Square (LMS) adaptive algorithm and its modifications (LMS-based algorithms) [1]-[3]. The LMS is simple to implement and robust in a number of applications [1]. However, since it does not always converge in an acceptable manner, there have been many attempts to improve its performance by appropriate modifications: the sign algorithm (SA) [8], the geometric mean LMS (GLMS) [5], and the variable step-size LMS (VS LMS) [6, 7].

Each of the LMS-based algorithms has at least one parameter that should be defined prior to the adaptation procedure (the step for the LMS and SA; the step and the smoothing coefficients for the GLMS; various parameters affecting the step for the VS LMS). These parameters crucially influence the filter output during the two adaptation phases: transient and steady state. The choice of these parameters is mostly based on some kind of trade-off between the quality of the algorithm performance in the mentioned adaptation phases.

We propose a possible approach for improving the performance of LMS-based adaptive filters. Namely, we make a combination of several LMS-based FIR filters with different parameters, and provide a criterion for choosing the most suitable algorithm in different adaptation phases. This method may be applied to all the LMS-based algorithms, although we here consider only several of them.

The paper is organized as follows. An overview of the considered LMS-based algorithms is given in Section 2. Section 3 proposes the criterion for evaluation and combination of adaptive algorithms. Simulation results are presented in Section 4.

2. LMS based algorithms

Let us define the input signal vector X_k = [x(k) x(k-1) ... x(k-N+1)]^T and the vector of weighting coefficients as W_k = [w_1(k) w_2(k) ... w_N(k)]^T. The considered LMS-based algorithms update these coefficients as follows: the LMS, W_{k+1} = W_k + 2μ e_k X_k, where e_k = d_k - W_k^T X_k is the error; the GLMS, in which the instantaneous term e_k X_k is replaced by a geometrically smoothed combination of the current and past terms, weighted by (1 - a) and controlled by a smoothing coefficient a, 0 < a < 1; and the SA, W_{k+1} = W_k + 2μ X_k sign(e_k), [1, 2, 5, 8]. The VS LMS has the same form as the LMS, but the step μ(k) is changed during the adaptation [6, 7].

The considered filtering problem consists in trying to adjust the set of weighting coefficients so that the system output, y_k = W_k^T X_k, tracks a reference signal, assumed as d_k = W_k*^T X_k + n_k, where n_k is a zero-mean Gaussian noise with the variance σ_n², and W* is the optimal weight vector (Wiener vector). Two cases will be considered: W_k* = W* is a constant (stationary case) and W_k* is time-varying (nonstationary case). In the nonstationary case the unknown system parameters (i.e. the optimal vector W_k*) are time variant. It is often assumed that the variation of W_k* may be modeled as W_{k+1}* = W_k* + Z_k, where Z_k is a zero-mean random perturbation, independent of X_k and n_k, with the autocorrelation matrix G = E[Z_k Z_k^T] = σ_Z² I. Note that the analysis for the stationary case directly follows for σ_Z² = 0.
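
To make the above recursions concrete, the following minimal NumPy sketch implements the LMS and SA updates for the system identification model d_k = W_k*^T X_k + n_k; the function names, the buffering of X_k, and the chosen constants are illustrative assumptions rather than part of the original paper.

```python
import numpy as np

def lms_step(w, x_vec, d, mu):
    """One LMS iteration: W_{k+1} = W_k + 2*mu*e_k*X_k; returns (new weights, error)."""
    e = d - w @ x_vec
    return w + 2 * mu * e * x_vec, e

def sa_step(w, x_vec, d, mu):
    """One sign algorithm (SA) iteration: W_{k+1} = W_k + 2*mu*X_k*sign(e_k)."""
    e = d - w @ x_vec
    return w + 2 * mu * x_vec * np.sign(e), e

# Illustrative run against a constant (stationary) Wiener vector W*.
rng = np.random.default_rng(0)
N, n_iter, mu, sigma_n = 4, 1000, 0.1, 0.1
w_opt = rng.standard_normal(N)      # unknown system coefficients W*
w = np.zeros(N)
x_vec = np.zeros(N)                 # X_k = [x(k), x(k-1), ..., x(k-N+1)]
for k in range(n_iter):
    x_vec = np.roll(x_vec, 1)
    x_vec[0] = rng.standard_normal()
    d = w_opt @ x_vec + sigma_n * rng.standard_normal()   # d_k = W*^T X_k + n_k
    w, _ = lms_step(w, x_vec, d, mu)
print("final misalignment norm:", np.linalg.norm(w - w_opt))
```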

The weighting coefficient vector converges to the Wiener one if the condition from [1, 2] is satisfied. Define the weighting coefficient misalignment [1-3], V_k = W_k - W_k*. It is due to both the effect of gradient noise (weighting coefficient variations around the average value) and the weighting vector lag (difference between the average and the optimal value) [3]. It can be expressed as:

V_k = [W_k - E(W_k)] + [E(W_k) - W_k*].    (2)

According to (2), the i-th element of V_k is:

v_i(k) = [E(w_i(k)) - w_i*(k)] + [w_i(k) - E(w_i(k))] = bias(w_i(k)) + ρ_i(k),

where bias(w_i(k)) is the weighting coefficient bias and ρ_i(k) is a zero-mean random variable with the variance σ². The variance depends on the type of LMS-based algorithm, as well as on the external noise variance σ_n². Thus, if the noise variance is constant or slowly varying, σ² is time invariant for a particular LMS-based algorithm. In that sense, in the analysis that follows we will assume that σ² depends only on the algorithm type, i.e. on its parameters.

An important performance measure for an adaptive filter is the mean square deviation (MSD) of its weighting coefficients. For the adaptive filters it is given by [3]: MSD = lim_{k→∞} E[V_k^T V_k].
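
In simulations, the expectation in the MSD is usually approximated by averaging over independent runs; the helper below is a sketch of such a Monte Carlo estimate, with an assumed array layout (runs × iterations × N) that is not prescribed by the paper.

```python
import numpy as np

def average_msd(w_hist, w_opt_hist):
    """Monte Carlo estimate of MSD = E[V_k^T V_k], then averaged over iterations (AMSD-style).

    w_hist:      array of shape (runs, iterations, N) with the adaptive weights W_k
    w_opt_hist:  array of shape (iterations, N) with the optimal weights W_k*
    """
    v = w_hist - w_opt_hist[None, :, :]            # misalignment V_k for every run
    msd = np.mean(np.sum(v ** 2, axis=2), axis=0)  # E[V_k^T V_k] estimated per iteration
    return msd, float(np.mean(msd))                # per-iteration curve and its time average
```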

3. Combined adaptive filter

The basic idea of the combined adaptive filter lies in the parallel implementation of two or more adaptive LMS-based algorithms, with the choice of the best among them in each iteration [9]. The choice of the most appropriate algorithm, in each iteration, reduces to the choice of the best value for the weighting coefficients. The best weighting coefficient is the one that is, at a given instant, the closest to the corresponding value of the Wiener vector.

Let w_i(k, q) be the i-th weighting coefficient of an LMS-based algorithm with the chosen parameter q at an instant k. Note that one may now treat all the algorithms in a unified way (LMS: q = μ; GLMS: q = a; SA: q = μ). The behavior of an LMS-based algorithm depends crucially on q. In each iteration there is an optimal value q_opt, producing the best performance of the adaptive algorithm. Analyze now a combined adaptive filter, with several LMS-based algorithms of the same type, but with a different parameter q.

The weighting coefficients are random variables distributed around w_i*(k), with bias(w_i(k, q)) and the variance σ_q², related by [4, 9]:

|w_i(k, q) - w_i*(k) - bias(w_i(k, q))| ≤ κσ_q,    (4)

where (4) holds with the probability P(κ), dependent on κ. For example, for κ = 2 and a Gaussian distribution, P(κ) = 0.95 (two sigma rule).

Define the confidence intervals for w_i(k, q) [4, 9]:

D_i(k) = [w_i(k, q) - 2κσ_q, w_i(k, q) + 2κσ_q].    (5)

Then, from (4) and (5) we conclude that, as long as bias(w_i(k, q)) ≤ κσ_q, the optimal coefficient w_i*(k) ∈ D_i(k), independently of q. This means that, for a small bias, the confidence intervals of the same LMS-based algorithm, for different q's, intersect. When, on the other hand, the bias becomes large, the central positions of the intervals for different q's are far apart, and they do not intersect.
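
The intersection test implied by (4) and (5) can be written as a one-line check; the sketch below assumes the interval half-width 2κσ_q from (5), and the function name and argument layout are illustrative.

```python
def intervals_intersect(w_a, sigma_a, w_b, sigma_b, kappa=2.0):
    """True if the confidence intervals D_i(k) of two estimates overlap.

    With D_i(k) = [w - 2*kappa*sigma, w + 2*kappa*sigma] as in (5), the intervals
    intersect exactly when |w_a - w_b| <= 2*kappa*(sigma_a + sigma_b).
    """
    return abs(w_a - w_b) <= 2.0 * kappa * (sigma_a + sigma_b)
```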

Since we do not have a priori information about bias(w_i(k, q)), we will use a specific statistical approach to get the criterion for the choice of the adaptive algorithm, i.e. for the value of q. The criterion follows from the trade-off condition that the bias and the variance are of the same order of magnitude, i.e. bias(w_i(k, q)) ≅ κσ_q [4].

The proposed combined algorithm (CA) can now be summarized in the following steps:

Step 1. Calculate w_i(k, q) for the algorithms with different q's from the predefined set Q = {q_1, q_2, ...}.

Step 2. Estimate the variance σ_q² for each considered algorithm.

Step 3. Check if the D_i(k) intersect for the considered algorithms. Start from the algorithm with the largest value of the variance and go toward the ones with smaller variances. According to (4), (5) and the trade-off criterion, this check reduces to checking whether

|w_i(k, q_m) - w_i(k, q_l)| ≤ 2κ(σ_{q_m} + σ_{q_l})    (6)

is satisfied, where q_m, q_l ∈ Q, and the following relation holds: σ_{q_m}² ≥ σ_{q_h}² ≥ σ_{q_l}², ∀ q_h ∈ Q. If no D_i(k) intersect (large bias), choose the algorithm with the largest value of the variance. If the D_i(k) intersect, the bias is already small, so check a new pair of weighting coefficients or, if that was the last pair, just choose the algorithm with the smallest variance. The first two intervals that do not intersect mean that the proposed trade-off criterion is achieved, and the algorithm with the larger variance is chosen.

Step 4. Go to the next instant of time.
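
The following sketch is one possible reading of Steps 1-3 for a single coefficient, assuming the parallel algorithms have already produced their weights w_i(k, q) and variance estimates σ_q for the current instant; the names and data layout are illustrative, not taken from the paper.

```python
def choose_algorithm(weights, sigmas, kappa=2.0):
    """Per-coefficient decision of Steps 1-3: return the index of the chosen algorithm.

    weights, sigmas: one entry per algorithm in Q, i.e. w_i(k, q) and its sigma_q.
    The scan starts from the largest-variance algorithm and moves toward smaller
    variances; the first pair of non-intersecting intervals (criterion (6) violated)
    stops the scan and the larger-variance member of that pair is kept, otherwise
    the smallest-variance algorithm is chosen.
    """
    order = sorted(range(len(sigmas)), key=lambda i: sigmas[i], reverse=True)
    for a, b in zip(order, order[1:]):
        # intervals (5) do not intersect when |w_a - w_b| > 2*kappa*(sigma_a + sigma_b)
        if abs(weights[a] - weights[b]) > 2.0 * kappa * (sigmas[a] + sigmas[b]):
            return a   # large bias: keep the algorithm with the larger variance
    return order[-1]   # all intervals intersect: keep the smallest variance
```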

The smallest number of elements of the set Q is L = 2. In that case, one of the q's should provide good tracking of rapid variations (the largest variance), while the other should provide a small variance in the steady state. Observe that by adding a few more q's between these two extremes, one may slightly improve the transient behavior of the algorithm.

Note that the only unknown values in (6) are the variances. In our simulations we estimate σ_q as in [4]:

σ_q = median{ |w_i(k) - w_i(k-1)| : k = 1, 2, ..., L } / (0.6745·√2).    (7)

An alternative way is to estimate σ_q² directly from the external noise variance σ_n² and the input signal samples x(i), x(i) ≠ 0. Expressions relating σ_q² and σ_n² in the steady state, for different types of LMS-based algorithms, are known from the literature. For the standard LMS algorithm in the steady state, σ_q² and σ_n² are related by σ_q² = q σ_n² [3]. Note that any other estimation of σ_q² is valid for the proposed filter.
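
Under the reading of (7) given above, the estimate can be computed as in the sketch below; the constant 0.6745 relates the median absolute deviation of a Gaussian variable to its standard deviation, and the √2 accounts for the differencing of successive coefficients.

```python
import numpy as np

def estimate_sigma_q(w_track):
    """Robust estimate of sigma_q from a short track of one coefficient, cf. (7).

    w_track: 1-D array with w_i(k) over the first iterations of the adaptation.
    """
    diffs = np.abs(np.diff(w_track))                    # |w_i(k) - w_i(k-1)|
    return np.median(diffs) / (0.6745 * np.sqrt(2.0))   # median-based standard deviation
```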

The complexity of the CA depends on the constituent algorithms (Step 1) and on the decision algorithm (Step 3). The calculation of the weighting coefficients for the parallel algorithms does not increase the calculation time, since it is performed by a parallel hardware realization, thus increasing the hardware requirements. The variance estimations (Step 2) contribute negligibly to the increase of the algorithm complexity, because they are performed at the very beginning of the adaptation and they use separate hardware realizations. A simple analysis shows that the CA increases the number of operations by, at most, N(L-1) additions and N(L-1) IF decisions, and needs some additional hardware with respect to the constituent algorithms.

3.1 Illustration of the combined adaptive filter

Consider a system identification by the combination of two LMS algorithms with different steps. Here the parameter q is μ, i.e. Q = {μ_1, μ_2} = {μ, μ/10}. The unknown system has four time-invariant coefficients, and the FIR filters are with N = 4. We give the average mean square deviation (AMSD) for both individual algorithms, as well as for their combination, Fig. 1(a). The results are obtained by averaging over 100 independent runs (the Monte Carlo method), with μ = 0.1. The reference d_k is corrupted by a zero-mean uncorrelated Gaussian noise with σ_n² = 0.01 and SNR = 15 dB, and κ is 1.75. In the first 30 iterations the variance was estimated according to (7), and the CA picked the weighting coefficients calculated by the LMS with μ.

As presented in Fig. 1(a), the CA first uses the LMS with μ and then, in the steady state, the LMS with μ/10. Note the region between the 200th and 400th iteration, where the algorithm can take the LMS with either step size in different realizations. Here, the performance of the CA would be improved by increasing the number of parallel LMS algorithms with steps between these two extremes. Observe also that, in the steady state, the CA does not ideally pick up the LMS with the smaller step. The reason is the statistical nature of the approach.

The combined adaptive filter achieves even better performance if the individual algorithms, instead of starting an iteration with the coefficient values taken from their previous iteration, take the ones chosen by the CA. Namely, if the CA chooses, in the k-th iteration, the weighting coefficient vector W_k^{CA}, then each individual algorithm calculates its weighting coefficients in the (k+1)-th iteration according to:

W_{k+1} = W_k^{CA} + 2μ e_k X_k.    (9)
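
A possible way to wire modification (9) into the parallel structure is sketched below: every constituent LMS restarts each iteration from the CA-chosen vector instead of its own previous state. The function signature and the externally supplied choose callback are illustrative assumptions.

```python
def combined_iteration(w_ca, x_vec, d, steps, choose):
    """One iteration of the CA with modification (9).

    w_ca:   weight vector chosen by the CA in the previous iteration (NumPy array)
    x_vec:  current input vector X_k
    d:      current reference sample d_k
    steps:  step sizes mu of the constituent LMS filters (the set Q)
    choose: callable that implements Step 3 and returns the selected candidate
    """
    e = d - w_ca @ x_vec                                        # error computed from the CA weights
    candidates = [w_ca + 2.0 * mu * e * x_vec for mu in steps]  # each filter restarts from w_ca, cf. (9)
    return choose(candidates)
```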

Fig. 1. Average MSD for the considered algorithms.
Fig. 2. Average MSD for the considered algorithms.

Fig. 1(b) shows this improvement, applied to the previous example. In order to clearly compare the obtained results, for each simulation we calculated the AMSD. For the first LMS (μ) it was AMSD = 0.02865, for the second LMS (μ/10) it was AMSD = 0.20723, for the CA (CoLMS) it was AMSD = 0.02720, and for the CA with modification (9) it was AMSD = 0.02371.

4. Simulation results

The proposed combined adaptive filter with various types of LMS-based algorithms is implemented for stationary and nonstationary cases in a system identification setup. The performance of the combined filter is compared with that of the individual algorithms that compose the particular combination. In all simulations presented here, the reference d_k is corrupted by a zero-mean uncorrelated Gaussian noise with σ_n² = 0.01 and SNR = 15 dB. The results are obtained by averaging over 100 independent runs, with N = 4, as in the previous section.

(a) Time-varying optimal weighting vector: The proposed idea may be applied to the SA algorithms in a nonstationary case. In the simulation, the combined filter is composed of three SA adaptive filters with different steps, i.e. Q = {μ, μ/2, μ/8}; μ = 0.2. The optimal vector is generated according to the presented model with σ_Z² = 0.001 and with κ = 2. In the first 30 iterations the variance was estimated according to (7), and the CA takes the coefficients of the SA with step μ (SA1). Figure 2(a) shows the AMSD characteristics for each algorithm. In the steady state the CA does not ideally follow the SA3 with μ/8, because of the nonstationary nature of the problem and a relatively small difference between the coefficient variances of SA2 and SA3. However, this does not affect the overall performance of the proposed algorithm. The AMSD for each considered algorithm was: AMSD = 0.4129 (SA1, μ), AMSD = 0.4257 (SA2, μ/2), AMSD = 1.6011 (SA3, μ/8), and AMSD = 0.2696 (Comb).

(b) Comparison with the VS LMS algorithm [6]: In this simulation we take the improved CA (9) from 3.1 and compare its performance with the VS LMS algorithm [6] in the case of abrupt changes of the optimal vector. Since the considered VS LMS algorithm [6] updates its step size for each weighting coefficient individually, the comparison of these two algorithms is meaningful. All the parameters for the improved CA are the same as in 3.1. For the VS LMS algorithm [6], the relevant parameter values are the counter of sign change m0 = 11 and the counter of sign continuity m1 = 7. Figure 2(b) shows the AMSD for the compared algorithms, where one can observe the favorable properties of the CA, especially after the abrupt changes. Note that the abrupt changes are generated by multiplying all the system coefficients by -1 at the 2000th iteration (Fig. 2(b)). The AMSD for the VS LMS was AMSD = 0.0425, while its value for the CA (CoLMS) was AMSD = 0.0323.

For a complete comparison of these algorithms we now consider their calculation complexity, expressed by the respective increase in the number of operations with respect to the LMS algorithm. The CA increases the number of required operations by N additions and N IF decisions. For the VS LMS algorithm, the respective increase is: 3N multiplications, N additions, and at least 2N IF decisions. These values show the advantage of the CA with respect to the calculation complexity.

5. Conclusion

A combination of LMS-based algorithms is proposed, which results in an adaptive system that takes the favorable properties of these algorithms in tracking parameter variations. In the course of the adaptation procedure it chooses the better algorithm, all the way to the steady state, when it takes the algorithm with the smallest variance of the weighting coefficient deviations from the optimal value.

Acknowledgement. This work is supported by the Volkswagen Stiftung, Federal Republic of Germany.

References

[1] Widrow, B.; Stearns, S.: Adaptive Signal Processing. Prentice-Hall, Inc., Englewood Cliffs, N.J. 07632.
[2] Alexander, S. T.: Adaptive Signal Processing - Theory and Applications. Springer-Verlag, New York 1986.
[3] Widrow, B.; McCool, J. M.; Larimore, M. G.; Johnson, C. R.: Stationary and Nonstationary Learning Characteristics of the LMS Adaptive Filter. Proc. IEEE 64 (1976), 1151-1161.
[4] Stanković, LJ.; Katkovnik, V.: Algorithm for the Instantaneous Frequency Estimation Using Time-Frequency Distributions with Variable Window Width. IEEE SP Letters 5 (1998), 224-227.
[5] Krstajić, B.; Uskoković, Z.; Stanković, LJ.: GLMS Adaptive Algorithm in Linear Prediction. Proc. CCECE '97, pp. 114-117, Canada, 1997.
[6] Harris, R. W.; Chabries, D. M.; Bishop, F. A.: A Variable Step (VS) Adaptive Filter Algorithm. IEEE Trans. ASSP 34 (1986), 309-316.
[7] Aboulnasr, T.; Mayyas, K.: A Robust Variable Step-Size LMS-Type Algorithm: Analysis and Simulations. IEEE Trans. SP 45 (1997), 631-639.
[8] Mathews, V. J.; Cho, S. H.: Improved Convergence Analysis of Stochastic Gradient Adaptive Filters Using the Sign Algorithm. IEEE Trans. ASSP 35 (1987), 450-454.
[9] Krstajić, B.; Stanković, LJ.; Uskoković, Z.; Djurović, I.: Combined Adaptive System for Identification of Unknown Systems with Varying Parameters in a Noisy Environment. Proc. IEEE ICECS '99, Paphos, Cyprus, Sept. 1999.

Chinese Translation

Combined Adaptive Filter Based on the LMS Algorithm

Abstract: A combined adaptive filter is proposed. It consists of parallel LMS-based adaptive FIR filters and an algorithm for choosing the better among them. As the criterion for comparing the filter algorithms under consideration, we take the bias and the weighting ...
