Chapter 4: Coding and Time-Division Multiple Access

4.1 Introduction

4.2 Sampling
Sampling converts an analog information-bearing signal m(t) to a sequence of samples spaced uniformly in time, without significant loss of information.

Sampling Theorem: A band-limited signal of finite energy that has no frequency components greater than W hertz is completely described by specifying the values of the signal at instants of time separated by 1/(2W) seconds, and is completely recovered from a knowledge of its samples taken at the rate of 2W samples per second.

Aliasing: high frequencies in the spectrum of the information-bearing signal take on the identity of lower frequencies in the spectrum of the sampled version of the signal. The sampling rate of 2W samples per second for a signal of bandwidth W hertz is called the Nyquist rate.

Corrective measures against aliasing:
1. A low-pass antialiasing filter is used to attenuate the high-frequency components of the signal m(t) before sampling.
2. The output of the low-pass filter is sampled at a rate slightly higher than the Nyquist rate.

FIGURE 4.2 (a) Spectrum of a message signal band-limited to -W <= f <= W. (b) Spectrum of the corresponding sampled version of the signal for a sampling rate fs > 2W.
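The aliasing phenomenon can be checked numerically. In this minimal sketch (the 7 Hz tone, 10 Hz sampling rate, and 3 Hz alias are illustrative values, not taken from the text), a tone above half the sampling rate produces exactly the same samples as its lower-frequency alias, while sampling above the Nyquist rate keeps the two tones distinguishable.

```python
import math

def sample(f_hz, fs_hz, n_samples):
    """Sample cos(2*pi*f*t) at rate fs, returning the sample sequence."""
    return [math.cos(2 * math.pi * f_hz * n / fs_hz) for n in range(n_samples)]

# A 7 Hz tone sampled at fs = 10 Hz (below its Nyquist rate of 14 Hz)
# yields exactly the same samples as a 3 Hz tone (|7 - 10| = 3 Hz alias):
undersampled = sample(7.0, 10.0, 20)
alias = sample(3.0, 10.0, 20)
assert all(abs(a - b) < 1e-9 for a, b in zip(undersampled, alias))

# Sampled at fs = 20 Hz (above the Nyquist rate), the two tones differ:
assert any(abs(a - b) > 0.1
           for a, b in zip(sample(7.0, 20.0, 20), sample(3.0, 20.0, 20)))
```

This is why the antialiasing filter must act before the sampler: once the 7 Hz and 3 Hz sample sequences coincide, no post-processing can tell them apart.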
4.5.2 Multipulse Excited LPC
Encoding steps:
1. The free parameters of the synthesis filter are computed, with actual speech samples used as input.
2. The optimum excitation for the synthesis filter is computed by minimizing the perceptually weighted error with the loop closed.
The decoder, located in the receiver, consists of an excitation generator and a synthesis filter; it uses the received signal to produce a synthetic version of the original speech signal.

4.5.3 Code-Excited LPC
FIGURE 4.5 Encoder of the code-excited linear predictive codec (CELP): an excitation codebook of size N (codes #1 through #N), a gain factor G, a synthesis filter, and a loop that minimizes the perceptually weighted error between the synthetic and original speech. The transmitted signal consists of the address of the code selected from the codebook, the quantized gain factor G, and the quantized filter parameters.
CELP uses a predetermined codebook of stochastic (zero-mean white Gaussian) vectors as the source of excitation for the synthesis filter.
CELP is capable of producing good-quality speech at bit rates below 8 kb/s, at the cost of intensive computational complexity.

4.6 Error-Control Coding
1. Forward error-correction (FEC) codes, classified into block codes and convolutional codes, rely on the controlled use of redundancy in the transmitted code word for the detection and correction of errors.
2. Automatic-repeat request (ARQ)
schemes use redundancy merely for the purpose of error detection.

4.6.1 Cyclic Redundancy Check Codes
CRC codes provide a powerful method of error detection for use in ARQ strategies.
Cyclic codes: any cyclic shift of a code word in the code is also a code word. Cyclic codes are well suited for error detection: they can be designed to detect many combinations of errors, and the implementation of the encoding and error-detecting circuits is very simple.
Binary (n, k) CRC codes are capable of detecting the following error patterns:
1. All error bursts of length n - k or less.
2. A fraction of error bursts of length equal to or greater than n - k + 1; the fraction equals 1 - 2^-(n-k-1).
3. All combinations of d_min - 1 (or fewer) errors.
4. All error patterns with an odd number of errors, if the generator polynomial for the code has an even number of nonzero coefficients.
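The CRC mechanics can be sketched in a few lines. In this minimal example (the generator g(x) = x^3 + x + 1 and the 6-bit message are arbitrary illustrative choices, not values from the text), the transmitter appends the remainder of the shifted message divided by the generator over GF(2), and the receiver flags an error whenever the received word leaves a nonzero remainder.

```python
def gf2_remainder(bits, gen):
    """Remainder of the polynomial `bits` divided by `gen`, arithmetic over GF(2)."""
    reg = list(bits)
    for i in range(len(reg) - len(gen) + 1):
        if reg[i]:                       # leading coefficient set: subtract (XOR) gen
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return reg[-(len(gen) - 1):]         # last n-k positions hold the remainder

def crc_encode(msg, gen):
    """Append the n-k CRC check bits: remainder of msg * x^(n-k) mod gen."""
    n_k = len(gen) - 1
    return msg + gf2_remainder(msg + [0] * n_k, gen)

gen = [1, 0, 1, 1]                       # g(x) = x^3 + x + 1
codeword = crc_encode([1, 0, 1, 1, 0, 1], gen)
assert gf2_remainder(codeword, gen) == [0, 0, 0]   # valid word: zero remainder
bad = codeword[:]
bad[2] ^= 1                              # single channel error
assert gf2_remainder(bad, gen) != [0, 0, 0]        # error detected
```

Note the receiver only detects the error; in an ARQ scheme it would then request a retransmission rather than attempt correction.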
4.7 Convolutional Codes
A convolutional coder generates redundant bits by using modulo-2 convolutions. The encoder of a binary convolutional code with rate 1/n, measured in bits per symbol, is a finite-state machine (FSM).

Code rate of the convolutional coder:
r = L / (n(L + M)) bits per symbol;  for L >> M,  r ~ 1/n bits per symbol   (4.10)
where L is the length of the message sequence and M is the encoder memory.

The constraint length is defined as the number of shifts over which a single message bit can influence the encoder output.

Impulse response: the response of the path connecting an output of the convolutional encoder to its input, for a symbol 1 applied to that input.
Generator polynomial: the unit-delay transform of the impulse response,
g^(i)(D) = g0^(i) + g1^(i) D + g2^(i) D^2 + ... + gM^(i) D^M,  for i = 1, 2, ..., n   (4.12)

4.7.1 Trellis and State Diagrams of Convolutional Codes
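A minimal encoder sketch, assuming the widely used rate-1/2, constraint-length-3 code with octal generators (7, 5) — an illustrative choice, not a code specified in the text. Each generator tuple holds the coefficients (g0, ..., gM) of one generator polynomial from Eq. (4.12).

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder, constraint length K = 3.
    gens holds the generator-polynomial coefficients (g0, g1, ..., gM)
    for each output; the classic (7, 5) octal pair is the default.
    The message is tailed with M = K - 1 zeros to flush the registers."""
    M = len(gens[0]) - 1
    state = [0] * M                      # shift-register contents
    out = []
    for b in list(bits) + [0] * M:
        window = [b] + state             # current input plus past M inputs
        for g in gens:                   # one modulo-2 convolution per output
            out.append(sum(x & c for x, c in zip(window, g)) % 2)
        state = window[:-1]              # shift the register
    return out

encoded = conv_encode([1, 0, 1, 1])
print(encoded)   # symbol pairs: 11 10 00 01 01 11 (last two pairs from the tail)
```

With L = 4 message bits and M = 2 tail bits, 12 coded bits emerge, matching the rate L / (n(L + M)) = 4/12 of Eq. (4.10).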
4.8 Maximum-Likelihood Decoding of Convolutional Codes
Maximum-likelihood decoder: consider the special case of a memoryless binary symmetric channel, for which the conditional probability factors as
p(r | c) = PROD_{i=1..N} p(ri | ci)   (4.14)
The log-likelihood function for the convolutional decoder is therefore
log p(r | c) = SUM_{i=1..N} log p(ri | ci)   (4.15)
Let the transition probability be
p(ri | ci) = p if ri != ci;  1 - p if ri = ci   (4.16)
Then we may rewrite the log-likelihood function as
log p(r | c) = d log p + (N - d) log(1 - p) = -d log((1 - p)/p) + N log(1 - p)   (4.17)
where d is the Hamming distance between the received vector r and the code vector c. Since p < 1/2 and N log(1 - p) is a constant, we may restate the maximum-likelihood decoding rule for the binary symmetric channel as: choose the code vector c that minimizes the Hamming distance to the received vector r.

4.9 The Viterbi Algorithm
TABLE 4.4 Summary of the Viterbi Algorithm

FIGURE Labeling of trellis states and transitions. The bits output by the encoder on the transition from state q to state s are labeled c_{q,s}, and d_{q,s} denotes the corresponding bits at the input to the encoder for this transition. Note that d_{q,s} and c_{q,s} are predetermined by the definition of the code.

Initialization:
Label the leftmost (all-zero) state of the trellis, and define the survivor path to each state at time 0 to be the empty set.

Computation step j + 1:
Let j = 0, 1, 2, ..., and suppose that at the previous step j we have identified all survivor paths and stored the survivor path and its metric for each state. The survivor path to state s is the path with the smallest cumulative path metric to s; that is,
u_{j+1}(s) = min over q of [ u_j(q) + H_{j+1}(q, s) ]
The smallest cumulative path metric is determined from the cumulative path metrics at the previous step and the branch metric for the current step. The branch metric at step j + 1 from state q to state s is
H_{j+1}(q, s) = Hamming distance between the received vector in symbol period j + 1 and the symbol c_{q,s} on the trellis branch from state q to state s.
If there is no trellis branch from state q to s, then H_{j+1}(q, s) = infinity. For each state, the survivor path and its metric are stored.

Final step:
Continue the computation until the algorithm completes its forward search through the trellis and reaches the termination node (i.e., the all-zero state), at which time it makes a decision on the maximum-likelihood path. Then, as with a block decoder, the sequence of symbols associated with that path is released to the destination as the decoded version of the received sequence. In this sense, it is more correct to refer to the Viterbi algorithm as a maximum-likelihood sequence estimator.
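The steps of Table 4.4 can be sketched as a hard-decision decoder. The rate-1/2 code with octal generators (7, 5) and the two injected channel errors below are illustrative choices, not from the text: branch metrics are Hamming distances, each state keeps the survivor with the smallest cumulative metric, and the search terminates in the all-zero state.

```python
def viterbi_decode(received, gens=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoder for a rate-1/n convolutional code with
    the given generator polynomials, assuming the encoder starts and ends
    in the all-zero state (tail bits included in `received`)."""
    n, M = len(gens), len(gens[0]) - 1
    n_states = 1 << M
    INF = float("inf")

    def branch(state, bit):
        """Encoder transition: returns (next_state, output symbol)."""
        window = [bit] + [(state >> i) & 1 for i in range(M)]
        out = tuple(sum(w & g for w, g in zip(window, gp)) % 2 for gp in gens)
        return ((state << 1) | bit) & (n_states - 1), out

    metric = [0.0] + [INF] * (n_states - 1)     # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for j in range(len(received) // n):
        r = received[j * n:(j + 1) * n]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for q in range(n_states):
            if metric[q] == INF:
                continue                        # no trellis path reaches q yet
            for bit in (0, 1):
                s, out = branch(q, bit)
                h = sum(a != b for a, b in zip(r, out))   # Hamming branch metric
                if metric[q] + h < new_metric[s]:         # survivor selection
                    new_metric[s] = metric[q] + h
                    new_paths[s] = paths[q] + [bit]
        metric, paths = new_metric, new_paths
    return paths[0][:-M]        # terminate in state 0; drop the M tail bits

# Two channel errors in the 12-bit word for message 1 0 1 1 are corrected
# (the (7,5) code has free distance 5, so any 2 errors are within its power):
tx = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
rx = tx[:]
rx[0] ^= 1
rx[7] ^= 1
assert viterbi_decode(rx) == [1, 0, 1, 1]
```

Because the branch outputs c_{q,s} are predetermined by the code, the decoder needs only the received sequence; the survivor bookkeeping is exactly the two per-step actions listed in Table 4.4.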
4.9.1 Modifications of the Viterbi Algorithm
When the received sequence is very long, the storage requirement of the Viterbi algorithm becomes too high. A decoding window of acceptable length l is therefore specified, and the Viterbi algorithm operates on a frame of the received sequence, always stopping after l steps. A decision is made on the best path, and the symbol associated with the first branch on that path is released to the user. The decoding window is then moved forward one time interval, and a decision on the next code frame is made. Decoding in this way is no longer truly maximum likelihood.

4.10 Interleaving
Minimizing the information to be transmitted matters for two reasons: reducing the amount of data means less power has to be transmitted, and it reduces the spectral (radio-frequency) resources required for satisfactory performance.
Interleaving obtains the maximum benefit from FEC coding by resolving two conflicting phenomena: a wireless channel that produces bursts of correlated bit errors, and a convolutional decoder that cannot handle error bursts. Interleaving needs no exact statistical characterization of the wireless channel, only its coherence time.
Coherence time for fast fading:
T_coherence ~ 0.3 / (2 fD)   (4.19)
where fD is the maximum Doppler frequency.
The interleaver randomizes the order of the encoded bits after the channel encoder in the transmitter.
The deinterleaver undoes the randomization before the data reach the channel decoder in the receiver.

4.10.1 Block Interleaving
FIGURE 4.10 Block interleaver structure: (a) data read in, by columns; (b) data read out, by rows.

4.10.2 Convolutional Interleaving
FIGURE 4.12 (a) Convolutional interleaver. (b) Convolutional deinterleaver.

4.10.3 Random Interleaving
Random interleaving uses a two-step algorithm.
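A minimal sketch of block interleaving (the 3 x 4 array and burst length are illustrative; this version writes by rows and reads by columns, the transpose of the Figure 4.10 convention, which is equivalent): a burst of consecutive channel errors is dispersed so that, after deinterleaving, the decoder sees the errors well separated.

```python
def interleave(bits, rows, cols):
    """Block interleaver: write the data into a rows x cols array row by row,
    then read it out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column by column, read row by row."""
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    k = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[k]
            k += 1
    return out

data = list(range(12))                 # stand-ins for encoded bits
sent = interleave(data, rows=3, cols=4)
received = sent[:]
for i in (4, 5, 6):                    # a burst of 3 consecutive channel errors
    received[i] = 'X'
restored = deinterleave(received, rows=3, cols=4)
# The burst lands at positions 2, 5 and 9 after deinterleaving, so the
# decoder never sees two damaged bits in a row:
assert [i for i, v in enumerate(restored) if v == 'X'] == [2, 5, 9]
```

The deeper the array (more rows), the further apart a burst is spread, at the cost of added delay; convolutional interleaving achieves a similar spread with roughly half the end-to-end delay.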
4.11 Noise Performance of Convolutional Codes
For low values of Eb/N0, the uncoded performance is better than the coded performance.
For a prescribed Eb/N0, the noise performance improves with increasing constraint length K, for both AWGN and fading channels.
For a prescribed constraint length K, the Eb/N0 must be increased for the fading channel to exhibit a noise performance comparable to that attainable with the corresponding AWGN channel.
With forward error-correction coding of sufficiently large constraint length K, a bit error rate of 2 x 10^-4 can be realized in the Rayleigh-fading channel at an Eb/N0 of 6.

4.12.1 Turbo Encoding
FIGURE 4.15 Illustration of the turbo encoding strategy: the information bits are transmitted directly as systematic bits; encoder 1 produces parity bits z1; a random interleaver permutes the information bits for encoder 2, which produces parity bits z2; a puncture map selects which parity bits are transmitted.
FIGURE 4.16 Recursive systematic convolutional code structure; the delay T is equal to the symbol duration.

4.12.2 Turbo Decoding
FIGURE 4.17 Block diagram of the turbo decoder: first decoder, interleaver, second decoder (fed the noisy parity bits), deinterleaver, and a hard limiter that delivers the decoded bit.
The turbo decoder uses a soft-input, soft-output (SISO) decoding algorithm.
The first decoding stage uses the MAP algorithm to produce a soft estimate of the systematic bit x(j), expressed as the equivalent log-likelihood ratio
L1(x(j)) = log [ Prob(x(j) = 1 | u, z1, L~2(x)) / Prob(x(j) = 0 | u, z1, L~2(x)) ]   (4.20)
where u denotes the noisy systematic bits, z1 the first set of noisy parity bits, and L~2(x) the extrinsic information fed back from the second stage.
The second decoding stage uses the MAP algorithm and the second set of parity bits z2 to produce a further refined estimate L2(x(j)).

The turbo decoder is a single-loop feedback system. To increase the independence of the inputs from one processing stage to the next, the turbo algorithm uses the concepts of intrinsic and extrinsic information:
Intrinsic information: the information inherent in a sample prior to a decoding operation.
Extrinsic information: the incremental information obtained through decoding.
The extrinsic information at the output of the second stage is
L~2(x) = L2(x) - L1(x)   (4.21)
and the extrinsic information supplied to the second stage by the first stage is
L~1(x) = L1(x) - L~2(x)   (4.22)
On the last iteration of the decoding process, a hard decision is made on the final soft estimate (the hard limiter of Figure 4.17) to deliver the decoded bit.
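The extrinsic-information bookkeeping of Eqs. (4.21) and (4.22) amounts to simple subtraction of log-likelihood ratios. The numbers below are made up purely to illustrate the arithmetic; in a real decoder the totals L1 and L2 would come from the two MAP stages.

```python
# Illustrative LLR bookkeeping for one turbo iteration (made-up values):
L1 = 2.4           # total LLR from decoder 1 for some bit x(j)
L2_ext_prev = 0.9  # extrinsic info decoder 1 had received from decoder 2

L1_ext = L1 - L2_ext_prev   # Eq. (4.22): pass on only the *new* information
L2 = 3.1                    # total LLR from decoder 2, using L1_ext and z2
L2_ext = L2 - L1            # Eq. (4.21): extrinsic output of stage 2

decoded_bit = 1 if L2 > 0 else 0   # hard limiter on the final iteration
```

Subtracting what the other stage already knew keeps the feedback loop from simply amplifying its own beliefs, which is what makes the iterations converge.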