




Undergraduate Thesis, Class of 2010

Title: Research on Encoding and Decoding Algorithms for Turbo Codes
School: School of Information Engineering
Major and Class: Electronic Information Engineering
Student Name:
Student Number:
Supervisor:
Supervisor Title: Professor
June 2, 2010

Abstract

In modern digital communication systems, channel coding is used to protect the system against noise and external interference, to lower the bit error rate, and to improve reliability. Because its performance approaches the Shannon theoretical limit, the Turbo code outperforms other codes in low signal-to-noise-ratio environments, and it has therefore been considered as one of the channel coding standards for the wireless link in several third-generation mobile communication proposals. This thesis discusses the basic principles of Turbo encoding and decoding, analyzes several commonly used encoding and decoding algorithms, and, on the basis of the encoder and decoder models presented, implements a computer simulation of the complete system in MATLAB and provides the reference design programs.

Keywords: recursive systematic convolutional code; Turbo code; soft-decision Viterbi decoding; interleaver

Contents

1 Introduction
2 Overview of Turbo Codes
  2.1 Introduction to Turbo codes
  2.2 Advantages and disadvantages of Turbo codes
  2.3 Development of Turbo codes
3 Turbo Encoding Principles
  3.1 Encoder structure
  3.2 Encoding principle
  3.3 Encoding algorithm
4 Turbo Decoding Principles
  4.1 Decoder structure
  4.2 Decoding principle
  4.3 The MAP (maximum a posteriori) algorithm
  4.4 The Log-MAP and Max-Log-MAP algorithms
  4.5 The SOVA algorithm
5 Performance Simulation of Turbo Codes
  5.1 Simulation software
  5.2 Simulation system construction
  5.3 Encoding subsystem
  5.4 Decoding subsystem
  5.5 Simulation results and analysis
Conclusion
Acknowledgements
References
Appendix

1 Introduction

In a digital communication system, coding is divided, according to its purpose, into source coding and channel coding. Source coding improves the efficiency of the digital signal and digitizes analog signals; channel coding lowers the error rate and improves the reliability of digital communication. When a digital signal is transmitted, additive noise, inter-symbol interference and other impairments all produce bit errors. To improve the interference resistance of the system one can increase the transmit power, reduce the noise of the receiving equipment, and choose suitable modulation and demodulation methods; in addition, channel coding techniques can be applied.

For a long time the coding community has sought forward error correction channel codes whose rate approaches the Shannon limit, whose error rate is small, and whose decoding complexity remains tolerable, and it has proposed iteratively decodable coding techniques including product codes, concatenated codes, multilevel codes and their generalizations. Building on the ideas of iterative decoding, soft-input soft-output decoding, recursive systematic convolutional codes and non-uniform interleaving, C. Berrou et al. first presented Turbo codes at the International Conference on Communications in 1993; "Turbo code" is short for parallel concatenation of recursive systematic convolutional codes. Simulation results show that on the AWGN channel the error-correcting performance of Turbo codes approaches the Shannon limit. Since then Turbo codes have become a research focus of the coding community and have begun to be applied in a variety of communication systems.

MATLAB integrates high-performance numerical computation with visualization and provides a large number of built-in functions, so it is widely used for analysis, simulation and design in scientific computing, control systems and information processing; thanks to its open architecture, its functionality can easily be extended. Its data analysis and processing capabilities are powerful, which makes the analysis, processing and visualization of signals very convenient. Simulink is the dynamic simulation tool provided with MATLAB: it builds computer models of dynamic systems by combining blocks, and its most prominent feature is its openness, since users can define their own blocks and block libraries through S-functions. This thesis first introduces the basic principles of Turbo encoding and decoding together with several well-studied algorithms, then builds a simulation model in MATLAB on this basis, and finally presents the simulation results.

2 Overview of Turbo Codes

2.1 Introduction to Turbo codes

Shannon's celebrated channel coding theorem states that every channel has a channel capacity C, and that for any information rate R < C there exists a code of rate R with which maximum-likelihood (ML) decoding can achieve an arbitrarily small error probability P. The theorem carries two implications: the first is that Shannon proved the existence of such codes by means of a random-coding argument.

Appendix: reference design programs (MATLAB)

function y = rsc_encode(g, x, terminated)
% encodes a block of data x (0/1) with a recursive systematic
% convolutional code with generator vectors in g, and
% returns the output in y (0/1).
% if terminated > 0, the trellis is perfectly terminated
% if terminated < 0, it is left unterminated

% determine the constraint length (K) and the amount of memory (m)
[n,K] = size(g);
m = K - 1;
if terminated > 0
   L_info = length(x);
   L_total = L_info + m;
else
   L_total = length(x);
   L_info = L_total - m;
end

% initialize the state vector
state = zeros(1,m);

% generate the codeword
for i = 1:L_total
   if terminated < 0 | (terminated > 0 & i <= L_info)
      d_k = x(1,i);
   elseif terminated > 0 & i > L_info
      % terminate the trellis: tail bits are generated from the feedback
      d_k = rem( g(1,2:K)*state', 2 );
   end
   a_k = rem( g(1,:)*[d_k state]', 2 );
   [output_bits, state] = encode_bit(g, a_k, state);
   % since systematic, first output is input bit
   output_bits(1,1) = d_k;
   y(n*(i-1)+1:n*i) = output_bits;
end
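The following short usage sketch is not part of the thesis appendix; the variable names info and coded are illustrative only. It shows how rsc_encode might be exercised with the default generator g = [1 1 1; 1 0 1] that the simulation script below also uses.

% illustrative example (not from the thesis appendix)
g = [1 1 1; 1 0 1];             % rate-1/2 RSC component code, constraint length K = 3
info = round(rand(1,10));       % ten random information bits
coded = rsc_encode(g, info, 1); % terminated > 0: m = 2 tail bits are appended
% coded holds 2*(10+2) = 24 bits, alternating systematic and parity bits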
function [output, state] = encode_bit(g, input, state)
% This function takes as an input a single bit to be encoded,
% as well as the coefficients of the generator polynomials and
% the current state vector.
% It returns as output n encoded data bits, where 1/n is the
% code rate.

% the rate is 1/n
% k is the constraint length
% m is the amount of memory
[n,k] = size(g);
m = k - 1;

% determine the next output bit
for i = 1:n
   output(i) = g(i,1)*input;
   for j = 2:k
      output(i) = xor(output(i), g(i,j)*state(j-1));
   end
end

state = [input, state(1:m-1)];

function L_all = logmapo(rec_s,g,L_a,ind_dec)
% Log-MAP algorithm using a straightforward method to compute the branch metrics;
% no approximation is used.
% Can be simplified to Max-Log-MAP by using the approximation ln(e^x+e^y) = max(x,y).
% Input: rec_s: scaled received bits.
%        rec_s = 0.5 * L_c * yk = ( 2 * a * rate * Eb/N0 ) * yk
%        g: code generator for the component RSC code, in binary matrix form.
%        L_a: a priori info. for the current decoder,
%             scrambled version of the extrinsic info. of the previous decoder.
%        ind_dec: index of the decoder, either 1 or 2.
%             Encoder 1 is assumed to be terminated, while encoder 2 is open.
%
% Output: L_all: log-likelihood ratio of the symbols (complete information).

% Total number of bits: info. + tail
L_total = length(rec_s)/2;
[n,K] = size(g);
m = K - 1;
nstates = 2^m;   % number of states in the trellis

% Set up the trellis
[next_out, next_state, last_out, last_state] = trellis(g);

Infty = 1e10;

% Initialization of Alpha
Alpha(1,1) = 0;
Alpha(1,2:nstates) = -Infty*ones(1,nstates-1);

% Initialization of Beta
if ind_dec==1
   Beta(L_total,1) = 0;
   Beta(L_total,2:nstates) = -Infty*ones(1,nstates-1);
elseif ind_dec==2
   Beta(L_total,1:nstates) = zeros(1,nstates);
else
   fprintf('ind_dec is limited to 1 and 2!\n');
end

% Trace forward, compute Alpha
for k = 2:L_total+1
   for state2 = 1:nstates
      gamma = -Infty*ones(1,nstates);
      gamma(last_state(state2,1)) = (-rec_s(2*k-3)+rec_s(2*k-2)*last_out(state2,2))...
          -log(1+exp(L_a(k-1)));
      gamma(last_state(state2,2)) = (rec_s(2*k-3)+rec_s(2*k-2)*last_out(state2,4))...
          +L_a(k-1)-log(1+exp(L_a(k-1)));
      if (sum(exp(gamma+Alpha(k-1,:))) < 1e-300)
         Alpha(k,state2) = -Infty;
      else
         Alpha(k,state2) = log( sum( exp( gamma+Alpha(k-1,:) ) ) );
      end
   end
   tempmax(k) = max(Alpha(k,:));
   Alpha(k,:) = Alpha(k,:) - tempmax(k);
end

% Trace backward, compute Beta
for k = L_total-1:-1:1
   for state1 = 1:nstates
      gamma = -Infty*ones(1,nstates);
      gamma(next_state(state1,1)) = (-rec_s(2*k+1)+rec_s(2*k+2)*next_out(state1,2))...
          -log(1+exp(L_a(k+1)));
      gamma(next_state(state1,2)) = (rec_s(2*k+1)+rec_s(2*k+2)*next_out(state1,4))...
          +L_a(k+1)-log(1+exp(L_a(k+1)));
      if (sum(exp(gamma+Beta(k+1,:))) < 1e-300)
         Beta(k,state1) = -Infty;
      else
         Beta(k,state1) = log( sum( exp( gamma+Beta(k+1,:) ) ) );
      end
   end
   Beta(k,:) = Beta(k,:) - tempmax(k+1);
end

% Compute the soft output, log-likelihood ratio of the symbols in the frame
for k = 1:L_total
   for state2 = 1:nstates
      gamma0 = (-rec_s(2*k-1)+rec_s(2*k)*last_out(state2,2))...
          -log(1+exp(L_a(k)));
      gamma1 = (rec_s(2*k-1)+rec_s(2*k)*last_out(state2,4))...
          +L_a(k)-log(1+exp(L_a(k)));
      temp0(state2) = exp( gamma0 + Alpha(k,last_state(state2,1)) + Beta(k,state2) );
      temp1(state2) = exp( gamma1 + Alpha(k,last_state(state2,2)) + Beta(k,state2) );
   end
   L_all(k) = log(sum(temp1)) - log(sum(temp0));
end

function L_all = sova0(rec_s, g, L_a, ind_dec)
% Soft Output Viterbi Algorithm (SOVA) in trace-back mode.
% Input: rec_s: scaled received bits, rec_s = 0.5 * L_c * yk
%        g: code generator for the component RSC code, in binary matrix form.
%        L_a: a priori info. for the current decoder.
%        ind_dec: index of the decoder, either 1 or 2.
%             Decoder 1 is assumed to be terminated, while decoder 2 is open.
% Output: L_all: log-likelihood ratio of the symbols.

% Frame size: info. + tail bits
L_total = length(L_a);
[n,K] = size(g);
m = K - 1;
nstates = 2^m;
Infty = 1e10;

% SOVA trace-back window: bit k is decided after the bits up to k+delta are processed
delta = 30;

% Set up the trellis
[next_out, next_state, last_out, last_state] = trellis(g);

% Initialize the path metrics to -Infty, except for the all-zero starting state
path_metric = -Infty*ones(nstates, L_total+1);
path_metric(1,1) = 0;

% Trace forward, compute the path metrics and record the surviving bits
for t = 1:L_total
   y = rec_s(2*t-1:2*t);
   for state = 1:nstates
      sym0 = last_out(state,1:2);   % branch outputs for input bit 0
      sym1 = last_out(state,3:4);   % branch outputs for input bit 1
      state0 = last_state(state,1);
      state1 = last_state(state,2);
      Mk0 = y*sym0' - L_a(t)/2 + path_metric(state0,t);
      Mk1 = y*sym1' + L_a(t)/2 + path_metric(state1,t);
      if Mk0>Mk1
         path_metric(state,t+1) = Mk0;
         Mdiff(state,t+1) = Mk0 - Mk1;
         prev_bit(state,t+1) = 0;
      else
         path_metric(state,t+1) = Mk1;
         Mdiff(state,t+1) = Mk1 - Mk0;
         prev_bit(state,t+1) = 1;
      end
   end
end

% For decoder 1, trace back from the all-zero state;
% for decoder 2, trace back from the most likely state
if ind_dec == 1
   mlstate(L_total+1) = 1;
else
   mlstate(L_total+1) = find( path_metric(:,L_total+1)==max(path_metric(:,L_total+1)) );
end

% Trace back to get the estimated bits and the most likely path
for t = L_total:-1:1
   est(t) = prev_bit(mlstate(t+1),t+1);
   mlstate(t) = last_state(mlstate(t+1), est(t)+1);
end

% Find the minimum metric difference that corresponds to a competing path with a different info. bit estimate.
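% For each position t, the loop below examines the competing path that merges
% with the maximum-likelihood path at step t+i (i = 0..delta). The competitor is
% traced back to step t; if its bit decision there differs from est(t), the metric
% difference Mdiff at the merging point bounds the reliability of est(t), and the
% soft output magnitude is the minimum such difference, signed by est(t).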
% Give the soft output
for t = 1:L_total
   llr = Infty;
   for i = 0:delta
      if t+i < L_total+1
         bit = 1 - est(t+i);
         temp_state = last_state(mlstate(t+i+1), bit+1);
         for j = i-1:-1:0
            bit = prev_bit(temp_state, t+j+1);
            temp_state = last_state(temp_state, bit+1);
         end
         if bit ~= est(t)
            llr = min( llr, Mdiff(mlstate(t+i+1), t+i+1) );
         end
      end
   end
   L_all(t) = (2*est(t) - 1) * llr;
end

function L_all = mapo(rec_s,g,L_a,ind_dec)
% MAP algorithm using a straightforward method to compute the branch metrics;
% no approximation is used.
% Input: rec_s: scaled received bits.
%        rec_s = 0.5 * L_c * yk = ( 2 * a * rate * Eb/N0 ) * yk
%        g: code generator for the component RSC code, in binary matrix form.
%        L_a: a priori info. for the current decoder,
%             scrambled version of the extrinsic info. of the previous decoder.
%        ind_dec: index of the decoder, either 1 or 2.
%             Encoder 1 is assumed to be terminated, while encoder 2 is open.
%
% Output: L_all: log-likelihood ratio of the symbols (complete information).

L_c = 2;

% Total number of bits: info. + tail
L_total = length(rec_s)/2;
[n,K] = size(g);
m = K - 1;
nstates = 2^m;   % number of states in the trellis

% Set up the trellis
[next_out, next_state, last_out, last_state] = trellis(g);

% Initialization of Alpha
Alpha(1,1) = 1;
Alpha(1,2:nstates) = zeros(1,nstates-1);

% Initialization of Beta
if ind_dec==1
   Beta(L_total,1) = 1;
   Beta(L_total,2:nstates) = zeros(1,nstates-1);
elseif ind_dec==2
   Beta(L_total,1:nstates) = ones(1,nstates)/nstates;
else
   fprintf('ind_dec is limited to 1 and 2!\n');
end

% Trace forward, compute Alpha
for k = 2:L_total+1
   for state2 = 1:nstates
      gamma = zeros(1,nstates);
      gamma(last_state(state2,1)) = (1/(1+exp(L_a(k-1))))*exp(-rec_s(2*k-3)+rec_s(2*k-2)*last_out(state2,2));   % uk = -1
      gamma(last_state(state2,2)) = (exp(L_a(k-1))/(1+exp(L_a(k-1))))*exp(rec_s(2*k-3)+rec_s(2*k-2)*last_out(state2,4));   % uk = +1
      Alpha(k,state2) = sum(gamma.*Alpha(k-1,:));
   end
   NormlizeSum(k) = sum(Alpha(k,:));
   Alpha(k,:) = Alpha(k,:)/NormlizeSum(k);
end

% Trace backward, compute Beta
for k = L_total-1:-1:1
   for state1 = 1:nstates
      gamma = zeros(1,nstates);
      gamma(next_state(state1,1)) = (1/(1+exp(L_a(k+1))))*exp(-rec_s(2*k+1)+rec_s(2*k+2)*next_out(state1,2));
      gamma(next_state(state1,2)) = (exp(L_a(k+1))/(1+exp(L_a(k+1))))*exp(rec_s(2*k+1)+rec_s(2*k+2)*next_out(state1,4));
      Beta(k,state1) = sum(gamma.*Beta(k+1,:));
   end
   Beta(k,:) = Beta(k,:)/NormlizeSum(k+1);
end

% Compute the soft output, log-likelihood ratio of the symbols in the frame
for k = 1:L_total
   for state2 = 1:nstates
      gamma0 = exp(-rec_s(2*k-1)+rec_s(2*k)*last_out(state2,2))/(1+exp(L_a(k)));
      gamma1 = exp(rec_s(2*k-1)+rec_s(2*k)*last_out(state2,4))*exp(L_a(k))/(1+exp(L_a(k)));
      temp0(state2) = gamma0 * Alpha(k,last_state(state2,1)) * Beta(k,state2);
      temp1(state2) = gamma1 * Alpha(k,last_state(state2,2)) * Beta(k,state2);
   end
   L_all(k) = log(sum(temp1)) - log(sum(temp0));
end

% This script simulates the classical turbo encoding-decoding system.
% It simulates parallel concatenated convolutional codes.
% Two component rate 1/2 RSC (Recursive Systematic Convolutional) encoders are assumed.
% The first encoder is terminated with tail bits. The (info + tail) bits are scrambled and passed to
% the second encoder, while the second encoder is left open without tail bits of its own.
%
% Random information bits are modulated into +1/-1 and transmitted through an AWGN channel.
% Interleavers are randomly generated for each frame.
%
% Log-MAP algorithm without quantization or approximation is used.
% By making use of ln(e^x+e^y) = max(x,y) + ln(1+e^(-abs(x-y))),
% the Log-MAP can be simplified with a look-up table for the correction function.
% If the approximation ln(e^x+e^y) = max(x,y) is used, it becomes Max-Log-MAP.

clear all

% Write display messages to a text file
diary turbo_logmap.txt

% Choose the decoding algorithm
dec_alg = input(' Please enter the decoding algorithm. (0:Log-MAP, 1:SOVA) default 0 ');
if isempty(dec_alg)
   dec_alg = 0;
end

% Frame size
L_total = input(' Please enter the frame size (= info + tail, default: 400) ');
if isempty(L_total)
   L_total = 400;   % information bits plus tail bits
end

% Code generator
g = input(' Please enter the code generator: ( default: g = [1 1 1; 1 0 1] ) ');
if isempty(g)
   g = [1 1 1; 1 0 1];
end
%g = [1 1 0 1; 1 1 1 1];
%g = [1 1 1 1 1; 1 0 0 0 1];
[n,K] = size(g);
m = K - 1;
nstates = 2^m;

% puncture = 0: puncturing down to rate 1/2;
% puncture = 1: no puncturing
puncture = input(' Please choose punctured / unpunctured (0/1): default 0 ');
if isempty(puncture)
   puncture = 0;
end

% Code rate
rate = 1/(2+puncture);

% Fading amplitude; a = 1 in an AWGN channel
a = 1;

% Number of iterations
niter = input(' Please enter number of iterations for each frame:
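The script's header comments note that the exact Log-MAP recursion rests on the Jacobian logarithm ln(e^x + e^y) = max(x, y) + ln(1 + e^(-|x-y|)), whose correction term can be read from a small look-up table. The helper below is a minimal sketch of that correction function; the name maxstar is illustrative and does not appear in the thesis appendix. Dropping the correction term turns Log-MAP into Max-Log-MAP.

function z = maxstar(x, y)
% Jacobian logarithm: exact evaluation of ln(exp(x) + exp(y)).
% The second term is the correction function that a Log-MAP decoder may
% tabulate; omitting it gives the Max-Log-MAP approximation max(x, y).
z = max(x, y) + log(1 + exp(-abs(x - y)));

For example, maxstar(0, 0) returns ln 2 (about 0.69), whereas the Max-Log-MAP approximation would return 0.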