Research on Abrupt Fault Handling Methods for Satellite Navigation Systems Based on MATLAB (IoT graduation project: opening report, foreign literature translation, design specification and thesis)

毕 业 设 计(论 文)任 务 书1本毕业设计(论文)课题应达到的目的: 卫星导航应用领域内的激烈竞争使得卫星导航系统性能已经成为卫星导航系统应用领域竞争的制高点。人们不仅能方便地获取时间信息,而且能方便地获得高精度的三维位置、三维速度和姿态信息,它在陆地、海面或航空中的导航和测量应用越来越广泛、从而深刻地改变人们的生活方式。在卫星导航系统中,定位和导航功能得以不断加强和提高的同时,人们对卫星导航的安全性和完备性也有了越来越高的要求。卫星的完备性监测指当导航系统发生任何故障或者导航定位超过允许限值一时,导航系统应当具备及时发现故障并通知用户的能力。本设计要求学生能够熟悉掌握本设计中所涉及的卫星定位原理、故障检测原理、卫星几何结构对定位性能的影响,使用Matlab软件完成卫星故障检测的计算,并通过相关实验数据对所设计算法对定位的结果进行评估。 2本毕业设计(论文)课题任务的内容和要求(包括原始数据、技术要求、工作要求等): 原始数据:(1)卫星观测的n文件和o文件;(2)利用仿真得到卫星观测的n文件和o文件技术要求:(1)利用故障检测隔离方法实现故障的隔离解算;工作要求:(1)查阅资料30篇以上,翻译外文资料3000字以上,撰写文献综述和开题报告;(2)完成故障检测处理算法的研究;(3)毕业设计说明书3万字左右。 毕 业 设 计(论 文)任 务 书3对本毕业设计(论文)课题成果的要求包括图表、实物等硬件要求: 毕业设计成果以毕业设计说明书形式上交,要求完成上述工作任务及达到技术要求,毕业设计说明书层次分明、论据可靠、计算正确、图标规范、语句通顺,其内容应包括仿真模型结构图、仿真结果(相关曲线图表)。 4主要参考文献: 1 吴春祥. 基于GPS的物联网智能终端定位技术研究与应用J. 东莞理工学院学报, 2015, (03):59-62.2 刘基余. GPS卫星导航定位原理与方法M. 科学出版社, 2008.3 刘海颖. 卫星导航原理与应用M. 国防工业出版社, 2013.4 Kaplan E D, Hegarty C. Understanding GPS : principles and applicationsM. Artech House, 2006.5 杨永明. GPS接收机定位可靠性数据处理算法研究D. 沈阳航空航天大学, 2015.6 廖向前, 黄顺吉. 奇偶矢量法用于GPS的故障检测与隔离J. 电子科技大学学报, 1997, (3):262-266.7 吴有龙, 王晓鸣, 曹鹏. 抗差估计及Allan方差在车载组合导航系统中的应用研究J. 兵工学报, 2013, 34(7):889-895.8 吴有龙, 王晓鸣, 杨玲,等. 两级故障检测与隔离方法在组合导航中的应用分析J. 弹道学报, 2014, 25(4):53-58.9 刘剑慰, 姜斌. 基于卡尔曼滤波的GPS导航系统故障检测J. 东南大学学报:自然科学版, 2008, (S2):46-49.10 吕慧彧. 组合导航系统中信息融合与故障检测理论研究D. 西北工业大学, 2004.11 蒋凯. 卫星导航系统完好性指标分析与算法研究D. 国防科学技术大学, 2011. 毕 业 设 计(论 文)任 务 书5本毕业设计(论文)课题工作进度计划:2015.12.142015.12.251、学生查阅相关文献,并在指导教师的指导下,撰写及修改开题报告、翻译专业原文资料;2、指导教师根据具体的指导情况在毕业设计管理系统中实时填写指导记录。2015.12.262016.01.03学生提交开题报告、翻译原文及译文给毕业设计指导教师指导、审阅,定稿由指导教师给出评语;对开题未通过的学生下发整改通知书。2016.01.042016.04.051、学生在指导教师的具体指导下进行毕业设计创作;2、在此阶段,在指导教师的指导下,学生拟定论文提纲或设计说明书(下称文档)提纲;3、指导教师根据具体的指导情况在毕业设计管理系统中实时填写指导记录;4、在2016年4月5日学生要提交基本完成的毕业设计创作成果以及文档的撰写提纲,作为中期检查的依据。2016.04.062016.04.101、学生提交中期课题完成情况报告给毕业设计指导教师审阅。2、各专业组织毕业设计成果验收及中期答辩。2016.04.112016.05.081、学生在指导教师的具体指导下进行毕业设计文档撰写。2、在2016年5月08日为学生毕业设计文档定稿截止日。2016.05.092016.05.10指导教师通过毕业设计(论文)管理系统对学生的毕业设计以及文档进行评阅,包括打分和评语。2016.05.112016.05.15毕业论文(设计)小组答辩。2016.05.162016.06.05根据答辩情况修改毕业设计(论文)的相关材料,并在毕业设计(论文)管理系统中上传最终稿,并且提交纸质稿。所在专业审查意见:通过负责人: 2015 年 12 月17 日 毕 业 设 计(论文) 开 题 报 告 1结合毕业设计(论文)课题情况,根据所查阅的文献资料,每人撰写不少于1000字左右的文献综述: 一、研究背景及意义GPS(Global Position System)导航系统是以全球24颗定位人造卫星为基础,向全球各地全天候地提供三维位置、三维速度等信息的一种卫星导航定位系统。GPS由三部分构成,一是监控部分,由主控站、地面天线、监测站及通讯辅助系统组成。二是空间部分,由24颗卫星组成,分布在6个轨道平面。三是用户装置部分,由GPS接收机和卫星天线组成。随着民航对安全性能要求的日益提高,民航机上的导航系统也呈现多信息化、智能化、集成化的发展趋势。精确性与安全可靠性成为民航导航系统的重要特点。而精确定位定姿、故障检测隔离,已成为民航领域的重要学科技术,越来越受到人们的重视,近几年来得到了迅猛的发展。故障检测与诊断技术是一门应用型的多学科交叉的边缘学科,是一门综合性的技术。它以现代控制理论、信号处理、模式识别、最优化方法、决策论、统计数学等为理论基础,近年来得到了飞速的发展,一些新的理论和方法(如,信息融合、主元分析、遗传算法、小波分析、神经网络、模糊系统、定性理论、模式识别、自适应理论、非线性理论等)被成功应用,使故障诊断方法的内容不断更新和增加。至于卫星导航系统上的应用同时是故障检测技术中的一个重要分支。为了保证飞行安全,GPS用于民用航空导航时,在得到导航解的同时,必须顾及伪距观测量偏差对导航解的影响。引起 GPS 伪距观测量偏差的主要因素有较大的卫星钟漂、导航电文数据的不正确上载以及卫星组成部分的故障,这里统称为卫星故障。在全球导航卫星定位系统(GNSS)中,为了更好地排除所有故障,然后确保可靠的定位结果,我们需要对卫星定位系统突发故障的解决方法进行研究。对于发生故障的概率进行有效评估。然而,迄今为止,人们在评估此类概率上只取得有限的进展。本课题为了寻求更好的研究方法和解决方案,力求更好地处理此类故障,探索将软件仿真与故障检测相结合的方法。二、国内外研究现状自20世纪60年代开始,美国、德国、日本等发达国家十分重视故障诊断技术的研究,纷纷投入大量人力、物力开发故障诊断系统,如,美国Honeywell公司、英国StewartHughes公司等许多世界知名公司也参与故障诊断系统的研制工作。从不同角度出发,故障检测与诊断方法的分类也不完全相同。德国的Frank P M教授于1990年将故障诊断方法分成三类,即基于解析模型的方法、基于信号处理的方法和基于知识的方法。Venkatasubramanian V总结了他之前的各种方法,认为故障诊断可以分为基于定量模型的方法、基于定性模型的方法和基于过程历史的方法。有的文献将故障检测和诊断方法分成四类,基于解析模型的方法、基于知识的方法、基于信号处理的方法和基于数据驱动的方法。还有的文献则简单分为:基于解析数学模型的方法和不依赖于数学模型的方法。一些文献的故障检测与诊断方法分类更具一般性:传统故障诊断方法和基于人工智能的故障诊断方法。参考文献 :1王春霞. 惯性/多卫星组合导航系统信息融合及故障检测算法研究D.南京航空航天大学,2008。2任孝平. 汽车状态软测量和车载组合导航系统故障检测技术研究D.中南大学,2012.3李亮. 陆基增强系统定位与完好性监测技术研究D.哈尔滨工程大学,2012.4杨传森. 卫星导航用户端自主完好性监测理论问题研究D.南京航空航天大学,2011.5陈坡. 
GNSS/INS深组合导航理论与方法研究D.解放军信息工程大学,2013.6刘海颖,冯成涛,王惠南. 一种惯性辅助卫星导航系统及其完好性检测方法J. 宇航学报,2011,04:775-780.7陈灿辉,张晓林. 全球导航卫星系统多星故障排除新方法J. 北京航空航天大学学报,2011,12:1479-1483.8苗岳旺. SINS/GPS组合导航数据处理方法研究D.解放军信息工程大学,2013.9方峥. 长航无人机多源信息融合自主导航及其仿真关键技术研究D.南京航空航天大学,2014.10陈金平. GPS完善性增强研究D.解放军信息工程大学,2001.11钟丽娜,刘建业,李荣冰,王融. 基于LSSVM的惯性/卫星紧组合导航系统缓变故障检测方法J. 中国惯性技术学报,2012,06:754-758.12方凌. 民航陆基近距无线电建模与机载综合导航可靠融合技术D.南京航空航天大学,2012.13吴秀芝. 机载综合航电系统中的多惯导冗余技术D.南京航空航天大学,2012.14陈计辉. 高空长航无人机多源信息高精度容错自主导航研究D.南京航空航天大学,2012.15夏琳琳. 故障检测与诊断技术在组合导航系统中的应用研究D.哈尔滨工程大学,2006.毕 业 设 计(论文) 开 题 报 告 2本课题要研究或解决的问题和拟采用的研究手段(途径): 本课题要解决不同卫星定位系统轨迹及时间特性的融合,应用卫星定位系统与导航系统,达到提高导航精度的同时,还做到对突发故障的有效处理,并且大大改善卫星导航系统导航精度和导航性能;力求实现卫星导航数据转换、处理,故障检测及故障隔离,进行故障的报警和排除。本课题将通过学校图书馆的电子资源、图书资源、有关课本、期刊杂志及上网搜集大量相关资料,了解目前国内的卫星导航系统突发故障的解决方法,并通过对比研究,寻求一种最优解决方案,最后通过仿真软件MATLAB进行仿真实验,得出最优方案。毕 业 设 计(论文) 开 题 报 告 指导教师意见:1对“文献综述”的评语:文献综述的阅读量和相关性符合请求,文献综述能清楚表达原文献的相关观点。文献翻译的英文文献与本专业相关,翻译量符合要求,译文能表达原文的含义。 2对本课题的深度、广度及工作量的意见和对设计(论文)结果的预测:选题与本专业方向相关,符合要求。选题难度适中,工作量适中。能顺利完成毕业论文呢。 3.是否同意开题: 同意 不同意 指导教师: 2016 年 01 月 06 日所在专业审查意见:同意 负责人: 2016 年 04 月 22 日说明:要求学生结合毕业设计(论文)课题参阅一篇以上的外文资料,并翻译至少一万印刷符(或译出3千汉字)以上的译文。译文原则上要求打印(如手写,一律用400字方格稿纸书写),连同学校提供的统一封面及英文原文装订,于毕业设计(论文)工作开始后2周内完成,作为成绩考核的一部分。Optimal Fault Detection and Exclusion Applied in GNSS PositioningAbstractIn Global Navigation Satellite System (GNSS) positioning, it is standard practice to apply the Fault Detection and Exclusion (FDE) procedure iteratively, in order to exclude all faulty measurements and then ensure reliable positioning results. Since it is often only necessary to consider a single fault in a Receiver Autonomous Integrity Monitoring (RAIM) procedure, it would be ideal if a fault could be correctly identified. Thus, fault detection does not need to be applied in an iterative sense. One way of evaluating whether fault detection needs to be re-applied is to determine the probability of a wrong exclusion. To date, however, limited progress has been made in evaluating such probabilities. In this paper the relationships between different parameters are analysed in terms of the probability of correct and incorrect identification. Using this knowledge, a practical strategy for incorporating the probability of a wrong exclusion into the FDE procedure is developed. The theoretical findings are then demonstrated using a GPS single point positioning example.Key words: Fault Detection ;Wrong Exclusion ;Missed Detection1. IntroductionWhen estimating position using the least-squares estimation technique, it is expected that the calculated position conforms to a normal distribution centred at the true position. The existence of a faulty pseudorange measurement causes the estimated position to become biased. For this reason it is vital that fault detection be applied to detect the presence of a faulty pseudorange. In circumstances where GNSS is used as a primary means of navigation, however, detection alone is not sufficient. Upon detection of a fault, measurements from the “bad” satellite should be excluded before navigation can continue.Using an outlier test for fault detection actually means making a decision between the null and alternate hypotheses (Baarda, 1967, 1968; Kelly, 1998; Koch, 1999). Usually, it is judged that the pseudorange corresponding with the largest outlier statistic is faulty and is subsequently excluded (Kelly, 1998). During such a procedure the probability of drawing wrong conclusions cannot be avoided. These are referred to as Type I and Type II errors and are denoted 0 and 0 respectively. 
The Type I and Type II error values in fault detection are set, based on the probability of a false alert and the probability of a missed detection.However, the outlier statistics are prone to masking and swamping and thus the wrong pseudorange can be identified (Parkinson and Axelrad, 1988; Lee et al., 1996; Hekimoglu, 1997; Lee and Van Dyke, 2002). Masking means that a pseudorange contaminated by a fault is identified as a good one. Conversely, swamping is when a good pseudorange is identified as faulty (Hekimoglu, 1997). This probability of identifying the wrong pseudorange is the probability of a wrong exclusion (Lee et al., 1996). In statistics this probability is referred to as a Type III error, where the null hypothesis is correctly rejected but the wrong pseudorange is identified as being faulty (Hawkins, 1980). If the probability of a wrong exclusion can be evaluated, then there is a possibility that the position can be classified as available for navigation without even having to reapply fault detection (Lee et al., 1996). In the case where the probability of a wrong exclusion is too high, fault detection would still have to be reapplied after exclusion or the position would be classified as unavailable. Nevertheless, significant operational benefit could still be gained from an algorithm that evaluates the probability of a wrong exclusion, such that the confidence level of fault detection can be assured (Lee et al., 1996).It is for this reason that Lee (1995) and Kelly (1998) attempt to evaluate the probability of a wrong exclusion by taking the difference between two outlier statistics.This is because two outlier statistics that are separated by a small distance have a higher probability of contributing to a wrong exclusion. Conversely, as the difference between two outlier statistics grows, there is less probability of making a wrong exclusion. But, the issue with using the difference between the outlier statistics is that it does not precisely estimate the probability of a wrong exclusion (Ober, 2003). Another method of estimating the probability of a wrong exclusion is given by Pervan et al. (1996; 1998). In this method, it is assumed that the faulty pseudorange conforms to a uniform distribution. Then, using Bayesian statistics, the probability of a wrong exclusion is evaluated. The weakness of this method, though, is that the distribution of the biases is unknown. Consequently, even small changes within the assumed distribution of the faulty pseudorange can have a significant influence on the estimated probability of a wrong exclusion. Outside the field of navigation, Frstner (1983) and Li (1986) have carried out studies on the separability of two outlier statistics. Using the results of these studies Li (1986) then defined the Minimal Separable Bias (MSB) as the smallest bias that can be confidently identified for a set Type III error. Applying the MSB to the field of navigation, the separability of various satellite constellations has been analysed by Hewitson et al. (2004), and Hewitson and Wang (2006).While the basic Fault Detection and Exclusion (FDE) techniques have been well established, the relation between the FDE algorithm performance and the primary means integrity requirements has not. Specifically, although the formulae for the probability of a wrong exclusion and the probability of a missed detection have been developed, there is not yet a practical method that can evaluate them in application. 
Correctly calculating these probabilities is essential in meeting the integrity requirements of the primary means of navigation. This paper proposes and analyses new methods of correctly calculating these two quantities. Initially, the separability of two alternative hypotheses is analysed. The relationships between the probabilities of false alert, missed detection and wrong exclusion, and the threshold, the correlation coefficient and the non-centrality parameter are discussed in detail. It is then assumed that each outlier statistic corresponds with a fault and that there is a non-centrality parameter corresponding to it. Since the non-centrality parameter is a function of the Type I, II and III errors and the correlation coefficient, the probabilities of missed detection, successful identification and wrong exclusion are then estimated using the non-centrality parameter and the correlation coefficient. Thus, for each outlier statistic, the probabilities of making the different types of errors are estimated to aid in deciding whether or not the faulty pseudorange can be correctly identified.

This paper is organised as follows. First, the outlier detection theory and the models used in hypothesis testing are introduced. Then their applications to FDE are examined. Thereafter, the separability of two outlier statistics is analysed, and extended to the application of FDE. Next an example is given, using real GPS data, to demonstrate the proposed method. Finally, the conclusions drawn from the study are presented.

2. Model descriptions

The linearized Gauss-Markov model applied in navigation is defined by (Schaffrin, 1997; Koch, 1999):

    v = A \hat{x} - l                                                        (1)

where:
v is the n by 1 vector of residuals,
A is the n by t design matrix reflecting the geometric strength,
l is the n by 1 measurement vector containing the pseudorange observations and the distances between the satellites and the receiver,
x is the vector of t unknowns and its estimated value is \hat{x}.

The mean of l and its positive definite covariance matrix are given by:

    E(l) = A x ,    D(l) = \Sigma                                            (2)

The least-squares solution \hat{x} is optimal in the sense that it is unbiased and that it is of minimal variance in the class of linear unbiased estimators. However, these optimality properties only hold true when Equations (1) and (2) are correct.

2.1. Local Test for a Single Alternative Hypothesis. In the case where there are faulty pseudorange measurements, E(l) ≠ Ax. Consequently, the least-squares estimator of the position becomes biased, E(\hat{x}) ≠ x. In order to detect a biased position a fault detection procedure is applied. When a biased position is detected, it can then be corrected by excluding the faulty pseudorange. If it is assumed that the ith pseudorange is faulty, then the correct model is given by:

    E(l) = A x + e_i \nabla_i                                                (3)

where \nabla_i is the fault in the ith pseudorange, and e_i = (0, ..., 1, ..., 0)^T is a unit vector with the ith element equal to one. Solving this for the fault then leads to:

    \hat{\nabla}_i = ( e_i^T \Sigma^{-1} Q_v \Sigma^{-1} e_i )^{-1} e_i^T \Sigma^{-1} v    (4)

which has the variance:

    \sigma^2_{\hat{\nabla}_i} = ( e_i^T \Sigma^{-1} Q_v \Sigma^{-1} e_i )^{-1}             (5)

where Q_v is the co-factor matrix of the estimated residuals (from the original Gauss-Markov model).

The outlier test statistic for the ith pseudorange can then be formed as (Baarda, 1968; Kok, 1984):

    w_i = e_i^T \Sigma^{-1} v / \sqrt{ e_i^T \Sigma^{-1} Q_v \Sigma^{-1} e_i }             (6)

The correlation coefficient between a pair of outlier statistics is given by:

    \rho_{ij} = e_i^T \Sigma^{-1} Q_v \Sigma^{-1} e_j / \sqrt{ ( e_i^T \Sigma^{-1} Q_v \Sigma^{-1} e_i ) ( e_j^T \Sigma^{-1} Q_v \Sigma^{-1} e_j ) }    (7)

Based on Equation (3), the null hypothesis corresponding to the assumption that there are no faulty pseudorange measurements is:

    H_0 :  E(l) = A x                                                        (8)

Under the null hypothesis, Equations (1) and (3) are equivalent. Otherwise, the alternative hypothesis H_i means that the ith pseudorange is faulty:

    H_i :  E(l) = A x + e_i \nabla_i ,   \nabla_i ≠ 0                        (9)

Taking the expectation of the outlier statistic in Equation (6), the non-centrality parameter can be obtained as:

    \delta_i = E( w_i | H_i ) = \nabla_i \sqrt{ e_i^T \Sigma^{-1} Q_v \Sigma^{-1} e_i }    (10)
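The quantities of Section 2.1 map directly onto a few lines of MATLAB. The sketch below is an illustration only, with assumed variable names (A, l, Sigma) and an assumed sign convention for the residuals; it computes the least-squares residuals, the residual co-factor matrix, the w-statistics of Equation (6) and the correlation coefficients of Equation (7).

function [w, rho, v, Qv] = outlier_statistics(A, l, Sigma)
% Baarda w-statistics and their correlation coefficients for one epoch.
% A: n-by-t design matrix, l: n-by-1 measurements, Sigma: n-by-n covariance of l.
    P    = inv(Sigma);                 % weight matrix
    N    = A' * P * A;                 % normal equation matrix
    xhat = N \ (A' * P * l);           % least-squares estimate of the unknowns
    v    = A * xhat - l;               % residual vector, Equation (1)
    Qv   = Sigma - A * (N \ A');       % co-factor matrix of the residuals
    M    = P * Qv * P;                 % kernel of Equations (4) to (7)
    d    = sqrt(diag(M));              % sqrt(e_i' * M * e_i) for every i
    w    = (P * v) ./ d;               % outlier test statistics, Equation (6)
    rho  = M ./ (d * d');              % correlation coefficients, Equation (7)
end

For GPS single point positioning, A would hold the unit line-of-sight vectors and a receiver clock column, and Sigma a diagonal matrix of pseudorange variances; these are assumptions of the sketch rather than details given in the extract.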
2.2. Fault Detection. In the fault detection phase, the overall validity of the null hypothesis is tested with the global test:

    T = v^T \Sigma^{-1} v  >  \chi^2_{1 - P_FA}( n - t )                     (11)

If this leads to a rejection of the null hypothesis, then it is concluded that a fault is present. Ideally, the probability of a false alert in the ith pseudorange is set such that the probability of any one of the outlier tests failing, when the null hypothesis is true, is equal to the probability of a false alert, P_FA. Due to the difficulty in achieving this exactly, the Type I error of the outlier test is conservatively set as (Kelly, 1998):

    \alpha = P_FA / n                                                        (12)

The outlier test statistic in Equation (6) follows a standard normal distribution under H_0. The evidence on whether the model error as specified by Equation (3) did, or did not, occur is based on the test:

    | w_i |  >  c = N_{1 - \alpha/2}(0, 1)                                   (13)

By letting i run from one up to and including n, all of the pseudorange measurements can be screened for the presence of a fault. If one or more of the outlier tests fails, then it is concluded that a fault exists.

Besides the possibility of making a Type I error, there is also the possibility that the null hypothesis is accepted when in fact it is false. This error, denoted as β_0, is a Type II error. Thus, when the null hypothesis is accepted there is a possibility of making a Type II error, and when the alternative hypothesis is accepted there is a possibility of making a Type I error. Therefore, no matter what decision is made, there is always the possibility of making an error. However, steps may be taken to control the possibility of making errors and to guarantee that the probability of making a correct decision can be estimated.

By setting the threshold based on the probability of a false alert, the Type I error can be controlled. To control the Type II error, protection levels are formulated and compared with the alert limit. If the protection level is contained within the alert limit, then the probability of making a Type II error is acceptable; otherwise, it is not. When formulating the protection levels it is desired to set the size of the Type II error for each test such that the probability of a fault going undetected by all of the tests is equal to the probability of a missed detection, P_MD. However, due to the difficulty in achieving this, Kelly (1998) uses the approximation:

    \beta_0 = P_MD                                                           (14)

2.3. Fault Exclusion. When the fault detection procedure has detected a fault, the next step is to attempt to identify and remove the faulty pseudorange. Since the null hypothesis has been rejected, the pseudorange measurements conform to one of the alternative hypotheses:

    H_i :  E(l) = A x + e_i \nabla_i ,   i = 1, ..., n                       (15)

To determine which alternative hypothesis applies, the largest outlier statistic, in absolute value, is found, and the corresponding pseudorange is deduced to be faulty. Mathematically, the jth pseudorange is judged faulty when:

    | w_j | = max_i | w_i |   and   | w_j | > c                              (16)

Once the faulty pseudorange has been identified, corrective action must be taken to mitigate its influence on the navigation solution. Here, the identified pseudorange is excluded from the model, such that Equation (1) now has one fewer pseudorange measurement. Since the incorrect pseudorange can at times be identified due to the correlation, an FDE procedure would normally be reapplied to the updated model until the null hypothesis is accepted.
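As an illustration of Sections 2.2 and 2.3, the following sketch applies the global test, the local w-tests and the exclusion rule of Equation (16) to a single epoch. The threshold setting follows Equations (12) and (13); P_FA, the number of unknowns t and the variable names are assumed example values carried over from the previous sketch, and the re-application of FDE after an exclusion is only indicated by a comment.

% w, rho, v come from the outlier_statistics() sketch above.
P_FA  = 0.001;                            % example probability of a false alert
n     = numel(l);   t = 4;                % four unknowns assumed (3 coordinates + clock)
alpha = P_FA / n;                         % Equation (12)
c     = norminv(1 - alpha/2);             % local test threshold, Equation (13)

T = v' * (Sigma \ v);                     % global test statistic, Equation (11)
if T > chi2inv(1 - P_FA, n - t)           % global test rejected: a fault is present
    [wmax, j] = max(abs(w));              % largest outlier statistic, Equation (16)
    if wmax > c
        fprintf('pseudorange %d flagged, |w| = %.2f\n', j, wmax);
        % exclude row j of A, l and Sigma, then re-apply the FDE procedure
        % on the reduced model until the null hypothesis is accepted
    end
end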
3. Separability analyses of fault detection and exclusion

During the FDE procedure, wrong decisions can sometimes be made. According to Förstner (1983), a Type III error occurs when both Type I and Type II errors are committed, which means making a wrong exclusion. In the following sections, the origins of the three types of errors are presented. Then, this knowledge is applied to the FDE procedure to determine if a fault can be successfully identified.

Table 1. Decisions when testing two alternative hypotheses

3.1. Three Types of Error, Based on Two Alternative Hypotheses. In Förstner's pioneering studies, the decisions that can be made with two alternative hypotheses are described in Table 1 (Förstner, 1983; Li, 1986). From Table 1, it can be seen that α_0i = α_0j and, because of the symmetry of w_i and w_j, β_ij = β_ji. In addition, the following is satisfied:

    α_00 = α_0i + α_0j                                                       (17)

and:

    β_ii = β_i0 + β_ij                                                       (18)

The estimation of the parameters shown in Table 1 is based on the distribution of the test statistics. It is assumed that there is an outlier ∇_i in the ith observation, causing the expectation of w_i to become δ_i, which is the non-centrality parameter. The bias also causes the expectation of w_j to become ρ_ij δ_i because of the correlation between w_i and w_j, which can be computed from Equation (6). Successful identification then actually means accepting the alternative hypothesis H_i rather than H_j. The joint distribution of w_i and w_j is:

    w = ( w_i , w_j )^T  ~  N( μ , D )                                       (19)

The expectation and covariance matrix of the joint distribution of w_i and w_j in Equation (19) are then given by:

    μ = ( δ_i , ρ_ij δ_i )^T   and   D = [ 1  ρ_ij ; ρ_ij  1 ]               (20)

The probability density function of the two outlier statistics is given by:

    f(w) = (2π)^{-1} |D|^{-1/2} exp( -(w - μ)^T D^{-1} (w - μ) / 2 )         (21)

If the critical value c and the distribution of w are known, the probability of successful identification, denoted as (1 - β_ii), can be obtained from:

    1 - β_ii = P( |w_i| ≥ c  and  |w_i| ≥ |w_j| )                            (22)

The sizes of the Type II and Type III errors can be obtained from:

    β_i0 = P( |w_i| < c  and  |w_j| < c )                                    (23)

and:

    β_ij = P( |w_j| ≥ c  and  |w_j| > |w_i| )                                (24)

In Förstner (1983) and Li (1986), if α_0, ρ_ij, β_i0 and β_ij are given, then the non-centrality parameter is obtained from Equation (25) or Equation (26). By setting the values of β_i0 and β_ij with the same preset values of α_0 and ρ_ij, the non-centrality parameters obtained in this way may be different. In this case, the greater value of δ is chosen to satisfy the requirements that the probability of a Type II error is not greater than β_i0 and that the probability of a Type III error is not greater than β_ij (Förstner, 1983; Li, 1986). In this paper the non-centrality parameter is calculated from Equation (27). This is because the probability of making errors, β_ii, remains unchanged with different correlation coefficients. Nonetheless, the correlation coefficients do determine the ratio between β_i0 and β_ij. Consequently, given preset values for α_0 and β_ii, the non-centrality parameter will change along with the correlation coefficient. For this non-centrality parameter the probabilities of making Type II and Type III errors will not be greater than β_ii, and in addition their sum will be equal to β_ii.
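Equations (21) to (24) can be evaluated numerically. The sketch below is one possible implementation, not the authors' code: it integrates the bivariate normal density of Equation (21) over the three decision regions using MATLAB's mvncdf, mvnpdf and integral2; the integration limits and the helper name separability are assumptions.

function [b_i0, p_succ, b_ij] = separability(delta, rho, c)
% Probabilities of a missed detection (beta_i0), successful identification
% (1 - beta_ii) and wrong exclusion (beta_ij) under H_i, for non-centrality
% parameter delta, correlation coefficient rho and local test threshold c.
    mu = [delta, rho * delta];                 % Equation (20)
    D  = [1 rho; rho 1];
    f  = @(x, y) reshape(mvnpdf([x(:) y(:)], mu, D), size(x));   % Equation (21)
    % Type II error: both statistics stay below the threshold, Equation (23)
    b_i0 = mvncdf([-c -c], [c c], mu, D);
    % successful identification: |w_i| >= c and |w_i| >= |w_j|, Equation (22)
    lim    = abs(delta) + 10;                  % integration box, wide enough in practice
    p_succ = integral2(@(x, y) f(x, y) .* (abs(x) >= c & abs(x) >= abs(y)), ...
                       -lim, lim, -lim, lim);
    % Type III error (wrong exclusion), Equation (24), by complement
    b_ij = 1 - b_i0 - p_succ;
end

For example, separability(4, 0.5, norminv(1 - 0.005)) returns the three probabilities for δ_i = 4, ρ_ij = 0.5 and α_0 = 1%.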
3.2. Relationships between Different Parameters. Although there are many parameters that control the probabilities of making the different types of error with two alternative hypotheses, there are only three that are fundamentally independent. They are α_0, δ_0 and ρ_ij. Any other parameters can be obtained as shown in Table 2. From Table 2 it can be seen that once α_0 and ρ_ij are given, the threshold c and the sizes of the Type I errors α_00, α_0i and α_0j can be estimated. Also, by setting δ_0 as well, β_i0, β_ij and β_ii can be calculated.

Table 2. The relationship among different parameters
Figure 1. Type II and III errors

The changes within β_i0 and β_ij are shown in Figure 1 when β_ii is equal to 20% and α_0 is set to values of 0.1%, 0.3%, 0.5% and 1%. This illustrates the fact that the Type II error and the Type III error have opposite tendencies as the correlation coefficient increases. For small correlation coefficients β_i0 is large and β_ij is small, whereas for large correlation coefficients β_ij is large and β_i0 is small. When ρ_ij is close to zero, β_i0 is around 20% and β_ij is approximately zero. This is irrespective of the size of the Type I error. Conversely, β_i0 reduces quickly to about zero when ρ_ij approaches 0.98, and β_ij increases rapidly to 20%.

The above analyses are based on preset values of α_0 and β_ii in order to determine the dependence of the other parameters on ρ_ij. In the following analysis, α_0 is preset to 1% and the changes in δ_0 and ρ_ij are compared with β_ii, β_i0 and β_ij.

The value of β_ii, changing with δ_0 and ρ_ij, is shown in Figure 2, which demonstrates that a larger correlation coefficient leads to a higher value of β_ii when δ_0 is kept constant. In addition, a larger δ_0 results in a smaller β_ii for the same value of ρ_ij. This means that a higher δ_0 and a smaller ρ_ij will enhance the probability of correct identification. When ρ_ij becomes larger, the impact of the correlation coefficient on β_ii becomes much more significant. When ρ_ij is zero, β_ii decreases quickly from around one to zero as δ_0 increases. This means that when the outlier statistics are independent from each other, the probability of committing errors can be controlled to near zero once δ_0 is large enough. However, as ρ_ij approaches 1, even increasing the non-centrality parameter to 20 still does not reduce β_ii below 40%. This indicates that when the correlation coefficient is approximately 1, the non-centrality parameter only has a small effect in decreasing β_ii. Consequently, to control the probability of making errors, the goal should be to keep the correlation coefficients between the outlier statistics to minimum values.

Figure 2. The sum probability of making errors
Figure 3. Type II error β_i0

The values for β_i0 and β_ij are shown in Figure 3 and Figure 4 respectively. Figure 3 shows that β_i0 decreases from about one to zero as δ_0 increases. The decreases within the β_i0 curves for different ρ_ij are similar. Thus, the correlation coefficient plays a relatively minor role in influencing β_i0. Figure 4 shows that the correlation coefficient significantly impacts β_ij. All of the curves quickly increase to a peak and then decrease slowly. The larger the correlation coefficient, the higher the peak and the slower the decrease. For large correlation coefficients, the peak also occurs at larger values of the non-centrality parameter. When ρ_ij is zero, β_ij is always close to zero no matter what the value of δ_0. When the correlation coefficient is close to 1, β_ij increases dramatically from about zero to nearly 50% and then decreases rather slowly. Therefore, the dominant challenge in successful identification is avoiding a Type III error when the correlation coefficient is high, because it is difficult to control the probability of doing so by increasing δ_0.

Figure 4. Type III error β_ij

Based on the above analysis, it can be deduced that the probabilities of making Type I, II and III errors can be accurately estimated based on their correlation coefficients and their non-centrality parameters. These parameters can also be used to provide accurate control for making correct decisions in an FDE procedure. For instance, a large correlation coefficient implies a greater chance of incorrect identification. Under these circumstances, a larger non-centrality parameter is required to control the probability of committing a Type III error.
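The way the non-centrality parameter is selected in the discussion of Equation (27) can be mimicked numerically: search for the δ_0 at which β_i0 + β_ij equals the preset β_ii for a given ρ_ij. The fragment below is a rough sketch built on the hypothetical separability helper above; the starting guess and the example values of β_ii and ρ_ij are arbitrary.

% delta_0 such that beta_i0 + beta_ij = beta_ii (20%) for alpha_0 = 1%, rho_ij = 0.5
beta_ii = 0.20;   rho = 0.5;   c = norminv(1 - 0.01 / 2);
delta_0 = fzero(@(d) beta_sum(d, rho, c) - beta_ii, 4);
fprintf('required non-centrality parameter: %.2f\n', delta_0);

function b = beta_sum(delta, rho, c)
    [b_i0, ~, b_ij] = separability(delta, rho, c);   % helper from the previous sketch
    b = b_i0 + b_ij;
end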
3.3. Fault Verification. As can be seen, each test statistic is a function of all of the observation errors, and the correlation between the test statistics contributes to missed detections and wrong exclusions. This section presents a practical procedure to estimate the probability of a missed detection and the probability of a wrong exclusion. The calculation is based on the following assumption: if one considers only the bias error on one of the satellites at a time, and neglects the range errors on all the other satellites, then the position estimation error and the test statistic become linearly proportional, and the slope varies depending on which satellite has the bias error. The satellite that is the most difficult to detect is the one with the maximum correlation coefficient and with the highest probability of a wrong exclusion. The probability of a missed detection is highest for the failure of that satellite (Lee, 1995; 1996).

From Equation (6), when only the bias error on the ith observation is taken into consideration and the random errors on the other satellites are neglected, the test statistic simplifies to:

    w_i = ∇_i \sqrt{ e_i^T \Sigma^{-1} Q_v \Sigma^{-1} e_i }                 (28)

The influence of this bias error on another test statistic is:

    w_j = ρ_ij ∇_i \sqrt{ e_i^T \Sigma^{-1} Q_v \Sigma^{-1} e_i } = ρ_ij w_i (29)

However, as the real circumstances are never known, the hypothesis test is an inverse procedure, which actually uses the estimated statistic to deduce the unknown fault. According to Table 1, the second column of the test result shows that during the classical FDE procedure, when the greatest absolute value points to the test statistic w_j, there are two possibilities: either a successful identification or a wrong exclusion results. A wrong exclusion means that an outlier occurring on the ith observation impacts the test statistic w_j so that w_j becomes greater than the critical value. Consequently, the expectation shift of w_j may originate from Equation (30), which will lead to a successful identification, or from Equation (31), which means that a wrong exclusion will be committed after the test.

Based on Equations (30) and (31), the non-centrality parameter corresponding to the largest test statistic can be obtained. Then, using each non-centrality parameter and the correlation coefficient, the probabilities of successful identification and wrong exclusion can be calculated from the relationships in Table 2, as given by Equation (32) or Equation (33). As it is complicated and time consuming to exactly calculate β_ii and β_ij via numerical integration, an approximate solution by interpolation can be obtained from the grid data illustrated in Figures 2, 3 and 4, once the grid data are accurate enough. Comparing the estimated values of β_ii and β_ji with the corresponding preset thresholds, decisions about successful identification and wrong exclusion can be made.

4. Quality control for fault detection and exclusion

In this section a practical procedure for controlling the quality of the FDE procedure is introduced, based on the above analysis. The main proposal is to estimate the probability of a missed detection and of a wrong exclusion based on the separability analysis of two alternative hypotheses. The probability of a missed detection and of a wrong exclusion depends on the magnitude of the bias, of which the receiver has no knowledge and which may continuously vary. For this reason the simulation tests for verifying the probabilities of a missed detection and a wrong exclusion were designed as follows:

Step 1. Form the observation equation.
Step 2. Calculate the protection level based on the probability of a false alert and the probability of a missed detection.
Step 3. If the protection level is less than the alert limit, then proceed to Step 4. Otherwise, the system is unavailable for navigation, so proceed to Step 1 for the next epoch.
Step 4. Calculate the outlier statistics and compare them with the threshold.
Step 5. If all of the outlier statistics pass, the position is available for navigation; proceed to Step 1 for the next epoch. Otherwise, proceed to Step 6.
Step 6. Calculate the probability of correct identification for the largest outlier statistic. If the probability of correct identification is higher than its threshold, exclude this observation and proceed to Step 4. Otherwise, proceed to Step 7.
Step 7. Calculate the probability of wrong exclusion between the largest and the second largest outlier statistic. If the probability of wrong exclusion is higher than its threshold, both of them are excluded; proceed to Step 4. Otherwise, the position is unavailable for navigation; proceed to Step 1 for the next epoch.

Based on the above analysis, the traditional FDE procedure is still applied, but new criteria are now added to estimate the probability of a missed detection and of a wrong exclusion, in order to improve the successful identification rate.
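A compact way to see how Steps 4 to 7 fit together is the loop sketched below. It reuses the hypothetical outlier_statistics and separability helpers from the earlier sketches and leaves out the protection-level check of Steps 2 and 3; the thresholds P_succ_min and P_we_max correspond to the 80% and 3% values used later in Section 5, and all names are illustrative rather than taken from the paper.

function fde_quality_control(A, l, Sigma, c, P_succ_min, P_we_max)
% One-epoch sketch of Steps 4 to 7 of the quality-control procedure.
    while true
        [w, rho] = outlier_statistics(A, l, Sigma);           % Step 4
        [wabs, idx] = sort(abs(w), 'descend');
        if wabs(1) <= c                                       % Step 5: all local tests pass
            disp('position available for navigation');  return
        end
        i = idx(1);  j = idx(2);                              % largest and second largest
        [~, P_succ, P_we] = separability(wabs(1), rho(i, j), c);
        if P_succ >= P_succ_min                               % Step 6: trust the exclusion
            drop = i;
        elseif P_we >= P_we_max                               % Step 7: exclude both
            drop = [i j];
        else
            disp('position unavailable for navigation');  return
        end
        A(drop, :) = [];  l(drop) = [];                       % exclude and re-test
        Sigma(drop, :) = [];  Sigma(:, drop) = [];
        % a redundancy check (enough remaining measurements) is omitted in this sketch
    end
end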
5. Experiments and analysis

The separability analysis theory described in this paper was applied to GPS pseudorange data collected at Minot, North Dakota, USA on 18 August 2008. The sample interval is 30 seconds, and the duration of the data is 24 hours. To compare classical FDE and the optimal FDE method proposed in this paper, an outlier of 1.5 times the MDB was added to the second pseudorange in each epoch. The parameters for the FDE procedure were α_0 = 1% and β_ii = 20%, and the thresholds for the probability of successful identification and of wrong exclusion were set to 1 - β_ii = 80% and β_ij = 3%.

For the classical FDE procedure, there are three types of judgment based on the global and the local tests. The judgment indicator for each epoch is shown in Figure 5. Indicator = 0 indicates that the global test was passed and that no outlier exists; indicator = 1 indicates that the global test failed but the local test was passed, which means that the existence of an outlier was detected but its location cannot be identified; indicator = 2 indicates that both the global and local tests were rejected, so the outlier can be identified.

Figure 5. Indicator for the data snooping procedure

Figure 5 shows that, although a fault is added at every epoch, there are still some epochs at which the fault cannot be detected. In such circumstances both indicator = 0 and indicator = 1 signify a missed detection, and indicator = 2 shows an identification. As the real location of the fault is known in this test, whether the identification is correct or not can be evaluated. The location of the assumed fault at each epoch is shown in Figure 6. This clearly shows that there is a great possibility of identifying the fault in the wrong satellite. Furthermore, a wrong identification (or exclusion) will negatively influence the position accuracy, especially when the satellite geometry is weak or when there are no redundant measurements.

Figure 6. Fault location identified by the FDE procedure
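The simulated fault in this experiment is tied to the Minimal Detectable Bias. The extract does not show how the MDB is computed, so the sketch below uses the standard single-alternative expression, with the non-centrality parameter δ_0 formed from α_0 and the Type II error via the usual normal approximation; injecting 1.5 times the MDB into the second pseudorange then mirrors the set-up described above.

% MDB of every pseudorange and injection of the simulated fault (assumed A, l, Sigma).
alpha_0 = 0.01;   beta_0 = 0.20;
delta_0 = norminv(1 - alpha_0 / 2) + norminv(1 - beta_0);   % non-centrality parameter
P   = inv(Sigma);
Qv  = Sigma - A * ((A' * P * A) \ A');                      % residual co-factor matrix
MDB = delta_0 ./ sqrt(diag(P * Qv * P));                    % one MDB value per pseudorange
l(2) = l(2) + 1.5 * MDB(2);                                 % fault on the second pseudorange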
The judgment indicator for the proposed optimal FDE procedure is shown in Figure 7. When applying the new method, there are two more criteria that should be applied for the local test: the probability of successful identification should be greater than its threshold, and the probability of wrong exclusion should be smaller than its threshold. Consequently, the results are more complex: indicator = 0 indicates that the global test was passed; 1 indicates that the global test failed and the local test was passed; 2 indicates that the global test failed and the local test was rejected with the two new criteria being satisfied (so it is assumed that the outlier can be identified); 3 indicates that, although the global and local tests were rejected, the confidence level for correct identification was not satisfied, and therefore the outlier cannot be identified; 4 indicates that, although the other criteria are satisfied, the probability of wrong exclusion is higher than its threshold (which implies an unacceptable risk of making a wrong exclusion).

Figure 7. Indicator for the optimal FDE procedure with wrong exclusion estimation
Figure 8. Fault location identified by the optimal FDE procedure
Figure 9. Probability of successful identification
Figure 10. Probability of wrong exclusion

Comparing Figure 7 with Figure 5, it can be seen that the results for indicator 0 and indicator 1 are the same for both procedures. However, the results for indicator 2 in Figure 5 are divided into three parts (indicator = 2, 3 and 4) in Figure 7. This means that the optimal FDE procedure places more restrictions on the identification so as to guarantee a successful rate of identification. Figure 8 shows the location of the fault identified by the optimal FDE procedure when the indicator = 2. Compared with Figure 6, it is clear that the possibility of wrong exclusion is smaller, which means that with the stricter criteria the wrong exclusions can largely be separated from the identifications.

The corresponding probability of successful identification and probability of wrong exclusion are presented in Figures 9 and 10, respectively. In Figure 9, the red marks show that for many epochs, although the local test can identify the fault, the evaluated probability of successful identification is smaller than its threshold (80%). Consequently it is assumed that the identification is untrustworthy.

Figure 10 shows the evaluated probability of a wrong exclusion that is greater than the threshold (3%) when the indicator equals 4. At many epochs the probability of a wrong exclusion is even greater than 20%, which means that the identification is untrustworthy and the position accuracy may be negatively influenced after exclusion.

Figure 11. Position errors (metres, outlier size: 1.5 MDB)

The North, East and Vertical position errors for the different methods are shown in Figure 11. From the figure it can be seen that even the least-squares estimation results are much better than those of the classical FDE procedure, which means that the position accuracy is negatively influenced when the classical FDE procedure is applied. This is caused by the frequent occurrence of wrong exclusions, since there is no criterion to check whether a wrong exclusion has been committed. The red curves show that the optimal FDE procedure can improve the reliability and stability of the results. In Figure 12, the corresponding results for an outlier with a magnitude of 4 times the MDB are shown. Compared with Figure 11, it is clear that, with the larger outlier, the position accuracies for the different methods are all significantly reduced. The estimation accuracies from both the proposed and the classical FDE procedures are much higher than those of the least-squares estimation. This shows that least-squares estimation is no longer optimal, in the sense of being unbiased and of minimal variance.
Since the existence (and the magnitude) of a fault cannot be predicted beforehand, the applied FDE procedure must guarantee the stability and reliability of the results.6. ConclusionsThis paper has studied the separability of two alternative hypotheses, including the relationships between different statistical parameters. The probabilities of making Type I, II, and III errors were found to be dependent on the correlation coefficients between the outlier statistics. The larger the correlation coefficient is, the larger the non-centrality parameter that is required to guarantee successful identification. This also means that it is much more difficult to successfully identify a faulty pseudorange that is highly correlated with other pseudorange measurements. In addition, a larger correlation coefficient also significantly increases the probability of making a wrong exclusion. Increasing the non-centrality parameter does not necessarily increase or decrease the probability of making a wrong exclusion.However, eventually a very large non-centrality parameter will decrease the probability of making a wrong exclusion.In terms of Fault Detection and Exclusion (FDE) procedure, the results presented here can be used to determine the probability of a wrong exclusion. This entails simply inspection of a graphical presentation of, or calculating directly, the probability of a wrong exclusion as a function of the correlation coefficient. The probability of a wrong exclusion can then be used to determine if exclusion is to be trusted, or FDE is required to be reapplied after the removal of a pseudorange measurement.AcknowledgementsThe first author wishes to record her appreciation to the China Scholarship Council (CSC) for supporting her studies at the University of New South Wales, Australia.ReferencesBaarda, W. (1967). Statistical concepts in geodesy. Netherlands Geodetic Commission, Publications on Geodesy, New Series 2, No. 4, Delft, The Netherlands.Baarda, W. (1968). A testing procedure for use in geodetic networks. Netherlands Geodetic Commission, Publications on Geodesy, New Series 2, No. 5, Delft, The Netherlands.Brown, R., Chin, G. (1997). GPS RAIM: Calculation of the threshold and protection radius using Chi-square methods - A geometric approach. In: Global Positioning System, Vol. 5, The U.S. Institute of Navigation, Fairfax, Virginia, USA, 155178.Frstner, W. (1983). Reliability and discernibility of extended Gauss-Marko models, In: Mathematical Models of Geodetic/Photogrammetric Point Determination with Regard to Outliers and Systematic Errors, Deutsche Geodtische Kommission, Reihe A, No. 98, Munich, Germany.Hawkins, D. M. (1980). Identification of outliers. Chapman and Hall, New York.Hekimoglu, S. (1997). Finite sample breakdown points of outlier detection procedures. Journal of Surveying Engineering, 125(1), 1531.Hewitson, S., Lee, H., Wang, J. (2004). Localizability analysis for GPS/Galileo receiver autonomous integrity monitoring. Journal of Navigation, 57, 245259.Hewitson, S., Wang, J. (2006). GNSS receiver autonomous integrity monitoring (RAIM) performance analysis. GPS Solutions, 10(3), 155170.Kelly, R. (1998). The linear model, RNP, and the near-optimum fault detection and exclusion algorithm. In: Global Positioning System, Vol. 5, The U.S. Institute of Navigation, Fairfax, Virginia, USA, 227260.Koch, K. (1999). Parameter estimation and hypothesis testing in linear models, 2nd Edn. Springer, Berlin, Heidelberg, New York.Kok, J. (1984). On data snooping and multiple outlier testing. 
NOAA Technical Report, NOS NGS 30, U.S. Department of Commerce, Rockville, Maryland, USA.Lee, Y. (1995). New techniques relating fault detection and exclusion performance to GPS primary means integrity requirements. In: ION GPS 1995, 1215 September, Palm Springs, California, USA, 19291939.Lee, Y., Van Dyke, K., Decleene, B., Studenny, J. and Beckmann, M. (1996). Summary of RTCA SC-159 GPS Integrity Working Group Activities. Navigation, 43(3), 195226.Lee, Y. C., Van Dyke, K. (2002). Analysis performed in support of the ad-hoc working group of RTCA SC-159 on RAIM/FDE Issues. In: ION NTM 2002, 2830 January 2002, San Diego, California, USA, 639654.Li, D. (1986). Trennbarkeit und Zuverlssigkeit bei zwei verschiedenen Alternativhypothesen im Gaub-Markoff-Modell. Z. Vermess. 3, 114128.Ober, P. (2003). Integrity prediction and monitoring of navigation Systems. Integricom, Leiden, Netherlands.Parkinson, B., Axelrad, P. (1988). Autonomous GPS integrity monitoring using the pseudorange residual, Navigation, 35(2), 255274.Pervan, B., Lawrence, D., Cohen, C. and Parkinson, B. (1996). Parity space methods for autonomous fault detection and exclusion using GPS carrier phase. In: PLANS 1996, 2226 April 1996, Atlanta, Georgia, USA, 649656.Pervan, B., Lawrence, D. and Parkinson, B. (1998) Autonomous fault detection and removal using GPS carrier phase. IEEE Trans. Aerosp. Electron. System, 34(3), 897906.Schaffrin, B. (1997). Reliability measures for correlated observations. Journal of Surveying Engineering, 123(3), 126137.Wang, J., Kubo, Y. (2008). GNSS Receiver Autonomous Integrity Monitoring. In: Sugimoto, S., Shibasaki, R. (eds), GPS Handbook, Asakura, Tokyo, Japan.应用于全球导航卫星定位系统的最优故障检测与排除摘要在全球导航卫星定位系统(GNSS)中,为了排除所有故障测量,然后确保可靠的定位结果,应用故障检测与排除(FDE)迭代的方法是标准的做法。因为它往往只需要考虑一个接收机自主完整性监视(RAIM)过程的单一故障,理想状态是,如果出现故障,可以正确识别。因此,故障检测不需要迭代的意义来完成。评估故障检测是否需要被重新应用的一种方法是确定错误排除的概率。然而,迄今为止,人们只取得有限的进展来评估此类概率。本文将对于不同参数之间的关系中的正确和不正确的识别的可能性进行了分析。使用这方面的知识,为将错误排除到FDE过程的概率实用到战略。然后使用GPS单点定位的例子对该理论成果证明。关键词:故障检测 错误排除 漏检1. 
前言因为使用最小二乘估计技术估计位置,所以预计所计算的位置符合正态分布从而集中在真正的位置。因为有错误的伪距测量的存在,所以会导致所估计的位置成为偏置。出于这个原因,被应用于故障检测以检测有错误的伪距的存在是至关重要。在全球导航卫星系统被用作导航的主要手段的情况下,然而,单独的检测是不够的。在检测到故障的过程中,在“坏”的卫星测量被排除之前导航才能继续。使用一个离群值测试故障检测,其实就是让零和替代假设之间的关系。一般,对判定具有最大统计异常值的伪故障进行排除。在这样的过程不能避免绘制错误结论的可能发生。这些被称为I型和II型误差,并且分别表示为0,0 。I型和II型误差值在故障检测基础中分别是,假警报的概率和错过检测的概率。然而,离群值统计容易发生掩蔽和沼泽化,从而错误的伪距可能被识别。掩蔽意味着由一个故障污染伪距被识别为正确的。相反地,沼泽是当一个良好的伪距被识别为有故障。识别错误的伪距的这种概率的错误排除的概率。在统计中这个概率称为III型误差,其中该零假设被正确地拒绝但错误的伪距被标识为有故障。如果一个错误的排除概率可以被评估,则存在一种可能性,即位置可以被分类为可用于导航,甚至无需重新应用故障检测。在其中的一个错误排除的概率是太高的情况下,故障检测仍然必须在排除后重新应用或这位置将被归类为不可用。尽管如此,显著运行利益仍然可以从一个算法用于评估错误排除的概率,使得故障检测的置信水平可以达到让人放心。正是由于这个原因,李和Kelly尝试通过以两个离群值统计数据之间的差值评估的错误排除的概率。这是因为是由一个小的距离隔开的两个离群统计有一个错误排除的概率较高。相反地,两个离群值统计数据之间的差的增加,有使错误排除的概率更小。但是,这个问题同使用离群值统计数据之间的区别在于,它没有精确地估算出一个错误排除的可能性。推定的错误排除的概率的另一种方法是Pervan等人给出。在这种方法中,假设故障伪距符合一均匀分布。然后,利用贝叶斯统计,讲错误排除的概率计算出来。这种方法的缺点是该偏差的分布是未知的。因此,即使有故障的伪距的假定分布内的微小变化可能对一个错误排除概率的估计一个显著影响。外导航领域,Forstner和Li进行了研究,对两种离群统计进行区分。使用这些研究的结果李,则讲最小可分偏置(MSB)定义作为可以自信地识别为一组的III型误差最小的偏差。施加MSB到导航领域,各种卫星星座的区分已由休伊森,和休伊森和王等进行分析。 虽然基本故障检测和排除(FDE)技术已被很好地建立,但是FDE算法的性能和初级之间的关系完整性要求还没有。具体地,虽然公式的一个错误排斥和错过检测的概率已经开发,目前还没有能够评价它们在应用中的实用方法。正确地计算这些概率是在满足诚信要求发言:导航的主要手段是必不可少的。本文提出并正确地分析计算这两个新方法。起初,对两个备选假设分离性进行分析。假警报,漏检和误排除的概率之间的关系;阈值时,相关系数和非集中参数进行了详细讨论。再假定每个离群数据对应一个故障,有一个非中心参数对应于它。由于非中心性参数是、型误差的函数,然后利用非中心参数和相关系数估计了漏诊、成功识别和错误排除的相关系数。因此,对于每个异常统计,使不同类型的错误问题的能力估计有助于决定是否能正确识别故障的伪距。本文的组织如下。首先介绍了孤立点检测理论和假设检验的模型。然后应用FDE进行检查。此后,两离群数据的可分性进行了分析,并扩展到泛函微分方程中的应用。下一个例子是,使用真实的全球定位系统的数据,以证明所提出的方法。最后,得出的结论从研究中提出。2模型描述线性高斯马尔可夫模型应用于导航是由schaffrin定义的:(1)其中:v 是 1向量的残差,A 是设计矩阵反映几何强度的一种结构,l 是测量载体的伪距观测值与卫星之间的距离,X是T的未知,其估计值的向量为 。l的平均值和它的正定协方差矩阵:(2)最小二乘解在某种意义上说是公正的,它在线性无偏估计类中是最小方差最优解。然而,这些最优性能只有保持正确的方程(1)和(2)是正确的。2.1单一替代假设的局部检验在有故障的伪距测量的情况:。因此,该位置的最小二乘估计器变得偏向:。为了检测位置偏应用故障检测程序。当偏置位置进行检测,它可以通过排除故障的伪距改正。如果假定与伪距是错误的,那么正确的模型是由:(3)这里 是伪故障和= 是一元等于一个单位向量。解决这个问题,然后导致:(4)具有方差:(5)这里是估计残差的共同因子矩阵(由原来的高斯马尔可夫模型)。然后对伪距离群值检验统计量可以形成:(6)一对离群数据之间的相关系数给出:(7)基于方程(3)的零假设对应的假设有没有故障的伪距测量:(8)在零假设下,方程(1)和(3)是等价的。否则,假设意味着与伪距是错误的:(9)以方程(6)中的异常统计量的期望为例,可以得到非中心参数:(10)2.2故障检测在故障检测阶段,零假设的整体有效性测试:(11)如果这会导致一个否定的零假设,那么它的结论是,一个故障出现。理想的情况是,在与伪假警报的概率是这样的异常值检验失败的任何一个的概率,当零假设是真实的,等于一个虚警概率,PFA。由于难以实现这一点,我的异常值的错误的类型是保守设置为:(12)在方程的孤立点的检验统计量(6)服从标准正态分布在H0。关于模型的错误,如方程(3)所指定的证据,或没有,发生的基础上的测试:(13)让i从一个对N,所有的伪距测量可以筛选出存在的故障。如果一个或多个异常测试失败,那么它的结论是,一个故障存在。除了犯错误的可能性外,还有一种可能性,即零假设被接受,事实上它是假的。这个错误,表示为0,是一个类型的错误。因此,可以看出,当被接受的零假设将有一个类型的错误的可能性。当接受另一种假设时,也有可能犯错误。因此,无论做出什么决定,总有一个错误的可能性。然而,可能采取的措施,以控制可能发生错误,并保证做出正确的决定的概率估计。通过设置一个假警报的概率的基础上,我的错误可以控制的类型。要控制的类型的错误,保护级别的制定和警报限制。如果在警报限内包含保护级别,则可接受一个类型的错误的概率是可以接受的。否则,它是不。在制定保护水平需要设置每个测试,故障通过所有的测试去未被发现的概率等于错过检测II型错误的概率大小,PMD。然而,由于难以实现这一点,使用近似的:(14)2.3故障排除当故障检测程序检测到一个故障,下一步是试图识别和排除伪距。由于零假设被拒绝,伪距测量符合的一个替代假说: i=1,n(15)要确定哪个替代假说,最大的离群统计,在绝对值被发现相应的伪距的计算公式是错误的。数学上可以表示为第j个伪距是错误的时: and (16)一旦错误的伪距已经确定,必须采取纠正措施以减轻对导航解的影响。在这里,确定伪距是排除模型,方程(1)现在少一个伪距测量。由于不正确的伪距有时会发现由于相关,FDE程序通常会重新更新的模型,直到零假设被接受。3故障检测和排除