Research on Methods for Handling Abrupt Faults in Satellite Navigation Systems Based on MATLAB [IoT thesis package: proposal report, foreign-literature translation, specification, thesis].zip
Document preview (first 15 of 47 pages in the archive).
ID: 22399306 · Type: shared resource · Size: 1.44 MB · Format: ZIP · Uploaded: 2019-10-16
Contents:
Note: Students are required to consult at least one piece of foreign-language literature related to their graduation project (thesis) topic and to translate at least 10,000 printed characters of it (or roughly 3,000 Chinese characters). The translation should in principle be printed (if handwritten, it must be written on 400-character grid manuscript paper), bound together with the original English text and the standard cover provided by the university, and completed within two weeks of the start of the graduation project (thesis) work; it forms part of the assessment.

Optimal Fault Detection and Exclusion Applied in GNSS Positioning

Abstract
In Global Navigation Satellite System (GNSS) positioning, it is standard practice to apply the Fault Detection and Exclusion (FDE) procedure iteratively, in order to exclude all faulty measurements and thereby ensure reliable positioning results. Since it is often only necessary to consider a single fault in a Receiver Autonomous Integrity Monitoring (RAIM) procedure, it would be ideal if a fault could be correctly identified, so that fault detection would not need to be applied iteratively. One way of evaluating whether fault detection needs to be re-applied is to determine the probability of a wrong exclusion. To date, however, only limited progress has been made in evaluating such probabilities. In this paper the relationships between the different parameters are analysed in terms of the probabilities of correct and incorrect identification. Using this knowledge, a practical strategy for incorporating the probability of a wrong exclusion into the FDE procedure is developed. The theoretical findings are then demonstrated using a GPS single point positioning example.

Key words: Fault Detection; Wrong Exclusion; Missed Detection

1. Introduction
When estimating position using the least-squares estimation technique, it is expected that the calculated position conforms to a normal distribution centred at the true position. The existence of a faulty pseudorange measurement causes the estimated position to become biased. For this reason it is vital that fault detection be applied to detect the presence of a faulty pseudorange. In circumstances where GNSS is used as a primary means of navigation, however, detection alone is not sufficient. Upon detection of a fault, measurements from the "bad" satellite should be excluded before navigation can continue.

Using an outlier test for fault detection actually means making a decision between the null and alternative hypotheses (Baarda, 1967, 1968; Kelly, 1998; Koch, 1999). Usually, the pseudorange corresponding to the largest outlier statistic is judged to be faulty and is subsequently excluded (Kelly, 1998). During such a procedure the possibility of drawing wrong conclusions cannot be avoided. These are referred to as Type I and Type II errors and are denoted α_0 and β_0 respectively. The Type I and Type II error values in fault detection are set based on the probability of a false alert and the probability of a missed detection.

However, the outlier statistics are prone to masking and swamping, and thus the wrong pseudorange can be identified (Parkinson and Axelrad, 1988; Lee et al., 1996; Hekimoglu, 1997; Lee and Van Dyke, 2002). Masking means that a pseudorange contaminated by a fault is identified as a good one. Conversely, swamping is when a good pseudorange is identified as faulty (Hekimoglu, 1997). This probability of identifying the wrong pseudorange is the probability of a wrong exclusion (Lee et al., 1996). In statistics this probability is referred to as a Type III error, where the null hypothesis is correctly rejected but the wrong pseudorange is identified as being faulty (Hawkins, 1980). If the probability of a wrong exclusion can be evaluated, then there is a possibility that the position can be classified as available for navigation without even having to reapply fault detection (Lee et al., 1996).
In the case where the probability of a wrong exclusion is too high, fault detection would still have to be reapplied after exclusion, or the position would be classified as unavailable. Nevertheless, significant operational benefit could still be gained from an algorithm that evaluates the probability of a wrong exclusion, so that the confidence level of fault detection can be assured (Lee et al., 1996). It is for this reason that Lee (1995) and Kelly (1998) attempt to evaluate the probability of a wrong exclusion by taking the difference between two outlier statistics. This is because two outlier statistics that are separated by only a small distance have a higher probability of contributing to a wrong exclusion. Conversely, as the difference between two outlier statistics grows, there is less probability of making a wrong exclusion. The issue with using the difference between the outlier statistics, however, is that it does not precisely estimate the probability of a wrong exclusion (Ober, 2003).

Another method of estimating the probability of a wrong exclusion is given by Pervan et al. (1996; 1998). In this method it is assumed that the faulty pseudorange conforms to a uniform distribution; then, using Bayesian statistics, the probability of a wrong exclusion is evaluated. The weakness of this method, though, is that the distribution of the biases is unknown. Consequently, even small changes within the assumed distribution of the faulty pseudorange can have a significant influence on the estimated probability of a wrong exclusion. Outside the field of navigation, Förstner (1983) and Li (1986) have carried out studies on the separability of two outlier statistics. Using the results of these studies, Li (1986) then defined the Minimal Separable Bias (MSB) as the smallest bias that can be confidently identified for a set Type III error. Applying the MSB to the field of navigation, the separability of various satellite constellations has been analysed by Hewitson et al. (2004) and Hewitson and Wang (2006).

While the basic Fault Detection and Exclusion (FDE) techniques have been well established, the relationship between FDE algorithm performance and the primary-means integrity requirements has not. Specifically, although formulae for the probability of a wrong exclusion and the probability of a missed detection have been developed, there is not yet a practical method that can evaluate them in application. Correctly calculating these probabilities is essential in meeting the integrity requirements of a primary means of navigation. This paper proposes and analyses new methods of correctly calculating these two quantities. Initially, the separability of two alternative hypotheses is analysed. The relationships between the probabilities of false alert, missed detection and wrong exclusion, and the threshold, the correlation coefficient and the non-centrality parameter, are discussed in detail. It is then assumed that each outlier statistic corresponds to a fault and that there is a non-centrality parameter corresponding to it. Since the non-centrality parameter is a function of the Type I, II and III errors and the correlation coefficient, the probabilities of missed detection, successful identification and wrong exclusion are then estimated using the non-centrality parameter and the correlation coefficient.
Thus, for each outlier statistic, the probabilities of making the different types of errors are estimated, to aid in deciding whether or not the faulty pseudorange can be correctly identified.

This paper is organised as follows. First, the outlier detection theory and the models used in hypothesis testing are introduced. Then their applications to FDE are examined. Thereafter, the separability of two outlier statistics is analysed and extended to the application of FDE. Next, an example using real GPS data is given to demonstrate the proposed method. Finally, the conclusions drawn from the study are presented.

2. Model descriptions
The linearized Gauss-Markov model applied in navigation is defined by (Schaffrin, 1997; Koch, 1999):

(1)

where:
v is the n by 1 vector of residuals,
A is the n by t design matrix reflecting the geometric strength,
l is the n by 1 measurement vector, containing the pseudorange observations and the distances from the satellites to the receiver,
x is the vector of the t unknowns, with estimated value x̂.

The mean of l and its positive definite covariance matrix are given by:

(2)

The least-squares solution is optimal in the sense that it is unbiased and of minimal variance in the class of linear unbiased estimators. However, these optimality properties only hold true when Equations (1) and (2) are correct.

2.1. Local Test for a Single Alternative Hypothesis.
In the case where there are faulty pseudorange measurements, the least-squares estimator of the position becomes biased. In order to detect a biased position, a fault detection procedure is applied. When a biased position is detected, it can then be corrected by excluding the faulty pseudorange. If it is assumed that the i-th pseudorange is faulty, then the correct model is given by:

(3)

where the additional term is the fault in the i-th pseudorange, multiplied by a unit vector whose i-th element is equal to one. Solving for the fault then leads to:

(4)

which has the variance:

(5)

where Q_v is the co-factor matrix of the estimated residuals (from the original Gauss-Markov model). The outlier test statistic for the i-th pseudorange can then be formed as (Baarda, 1968; Kok, 1984):

(6)

The correlation coefficient between a pair of outlier statistics is given by:

(7)

Based on Equation (3), the null hypothesis, corresponding to the assumption that there are no faulty pseudorange measurements, is:

(8)

Under the null hypothesis, Equations (1) and (3) are equivalent. Otherwise, the alternative hypothesis H_i means that the i-th pseudorange is faulty:

(9)

Taking the expectation of the outlier statistic in Equation (6), the non-centrality parameter can be obtained as:

(10)

2.2. Fault Detection.
In the fault detection phase, the overall validity of the null hypothesis is tested:

(11)

If this leads to a rejection of the null hypothesis, then it is concluded that a fault is present. Ideally, the probability of a false alert for the i-th pseudorange is set such that the probability of any one of the outlier tests failing, when the null hypothesis is true, is equal to the probability of a false alert, P_FA. Due to the difficulty in achieving this exactly, the Type I error of each outlier test is conservatively set as (Kelly, 1998):

(12)

The outlier test statistic in Equation (6) follows a standard normal distribution under H_0. The evidence on whether the model error specified by Equation (3) did, or did not, occur is based on the test:

(13)

By letting i run from one up to and including n, all of the pseudorange measurements can be screened for the presence of a fault. If one or more of the outlier tests fails, then it is concluded that a fault exists.
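Because Equations (1)-(13) appear only as numbered placeholders in this preview, the following MATLAB sketch assumes the standard Baarda w-test form of the outlier statistic, its correlation coefficients, and Kelly's P_FA/n allocation of the Type I error. The design matrix, weights and injected fault are toy values chosen purely for illustration and are not taken from the paper.

```matlab
% Sketch of the local outlier (w-) test for GNSS pseudoranges,
% assuming the classical Baarda form of Equations (6), (7) and (12).
rng(1);                                % reproducible toy example
n = 8; t = 4;                          % pseudoranges / unknowns (x, y, z, clock)
A = [randn(n, 3), ones(n, 1)];         % toy design matrix (4th column: clock)
sigma0 = 1;                            % a-priori std of unit weight (m)
P = eye(n);                            % weight matrix (co-factor of l = identity)
l = sigma0 * randn(n, 1);              % observed-minus-computed, truth at origin
l(2) = l(2) + 10;                      % inject a fault on the 2nd pseudorange

% Least-squares solution and residual co-factor matrix (Equations (1)-(2))
N     = A' * P * A;
x_hat = N \ (A' * P * l);
v_hat = l - A * x_hat;                 % estimated residuals
Qv    = inv(P) - A * (N \ A');         % co-factor matrix of the residuals

% Outlier test statistic for each pseudorange (Equation (6), w-test form)
M = P * Qv * P;
w = (P * v_hat) ./ (sigma0 * sqrt(diag(M)));

% Correlation coefficients between pairs of outlier statistics (Equation (7))
rho = M ./ sqrt(diag(M) * diag(M)');

% Per-test threshold from the false-alert budget (Equations (12)-(13))
PFA   = 1e-3;
alpha = PFA / n;                       % conservative allocation per test
c     = sqrt(2) * erfcinv(alpha);      % two-sided N(0,1) quantile: P(|w| > c) = alpha

[w_max, i_max] = max(abs(w));
fprintf('largest |w| = %.2f on pseudorange %d (threshold %.2f)\n', w_max, i_max, c);
```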
Besides the possibility of making a Type I error, there is also the possibility that the null hypothesis is accepted when in fact it is false. This error, denoted β_0, is a Type II error. Thus, when the null hypothesis is accepted there is a possibility of making a Type II error, and when the alternative hypothesis is accepted there is a possibility of making a Type I error. Therefore, no matter what decision is made, there is always the possibility of making an error. However, steps can be taken to control this possibility and to guarantee that the probability of making a correct decision can be estimated.

By setting the threshold based on the probability of a false alert, the Type I error can be controlled. To control the Type II error, protection levels are formulated and compared with the alert limit. If the protection level is contained within the alert limit, then the probability of making a Type II error is acceptable; otherwise it is not. When formulating the protection levels, it is desirable to set the size of the Type II error for each test such that the probability of a fault going undetected by all of the tests is equal to the probability of a missed detection, P_MD. However, due to the difficulty in achieving this, Kelly (1998) uses the approximation:

(14)

2.3. Fault Exclusion.
When the fault detection procedure has detected a fault, the next step is to attempt to identify and remove the faulty pseudorange. Since the null hypothesis has been rejected, the pseudorange measurements conform to one of the alternative hypotheses:

(15)    i = 1, …, n

To determine which alternative hypothesis applies, the largest outlier statistic, in absolute value, is found, and the corresponding pseudorange is deduced to be faulty. Mathematically, the j-th pseudorange is judged faulty when both conditions in Equation (16) hold (its statistic is the largest in absolute value and exceeds the threshold):

(16)

Once the faulty pseudorange has been identified, corrective action must be taken to mitigate its influence on the navigation solution. Here, the identified pseudorange is excluded from the model, so that Equation (1) now has one fewer pseudorange measurement. Since the correlation between the outlier statistics means that the incorrect pseudorange can at times be identified, an FDE procedure would normally be reapplied to the updated model until the null hypothesis is accepted.
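The detection and exclusion loop described in Sections 2.2-2.3 can be assembled into a small routine. The sketch below is one minimal reading of that loop (detect with the w-test, exclude the largest failing statistic, repeat); the function name and stopping rules are this sketch's own choices, and the protection-level availability check of Section 2.2 is omitted.

```matlab
function [x_hat, excluded] = classical_fde(A, l, P, sigma0, PFA)
% Classical iterative FDE (sketch): test the largest |w_i| against the
% threshold, exclude the corresponding pseudorange, and repeat until all
% local tests pass or redundancy runs out.
idx      = (1:size(A, 1))';            % original pseudorange indices
excluded = [];
x_hat    = NaN(size(A, 2), 1);
while size(A, 1) > size(A, 2)          % at least one redundant measurement
    n     = size(A, 1);
    Ninv  = inv(A' * P * A);
    x_hat = Ninv * (A' * P * l);
    v_hat = l - A * x_hat;
    Qv    = inv(P) - A * Ninv * A';
    M     = P * Qv * P;
    w     = (P * v_hat) ./ (sigma0 * sqrt(diag(M)));   % Equation (6)
    c     = sqrt(2) * erfcinv(PFA / n);                % Equations (12)-(13)
    [w_max, j] = max(abs(w));
    if w_max <= c                      % all local tests pass: accept H0
        break;
    end
    excluded(end + 1) = idx(j);        %#ok<AGROW>  % Equation (16): exclude j
    idx(j) = [];  A(j, :) = [];  l(j) = [];
    P(j, :) = [];  P(:, j) = [];
end
end
```

With the toy quantities from the previous sketch, this could be called as [x_hat, excluded] = classical_fde(A, l, P, sigma0, 1e-3).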
3. Separability analyses of fault detection and exclusion
During the FDE procedure, wrong decisions can sometimes be made. According to Förstner (1983), a Type III error occurs when both a Type I and a Type II error are committed, which means making a wrong exclusion. In the following sections, the origins of the three types of errors are presented. This knowledge is then applied to the FDE procedure to determine whether a fault can be successfully identified.

Table 1. Decisions when testing two alternative hypotheses.

3.1. Three Types of Error, Based on Two Alternative Hypotheses.
In Förstner's pioneering studies, the decisions that can be made with two alternative hypotheses are described as in Table 1 (Förstner, 1983; Li, 1986). From Table 1 it can be seen that, because of the symmetry between w_i and w_j, the corresponding probabilities are equal. In addition, the following is satisfied:

(17)

and:

(18)

The estimation of the parameters shown in Table 1 is based on the distribution of the test statistics. It is assumed that there is an outlier in the i-th observation, causing the expectation of w_i to become δ_i, the non-centrality parameter. The bias also causes the expectation of w_j to shift, because of the correlation between w_i and w_j, which can be computed from Equation (6). Successful identification then actually means accepting the alternative hypothesis H_i rather than H_j. The joint distribution of w_i and w_j is:

w = (w_i  w_j)^T ~ N(μ, D)    (19)

The expectation and covariance matrix of the joint distribution in Equation (19) are then given by:

(20)

The probability function of the two outlier statistics is given by:

(21)

If the critical value c and the distribution of w are known, the probability of successful identification, denoted (1 − β_ii), can be obtained from:

(22)

The sizes of the Type II and Type III errors can be obtained from:

(23)

and:

(24)

In Förstner (1983) and Li (1986), if α_0, ρ, β_i0 and β_ij are given, then the non-centrality parameter is obtained from:

(25)

or:

(26)

For the same preset values of α_0 and ρ, the non-centrality parameters obtained by setting β_i0 and by setting β_ij may be different. In this case, the greater value is chosen, to satisfy the requirements that the probability of a Type II error is not greater than β_i0 and that the probability of a Type III error is not greater than β_ij (Förstner, 1983; Li, 1986). In this paper the non-centrality parameter is calculated from:

(27)

This is because the total probability of making errors, β_ii, remains unchanged with different correlation coefficients. Nonetheless, the correlation coefficient does determine the ratio between β_i0 and β_ij. Consequently, given preset values for α_0 and β_ii, the non-centrality parameter will change along with the correlation coefficient. For this non-centrality parameter, the probabilities of making Type II and Type III errors will each be no greater than β_ii, and their sum will be equal to β_ii.
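Equations (19)-(24) are only placeholders in this preview, so the sketch below spells out one way to evaluate them numerically. It assumes that, under H_i, (w_i, w_j) is bivariate normal with mean (δ, ρδ), unit variances and correlation ρ, and it takes "correct identification" to be the event that |w_i| exceeds the threshold and is the larger of the two statistics, as implied by the decision rule of Section 2.3. The function name and the truncation of the integration domain are this sketch's own choices.

```matlab
function [beta_i0, p_correct, beta_ij] = separability_probs(delta, rho, c)
% Type II error, correct identification and Type III error (wrong
% exclusion) for two correlated outlier statistics w_i, w_j, assuming
% (w_i, w_j) ~ N([delta; rho*delta], [1 rho; rho 1]) under H_i and the
% "largest |w| above the threshold c is excluded" rule.  Requires |rho| < 1.
mu  = [delta, rho * delta];
pdf = @(x, y) exp(-((x - mu(1)).^2 - 2*rho*(x - mu(1)).*(y - mu(2)) + ...
      (y - mu(2)).^2) / (2*(1 - rho^2))) / (2*pi*sqrt(1 - rho^2));
L = max(abs(delta), 1) + 10;                       % effective infinity

% Type II error: both statistics stay below the threshold (accept H0)
beta_i0 = integral2(pdf, -c, c, -c, c);

% Correct identification: |w_i| > c and |w_i| > |w_j|
p_correct = integral2(pdf,  c,  L, @(x) -abs(x), @(x) abs(x)) + ...
            integral2(pdf, -L, -c, @(x) -abs(x), @(x) abs(x));

% Type III error (wrong exclusion): the remaining probability mass
beta_ij = max(0, 1 - beta_i0 - p_correct);
end
```

A call such as [b_i0, p_ci, b_ij] = separability_probs(4, 0.9, 2.81) evaluates the three probabilities for one combination of non-centrality parameter, correlation coefficient and threshold.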
3.2. Relationships between Different Parameters.
Although there are many parameters that control the probabilities of making the different types of errors with two alternative hypotheses, only three of them are fundamentally independent: α_0, ρ and δ. Any other parameter can be obtained as shown in Table 2. From Table 2 it can be seen that once α_0 and ρ are given, the threshold c and the sizes of the Type I errors α_00, α_0i and α_0j can be estimated. By setting δ as well, β_i0, β_ij and β_ii can be calculated.

Table 2. The relationships among the different parameters.

Figure 1. Type II and Type III errors.

The changes in β_i0 and β_ij are shown in Figure 1 when β_ii is equal to 20% and α_0 is set to values of 0.1%, 0.3%, 0.5% and 1%. This illustrates the fact that the Type II error and the Type III error have opposite tendencies as the correlation coefficient increases. For small correlation coefficients β_i0 is large and β_ij is small, whereas for large correlation coefficients β_ij is large and β_i0 is small. When ρ is close to zero, β_i0 is around 20% and β_ij is approximately zero, irrespective of the size of the Type I error. Conversely, β_i0 reduces quickly to about zero when ρ approaches 0.98, and β_ij increases rapidly to 20%.

The above analyses are based on preset values of α_0 and β_ii, in order to determine the dependence of the other parameters on ρ. In the following analysis, α_0 is preset to 1% and the changes in β_ii, β_i0 and β_ij are examined as functions of δ and ρ.

Figure 2. The sum probability of making errors.
Figure 3. Type II error β_i0.

The value of β_ii, changing with δ and ρ, is shown in Figure 2, which demonstrates that a larger correlation coefficient leads to a higher value of β_ii when δ is kept constant. In addition, a larger δ results in a smaller β_ii for the same value of ρ. This means that a higher δ and a smaller ρ will enhance the probability of correct identification. When ρ becomes larger, the impact of the correlation coefficient on β_ii becomes much more significant. When ρ is zero, β_ii decreases quickly from around one to zero as δ increases. This means that when the outlier statistics are independent of each other, the probability of committing errors can be controlled to near zero once δ is large enough. However, as ρ approaches 1, even increasing the non-centrality parameter to 20 still does not reduce β_ii below 40%. This indicates that when the correlation coefficient is approximately 1, the non-centrality parameter has only a small effect in decreasing β_ii. Consequently, to control the probability of making errors, the goal should be to keep the correlation coefficients between the outlier statistics to minimum values.

The values of β_i0 and β_ij are shown in Figure 3 and Figure 4 respectively. Figure 3 shows that β_i0 decreases from about one to zero as δ increases. The decreases within the β_i0 curves for different ρ are similar; thus, the correlation coefficient plays a relatively minor role in influencing β_i0. Figure 4 shows that the correlation coefficient significantly impacts β_ij. All of the curves quickly increase to a peak and then decrease slowly. The larger the correlation coefficient, the higher the peak and the slower the decrease; for large correlation coefficients, the peak also occurs at larger values of the non-centrality parameter.

Figure 4. Type III error β_ij.

When ρ is zero, β_ij is always close to zero no matter what the value of δ. When the correlation coefficient is close to 1, β_ij increases dramatically from about zero to nearly 50% and then decreases rather slowly. Therefore, when the correlation coefficient is high, the dominant challenge in successful identification is avoiding a Type III error, because it is difficult to control the probability of doing so by increasing δ.

Based on the above analysis, it can be deduced that the probabilities of making Type I, II and III errors can be accurately estimated from the correlation coefficients and the non-centrality parameters. These parameters can also be used to provide accurate control over making correct decisions in an FDE procedure. For instance, a large correlation coefficient implies a greater chance of incorrect identification; under these circumstances, a larger non-centrality parameter is required to control the probability of committing a Type III error.
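The trends shown in Figures 1-4 can be tabulated numerically. The sketch below builds such a (δ, ρ) grid with the separability_probs helper from the previous sketch and interpolates in it, which is the kind of pre-computed grid data that Section 3.3 later suggests using instead of repeated numerical integration. The grid spacings and query values are arbitrary illustration choices.

```matlab
% Sketch: tabulate the error probabilities over a (delta, rho) grid and
% interpolate in the table instead of re-integrating at run time.
% Assumes separability_probs.m from the previous sketch is on the path.
alpha0 = 0.01;
c      = sqrt(2) * erfcinv(alpha0);        % P(|w| > c | H0) = alpha0

delta_grid = 0:0.5:10;
rho_grid   = [0, 0.3, 0.6, 0.9];
beta_i0 = zeros(numel(rho_grid), numel(delta_grid));
beta_ij = zeros(size(beta_i0));
beta_ii = zeros(size(beta_i0));            % total error = beta_i0 + beta_ij

for r = 1:numel(rho_grid)
    for d = 1:numel(delta_grid)
        [b2, ~, b3]   = separability_probs(delta_grid(d), rho_grid(r), c);
        beta_i0(r, d) = b2;
        beta_ij(r, d) = b3;
        beta_ii(r, d) = b2 + b3;
    end
end

% Interpolate in the tabulated grid for an intermediate (delta, rho) pair
p_wrong_excl = interp2(delta_grid, rho_grid, beta_ij, 4.2, 0.75);
fprintf('interpolated P(wrong exclusion) at delta = 4.2, rho = 0.75: %.3f\n', ...
        p_wrong_excl);
```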
3.3. Fault Verification.
As can be seen, each test statistic is a function of all of the observation errors, and the correlation between the test statistics contributes to missed detections and wrong exclusions. This section presents a practical procedure to estimate the probability of a missed detection and the probability of a wrong exclusion. The calculation is based on the following assumption: if one considers only the bias error on one of the satellites at a time, and neglects the range errors on all the other satellites, then the position estimation error and the test statistic become linearly proportional, with a slope that varies depending on which satellite carries the bias. The satellite that is the most difficult to detect is the one with the maximum correlation coefficient and the highest probability of a wrong exclusion; the probability of a missed detection is highest for a failure of that satellite (Lee, 1995; 1996).

From Equation (6), when only the bias error on the i-th observation is taken into consideration and the random errors on the other satellites are neglected, the test statistic simplifies to:

(28)

The influence of this bias error on another test statistic is:

(29)

However, as the real circumstances are never known, the hypothesis test is an inverse procedure, which actually uses the estimated statistic to deduce the unknown fault. According to the second column of the test results in Table 1, when the greatest absolute value points to a particular test statistic during the classical FDE procedure, there are two possibilities: either a successful identification or a wrong exclusion. A wrong exclusion means that an outlier occurring on another observation affects the test statistic so strongly that it becomes greater than the critical value. Consequently, the expectation shift of the largest statistic may originate from:

(30)

which will lead to a successful identification, or from:

(31)

which means that a wrong exclusion will be committed after the test.

Based on Equations (30) and (31), the non-centrality parameter corresponding to the largest test statistic can be obtained. Then, using each non-centrality parameter and the correlation coefficient, the probabilities of successful identification and wrong exclusion can be calculated from the relationships in Table 2. That is:

(32)

or:

(33)

As it is complicated and time-consuming to calculate β_ii and β_ij exactly via numerical integration, an approximate solution can be obtained by interpolation in the grid data illustrated in Figures 2, 3 and 4, provided the grid is fine enough. By comparing the estimated values of β_ii and β_ij with the corresponding preset thresholds, decisions about successful identification and wrong exclusion can then be made.

4. Quality control for fault detection and exclusion
In this section a practical procedure for controlling the quality of the FDE procedure is introduced, based on the above analysis. The main proposal is to estimate the probability of a missed detection and of a wrong exclusion based on the separability analysis of two alternative hypotheses. These probabilities depend on the magnitude of the bias, of which the receiver has no knowledge and which may vary continuously. For this reason the tests for verifying the probabilities of a missed detection and a wrong exclusion were designed as follows (a compact sketch of the resulting decision logic is given after the steps):

Step 1. Form the observation equation.
Step 2. Calculate the protection level, based on the probability of a false alert and the probability of a missed detection.
Step 3. If the protection level is less than the alert limit, proceed to Step 4. Otherwise, the system is unavailable for navigation; proceed to Step 1 for the next epoch.
Step 4. Calculate the outlier statistics and compare them with the threshold.
Step 5. If all of the outlier statistics pass, the position is available for navigation; proceed to Step 1 for the next epoch. Otherwise, proceed to Step 6.
Step 6. Calculate the probability of correct identification for the largest outlier statistic. If the probability of correct identification is higher than its threshold, exclude this observation and proceed to Step 4. Otherwise, proceed to Step 7.
Step 7. Calculate the probability of wrong exclusion between the largest and the second largest outlier statistics. If the probability of wrong exclusion is higher than its threshold, both of them are excluded; proceed to Step 4. Otherwise, the position is unavailable for navigation; proceed to Step 1 for the next epoch.

Based on the above analysis, the traditional FDE procedure is still applied, but new criteria are now added, to estimate the probability of a missed detection and of a wrong exclusion and so improve the successful identification rate.
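The decision logic of Steps 4-7 can be sketched as below. It reuses w, rho and c from the first sketch and the separability_probs helper from Section 3. Reading Equation (30) as "take the largest |w| itself as the estimate of the non-centrality parameter" is an assumption of this sketch (Equation (31) would scale it by the correlation coefficient instead), and the two thresholds are the values used later in Section 5.

```matlab
% Sketch of Steps 4-7 with the Section 3.3 probability estimates.
p_ci_min = 0.80;                        % required P(correct identification)
p_we_max = 0.03;                        % tolerated P(wrong exclusion)

[w_sorted, order] = sort(abs(w), 'descend');
j1 = order(1);  j2 = order(2);          % largest and second-largest statistics

if w_sorted(1) <= c
    decision = 'all local tests pass: position available';    % Step 5
else
    delta_hat = w_sorted(1);            % Equation (30), sketch reading
    % |rho| is used because the tests are two-sided
    [~, p_ci, p_we] = separability_probs(delta_hat, abs(rho(j1, j2)), c);
    if p_ci >= p_ci_min                 % Step 6: trust the identification
        decision = sprintf('exclude pseudorange %d and re-test', j1);
    elseif p_we >= p_we_max             % Step 7: cannot separate j1 from j2
        decision = sprintf('exclude pseudoranges %d and %d and re-test', j1, j2);
    else
        decision = 'position unavailable for this epoch';
    end
end
disp(decision);
```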
5. Experiments and analysis
The separability analysis theory described in this paper was applied to GPS pseudorange data collected at Minot, North Dakota, USA, on 18 August 2008. The sampling interval is 30 seconds and the duration of the data set is 24 hours. To compare the classical FDE procedure and the optimal FDE method proposed in this paper, an outlier of 1-5 times the MDB was added to the second pseudorange in each epoch. The parameters for the FDE procedure were α_0 = 1% and β_0 = 20%, and the thresholds for the probability of successful identification and of wrong exclusion were set to 1 − β_ii = 80% and β_ij = 3%.

For the classical FDE procedure, there are three types of judgement based on the global and the local tests. The judgement indicator for each epoch is shown in Figure 5. Indicator = 0 means that the global test was passed and no outlier exists; indicator = 1 means that the global test failed but the local test was passed, i.e. the existence of an outlier was detected but its location cannot be identified; indicator = 2 means that both the global and local tests were rejected, so the outlier can be identified.

Figure 5. Indicator for the data snooping procedure.

Figure 5 shows that, although a fault is added at every epoch, there are still some epochs at which the fault cannot be detected. In such circumstances both indicator = 0 and indicator = 1 signify a missed detection, while indicator = 2 signifies an identification. As the real location of the fault is known in this test, whether the identification is correct or not can be evaluated. The fault location identified at each epoch is shown in Figure 6. This clearly shows that there is a considerable possibility of identifying the fault on the wrong satellite. Furthermore, a wrong identification (or exclusion) will negatively influence the position accuracy, especially when the satellite geometry is weak or when there are no redundant measurements.

Figure 6. Fault location identified by the FDE procedure.

The judgement indicator for the proposed optimal FDE procedure is shown in Figure 7. When applying the new method, two additional criteria are applied in the local test: the probability of successful identification should be greater than its threshold, and the probability of wrong exclusion should be smaller than its threshold. Consequently, the results are more detailed: 0 indicates that the global test was passed; 1 indicates that the global test failed and the local test was passed; 2 indicates that the global test failed and the local test was rejected with the two new criteria satisfied (so it is assumed that the outlier can be identified); 3 indicates that, although the global and local tests were rejected, the confidence level for correct identification was not satisfied, so the outlier cannot be identified; 4 indicates that, although the other criteria are satisfied, the probability of wrong exclusion is higher than its threshold (which implies an unacceptable risk of making a wrong exclusion).

Figure 7. Indicator for the optimal FDE procedure with wrong exclusion estimation.
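The experiment above scales the injected fault by the Minimal Detectable Bias (MDB), whose formula is not reproduced in this preview. The sketch below therefore assumes the classical Baarda form, MDB_i = δ_0·σ_0 / sqrt(e_i' P Q_v P e_i), with δ_0 built from α_0 = 1% and β_0 = 20%. It reuses P, Qv, sigma0 and l from the first sketch; the scale factor k and the faulted index are arbitrary illustration values.

```matlab
% Sketch: size of the injected fault, assuming the classical (Baarda) MDB.
alpha0 = 0.01;  beta0 = 0.20;  k = 3;  i_fault = 2;
z_a = sqrt(2) * erfcinv(alpha0);            % norminv(1 - alpha0/2)
z_b = -sqrt(2) * erfcinv(2 * (1 - beta0));  % norminv(1 - beta0)
delta0 = z_a + z_b;                         % non-centrality for (alpha0, beta0)

M   = P * Qv * P;                           % from the first sketch
MDB = delta0 * sigma0 ./ sqrt(diag(M));     % per-pseudorange MDB (metres)

l_faulty = l;
l_faulty(i_fault) = l_faulty(i_fault) + k * MDB(i_fault);
fprintf('MDB of pseudorange %d: %.2f m, injected fault: %.2f m\n', ...
        i_fault, MDB(i_fault), k * MDB(i_fault));
```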
Comparing Figure 7 with Figure 5, it can be seen that the results for indicator 0 and indicator 1 are the same for both procedures. However, the results for indicator 2 in Figure 5 are divided into three parts (indicator = 2, 3 and 4) in Figure 7. This means that the optimal FDE procedure places more restrictions on the identification, so as to guarantee a successful identification rate. Figure 8 shows the location of the fault identified by the optimal FDE procedure when the indicator = 2. Compared with Figure 6, it is clear that the possibility of a wrong exclusion is smaller, which means that, with the stricter criteria, wrong exclusions can largely be separated from identifications.

Figure 8. Fault location identified by the optimal FDE procedure.

The corresponding probabilities of successful identification and of wrong exclusion are presented in Figures 9 and 10, respectively. In Figure 9, the red marks show that for many epochs, although the local test can identify the fault, the evaluated probability of successful identification is smaller than its threshold (80%); consequently, the identification is considered untrustworthy. Figure 10 shows the evaluated probability of a wrong exclusion, which is greater than the threshold (3%) when the indicator equals 4. At many epochs the probability of a wrong exclusion is even greater than 20%, which means that the identification is untrustworthy and the position accuracy may be negatively influenced after exclusion.

Figure 9. Probability of successful identification.
Figure 10. Probability of wrong exclusion.

Figure 11. Position errors (metres, outlier size: 1-5 MDB).

The North, East and Vertical position errors for the different methods are shown in Figure 11. From the figure it can be seen that even the least-squares estimation results are much better than those of the classical FDE procedure, which means that the position accuracy is negatively influenced when that FDE procedure is applied. This is caused by the frequent occurrence of wrong exclusions, since there is no criterion to check whether a wrong exclusion has been committed. The red curves show that the optimal FDE procedure can improve the reliability and stability of the results. In Figure 12, the corresponding results for an outlier with a magnitude of 4 times the MDB are shown. Compared with Figure 11, it is clear that, with a larger outlier, the position accuracies of the different methods are all significantly reduced. The estimation accuracies from both the proposed and the classical FDE procedures are much higher than that of least-squares estimation. This shows that least-squares estimation is no longer optimal, in the sense of being unbiased and of minimal variance. Since the existence (and the magnitude) of a fault cannot be predicted beforehand, the applied FDE procedure must guarantee the stability and reliability of the results.

6. Conclusions
This paper has studied the separability of two alternative hypotheses, including the relationships between the different statistical parameters. The probabilities of making Type I, II and III errors were found to depend on the correlation coefficients between the outlier statistics.
The larger the correlation coefficient, the larger the non-centrality parameter required to guarantee successful identification. This also means that it is much more difficult to successfully identify a faulty pseudorange that is highly correlated with other pseudorange measurements. In addition, a larger correlation coefficient also significantly increases the probability of making a wrong exclusion. Increasing the non-centrality parameter does not necessarily increase or decrease the probability of making a wrong exclusion; eventually, however, a very large non-centrality parameter will decrease it.

In terms of the Fault Detection and Exclusion (FDE) procedure, the results presented here can be used to determine the probability of a wrong exclusion. This entails simply inspecting a graphical presentation of, or calculating directly, the probability of a wrong exclusion as a function of the correlation coefficient. The probability of a wrong exclusion can then be used to decide whether an exclusion is to be trusted, or whether FDE has to be reapplied after the removal of a pseudorange measurement.

Acknowledgements
The first author wishes to record her appreciation to the China Scholarship Council (CSC) for supporting her studies at the University of New South Wales, Australia.

References
Baarda, W. (1967). Statistical concepts in geodesy. Netherlands Geodetic Commission, Publications on Geodesy, New Series 2, No. 4, Delft, The Netherlands.
Baarda, W. (1968). A testing procedure for use in geodetic networks. Netherlands Geodetic Commission, Publications on Geodesy, New Series 2, No. 5, Delft, The Netherlands.
Brown, R. and Chin, G. (1997). GPS RAIM: Calculation of the threshold and protection radius using chi-square methods - a geometric approach. In: Global Positioning System, Vol. 5, The U.S. Institute of Navigation, Fairfax, Virginia, USA, 155-178.
Förstner, W. (1983). Reliability and discernibility of extended Gauss-Markov models. In: Mathematical Models of Geodetic/Photogrammetric Point Determination with Regard to Outliers and Systematic Errors, Deutsche Geodätische Kommission, Reihe A, No. 98, Munich, Germany.
Hawkins, D. M. (1980). Identification of Outliers. Chapman and Hall, New York.
Hekimoglu, S. (1997). Finite sample breakdown points of outlier detection procedures. Journal of Surveying Engineering, 125(1), 15-31.
Hewitson, S., Lee, H. and Wang, J. (2004). Localizability analysis for GPS/Galileo receiver autonomous integrity monitoring. Journal of Navigation, 57, 245-259.
Hewitson, S. and Wang, J. (2006). GNSS receiver autonomous integrity monitoring (RAIM) performance analysis. GPS Solutions, 10(3), 155-170.
Kelly, R. (1998). The linear model, RNP, and the near-optimum fault detection and exclusion algorithm. In: Global Positioning System, Vol. 5, The U.S. Institute of Navigation, Fairfax, Virginia, USA, 227-260.
Koch, K. (1999). Parameter Estimation and Hypothesis Testing in Linear Models, 2nd Edn. Springer, Berlin, Heidelberg, New York.
Kok, J. (1984). On data snooping and multiple outlier testing. NOAA Technical Report, NOS NGS 30, U.S. Department of Commerce, Rockville, Maryland, USA.
Lee, Y. (1995). New techniques relating fault detection and exclusion performance to GPS primary means integrity requirements. In: ION GPS 1995, 12-15 September, Palm Springs, California, USA, 1929-1939.
Lee, Y., Van Dyke, K., Decleene, B., Studenny, J. and Beckmann, M. (1996). Summary of RTCA SC-159 GPS Integrity Working Group activities. Navigation, 43(3), 195-226.
Lee, Y. C. and Van Dyke, K. (2002). Analysis performed in support of the ad-hoc working group of RTCA SC-159 on RAIM/FDE issues. In: ION NTM 2002, 28-30 January 2002, San Diego, California, USA, 639-654.
Li, D. (1986). Trennbarkeit und Zuverlässigkeit bei zwei verschiedenen Alternativhypothesen im Gauß-Markoff-Modell. Z. Vermess., 3, 114-128.
Ober, P. (2003). Integrity Prediction and Monitoring of Navigation Systems. Integricom, Leiden, The Netherlands.
Parkinson, B. and Axelrad, P. (1988). Autonomous GPS integrity monitoring using the pseudorange residual. Navigation, 35(2), 255-274.
Pervan, B., Lawrence, D., Cohen, C. and Parkinson, B. (1996). Parity space methods for autonomous fault detection and exclusion using GPS carrier phase. In: PLANS 1996, 22-26 April 1996, Atlanta, Georgia, USA, 649-656.
Pervan, B., Lawrence, D. and Parkinson, B. (1998). Autonomous fault detection and removal using GPS carrier phase. IEEE Transactions on Aerospace and Electronic Systems, 34(3), 897-906.
Schaffrin, B. (1997). Reliability measures for correlated observations. Journal of Surveying Engineering, 123(3), 126-137.
Wang, J. and Kubo, Y. (2008). GNSS receiver autonomous integrity monitoring. In: Sugimoto, S. and Shibasaki, R. (eds), GPS Handbook, Asakura, Tokyo, Japan.