Journal of China University of Mining & Technology, Vol.18, No.4

Mine-hoist fault-condition detection based on the wavelet packet transform and kernel PCA

XIA Shi-xiong, NIU Qiang, ZHOU Yong, ZHANG Lei
School of Computer Science

Received 15 May 2008; accepted 20 July 2008
Projects 50674086, supported by the National Natural Science Foundation of China, BS2006002, by the Society Development Science and Technology Plan of Jiangsu Province, and 20060290508, by the Doctoral Foundation of the Ministry of Education of China
Corresponding author. Tel: +86-516-83591702; E-mail address: xiasx

Keywords: PCA; KPCA; fault-condition detection

1 Introduction

Because a mine hoist is a very complicated and variable system, it will inevitably develop faults during long-term running under heavy loads. This can lead to equipment damage, work stoppages and reduced operating efficiency, and may even threaten the safety of mine personnel. The identification of running faults has therefore become an important component of the safety system. The key technique for hoist condition monitoring and fault identification is extracting information from the features of the monitoring signals and then offering a judgment. However, there are many variables to monitor in a mine hoist, and there are many complex correlations between the variables and the working equipment. This introduces uncertain factors and information manifested in complex forms, such as multiple faults or associated faults, which make fault diagnosis and identification considerably difficult [1].

There are currently many conventional methods for extracting mine-hoist fault features, such as principal component analysis (PCA) and partial least squares (PLS) [2], and these methods have been applied to the actual process. However, they are essentially linear transformations, whereas the actual monitoring process is nonlinear to varying degrees.
Thus, researchers have proposed a series of nonlinear methods involving complex nonlinear transformations. Furthermore, these nonlinear methods are confined to fault detection: fault-variable separation and fault identification remain difficult problems. This paper describes a hoist fault-diagnosis feature-extraction method based on the wavelet packet transform (WPT) and kernel principal component analysis (KPCA). We extract the features by WPT and then extract the main features using a KPCA transform, which projects the low-dimensional monitoring data samples into a high-dimensional space. We then perform a dimension reduction and reconstruct the singular kernel matrix; after that, the target feature is extracted from the reconstructed nonsingular matrix. In this way the extracted target feature is distinct and stable. By comparing the analyzed data we show that the method proposed in this paper is effective.

2 Feature extraction based on WPT and KPCA

2.1 Wavelet packet transform

The wavelet packet transform (WPT) [3], a generalization of wavelet decomposition, offers a rich range of possibilities for signal analysis. Let $x_{jk}$ ($j = 0, 1, \ldots, 7$; $k = 1, 2, \ldots, n$), where $n$ is the length of the signal, denote the discrete points of the signal $S_{3j}(t)$ reconstructed from the $j$-th frequency band of the third decomposition level. Then we can get:

$E_{3j} = \int \left|S_{3j}(t)\right|^2 \mathrm{d}t = \sum_{k=1}^{n} \left|x_{jk}\right|^2$  (2)

Consider that we have made only a 3-layer wavelet packet decomposition of the echo signals. To describe the change of each frequency component in more detail, the 2nd-order statistical characteristic of the reconstructed signal is also regarded as a feature:

$D_{3j} = \frac{1}{n} \sum_{k=1}^{n} \left(x_{jk} - \bar{x}_j\right)^2$  (3)

Step 4: The $E_{3j}$ are often large, so we normalize them. Let $E = \left(\sum_{j=0}^{7} E_{3j}^2\right)^{1/2}$; the derived feature vector is then, at last:

$T = \left[E_{30}/E,\ E_{31}/E,\ \ldots,\ E_{36}/E,\ E_{37}/E\right]$  (4)

The signal is decomposed by wavelet packet and the useful characteristic feature vectors are then extracted through the process given above.
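As a concrete illustration of Eqs.(2)-(4), the band energies and the normalized feature vector can be computed with a minimal 3-level wavelet packet built from the orthonormal Haar filter. This is a sketch only: the paper does not state which wavelet basis it uses, and the function names here are our own.

```python
import numpy as np

def haar_split(x):
    """One orthonormal Haar analysis step: low-pass and high-pass half-bands."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def wavelet_packet_energies(x, levels=3):
    """Energies E_{3j} of the 2**levels frequency bands, as in Eq.(2)."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for b in bands:                # split EVERY band (packet, not pyramid)
            lo, hi = haar_split(b)
            nxt.extend([lo, hi])
        bands = nxt
    return np.array([np.sum(b ** 2) for b in bands])

def wpt_feature_vector(x, levels=3):
    """Normalized feature vector T = [E_{30}/E, ..., E_{37}/E], as in Eq.(4)."""
    E3 = wavelet_packet_energies(x, levels)
    E = np.sqrt(np.sum(E3 ** 2))       # E = (sum_j E_{3j}^2)^(1/2)
    return E3 / E
```

Because each Haar analysis step is orthonormal, the eight band energies sum to the total signal energy, and the normalized vector T has unit Euclidean norm.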
Compared to other traditional methods, such as the Hilbert transform, approaches based on WPT analysis are preferred because of the agility of the process and its principled decomposition.

2.2 Kernel principal component analysis

Kernel principal component analysis applies kernel methods to principal component analysis [4-5]. Let $x_k \in R^N$, $k = 1, 2, \ldots, M$, with $\sum_{k=1}^{M} x_k = 0$. The principal components are the elements on the diagonal after the covariance matrix, $C = \frac{1}{M} \sum_{j=1}^{M} x_j x_j^{\mathrm{T}}$, has been diagonalized. Generally speaking, the first $N$ values along the diagonal, corresponding to the large eigenvalues, carry the useful information in the analysis. PCA solves for the eigenvalues and eigenvectors of the covariance matrix by solving the characteristic equation [6]

$\lambda v = Cv = \frac{1}{M} \sum_{j=1}^{M} (x_j \cdot v)\, x_j$  (5)

where the eigenvalues $\lambda \geq 0$ and the eigenvectors $v \in R^N \setminus \{0\}$; this is the essence of PCA.

Let the nonlinear transformation $\Phi \colon R^N \to F$, $x \mapsto X$, project the original space into the feature space $F$. Then the covariance matrix $C$ of the original space has the following form in the feature space:

$\bar{C} = \frac{1}{M} \sum_{j=1}^{M} \Phi(x_j) \Phi(x_j)^{\mathrm{T}}$  (6)

Nonlinear principal component analysis can then be considered principal component analysis of $\bar{C}$ in the feature space $F$. Obviously, all the eigenvalues $\lambda \geq 0$ and eigenvectors $V \in F \setminus \{0\}$ of $\bar{C}$ satisfy $\lambda V = \bar{C}V$. All of the solutions lie in the subspace spanned by $\Phi(x_i)$, $i = 1, 2, \ldots, M$:

$\lambda \left(\Phi(x_k) \cdot V\right) = \left(\Phi(x_k) \cdot \bar{C}V\right), \quad k = 1, 2, \ldots, M$  (7)

There exist coefficients $\alpha_i$ such that

$V = \sum_{i=1}^{M} \alpha_i \Phi(x_i)$  (8)

From Eqs.(6), (7) and (8) we can obtain:

$\lambda \sum_{i=1}^{M} \alpha_i \left(\Phi(x_k) \cdot \Phi(x_i)\right) = \frac{1}{M} \sum_{i=1}^{M} \alpha_i \Bigl(\Phi(x_k) \cdot \sum_{j=1}^{M} \Phi(x_j)\Bigr) \left(\Phi(x_j) \cdot \Phi(x_i)\right)$  (9)

where $k = 1, 2, \ldots, M$. Define $A$ as an $M \times M$ matrix with elements

$A_{ij} = \left(\Phi(x_i) \cdot \Phi(x_j)\right)$  (10)

From Eqs.(9) and (10) we obtain $M \lambda A \alpha = A^2 \alpha$, which is equivalent to

$M \lambda \alpha = A \alpha$  (11)

Take $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_M$ as the eigenvalues of $A$ and $\alpha^1, \alpha^2, \ldots, \alpha^M$ as the corresponding eigenvectors.
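The reduction from the feature-space eigenproblem of Eq.(5) to the $M \times M$ eigenproblem of Eq.(11) can be checked numerically for the linear kernel, where $\Phi$ is the identity, $A = XX^{\mathrm{T}}$, and Eq.(5) is ordinary PCA. The following is a sketch with random zero-mean data, not part of the paper:

```python
import numpy as np

# Linear-kernel check: the nonzero solutions of M*lambda*alpha = A*alpha
# (Eq.(11)) carry the same eigenvalues lambda as the covariance problem
# lambda*v = C*v of Eq.(5).
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))     # M = 20 samples in R^5
X -= X.mean(axis=0)                  # enforce sum_k x_k = 0

M, N = X.shape
C = (X.T @ X) / M                    # covariance matrix of Eq.(5), N x N
A = X @ X.T                          # Gram matrix A_ij = x_i . x_j, Eq.(10)

lam_C = np.sort(np.linalg.eigvalsh(C))[::-1]      # N eigenvalues, descending
lam_A = np.sort(np.linalg.eigvalsh(A))[::-1] / M  # lambda = (eigvals of A)/M

# The top N eigenvalues agree; the remaining M - N are zero (rank of A is N).
assert np.allclose(lam_C, lam_A[:N], atol=1e-8)
```

The same eigenvectors correspond as well: each $\alpha$ of Eq.(11) yields $v = \sum_i \alpha_i x_i$, the eigenvector of $C$ in Eq.(5).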
We only need to calculate the projections of the test points onto the eigenvectors $V^k$ that correspond to nonzero eigenvalues in $F$ to do the principal component extraction. Defining this projection as $\beta_k$, it is given by:

$\beta_k = \left(V^k \cdot \Phi(x)\right) = \sum_{i=1}^{M} \alpha_i^k \left(\Phi(x_i) \cdot \Phi(x)\right)$  (12)

It is easy to see that solving for the principal components directly requires knowing the exact form of the nonlinear map; moreover, as the dimension of the feature space increases, the amount of computation grows exponentially. Because Eq.(12) involves only the inner-product computation $\left(\Phi(x_i) \cdot \Phi(x)\right)$, according to Hilbert-Schmidt theory we can find a kernel function satisfying the Mercer conditions such that $K(x_i, x) = \left(\Phi(x_i) \cdot \Phi(x)\right)$. Then Eq.(12) can be written:

$\beta_k = \left(V^k \cdot \Phi(x)\right) = \sum_{i=1}^{M} \alpha_i^k K(x_i, x)$  (13)

Here $\alpha^k$ is an eigenvector of $K$. In this way the dot product is done in the original space and the specific form of $\Phi(x)$ need not be known. The mapping $\Phi(x)$ and the feature space $F$ are completely determined by the choice of kernel function [7-8].

2.3 Description of the algorithm

The algorithm for extracting target features for fault-diagnosis recognition is:

Step 1: Extract the features by WPT;
Step 2: Calculate the kernel matrix, $K$, for the samples $x_i \in R^N$ $(i = 1, 2, \ldots, M)$ in the original input space, with $K_{ij} = \left(\Phi(x_i) \cdot \Phi(x_j)\right)$;
Step 3: Calculate the kernel matrix after zero-mean processing of the mapped data in feature space;
Step 4: Solve the characteristic equation $M \lambda \alpha = K \alpha$;
Step 5: Extract the $k$ major components using Eq.(13) to derive a new vector.

Because the kernel function used in KPCA meets the Mercer conditions, it can be used instead of the inner product in feature space. It is not necessary to consider the precise form of the nonlinear transformation. The mapping function can be nonlinear and the dimensions of the feature space can be very high, but the main feature components can still be extracted effectively by choosing a suitable kernel function and kernel parameters [9].
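Steps 2-5 of the algorithm can be sketched as follows with a Gaussian kernel. The explicit centering formula used for Step 3 and the eigenvector scaling are standard KPCA choices that the paper does not spell out, so treat them as assumptions:

```python
import numpy as np

def kpca_features(X, n_components=2, sigma=1.0):
    """KPCA projection of the training samples (Steps 2-5), NumPy-only sketch."""
    M = X.shape[0]
    # Step 2: Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2*sigma^2))
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    # Step 3: zero-mean the mapped data in feature space (double centering)
    one = np.full((M, M), 1.0 / M)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Step 4: eigh solves Kc*alpha = mu*alpha with mu = M*lambda, the same
    # eigenvectors as M*lambda*alpha = K*alpha; eigh returns ascending order.
    evals, evecs = np.linalg.eigh(Kc)
    evals, evecs = evals[::-1], evecs[:, ::-1]
    # Scale alpha^k so each feature-space eigenvector V^k has unit norm.
    top = evals[:n_components]
    alphas = evecs[:, :n_components] / np.sqrt(np.maximum(top, 1e-12))
    # Step 5: projections beta_k of the samples, Eq.(13)
    return Kc @ alphas

# Toy data: two well-separated clusters of monitoring-style samples.
X = np.vstack([np.random.default_rng(2).standard_normal((10, 4)),
               np.random.default_rng(3).standard_normal((10, 4)) + 3.0])
Z = kpca_features(X, n_components=2, sigma=2.0)
```

The projections come out zero-mean (a consequence of the centering in Step 3), with the components ordered by decreasing variance.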
3 Results and discussion

The most common faults of a mine hoist show their character in the frequency content of the equipment's vibration signals, so the experiment used the vibration signals of a mine hoist as test data. The collected vibration signals were first processed by wavelet packet. Then, by observing the different time-frequency energy distributions within one level of the wavelet packet, we obtained the original data sheet shown in Table 1 by extracting the features of the running motor. The fault-diagnosis model is used for fault identification or classification.

Table 1  Original fault data sheet

No.  Eigenvector (×10^4)                                        Fault style
     E50       E51     E41      E31      E21      E11
1    166.495   1.3498  0.13612  0.08795  0.19654  0.25780      F1
2    132.714   1.2460  0.10684  0.07303  0.12731  0.19007      F1
3    112.25    1.5353  0.21356  0.09543  0.16312  0.16495      F1
4    255.03    1.9574  0.44407  0.31501  0.33960  0.28204      F2
5    293.11    2.6592  0.66510  0.43674  0.27603  0.27473      F2
6    278.84    2.4670  0.49700  0.44644  0.28110  0.27478      F2
7    284.12    2.3014  0.29273  0.49169  0.27572  0.23260      F3
8    254.22    1.5349  0.47248  0.45050  0.28597  0.28644      F3
9    312.74    2.4337  0.42723  0.40110  0.34898  0.24294      F3
10   304.12    2.6014  0.77273  0.53169  0.37281  0.27263      F4
11   314.22    2.5349  0.87648  0.65350  0.32535  0.29534      F4
12   302.74    2.8337  0.72829  0.50314  0.38812  0.29251      F4

Experimental testing was conducted in two parts. The first part compared the performance of KPCA and PCA for feature extraction from the original data, namely the distributions of the projections of the main components of the tested fault samples. The second part compared the performance of the classifiers constructed after extracting features by KPCA or PCA. The minimum-distance and nearest-neighbor criteria were used for the classification comparison, which also tests KPCA and PCA performance.

In the first part of the experiment, 300 fault samples were used to compare KPCA and PCA for feature extraction.
To simplify the calculations, a Gaussian kernel function was used:

$K(x, y) = \left(\Phi(x) \cdot \Phi(y)\right) = \exp\left(-\frac{\|x - y\|^2}{2\sigma^2}\right)$  (14)

The value of the kernel parameter, $\sigma$, is between 0.8 and 3, varied in intervals of 0.4 once the number of reduced dimensions is ascertained; the best correct classification rate at that dimension is taken as the accuracy of the classifier having the best classification results.

In the second part of the experiment, the classifiers' recognition rates after feature extraction were examined. Comparisons were done in two ways: minimum distance or nearest neighbor. 80% of the data were selected for training and the other 20% were used for testing. The results are shown in Tables 2 and 3.

Table 2  Comparing the recognition rates of the PCA and KPCA methods (%)

                  PCA   KPCA
Minimum distance  91.4  97.2
Nearest neighbor  90.6  96.5

Table 3  Comparing the recognition times of the PCA and KPCA methods (s)

      Extraction time  Classification time  Total time
PCA   216.4            38.1                 254.5
KPCA  129.5            19.2                 148.7

From Tables 2 and 3 it can be concluded that KPCA takes less time and has relatively higher recognition accuracy than PCA.

4 Conclusions

A kernel principal component analysis fault-extraction method was described. The problem is first transformed from a nonlinear space into a linear, higher-dimensional space. The higher-dimensional feature space is then operated on by taking the inner product with a kernel function. This cleverly avoids complex computation and overcomes the difficulties of high dimensionality and local minimization. As can be seen from the experimental data, compared with traditional PCA, KPCA greatly improves feature extraction and the efficiency of recognizing fault states.

References
[1] Ribeiro R L. Fault detection
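For reference, the two decision rules compared in Section 3, minimum distance (to each class mean) and nearest neighbor, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names are our own.

```python
import numpy as np

def minimum_distance_classify(train_X, train_y, test_X):
    """Assign each test sample to the class whose mean vector is nearest."""
    classes = np.unique(train_y)
    means = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_X[:, None, :] - means[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

def nearest_neighbor_classify(train_X, train_y, test_X):
    """Assign each test sample the label of its single nearest training sample."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=-1)
    return train_y[np.argmin(d, axis=1)]
```

Either rule would be applied to the KPCA (or PCA) feature vectors, with an 80/20 train/test split as in the paper's experiment.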
