
Fault Diagnosis of Three Phase Induction Motor Using Neural Network Techniques

Abstract: Fault diagnosis of induction motors is gaining importance in industry because of the need to increase reliability and to decrease possible loss of production due to machine breakdown. Environmental stress and many other causes lead to different faults in induction motors. Many researchers have proposed techniques for fault detection and diagnosis; however, many of the techniques available at present require a good deal of expertise to apply successfully. Simpler approaches are needed that allow relatively unskilled operators to make reliable decisions without a diagnosis specialist examining the data. In this paper a simple, reliable and economical Neural Network (NN) based fault classifier is proposed, in which the stator current is used as the input signal from the motor. Thirteen statistical parameters are extracted from the stator current, and PCA is used to select the proper inputs. Data are generated from experiments on a specially designed 2 HP, 4 pole, 50 Hz three phase induction motor. For classification, NNs such as MLP and SVM as well as statistical classifiers based on CART and Discriminant Analysis are verified. Robustness of the classifier to noise is also verified on unseen data by introducing controlled Gaussian and uniform noise in the input and output.

Index Terms: Induction motor, Fault diagnosis, MLP, SVM, CART, Discriminant Analysis, PCA

I. INTRODUCTION

Induction motors play an important role as prime movers in manufacturing, the process industry and transportation due to their reliability and simplicity of construction. In spite of their robustness and reliability, they do occasionally fail, and unpredicted downtime is obviously costly; hence they require constant attention. Faults of induction motors may not only interrupt production but also increase costs, decrease product quality and affect the safety of operators. If the lifetime of induction machines were extended and the efficiency of manufacturing lines improved, it would lead to smaller production expenses and lower prices for the end user. In order to keep machines in good condition, techniques such as fault monitoring, fault detection and fault diagnosis have become increasingly essential. The most common faults of induction motors are bearing failures, stator phase winding failures, broken rotor bars or cracked rotor end-rings, and air-gap irregularities.

The objective of this research is to develop an alternative neural-network-based incipient fault detection scheme that overcomes the limitations of present schemes, which are costly, applicable mainly to large motors, and require many design parameters that, especially for machines that have been operating for a long time, are not easily available. Compared to existing schemes, the proposed scheme is simple, accurate, reliable and economical. This research work is based on real-time data, so the proposed neural-network-based classifier demonstrates actual feasibility in a real industrial situation. Four different neural network structures are presented in this paper with their respective performances, and about 100% classification accuracy is achieved.

II. FAULT CLASSIFICATION USING NN

The proposed fault detection and diagnosis scheme consists of four procedures, as shown in Fig. 1 (an illustrative end-to-end sketch follows the list):

1. Data collection and acquisition
2. Feature extraction
3. Feature selection
4. Fault classification
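The sketch below is not from the paper; it is a minimal illustration, assuming Python with NumPy and scikit-learn, of how the four procedures could be chained together. The placeholder array X stands in for the thirteen statistical features per recording and y for the four motor conditions, since the experimental data are not available here.

    # Illustrative pipeline only (not the authors' code): features -> PCA -> NN classifier.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(240, 13))     # placeholder: 13 statistical features per record
    y = rng.integers(0, 4, size=240)   # 0 healthy, 1 interturn, 2 eccentricity, 3 both

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=5),           # feature selection: keep five principal components
        MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                      max_iter=3000, random_state=0))
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))

Retaining five components mirrors the paper's later choice of five PCs as the network inputs; every other setting in the sketch is an assumption made only to keep the example self-contained.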
A. Data Collection and Data Acquisition

In this paper the most common faults, namely a stator winding interturn short (I), rotor dynamic eccentricity (E) and both together (B), are considered.

Fig. 1. General block diagram of the proposed classifier.

For experimentation and data generation, a specially designed 2 HP, three phase, 4 pole, 415 V, 50 Hz induction motor is selected. The experimental setup is shown in Fig. 2.

Fig. 2. Experimental setup.

The load of the motor was changed by adjusting the spring balance and belt. Three AC current probes were used to measure the stator current signals for testing the fault diagnosis system. The maximum frequency of the used signal was 5 kHz and the number of sampled data points was 2500. From the time waveforms of the stator currents shown in Fig. 3, no conspicuous difference exists among the different conditions.

Fig. 3. Experimental waveforms of the stator current.

B. Feature Extraction

A feature extraction method is needed in order to classify faults. To classify the different faults, statistical parameters are used; to be precise, sample statistics are calculated for the current data. Overall, thirteen parameters are calculated as the input feature space. The minimum set of statistics to be examined includes the root mean square (RMS) of the zero-mean signal (which is the standard deviation), the maximum and minimum values, the skewness coefficient and the kurtosis coefficient. Pearson's coefficient of skewness, g_2, is defined by

    g_2 = \frac{3(\bar{x} - \tilde{x})}{S_x}    (1)

where \bar{x} denotes the mean, \tilde{x} the median and S_x the sample standard deviation. The sample coefficient of variation v_r is defined by

    v_r = \frac{S_x}{\bar{x}}    (2)

The r-th sample moment about the sample mean for a data set is given by

    m_r = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^r    (3)

Here m_2 denotes the spread about the center, m_3 refers to the skewness about the center, and m_4 denotes how much of the data is massed at the center. The second, third and fourth moments are used to define the sample coefficient of skewness g_3 and the sample coefficient of kurtosis g_4 as follows:

    g_3 = \frac{m_3}{m_2^{3/2}}    (4)

    g_4 = \frac{m_4}{m_2^{2}}    (5)

The sample covariance between dimensions j and k is defined as

    c_{jk} = \frac{\sum_{i=1}^{n} (x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)}{n - 1}    (6)

The ordinary correlation coefficient for dimensions j and k, r_{jk}, is defined as

    r_{jk} = \frac{c_{jk}}{S_j S_k}    (7)
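As an illustration only (not part of the original paper), the statistics defined in (1)-(7) can be computed directly with NumPy. The function names below are hypothetical; the sketch assumes one stator-current record per call and, for (6)-(7), an array holding the three phase currents column-wise. The paper does not enumerate all thirteen parameters explicitly, so only the quantities defined by the equations are shown.

    import numpy as np

    def central_moment(x, r):
        """r-th sample moment about the mean, eq. (3)."""
        return ((x - x.mean()) ** r).mean()

    def signal_features(x):
        """Per-record statistics from eqs. (1)-(5) for one stator-current record x."""
        m, med, s = x.mean(), np.median(x), x.std(ddof=1)   # s = RMS of the zero-mean signal
        m2, m3, m4 = (central_moment(x, r) for r in (2, 3, 4))
        return {
            "rms_zero_mean": s, "max": x.max(), "min": x.min(),
            "pearson_skew_g2": 3.0 * (m - med) / s,   # eq. (1)
            "coeff_variation": s / m,                 # eq. (2); ill-conditioned when the mean is near zero
            "m2": m2, "m3": m3, "m4": m4,
            "skewness_g3": m3 / m2 ** 1.5,            # eq. (4)
            "kurtosis_g4": m4 / m2 ** 2,              # eq. (5)
        }

    def cross_phase_features(phases):
        """Covariance and correlation between phases, eqs. (6)-(7); phases is (n_samples, 3)."""
        return np.cov(phases, rowvar=False), np.corrcoef(phases, rowvar=False)

    # example: one simulated 2500-sample record of a 50 Hz current sampled at 5 kHz
    rec = np.sin(2 * np.pi * 50 * np.arange(2500) / 5000.0) \
          + 0.01 * np.random.default_rng(0).normal(size=2500)
    print(signal_features(rec))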
C. Feature Selection

Before a feature set is fed into a classifier, the most superior features, i.e. those providing dominant fault-related information, should be selected from the feature set, and irrelevant or redundant features must be discarded to improve classifier performance and avoid the curse of dimensionality. Here the Principal Component Analysis (PCA) technique is used to select the most superior features from the original feature set. The Principal Components (PCs) are computed by Pearson's rule. Fig. 4 relates to a mathematical object, the eigenvalues, which reflect the quality of the projection from the 13-dimensional space onto a lower number of dimensions.

Fig. 4. Principal components, eigenvalues and percent variability.

D. Fault Classifier

(1) MLP NN Based Classifier

A simple Multilayer Perceptron (MLP) neural network is proposed as a fault classifier. Four processing elements are used in the output layer for the four conditions of the motor, namely Healthy, Interturn fault, Eccentricity and Both faults. From the results shown in Fig. 5, five PCs are selected as inputs; hence the number of PEs in the input layer is five.

Fig. 5(a). Variation of average MSE on training and CV data with the number of PCs used as input.
Fig. 5(b). Variation of average classification accuracy on test, training and CV data with the number of PCs used as input.

The randomized data are fed to the neural network, which is retrained five times with different random weight initializations so as to remove bias and to ensure true learning and generalization for different hidden layers. This also removes any dependence of the NN's performance on the choice of initial connection weights. It is observed that an MLP with a single hidden layer gives better performance. The number of Processing Elements (PEs) in the hidden layer is varied; the network is trained, and the minimum MSE is obtained when 5 PEs are used in the hidden layer, as indicated in Fig. 6.

Fig. 6. Variation of average MSE with the number of PEs in the hidden layer.

Various transfer functions, namely tanh, sigmoid, linear-tanh, linear-sigmoid, softmax, bias axon and linear axon, and learning rules, namely Momentum, Conjugate Gradient, Quick Propagation, Delta-Bar-Delta and Step, are verified for training, cross validation and testing. The minimum MSE and the average classification accuracy on the training and CV data sets are compared; the tanh transfer function with the Momentum learning rule is found to give the best results. With the above experimentation, the MLP NN classifier is finally designed with the following specifications (an illustrative sketch of this configuration follows the list):

Number of inputs: 5; number of hidden layers: 1; number of PEs in the hidden layer: 4
Hidden layer: transfer function: tanh; learning rule: Momentum; step size: 0.6; momentum: 0.5
Output layer: transfer function: tanh; learning rule: Momentum; step size: 0.1; momentum: 0.5
Number of connection weights: 44
Training time required per epoch per exemplar: 0.0063 ms
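For illustration, a network with the stated topology (5 inputs, one hidden layer of 4 tanh PEs, 4 tanh output PEs) trained pattern by pattern with momentum 0.5 and step sizes 0.6 and 0.1 for the hidden and output layers can be sketched in plain NumPy as below. This is not the authors' code: the MSE criterion, the +/-1 target coding and the random toy data are assumptions made only to keep the sketch self-contained.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hid, n_out = 5, 4, 4                         # topology from the stated specification
    W1 = rng.normal(scale=0.5, size=(n_in, n_hid));  b1 = np.zeros(n_hid)
    W2 = rng.normal(scale=0.5, size=(n_hid, n_out)); b2 = np.zeros(n_out)
    V1 = np.zeros_like(W1); V2 = np.zeros_like(W2)       # momentum terms for the weights
    vb1 = np.zeros_like(b1); vb2 = np.zeros_like(b2)     # momentum terms for the biases
    eta_hid, eta_out, mu = 0.6, 0.1, 0.5                 # step sizes and momentum from the spec

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        return h, np.tanh(h @ W2 + b2)

    def train_epoch(X, T):
        """One pattern-by-pattern epoch of gradient descent with momentum on 0.5*sum((T - o)^2)."""
        global W1, b1, W2, b2, V1, V2, vb1, vb2
        for x, t in zip(X, T):
            h, o = forward(x[None, :])
            h, o = h[0], o[0]
            delta_o = (o - t) * (1.0 - o ** 2)           # error term at the tanh output layer
            delta_h = (W2 @ delta_o) * (1.0 - h ** 2)    # back-propagated error at the hidden layer
            V2 = mu * V2 - eta_out * np.outer(h, delta_o); W2 = W2 + V2
            vb2 = mu * vb2 - eta_out * delta_o;            b2 = b2 + vb2
            V1 = mu * V1 - eta_hid * np.outer(x, delta_h); W1 = W1 + V1
            vb1 = mu * vb1 - eta_hid * delta_h;            b1 = b1 + vb1

    # Toy usage with placeholder data: the four classes are one-hot coded as +/-1 for tanh outputs.
    X = rng.normal(size=(120, n_in))
    labels = rng.integers(0, n_out, size=120)
    T = -np.ones((120, n_out)); T[np.arange(120), labels] = 1.0
    for _ in range(200):
        train_epoch(X, T)
    pred = forward(X)[1].argmax(axis=1)
    print("fit on toy data:", (pred == labels).mean())

With 5 inputs, 4 hidden PEs, 4 outputs and the bias terms, the network has 44 connection weights, which agrees with the count given in the specification.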
(2) SVM NN Based Classifier

The support vector machine (SVM) is a kind of classifier motivated by two concepts. First, transforming data into a high-dimensional space can turn complex problems (with complex decision surfaces) into simpler problems in which linear discriminant functions can be used. Second, SVMs are motivated by the concept of training with, and using, only those inputs that are near the decision surface, since they provide the most information about the classification. The method can be extended to the multi-class case. SVM training always seeks a globally optimized solution and avoids overfitting, so it is able to deal with a large number of features.

Generalized algorithm for the classifier: for N-dimensional space data x_i (i = 1, ..., N), the algorithm can easily be extended to a network by substituting the inner product of patterns in the input space with a kernel function, leading to the following quadratic optimization problem:

    J(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j d_i d_j \, G(x_i - x_j, 2\sigma^2)    (8)

subject to

    \sum_{i=1}^{N} d_i \alpha_i = 0, \qquad \alpha_i \ge 0, \quad i = 1, \ldots, N    (9)

where G(x, 2\sigma^2) represents a Gaussian function, N is the number of samples, and the \alpha_i are a set of multipliers (one for each sample). Define

    g(x_i) = d_i \left[ \sum_{j=1}^{N} d_j \alpha_j \, G(x_i - x_j, 2\sigma^2) - b \right]    (10)

and

    M = \min_i g(x_i)    (11)

and choose a common starting multiplier \alpha_i, a learning rate \eta, and a small threshold t. Then, while M < t, a pattern x_i is chosen, an update \Delta\alpha_i = \eta \, [1 - g(x_i)] is calculated, and the update is performed as follows. If \alpha_i(n) + \Delta\alpha_i > 0, then

    \alpha_i(n+1) = \alpha_i(n) + \Delta\alpha_i, \qquad b(n+1) = b(n) + d_i \, \Delta\alpha_i    (12)

and if \alpha_i(n) + \Delta\alpha_i \le 0, then

    \alpha_i(n+1) = \alpha_i(n), \qquad b(n+1) = b(n)    (13)

After adaptation, only some of the \alpha_i are different from zero; these correspond to the support vectors. It is easy to implement the kernel Adatron algorithm, since g(x_i) can be computed locally for each multiplier, provided that the desired response is available in the input file. In fact, the expression for g(x_i) resembles the multiplication of an error with an activation, so it can be included in the framework of neural network learning (a compact sketch of these updates is given at the end of this subsection). The Adatron algorithm essentially prunes the RBF network so that its output for testing is given by

    f(x) = \operatorname{sgn}\left( \sum_{i \in \text{support vectors}} \alpha_i d_i \, G(x - x_i, 2\sigma^2) - b \right)    (14)

and the cost function in the error criterion is

    J(t) = \frac{1}{2} \sum_i \left( d_i(t) - \tanh(y_i(t)) \right)^2    (15)

The number of PCs used as input and the step size are selected by checking the average minimum MSE and average classification accuracy; the results are shown in Fig. 7.

Fig. 7(a). Variation of average MSE on training and CV data with the number of PCs used as input.
Fig. 7(b). Variation of average classification accuracy on test, training and CV data with the number of PCs used as input.

Finally, the SVM-based classifier is designed with the following specifications:

Number of inputs: 5; step size: 0.7
Time required per epoch per exemplar: 0.693 ms
Number of connection weights: 264

The designed classifier is trained and tested using similar data sets, and the results are shown in Fig. 8 and Fig. 9.

Fig. 8. Variation of average minimum MSE on test, CV and training data with the number of rows shifted (n).
Fig. 9. Variation of average minimum MSE on training and CV data for various groups.
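The following two-class sketch of the kernel Adatron updates in (10)-(13) and of the decision rule (14) is given for illustration only. It is not the authors' implementation; the Gaussian-kernel width, the starting multiplier, the learning rate and the stopping threshold are arbitrary assumptions, and a multi-class extension (for example one-versus-rest) would be layered on top.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # G(x - y, 2*sigma^2) = exp(-||x - y||^2 / (2*sigma^2))
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kernel_adatron(X, d, eta=0.05, sigma=1.0, tol=1e-3, max_epochs=500):
        """Two-class kernel Adatron sketch; d must contain +/-1 labels."""
        N = len(d)
        K = gaussian_kernel(X, X, sigma)
        alpha, b = np.full(N, 0.1), 0.0          # common starting multipliers and bias
        for _ in range(max_epochs):
            g_all = d * (K @ (d * alpha) - b)
            if g_all.min() > tol:                # M = min_i g(x_i) has crossed the threshold
                break
            for i in range(N):
                g_i = d[i] * (K[i] @ (d * alpha) - b)   # g(x_i), eq. (10)
                delta = eta * (1.0 - g_i)
                if alpha[i] + delta > 0:                # eq. (12)
                    alpha[i] += delta
                    b += d[i] * delta
                # otherwise alpha_i and b are left unchanged, eq. (13)
        return alpha, b

    def predict(X_train, d, alpha, b, X_new, sigma=1.0):
        # f(x) = sgn(sum_i alpha_i d_i G(x - x_i, 2 sigma^2) - b), eq. (14)
        return np.sign(gaussian_kernel(X_new, X_train, sigma) @ (alpha * d) - b)

    # toy usage on a two-class problem that is not linearly separable
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5)); d = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)
    alpha, b = kernel_adatron(X, d)
    print("training accuracy:", (predict(X, d, alpha, b, X) == d).mean())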
(3) Classification and Regression Trees (CART)

CART induces strictly binary trees through a process of binary recursive partitioning of the feature space of a data set. The first phase is called tree building and the second is tree pruning. The classification tree is developed using XLSTAT-2009. Various methods, measures and maximum tree depths are checked, and the results are shown in Fig. 10. It is observed that the optimum average classification accuracy on the test data and CV data is 90.91 and 80 percent, respectively.

Fig. 10(a). Variation of average classification accuracy on test and CV data with the method and measure of the trees.
Fig. 10(b). Variation of average classification accuracy on test and CV data with the depth of the trees.

(4) Discriminant Analysis

Discriminant analysis is a technique for classifying a set of observations into predefined classes. The purpose is to determine the class of an observation based on a set of variables known as predictors or input variables. The model is built from a set of observations for which the classes are known. Based on the training set, the technique constructs a set of linear functions of the predictors, known as discriminant functions, of the form

    L = b_1 x_1 + b_2 x_2 + \cdots + b_n x_n + c

where the b's are the discriminant coefficients, the x's are the input variables or predictors, and c is a constant. Discriminant analysis is performed using XLSTAT-2009. Various models are checked, and the results are shown in Fig. 11. It is observed that the optimum average classification accuracy on the test data and CV data is 91.77 and 80 percent, respectively.

Fig. 11. Variation of average classification accuracy on test and CV data with the model of DA.

III. NOISE SUSTAINABILITY OF CLASSIFIER

Since the proposed classifier is to be used in real time, where measurement noise is anticipated, it is necessary to check the robustness of the classifier to noise. To check the robustness, uniform and Gaussian noise with zero mean and variance varying from 1 to 20% is introduced in the input and output, and the average classification accuracy on the testing data, i.e. unseen data, is checked. It is seen that the SVM-based classifier is the most robust classifier, in the sense that it can sustain both uniform and Gaussian noise with 14% and 20% variance in the input and output, respectively. The results are shown in Table I (G: Gaussian noise, U: uniform noise).
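To illustrate the kind of robustness check described above (again, not the authors' code), the sketch below perturbs held-out feature vectors with zero-mean Gaussian and uniform noise whose variance is taken, as an assumption, to be a stated percentage of each feature's variance, and then re-scores a trained classifier; scikit-learn's SVC stands in for the SVM-based classifier.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.normal(size=(240, 5))                # placeholder: five selected principal components
    y = rng.integers(0, 4, size=240)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)

    feat_var = X_te.var(axis=0)
    for pct in (1, 5, 10, 14, 20):
        var = feat_var * pct / 100.0                       # noise variance as % of feature variance
        gauss = rng.normal(0.0, np.sqrt(var), size=X_te.shape)
        half_width = np.sqrt(3.0 * var)                    # uniform on [-a, a] has variance a^2 / 3
        unif = rng.uniform(-half_width, half_width, size=X_te.shape)
        print(pct, "% Gaussian:", clf.score(X_te + gauss, y_te),
              " uniform:", clf.score(X_te + unif, y_te))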
IV. RESULTS AND DISCUSSION

In this paper the authors evaluated the performance of the developed ANN-based classifiers for the detection of four fault conditions of a three phase induction motor and examined the results. The MLP NN and SVM are optimally designed, and after completion of training, the learned networks are tested to detect the different types of faults. The step size is likewise varied in the SVM, and a step size of 0.7 is found to be optimum. These results confirm the idea that the proposed feature selection method based on PCA can select the most superior features from the original feature set and is therefore a powerful feature selection method. The proposed classifier is also sufficiently robust to noise, in the sense that it gives satisfactory results for uniform and Gaussian noise with 14% variance in the input and 20% variance in the output. Comparative results are shown in Fig. 12 and Table II.

Fig. 12. Comparative analysis of the various classifiers with respect to average classification accuracy.

TABLE II. COMPARATIVE RESULTS OF NN BASED CLASSIFIERS
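Purely as an illustration of how such a comparison could be reproduced (the paper's actual figures are in Fig. 12 and Table II and are not reproduced here), the sketch below cross-validates four off-the-shelf stand-ins, MLPClassifier, SVC, DecisionTreeClassifier for CART and LinearDiscriminantAnalysis for DA, on placeholder data with the same five-component input.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(3)
    X = rng.normal(size=(240, 5))            # placeholder: five selected principal components
    y = rng.integers(0, 4, size=240)

    models = {
        "MLP": MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                             max_iter=3000, random_state=0),
        "SVM": SVC(kernel="rbf"),
        "CART": DecisionTreeClassifier(max_depth=5, random_state=0),
        "Discriminant Analysis": LinearDiscriminantAnalysis(),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name:>22s}: mean accuracy {scores.mean():.3f}")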

