From the spherical to an elliptic form of the dynamic RBF neural network influence field

Ryad ZEMOURI, Daniel RACOCEANU, Noureddine ZERHOUNI
[email protected], [email protected], [email protected]
Laboratoire d'Automatique de Besançon, 25 Rue Alain Savary, 25000 Besançon, France
http://www.lab.ens2m.fr

Abstract – We introduce a new neural network architecture: a dynamic RBF network with an elliptic influence field. The dynamic aspect is obtained with a self-connection of the input neurons, and the elliptic form of the influence field by adding a supplementary RBF layer. The modified RBF neural network gives good results in monitoring applications.

I. INTRODUCTION

In order to optimize production costs, a great number of modern industrial systems need to replace systematic traditional maintenance by conditional maintenance, based on on-line monitoring. This kind of on-line monitoring makes it possible to anticipate an abnormal operation before its occurrence and to discard false alarms [24]. In this study, we apply a modified Radial Basis Function network to a monitoring task.

Radial Basis Function (RBF) networks are three-layer networks derived from an interpolation technique named RBF interpolation. Used for the first time in the context of neuromimetic networks by Broomhead and Lowe [23], this technique proves to be fast and efficient, in particular for classification [16]. The principle of the method consists in dividing the N-dimensional space into different classes or categories. Every category possesses a core, called a prototype, and an influence field having the shape of a hyper-sphere. Several prototypes can be associated with the same category. The classification consists in evaluating the distance between an N-dimensional input vector and the memorized prototypes, which makes it possible to decide to which influence field the input vector belongs. The RBF network principle thus consists in defining a function that is maximal at the core and generally decreases monotonically with the distance; the usual choice is the Gaussian function.

The shape of the influence field of the prototypes plays a great role in the classification and generalization performances of an RBF network, and the spherical shape of this field can present some limits in classification. A large influence field has a greater generalization capacity but often, when it is close to the class border, covers the data of another class. On the other hand, a small influence field avoids including data of other classes, but decreases the generalization capabilities.

In this paper, we propose a new RBF neural network architecture with a greater flexibility of the prototype influence fields. An elliptic form replaces the spherical one, giving a better adaptation to various practical forms, especially in the diagnosis case. This article is organized as follows: in the next section, we briefly survey the use of neural networks in industrial monitoring tasks; then, we present the topology of the modified RBF network; at the end, we illustrate an application of the proposed dynamic neural network in the field of machine tool monitoring.

II. MONITORING AND NEURAL NETWORKS: A BRIEF SURVEY

Monitoring of industrial systems is a very complex task with two basic functions: failure detection and diagnosis. The monitoring methodologies are generally divided into two groups: methodologies with and without a system model [2]. In the first case, control system techniques are used for detection and location. The methodologies of the second group are interesting when a formal model of the system is not available; usually, statistical tools and artificial intelligence are used. Thanks to their training and generalization properties, neural networks take an important place in the artificial intelligence area. Their use is similar to traditional pattern recognition, where the form represents a part of the accessible information on the system, and the class represents the operating or failure mode of the system and eventually the associated diagnosis [9]. The fault and operating classes are established from system experiments or from the system history; it is thus impossible to be exhaustive. In order to solve this problem, the pattern recognition diagnosis system must be evolutionary [10]. Neural networks offer the possibility of training new modes. Some examples of neural network applications to the monitoring and diagnosis of industrial equipment can be found in [1]-[4]-[7]-[11]-[12]-[13]-[14]-[15]. The neural networks presented in [1] and [12] consider the temporal evolution of the data. The other works relate applications of the MLP network, where the static data processing does not take into account the temporal aspect of the information. To treat the dynamics of the input data, the neural network must be able to memorize the event history. The representation given in [6]-[5] reveals two types of solutions (figure 1): time can be represented in neural networks either by an external mechanism or by an internal one. These two terms correspond respectively to a spatial representation and a dynamic representation of time [5]-[8].
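The spherical RBF classification principle described in the introduction can be sketched as follows. This is an illustrative sketch, not the authors' code; the prototypes, radii, rejection threshold and mode labels are arbitrary assumptions:

```python
import numpy as np

# Illustrative sketch of RBF classification with spherical influence
# fields: each prototype is a point in N-dimensional space with a single
# radius sigma; an input belongs to the influence field whose Gaussian
# response is highest (above a threshold, otherwise it is "unknown").

def gaussian_response(x, prototype, sigma):
    """Maximal at the core, decreasing monotonically with distance."""
    d2 = np.sum((x - prototype) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def classify(x, prototypes, sigmas, labels, threshold=0.5):
    responses = [gaussian_response(x, p, s) for p, s in zip(prototypes, sigmas)]
    best = int(np.argmax(responses))
    return labels[best] if responses[best] >= threshold else "unknown"

# Two prototypes of two different operating modes (assumed values).
prototypes = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
sigmas = [1.0, 1.0]
labels = ["mode_1", "mode_2"]

print(classify(np.array([0.2, -0.1]), prototypes, sigmas, labels))   # mode_1
print(classify(np.array([10.0, 10.0]), prototypes, sigmas, labels))  # unknown
```

A vector far from every memorized prototype is rejected rather than forced into a class, which is the behaviour needed to signal unknown operating modes.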

Fig. 1. Time representation in neural networks. Time can be externally processed (TDNN, TDRBF) or handled as an internal mechanism: either implicitly (recurrent networks) or explicitly, at the network level (on the connections) or at the neuron level (algebraic or biological models).

In the spatial representation, time is introduced into the model by an external mechanism. The objective is to find particular architectures able to handle dynamic behaviour. The most important networks currently using this concept are the TDNN (Time Delay Neural Network), used with the well-known NetTalk speech software, and the TDRBF (Time Delay Radial Basis Function), used for phoneme recognition [3]. The major inconvenience of these algorithms is the need for an external interface with the environment to delay and store the data. A second disadvantage is the use of a temporal window, which imposes a limit on the sequence length. The major advantage of the TDRBF networks over the TDNN networks is the flexibility of the training and the small number of parameters necessary for adjusting the training time [3].

Concerning the dynamic representation of time, when time is an internal mechanism, it can be implicit (recurrent networks) or explicit (the time variable appears on the network connections or neurons). For explicit time at the neuron level (the most "biological" approach), the literature distinguishes methods using algebraic models from methods using biological models. Both kinds of methods use very complicated algorithms and require significant software resources for their execution. For this reason, our study concerns the class of implicit-time neural networks.

III. A NEW DYNAMIC MONITORING MODEL

The principal function of neural network based monitoring is the recognition of an industrial operating mode (figure 2).

Fig. 2. Neuronal monitoring system: the sensor signals of the production system are processed on line by the temporal neural network, which detects the operation and fault modes.

A. Recurrent Radial Basis Function Network (RRBF)

In our previous work [17], we proposed a new neural network architecture called RRBF (Recurrent Radial Basis Function). This neural network uses an internal representation of time [17]-[22]. This property, obtained with a self-connection of the input layer neurons, gives a dynamic aspect to the RBF network: the network is thus able to memorize the input data during a certain lapse of time. Such a self-connection was used in several works [1]-[9]-[18]-[19], but never on RBF networks, which are more interesting for diagnosis applications. The RRBF network keeps the advantages of the RBF (flexibility of the training process and its behaviour with respect to the over-training phenomenon) and is able to take into account the temporal evolution of the signals. We have tested the RRBF network on a simulated application with only one input signal and two output modes [17].

Fig. 3. Monitoring model with one input signal: the input X(t), with self-connection wii, feeds a radial basis function neuron; sigmoid output neurons give the mode responses S1(t) (mode 1) and Rdef(t) (mode 2).

Thanks to its self-connection, the stimulus of the RRBF input neuron is memorized during a certain time lapse (figure 4).

Fig. 4. The output of the sigmoid neuron over time.
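The memorization effect of the self-connection can be sketched with a single looped sigmoid neuron. The parameter values below are illustrative assumptions, not the exact equations of [17]:

```python
import math

# Sketch of a self-connected neuron: the looped weight w_ii feeds the
# previous output back into the activation, so a brief input pulse is
# memorized and the output decays only gradually afterwards.
# (w_ii and bias are arbitrary illustrative values.)

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def run(inputs, w_ii=4.0, bias=-2.0):
    s, outputs = 0.0, []
    for x in inputs:
        s = sigmoid(w_ii * s + x + bias)   # self-connection memorizes s(t-1)
        outputs.append(s)
    return outputs

# A single pulse at t=0, then silence: without the self-connection the
# output would drop back immediately; with it, the response fades slowly.
out = run([5.0] + [0.0] * 9)
print([round(o, 2) for o in out])
```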

This temporal aspect gives this type of neural network the capacity to recognize the evolution of the input signal and to distinguish between false alarms and real progressive degradations (loss of performance). Figure 5 shows that the RRBF neural network is able to distinguish between these two situations even when the input takes the same value.

Fig. 5. Response of the monitoring RRBF model to a progressive degradation and to false alarms (stimulus signal from the sensor, and output of the sigmoid neuron).

The limit of this architecture in a monitoring application is the definition of the input signal variation range. In practice, the variation ranges of two different signals can be different: for example, for equipment with a known operating mode, the variation range of the electrical current will differ from that of the temperature. The network of figure 3, where the influence fields of the radial neurons are hyper-spheres, cannot respect this characteristic (figures 6 and 7).

Fig. 6. Monitoring RRBF model with two input signals.

Fig. 7. Influence field of the initial RRBF prototype.

B. Non-linear RRBF

The spherical shape of a prototype influence field makes the RRBF network generalize its knowledge to the closest area with the same probability for all vectors located at the same distance from the prototype. This property can present some limits in classification (figure 8).

Fig. 8. Limit of the initial RRBF network: a spherical influence field is either too large (it covers data of class B) or too small (it loses generalization capability).

More flexibility of the influence field of the RRBF network can give better results. For example, an influence field having an elliptic shape can easily adopt certain particular forms (figure 9).

Fig. 9. Elliptic form of the new influence field: the ellipse covers class A better without overlapping class B.
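The difference between the two field shapes can be sketched by giving each dimension its own standard deviation. This is an illustrative sketch (not the authors' code); the centre, the per-dimension deviations and the test point are assumed values:

```python
import numpy as np

# Spherical field: one radius sigma for all dimensions.
def spherical_response(x, c, sigma):
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

# Elliptic field: each dimension i has its own deviation sigma_i, so the
# iso-response surface becomes an axis-aligned hyper-ellipse.
def elliptic_response(x, c, sigmas):
    return np.exp(-np.sum(((x - c) / sigmas) ** 2) / 2)

c = np.array([0.0, 0.0])
# Signal 1 (e.g. an electrical current) varies much more than signal 2
# (e.g. a temperature), so its deviation is larger.
sigmas = np.array([4.0, 0.5])

x = np.array([3.0, 0.2])   # far along dimension 1, close along dimension 2
print(spherical_response(x, c, 1.0))    # small: outside the sphere
print(elliptic_response(x, c, sigmas))  # large: inside the ellipse
```

A point that is perfectly normal for a wide-ranging signal is rejected by the spherical field but accepted by the elliptic one, which is exactly the variation-range problem described above.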

This kind of influence field can be obtained by adding a second layer of radial neurons to the network of figure 6, with the following connections (figure 10):
• the ith neuron (ni) of the input layer is connected to all the neurons nij of the second layer (j = 1…m);
• all the neurons nij of the second layer are connected to the neuron nj of the third layer (i = 1…k).
We thus obtain a layer of k*m radial neurons nij, where k is the size of the input vector and m the number of memorized prototypes. m increases with the number of prototypes created during training: for each new memorized prototype, a neuron nj is added to the third layer, together with a whole group of second layer neurons n1j…nkj. The standard deviations of the second layer neurons are not identical (their values are defined by training). This gives a prototype influence field in the form of a hyper-ellipse instead of a hyper-sphere.
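One plausible reading of this construction (a hedged sketch with assumed equations, not the paper's exact ones) is that each second layer neuron nij is a one-dimensional Gaussian on input xi, and the third layer neuron nj combines its k inputs by product; the product of k one-dimensional Gaussians is exactly an axis-aligned hyper-elliptic Gaussian:

```python
import numpy as np

# Second layer: the k neurons n_ij attached to prototype j, each with its
# own centre c_ij and standard deviation sigma_ij on one input dimension.
def second_layer(x, centres, sigmas):
    return np.exp(-((x - centres) ** 2) / (2 * sigmas ** 2))

# Third layer: combination by product of the k one-dimensional responses.
def third_layer(hidden):
    return np.prod(hidden)

centres = np.array([1.0, -2.0, 0.5])   # c_1j ... c_kj (assumed, k = 3)
sigmas = np.array([2.0, 0.3, 1.0])     # one deviation per dimension
x = np.array([1.5, -2.1, 0.0])

h = second_layer(x, centres, sigmas)
# Direct evaluation of the equivalent elliptic Gaussian for comparison.
direct = np.exp(-np.sum(((x - centres) / sigmas) ** 2) / 2)
print(third_layer(h), direct)   # the two values coincide
```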

Fig. 10. Modified RRBF: the k self-connected input neurons n1…nk (weights wii) feed a second layer of k*m radial neurons nij, which feed the m prototype neurons of the third layer.

For a network with two dimensions (two input neurons), the output of the third layer neurons can have an influence field with an elliptic form (figure 11). The radii of this ellipse are imposed by the influence radii of the second layer neurons and are determined by training.

Fig. 11. Prototype influence field of the modified RRBF.

The proposed network has two advantages. First, the influence field of the network has a greater flexibility, so the fusion of heterogeneous signals can be carried out for a complex monitoring application. The second advantage lies in the optimization of the training process. In the RBF networks, the usual training algorithm RCE [20] proceeds in the following way:
• for each new prototype, a new RBF neuron is added (step 1, figure 12);
• the influence field of a prototype is reduced if a new vector of a different category belongs to its influence field (step 3, figure 12).
The second point is very significant because it can cause the loss of previously memorized information, which requires restarting the training process in order to re-memorize the lost data (step 3, figure 12).

Fig. 12. Disadvantage of the training process of the RBF. Step 1: a prototype of class A is learned and a new RBF neuron is added. Step 2: a vector of class A falls inside the influence field, so nothing is done. Step 3: a vector of class B adds a new RBF neuron and the whole influence field of the class A prototype is reduced, so a previously memorized vector is lost.

On the other hand, in the case of the new proposed network, if a new prototype of a different category overlaps a memorized prototype, instead of reducing the whole contour of the influence field, we reduce only the dimensions where the conflict occurs (step 3, figure 13).
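The two reduction strategies can be contrasted in a short sketch. The data structures and shrink factor are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Classical RCE-style rule: a conflicting vector shrinks the single
# radius of the prototype, i.e. the whole contour.
def shrink_spherical(sigma, proto, conflict):
    return min(sigma, 0.9 * np.linalg.norm(conflict - proto))

# Modified rule: shrink only the deviation of the conflicting dimension.
def shrink_elliptic(sigmas, proto, conflict):
    sigmas = sigmas.copy()
    i = int(np.argmax(np.abs(conflict - proto)))   # most conflicting dimension
    sigmas[i] = min(sigmas[i], 0.9 * abs(conflict[i] - proto[i]))
    return sigmas

def covers_sphere(proto, sigma, v):
    return np.linalg.norm(v - proto) <= sigma

def covers_ellipse(proto, sigmas, v):
    return np.sum(((v - proto) / sigmas) ** 2) <= 1.0

proto = np.array([0.0, 0.0])
memorized_a = np.array([1.8, 0.0])   # class-A vector already memorized
conflict_b = np.array([0.0, 1.0])    # new class-B vector inside the field

r = shrink_spherical(2.0, proto, conflict_b)            # whole contour shrinks
s = shrink_elliptic(np.array([2.0, 2.0]), proto, conflict_b)

print(covers_sphere(proto, r, memorized_a))    # False: vector lost
print(covers_ellipse(proto, s, memorized_a))   # True: vector not lost
```

The spherical shrink loses the memorized class-A vector (step 3 of figure 12), while the per-dimension shrink keeps it covered and still excludes the class-B vector (step 3 of figure 13).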

Fig. 13. Improvement of the training process of the non-linear RRBF. Step 1: a prototype of class A is learned and a new RBF neuron is added. Step 2: a vector of class A falls inside the influence field, so nothing is done. Step 3: a vector of class B adds a new RBF neuron and only one dimension of the influence field is reduced, so the previously memorized vector is not lost.

This result is very significant: the influence field of a prototype becomes more flexible, so we avoid losing information and restarting the training process, which would generate additional delays.

IV. APPLICATION TO MACHINE TOOL MONITORING

We apply the RRBF neural network to a monitoring problem of a machine tool simulated with MATLAB (figure 14).

Fig. 14. Model of the studied machine tool: a Simulink block diagram where the input tension u(t) drives the electrical transfer function 1/(Ls+R) producing the current i(t), which, through a gain K, drives the mechanical transfer function 1/(Js+F) producing the speed w(t), with feedback gains C and K.

We consider the physical parameters corresponding to the nominal operating mode, the failure modes and the associated diagnosis presented in [21]. We thus establish our training set, which gives the relations between the evolution of the current and the tension of the machine's induction coil and the failure mode. The training process gives the different prototypes of figure 15. We obtain six known situations: one situation of correct operation and five failure modes.

Fig. 15. Operating modes of the machine tool: six prototypes (modes 1 to 6) in the (X1, X2) plane.

This two-dimensional space division is not exhaustive, but it makes the neural network able to recognize the learned situations. We test our monitoring model by presenting known failure modes. One failure type is the wear of the stages, the guidance and the slides, memorized by the network as mode 2 (figure 15). This type of failure results in the fall of the current and the tension towards a certain value. The network detects this degradation in two phases: first, the neural network detects the failure, which results in the fall of the output of the neuron memorizing the correct operation mode; second, it recognizes the failure type through the recognition of the failure mode (mode 2).

Fig. 16. Failure detection: evolution over time of the tension (S1), the current (S2) and the neuron responses (RBF1, RBF2, RBF4).

Figure 16 illustrates this situation. The fall of the current and tension values causes an abrupt fall of the output of the neuron representing the nominal mode (mode 1). The failure is detected before the current or the tension passes below the threshold of nominal operation. This characteristic is obtained thanks to the self-connection of the input layer, which makes the neural network able to recognize the tendency of the signal. Processing the sensor data jointly makes the neuron able to detect a performance degradation more quickly than processing each signal individually.
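The machine tool model of figure 14 can be sketched as a simple DC-motor-like simulation integrated with Euler steps. The differential equations below are inferred from the block names of the diagram, and all parameter values are assumed for illustration:

```python
# Assumed reading of the Simulink diagram of figure 14:
#   L di/dt = u - R i - C w     (electrical part, block 1/(Ls+R))
#   J dw/dt = K i - F w         (mechanical part, block 1/(Js+F))
# All parameter values below are illustrative, not the paper's.

def simulate(u=1.0, L=0.5, R=1.0, J=0.1, F=0.2, K=0.5, C=0.5,
             dt=1e-3, t_end=10.0):
    i = w = 0.0
    for _ in range(int(t_end / dt)):
        di = (u - R * i - C * w) / L
        dw = (K * i - F * w) / J
        i += di * dt
        w += dw * dt
    return i, w

# The current settles towards the steady state u / (R + C*K/F).
i_ss, w_ss = simulate()
print(round(i_ss, 3), round(w_ss, 3))
```

Signals such as i(t) and u(t), logged during simulated nominal operation and simulated failures, would constitute the kind of training set described above.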


V. CONCLUSION


This article has presented a new dynamic RBF neural network architecture: the RRBF (Recurrent Radial Basis Function). The interest of the proposed network compared to traditional RBF networks is its capacity to take into account the dynamic aspect of the data. This characteristic is obtained thanks to a self-connection of the input neurons. An application to monitoring tasks demonstrates that the RRBF is able to distinguish between a false alarm and a degradation of performances. The second contribution is the addition of a supplementary layer of radial basis neurons, which gives a more flexible influence field. The elliptic form of this influence field improves the adaptive capabilities of the neural network. This property is essential for diagnosis and monitoring applications.


VI. REFERENCES

[1] E. Bernauer and H. Demmou, "Temporal sequence learning with neural networks for process fault detection," IEEE International Conference on Systems, Man, and Cybernetics (IEEE-SMC 93), Vol. 2, pp. 375-380, Le Touquet, France, 1993.
[2] S. Dash and V. Venkatasubramanian, "Challenges in the industrial applications of fault diagnostic systems," Proceedings of the Conference on Process Systems Engineering 2000, Computers & Chemical Engineering, 24 (2-7), pp. 785-791, Keystone, Colorado, July 2000.
[3] M. R. Berthold, "A Time Delay Radial Basis Function Network for Phoneme Recognition," Proceedings of the IEEE International Conference on Neural Networks, Vol. 7, pp. 4470-4473, Orlando, 1994.
[4] N. de Freitas, I. M. Macleod and J. S. Maltz, "Neural networks for pneumatic actuator fault detection," Transactions of the SAIEE, Vol. 90, No. 1, pp. 28-34, 1999.
[5] J.-C. Chappelier, "RST : une architecture connexionniste pour la prise en compte de relations spatiales et temporelles," PhD thesis, Ecole Nationale Supérieure des Télécommunications, France, January 1996.
[6] J.-C. Chappelier and A. Grumbach, "A Kohonen Map for Temporal Sequences," Proceedings of Neural Networks and Their Applications (NEURAP'96), IUSPIM, pp. 104-110, Marseille, France, March 1996.
[7] P. Keller, R. T. Kouzes and L. J. Kangas, "Three Neural Network Based Sensor Systems for Environmental Monitoring," Proceedings of the IEEE Electro94 Conference, Boston, MA, USA, May 1994.
[8] J. L. Elman, "Finding Structure in Time," Cognitive Science, Vol. 14, pp. 179-211, June 1990.
[9] E. Bernauer, "Les réseaux de neurones et l'aide au diagnostic : un modèle de neurones bouclés pour l'apprentissage de séquences temporelles," PhD thesis, LAAS, France, 1996.
[10] B. Dubuisson, "Diagnostic et reconnaissance des formes," Traité des nouvelles technologies, Editions Hermès, France, 1990.
[11] T. A. Petsche, A. Marcontonio, C. Darken, S. J. Hanson, G. M. Kuhn and I. Santoso, "A Neural Network Autoassociator for Induction Motor Failure Prediction," in D. S. Touretzky, M. C. Mozer and M. E. Hasselmo (Eds.), Advances in Neural Information Processing Systems 8, pp. 924-930, Cambridge: MIT Press.
[12] R. Rengaswamy and V. Venkatasubramanian, "A Syntactic Pattern Recognition Approach for Process Monitoring and Fault Diagnosis," Engineering Applications of Artificial Intelligence Journal, 8(1), pp. 35-51, 1995.
[13] H. Poulard, "Statistiques et réseaux de neurones pour un système de diagnostic. Application au diagnostic de pannes automobiles," PhD thesis, LAAS, France, 1996.
[14] A. Vemuri and M. Polycarpou, "Neural Network Based Robust Fault Diagnosis in Robotic Systems," IEEE Transactions on Neural Networks, Vol. 8, No. 6, pp. 1410-1420, November 1997.
[15] A. Vemuri, M. Polycarpou and S. Diakourtis, "Neural Network Based Fault Detection and Accommodation in Robotic Manipulators," IEEE Transactions on Robotics and Automation, Vol. 14, No. 2, pp. 342-348, April 1998.
[16] J.-F. Jodouin, "Les réseaux neuromimétiques," Collection informatique, Hermès, France, 1994.
[17] R. Zemouri, D. Racoceanu and N. Zerhouni, "Application of the dynamic RBF network in a monitoring problem of the production systems," to appear in the 15th IFAC World Congress on Automatic Control, Barcelona, July 2002.
[18] P. Frasconi, M. Gori, M. Maggini and G. Soda, "Unified Integration of Explicit Rules and Learning by Example in Recurrent Networks," IEEE Transactions on Knowledge and Data Engineering, Vol. 7, No. 2, pp. 340-346, 1995.
[19] P. Frasconi and M. Gori, "Computational Capabilities of Local-Feedback Recurrent Networks Acting as Finite-State Machines," IEEE Transactions on Neural Networks, Vol. 7, No. 6, pp. 1521-1525, November 1996.
[20] D. L. Reilly, L. N. Cooper and C. Elbaum, "A Neural Model for Category Learning," Biological Cybernetics, Vol. 45, pp. 35-41, 1982.
[21] X. Desforges, "Méthodologie de surveillance en fabrication mécanique : application de capteurs intelligents à la surveillance d'axes de machine-outil," PhD thesis, University of Bordeaux, France, 1999.
[22] R. Zemouri, D. Racoceanu and N. Zerhouni, "The RRBF: Dynamic Representation of Time in Radial Basis Function Network," 8th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), France, 2001.
[23] D. S. Broomhead and D. Lowe, "Multivariable Functional Interpolation and Adaptive Networks," Complex Systems, Vol. 2, pp. 321-355.
[24] M. Basseville and M. O. Cordier, "Surveillance et diagnostic des systèmes dynamiques : approches complémentaires du traitement du signal et de l'intelligence artificielle," Research report No. 2861, INRIA, France, 1996.