Process Control and Optimization, VOLUME II

2.10
Genetic and Other Evolutionary Algorithms
J. MADÁR, J. ABONYI (2005)

INTRODUCTION

The evolutionary algorithm (EA) uses the computational model of natural selection for optimization. EA has proved particularly successful in applications that are difficult to formalize mathematically and that are therefore not easily handled by the engineering tools of classical analysis. EA does not rely on domain-specific heuristics and is therefore attractive for applications that are highly nonlinear, stochastic, or poorly understood; very little a priori information is required, although it can be utilized if so desired. For problems that are well understood, approximately linear, and for which reliable solutions exist, the EA is unlikely to produce competitive results. It is possible for an EA to provide several dissimilar but equally good solutions to a problem, due to its use of a population. Hence, the EA is a robust search-and-optimization method that is able to cope with multimodality, discontinuity, time-variance, randomness, and noise. A single control engineering problem can contain a mixture of "decision variable" formats (numbers, symbols, and other structural parameters). Since the EA operates on a "genetic" encoding of the optimized variables, diverse types of variables can be simultaneously optimized. Each of the previously mentioned properties can prove significantly problematic for conventional optimizers. Furthermore, an EA search is directed and hence potentially much more efficient than a totally random or enumerative search.1

Real-time performance of an optimization algorithm is of particular interest to engineers. Unfortunately, there is no guarantee that EA results will be of sufficient quality for use online, because EAs are very computationally intensive, often requiring massively parallel implementations in order to produce results within an acceptable time frame. In real-time applications there is also the question of how individuals are to be evaluated if no process model is available. Furthermore, EAs should be run many times because of their stochastic nature. Hence, online control of real-time processes, especially in safety-critical applications, is in most cases not yet feasible.

Applications

Evolutionary algorithms have been most widely and successfully applied to offline design applications. In the field of control systems engineering, these applications include controller design, model identification, robust stability analysis, system reliability evaluation, and fault diagnosis. The main benefit of the EA is that it can be applied to a wide range of problems without significant modification. It should be noted, however, that the EA has several implementations: evolutionary programming (EP), evolutionary strategy (ES), genetic algorithm (GA), and genetic programming (GP); the selection of the proper technique and the tuning of its parameters require some knowledge of these techniques. Hence, the aim of this section is to introduce the reader to the basic concept of EAs, describe some of the most important implementations, and provide an overview of typical process engineering applications, especially in the area of process control.

THE CONCEPT OF EA

Fitness Function: Encoding the Problem

EAs work with a population of potential solutions to a problem. In this population, each individual represents a particular solution, described by some form of genetic code. The fitness value of the individual expresses the effectiveness of the solution in solving the problem; better solutions are assigned higher fitness values than less well performing solutions. The key idea of the EA is that fitness determines the success of the individual in propagating its genes (its code) to subsequent generations. In practical system identification, process optimization, or controller design it is often desirable to simultaneously meet several objectives and stay within several constraints. For the purposes of the EA, these factors must be combined into a single fitness value. The weighted-sum approach has proved popular in the literature because it is amenable to solution by conventional EA methods, but Pareto-based

© 2006 by Béla Lipták


Control Theory

TABLE 2.10a
A Typical Evolutionary Algorithm

procedure EA;
{
    Initialize population;
    Evaluate all individuals;
    while (not terminate) do
    {
        Select individuals;
        Create offspring from selected individuals
            using recombination and mutation;
        Evaluate offspring;
        Replace some old individuals by some offspring;
    }
}
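The loop of Table 2.10a can be sketched in a few lines of Python. This is a minimal illustration rather than code from the text: the real-valued encoding, the truncation selection, the midpoint recombination, and the toy fitness function are all assumptions chosen for brevity.

```python
import random

def evolutionary_algorithm(fitness, n_pop=20, n_gen=50, sigma=0.1):
    # Initialize population: random two-dimensional candidate solutions
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(n_pop)]
    for _ in range(n_gen):
        # Evaluate all individuals and keep the fitter half as parents
        parents = sorted(pop, key=fitness, reverse=True)[:n_pop // 2]
        # Create offspring by recombination (midpoint) and mutation (Gaussian)
        offspring = []
        while len(offspring) < n_pop // 2:
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, sigma) for x, y in zip(a, b)]
            offspring.append(child)
        # Replace some old individuals by the offspring (parents survive: elitism)
        pop = parents + offspring
    return max(pop, key=fitness)

# Toy fitness: maximum at (0.3, -0.2)
best = evolutionary_algorithm(lambda x: -(x[0] - 0.3)**2 - (x[1] + 0.2)**2)
```

Because the fitter half survives each generation, the best solution found can only improve from one generation to the next.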

multi-objective techniques are likely to surpass this popularity in the future. In some cases, the objectives and constraints of the problem may be noncommensurable, and the objective functions may not be available in explicit mathematical form. In these cases, interactive evolutionary computation (IEC) can be used to allow subjective human evaluation; IEC is an evolutionary algorithm whose fitness function is replaced by human users who directly evaluate the potential solutions.

Model of Natural Selection

The population is evolved over generations to produce better solutions to the task of survival. Evolution occurs through a set of stochastic genetic operators that manipulate the genetic code used to represent the potential solutions. Most evolutionary algorithms include operators that select individuals for reproduction, produce new individuals based on the characteristics of those selected, and determine the composition of the population of the subsequent generation. Table 2.10a outlines a typical EA. A population of individuals is randomly initialized and then evolved from generation to generation by repeated application of evaluation, selection, mutation, and recombination.

In the selection step, the algorithm selects the parents of the next generation. The population is subjected to "environmental pressure," which means the selection of the fittest individuals. The most important automated selection methods are stochastic uniform sampling, tournament selection, fitness ranking selection, and fitness proportional selection:

•  Stochastic uniform sampling (SUS) is the simplest selection method: every individual has the same chance of being selected, regardless of its fitness. This technique can be useful when the population is small.
•  Tournament selection is similar to SUS, except that individuals with higher fitness values have higher probabilities of being selected. The procedure is simple: in every tournament two individuals are picked randomly from the population, and the one with the higher fitness value is selected.
•  Fitness proportional selection is the most often applied technique. In this strategy the probability of selection is proportional to the fitness of the individual.2
•  Fitness ranking selection uses a rank-based mechanism. The population is sorted by fitness, and a linear ranking function allocates a rank value to every individual. The probability of selection is proportional to the normalized rank value of the individual.
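The three fitness-based schemes above can be sketched in Python (SUS as described here amounts to `random.choice`). These helpers are illustrative, with names of my own choosing; fitness is assumed positive for the roulette wheel.

```python
import random

def tournament_select(pop, fitness):
    # Pick two individuals at random; the fitter one wins the tournament
    a, b = random.sample(pop, 2)
    return a if fitness(a) > fitness(b) else b

def fitness_proportional_select(pop, fitness):
    # Roulette wheel: selection probability proportional to (positive) fitness
    return random.choices(pop, weights=[fitness(i) for i in pop], k=1)[0]

def rank_select(pop, fitness):
    # Linear ranking: probability proportional to rank (worst = 1, best = len(pop))
    ranked = sorted(pop, key=fitness)
    return random.choices(ranked, weights=range(1, len(ranked) + 1), k=1)[0]
```

Rank selection differs from fitness proportional selection in that only the ordering of the fitness values matters, which keeps the selection pressure stable when a few individuals have much larger fitness than the rest.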

After the selection of the individuals, the new individuals (offspring) of the next generation are created by recombination and mutation:

•  Recombination (also called crossover) exchanges information between two selected individuals to create one or two new offspring.
•  The mutation operator makes small, random changes to the genetic coding of the individual.

The final step in the evolutionary procedure is replacement, when the new individuals are inserted into the new population. Once the new generation has been constructed, the processes that produced it are repeated once more.

GENETIC ALGORITHM

The GA, as originally defined by John Holland and his students in the 1960s,3 uses a bit-string representation of the individuals. Depending on the problem, the bit strings (chromosomes) can represent numbers or symbols (e.g., see Figure 2.10b). Of course, this automatically raises the questions of what precision should be used and what the mapping between bit strings and real values should be. Picking the right precision can be important. Historically, genetic algorithms have typically been implemented using low precision, such as 10 bits per parameter. Recombination means the swapping of string fragments between two selected parents (see Figure 2.10c), while mutation means the flipping of a few bits in these strings.

FIG. 2.10b
Example of binary representation of integer numbers: the chromosome (110 011) encodes the numbers (6, 3).

FIG. 2.10c
Mutation and recombination of binary strings.
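The bit-string operators of Figures 2.10b and 2.10c can be sketched in Python. This is an illustrative sketch, not a reference GA implementation; the flip probability and the decoding range are assumed values.

```python
import random

def one_point_crossover(p1, p2):
    # Recombination: swap string fragments between two parents at a random cut
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits, p_flip=0.05):
    # Mutation: flip each bit with a small probability
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def decode(bits, lo, hi):
    # Map a bit string to a real value in [lo, hi], e.g., 10 bits per parameter
    value = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)
```

With 10 bits per parameter, `decode` resolves the interval [lo, hi] into 1024 steps, which illustrates the precision trade-off mentioned in the text.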


Recombination is applied with a much higher probability than mutation, so recombination is often said to be the "primary searching operator."3 The GA applies a simple replacement technique: all the original individuals are replaced by the created offspring, except in the case of the elitist strategy, when some of the best individuals are also carried over into the next generation.


•  A random terminal or function is selected and replaced by another (randomly selected) terminal or function.



GENETIC PROGRAMMING

GA is a powerful technique for numeric optimization, but because of its fixed-length bit-string representation it is not really suitable for solving structural and symbolic optimization problems. Genetic programming (GP), developed by John Koza,4 is based on a tree representation. This representation is extremely flexible, since trees can represent computer programs, mathematical equations, or complete models of process systems. A population member in GP is a hierarchically structured computer program consisting of functions and terminals. The functions and terminals are selected from a set of functions and a set of terminals. For example, the function set F could contain the basic arithmetic operations, F = {*, −, +, /}; however, it may also include other mathematical functions, Boolean operators, conditional operators, or any user-defined operators. The terminal set T contains the arguments for the functions, for example T = {y, x, pi}, with x and y as two independent variables and pi representing the parameters. A potential solution (program) can now be depicted as a rooted, labeled tree with ordered branches, using operations (internal nodes of the tree) from the function set and arguments (leaves of the tree) from the terminal set. An example of such a tree using the aforementioned sets F and T is given in the top left corner of Figure 2.10d. GP inherited the selection and replacement strategies from GA, but mutation and crossover are adapted to the tree representation. Mutation exchanges a node in the tree, deletes a sub-tree from the tree, or inserts a new sub-tree into the tree. Mutation can be implemented in two different ways:

FIG. 2.10d
Mutation and crossover of trees in GP.



•  A randomly selected sub-tree is replaced by a randomly generated sub-tree.

Crossover generates new child trees by joining parental sub-trees. It proceeds by choosing two individuals based on fitness and generating a crossover point (node) in each tree at random. For example, consider the trees shown at the top right of Figure 2.10d, each with crossover point 2. The sub-tree of the first solution (program) starting from its crossover point is swapped with the sub-tree of the second program at its crossover point, resulting in the two new individuals shown at the bottom right of Figure 2.10d. The size of a tree can vary during mutation and crossover, which gives additional flexibility to GP. Hence, GP is used more for structure optimization than for parameter optimization.4 Koza has shown that GP can be applied to a great diversity of problems; in particular, he illustrated the use of GP on optimal control problems, robotic planning, and symbolic regression. The key advantage of GP is the ability to incorporate domain-specific knowledge in a straightforward fashion. Another advantage is that the results can be readily understood and manipulated by the designer. However, GP structures can


become complicated and may involve redundant pathways. Minimizing the complexity of a solution should be a specific objective, and the topic of "bloat" is a continuing area of study for GP researchers.5 Furthermore, GP can be quite processor intensive, especially for structural identification, where a parameter estimation procedure must be carried out for each individual structure at each generation. Due to the complexity of the structures, traditional (trusted and efficient) parameter estimation methods are often impossible to apply.
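The tree operations described above can be sketched in Python. This is a minimal illustration of the idea, not Koza's implementation: the function and terminal sets mirror the F and T examples in the text, and the crossover is simplified to swap children of the root so the sketch stays short.

```python
import random

# A GP individual is a nested list [function, left, right] or a terminal string.
FUNCS = ["+", "-", "*", "/"]
TERMS = ["x1", "x2", "p1"]

def random_tree(depth=2):
    # Grow a random tree down to at most the given depth
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(FUNCS), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, env):
    # Recursively evaluate a tree; env maps terminal names to values
    if isinstance(tree, str):
        return env[tree]
    op, left, right = tree
    a, b = evaluate(left, env), evaluate(right, env)
    return {"+": a + b, "-": a - b, "*": a * b,
            "/": a / b if b else 1.0}[op]      # protected division

def mutate(tree, p=0.2):
    # Sub-tree mutation: with probability p, replace a node by a random sub-tree
    if random.random() < p or not isinstance(tree, list):
        return random_tree(depth=2)
    op, left, right = tree
    return [op, mutate(left, p), mutate(right, p)]

def crossover(t1, t2):
    # Simplified crossover: swap one randomly chosen child between the parents
    if not (isinstance(t1, list) and isinstance(t2, list)):
        return t2, t1
    i, j = random.choice([1, 2]), random.choice([1, 2])
    c1, c2 = list(t1), list(t2)
    c1[i], c2[j] = t2[j], t1[i]
    return c1, c2
```

The "protected" division returning 1.0 for a zero denominator is a common GP convention to keep every randomly generated tree evaluable.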

EVOLUTIONARY STRATEGY

Contrary to GP, ES searches in continuous space. The main distinction from GA is that ES uses a real-valued representation of the individuals. The individuals in ES are represented by n-dimensional vectors (x ∈ ℝⁿ), often referred to as object variables. To allow for better adaptation to the particular optimization problem, the object variables are accompanied by a set of so-called strategy variables. Hence, an individual aj = (xj, σj) consists of two components: the object variables xj = [xj,1, …, xj,n] and the strategy variables σj = [σj,1, …, σj,n].

As in nature, where small changes occur frequently but large ones only rarely, mutation adds normally distributed random numbers to the object variables:

    x′j,i = xj,i + N(0, σj,i)                                2.10(1)

Before the update of the object variables, the strategy variables are also mutated, using a multiplicative, normally distributed process:

    σ′j,i = σj,i exp(τ′N(0,1) + τNi(0,1))                    2.10(2)

Here exp(τ′N(0,1)) is a global factor that allows an overall change of the mutability, and exp(τNi(0,1)) allows for individual changes of the mean step sizes σj,i. The parameters τ′ and τ can be interpreted as global learning rates. Schwefel suggests setting them as

    τ′ = 1/√(2n),    τ = 1/√(2√n)                            2.10(3)

Recombination in ES can be either sexual (local), where only two parents are involved in the creation of an offspring, or global, where up to the whole population contributes to a new offspring. Traditional recombination operators are discrete recombination, intermediate recombination, and geometric recombination, all existing in both a sexual and a global form. When F and M denote two randomly selected individuals from the µ parent population, the following operators can be defined:6

            | xF,i                         no recombination
    x′i  =  | xF,i or xM,i                 discrete
            | (xF,i + xM,i)/2              intermediate
            | Σ(k=1..µ) xk,i /µ            global average
                                                             2.10(4)

            | σF,i                         no recombination
    σ′i  =  | σF,i or σM,i                 discrete
            | (σF,i + σM,i)/2              intermediate
            | Σ(k=1..µ) σk,i /µ            global average
                                                             2.10(5)

Selection in ES is stochastic: first the best µ individuals are chosen as parents, and then the parent pairs are selected uniformly at random from among them. The standard notations in this domain, (µ + λ) and (µ, λ), denote selection strategies in which µ parents are selected to generate λ offspring. In the case of (µ, λ), only the λ offspring are inserted into the subsequent generation (i.e., λ equals the population size). In the case of (µ + λ), not only the λ offspring but also the µ parents are inserted into the subsequent generation (i.e., µ + λ equals the population size).

EVOLUTIONARY PROGRAMMING

Evolutionary programming (EP) was developed by Fogel et al.7 independently from ES. Originally it was used to achieve machine intelligence by simulated evolution. In EP there are µ individuals in every generation. Every individual is selected and mutated (there is no recombination). After the calculation of the fitness values of the new individuals, µ individuals are selected from the µ + µ individuals to form the next generation, using a probabilistic function based on the fitness of the individuals.
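The ES mutation and selection mechanics can be sketched in Python. This is an illustrative (µ + λ) scheme with self-adaptive step sizes following Equations 2.10(1) through 2.10(3); it is not a reference implementation, and recombination is omitted for brevity (as it is in EP).

```python
import math
import random

def es_mu_plus_lambda(fitness, n=2, mu=5, lam=20, n_gen=100):
    # Each individual pairs object variables x with strategy variables sigma.
    tau_prime = 1 / math.sqrt(2 * n)           # global learning rate, Eq. 2.10(3)
    tau = 1 / math.sqrt(2 * math.sqrt(n))      # individual learning rate
    pop = [([random.uniform(-1, 1) for _ in range(n)], [0.5] * n)
           for _ in range(mu)]
    for _ in range(n_gen):
        offspring = []
        for _ in range(lam):
            x, sigma = random.choice(pop)      # parent chosen uniformly from best mu
            g = tau_prime * random.gauss(0, 1)
            # Mutate the strategy variables first (multiplicative), Eq. 2.10(2) ...
            new_sigma = [s * math.exp(g + tau * random.gauss(0, 1)) for s in sigma]
            # ... then the object variables (additive Gaussian), Eq. 2.10(1)
            new_x = [xi + random.gauss(0, s) for xi, s in zip(x, new_sigma)]
            offspring.append((new_x, new_sigma))
        # (mu + lambda) selection: parents compete with offspring for survival
        pop = sorted(pop + offspring, key=lambda ind: fitness(ind[0]),
                     reverse=True)[:mu]
    return pop[0][0]
```

Because the parents survive in (µ + λ) selection, the best solution found never degrades; a (µ, λ) variant would build the next generation from the offspring alone.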

SYSTEM IDENTIFICATION

System identification can be decomposed into two interrelated problems: selection of a suitable model structure and estimation of the model parameters. Well-developed techniques exist for parameter estimation of linear models and linear-in-the-parameters nonlinear models. Techniques for nonlinear-in-the-parameters estimation and for the selection of the model structure are still the subject of ongoing research.


FIG. 2.10e
Tree representation of a solution:
y(k) = −0.01568 y(k−1) √y(k−1) u(k−1) − 6.404 y(k−1) u(k−2) + 1.620 y(k−1) + 9525

Polymerization Reactor Example

This example demonstrates how GP can be used for the development of a dynamic input–output model of a process from the input–output data of that process. The studied system is the dynamic model of a continuous polymerization reactor.13 The manipulated input variable in this process is the flow rate of the initiator, which influences the rate of the polymerization reaction. The controlled output is the mean molecular weight of the produced polymer (product quality). The terminal set consists of the delayed process input and output terms. The function set simply consists of sum, product, and square root. The first two functions are sufficient


to construct any NARMAX polynomial model, while the square root function is used on the basis of a priori knowledge about the reaction dynamics of the process. In order to develop the model, input–output data were collected from the dynamic model of the process. Based on this process data, input–output models were identified. Figures 2.10e and 2.10f show the results.
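A NARMAX polynomial of this kind is evaluated as a simple recursion over delayed inputs and outputs. The sketch below shows the mechanics of free-run simulation with made-up illustrative coefficients; it is not the identified model from this example.

```python
import math

def narmax_step(y1, u1, u2, p=(0.8, -0.05, 0.1, 0.2)):
    # One step of a NARMAX-style polynomial with a square-root term:
    #   y(k) = p0*y(k-1) + p1*y(k-1)*sqrt(|y(k-1)|)*u(k-1) + p2*y(k-1)*u(k-2) + p3
    # The coefficients p are hypothetical, chosen only for illustration.
    return p[0] * y1 + p[1] * y1 * math.sqrt(abs(y1)) * u1 + p[2] * y1 * u2 + p[3]

def simulate(u, y0=1.0):
    # Free-run simulation: feed predictions back in as the delayed outputs
    y = [y0]
    for k in range(1, len(u)):
        y.append(narmax_step(y[-1], u[k - 1], u[k - 2] if k >= 2 else 0.0))
    return y
```

Feeding predictions back in this way (free-run, rather than one-step-ahead prediction) is the harder test of an identified model and corresponds to the comparison shown in Figure 2.10f.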

FIG. 2.10f
The results of model identification (GP): molecular weight versus time. The solid line is the "real" output; the dotted line is the "estimated" output generated by the identified model.

The application of EAs to parameter identification of black-box and gray-box models has received considerable interest since the seminal papers by Kristinsson and Dumont8 and Choi et al.9 One of the central problems in system identification is the choice of the input, output, and delay terms that are to contribute to the model. EAs provide a simple method for searching the structure space for the terms that make the most significant contributions to the process output.10,11 Multi-objective NARMAX (nonlinear autoregressive with exogenous input) polynomial model structure identification has been accomplished by the use of a multi-objective genetic programming (MOGP) strategy.12 Here, seven objectives were optimized simultaneously: the number of model terms, model degree, model lag, residual variance, long-term prediction error, the auto-correlation function of the residuals, and the cross-correlation between the input and the residuals.


Most of the resulting input–output models and the estimates obtained by Rhodes and Morari14 have identical input and output orders. Hence, this result shows that GP is a useful tool for the data-driven identification of the model orders of unknown processes. In addition, the equations obtained by GP frequently had square-root terms, which were also present in the original differential equations of the state-space model of the process.13 Hence, this example illustrates that GP is not only good for the development of input–output models with good prediction capabilities; because of the transparent model structure, these models are also useful tools for obtaining additional knowledge about the process.

Complete Process Models

Gray et al.15 performed nonlinear model structure identification using GP. They considered two representations: block diagrams (using Simulink) and equations (differential and integro-differential). A function library was constructed that included basic linear and nonlinear functions as well as specific a priori knowledge. The resulting scheme was applied to diverse systems of varying complexity, including simple transfer functions, a coupled water tank, and a helicopter rotor speed controller and engine. In a paper by Marenbach et al., a general methodology for structure identification using GP with a block-diagram library was developed.16 Typical blocks included time delays, switches, loops, and domain-specific elements. Each block in the library was assigned a value representing the subjective complexity of the block, and the method utilized an evaluation function that used this value. The developed methodology was applied to a biotechnological batch fermentation process, providing transparent insight into the structure of the process.

PROCESS CONTROL APPLICATIONS

Controller Tuning

An early use of EAs was the tuning of proportional-integral-derivative (PID) controllers. Oliveira et al.17 used a standard GA to determine initial estimates for the values of PID parameters. They applied their methodology to a variety of classes of linear time-invariant (LTI) systems, encompassing minimum-phase, nonminimum-phase, and unstable systems. In an independent inquiry, Porter and Jones18 proposed an EA-based technique as a simple, generic method of tuning digital PID controllers. Wang and Kwok19 tailored an EA using inversion and preselection "micro-operators" to PID controller tuning. More recently, Vlachos et al.20 applied an EA to the tuning of decentralized proportional-integral (PI) controllers for multivariable processes.

Onnen et al.21 applied EAs to the determination of an optimal control sequence in model-based predictive control (MBPC), which is discussed in Section 2.17. Particular attention was paid to MBPC for nonlinear systems with input constraints. Specialized genetic coding and operators were developed with the aim of preventing the generation of infeasible solutions. The resulting scheme was applied to a simulated batch fermenter, with favorable results reported (compared to the traditional branch-and-bound method) for long control horizons.

The main problem of optimized controller tuning is the selection of an appropriate objective function. For this purpose, classical cost functions based on the integral squared error (ISE) can be used. Usually, not only the control error is minimized but also the variation of the manipulated variable:

    min  Σ(k=1..n) e(k)² + β · Σ(k=1..n) ∆u(k)²              2.10(6)

The second term of this cost function is responsible for providing a smooth control signal and, in some control algorithms such as model predictive control, stability. The selection of the β weighting parameter, which balances the two objectives, is an extremely difficult design problem. Besides this weighting parameter, there are other freely selectable design parameters; e.g., the error can be weighted with the elapsed time after the setpoint change. As can be seen, the design of an appropriate cost function is not a straightforward task, and the performance of the optimized controller does not always meet the expectations of the designer. The following example shows that the application of IEC is a promising approach to handling this problem;22 IEC is an evolutionary algorithm whose fitness function is replaced by human users who directly evaluate the potential solutions.

Application of IEC in Controller Tuning
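Equation 2.10(6) is straightforward to evaluate for a recorded closed-loop response. A minimal sketch, in which the sequences e and u are assumed to hold the sampled control error and manipulated variable:

```python
def control_cost(e, u, beta=0.1):
    # Eq. 2.10(6): sum of squared errors plus beta times the squared input moves
    sse = sum(ek ** 2 for ek in e)
    moves = sum((u[k] - u[k - 1]) ** 2 for k in range(1, len(u)))
    return sse + beta * moves
```

Increasing β penalizes manipulated-variable movement more heavily, trading tracking accuracy for a smoother control signal; this is exactly the balance the text describes as difficult to choose a priori.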


The continuously stirred tank reactor (CSTR) control system model is taken from Sistu and Bequette.23 The dynamic behavior of this system is interesting: it has multiple steady states as a function of the cooling water flow rate and reactor temperature. The reactor's dynamic behavior is complex and nonlinear and, as such, presents a challenging control problem.24 To control the reactor temperature, it is advantageous to use cascade control, where the slave controller controls the jacket temperature while the master controls the reactor temperature. In accordance with industrial practice, a PID controller in the master loop and a P (proportional) controller in the slave loop are usually applied. If properly tuned, this selection of control modes should give good control performance. This example demonstrates how ES can be used for the tuning of these controllers. The chromosomes consist of design and strategy variables. In this case study, the design variables (xj) are the three tuning settings of the master PID controller (gain, integral time, and derivative time) and the gain of the slave P controller.

Instead of a fully automated ES optimization, an interactive evolutionary strategy has been applied, in which the user evaluates the performance of the controllers with the use of a process simulator incorporated into a graphical user interface tailored for this purpose. With this human–machine interface (HMI), the user can analyze the resulting plots and some numerical values, such as performance indices and parameters, and can select one or more individuals based on this information. The prototype of this tool, which has been developed in MATLAB/Simulink, can be downloaded from www.fmt.vein.hu/softcomp/EAsy. The number of individuals is limited due to human fatigue; hence, in this example the user can run, analyze, and evaluate eight independent tuning settings to provide feedback to the evolutionary process. Figure 2.10g shows the interactive display that was used in this example.

The IEC converged quickly to good solutions; after only ten generations it resulted in well-tuned controllers (see Figure 2.10h). For comparison, direct optimization of the tuning settings by sequential quadratic programming (SQP) was considered. First, the cost function was based on the squared control error and the variation of the manipulated variable (Equation 2.10[6]). Although several β weighting parameters were tried, the SQP technique led to badly tuned controllers. Finally, the term in the cost function that contained the sum of the squared errors was replaced by a term based on the absolute values of the control errors. With this new cost function, the SQP resulted in better controllers after a few experiments with different β weighting parameters.

This example demonstrates that with the use of interactive evolutionary computing the same controller performance was obtained as with a cost function–based controller tuning approach, but with much less effort and time.

FIG. 2.10g
IEC display.


Control Structure Design

Many EA applications simply optimize the parameters of existing controller structures. In order to harvest the full potential of the EA, some researchers have experimented with the manipulation of the controller structures themselves. GP has been utilized for the automatic synthesis of both the parameter values and the topology of controllers.25 The system has reportedly duplicated existing patents (for PI and PID controllers) and rediscovered old ones (a controller making use of the second derivative of the error, which is the difference between the set point and the controlled variable, or, in simulation language, the reference signal and the output signal). Multi-objective evolutionary algorithms (MOEAs) have also been utilized in controller structure optimization; for example, MOEAs have been used to select the controller structure and suitable parameters for a multivariable control system for a gas turbine engine.26

Online Applications

Online applications present a particular challenge to the EA, and the number of successful applications has been limited to date. In online applications, it is important that at each sample instant the controller generate a signal to set the




FIG. 2.10h Control performance of cascade loop with controllers tuned on the basis of IEC. On the top: the dashed line represents changes in reactor temperature set point of the master controller and the solid line shows the actual temperature of the reactor. On the bottom: the cooling water flow rate is shown on the same time scale as on the top. This flow rate is manipulated by the slave controller.

manipulated variable(s). The actions of the “best” current individual of the EA may inflict severe consequences on the process. This is unacceptable in most applications, especially in the case of a safety- or mission-critical control system. Given that it may not be possible to apply the values represented by any individual in an EA population to the system, it is clear that evaluation of the complete, evolving population cannot be performed on the actual process. The population may be evaluated using a process model, assuming that such a model exists, or performance may be inferred from system response to actual input signals. Inference may also be used as a mechanism for reducing processing requirements by making a number of full evaluations and then computing estimates for the remainder of the population based on these results. In a real-time application, there is only a limited amount of time available for an optimizer between decision points. Given current computing power, it is unlikely that an EA will execute to convergence within the sampling time limit of a typical control application. Hence, only a certain number of generations may be evolved. For systems with long sample times, an acceptable level of convergence may well be achieved. In the case of a controller, an acceptable control signal must be provided at each control point. If during that period the EA evolved for only a few generations, then population performance may still be poor. A further complication is that the system, seen from the perspective of the optimizer, is changing over time. Thus, the evolved control signal at one instant can become totally inappropriate at the next.


EAs can cope with time-varying landscapes to a certain extent, but a fresh run of the algorithm may be required. In this instance, the initial population can be seeded with previous “good” solutions. Note that this does not guarantee fast convergence and may even lead to premature convergence. There are three broad approaches to the use of EAs for online control:27

• Utilize a process model.
• Utilize the process directly.
• Permit restricted tuning of an existing controller.

The last approach can be used to ensure stability when combined with some form of robust stability analysis, while permitting limited exploration.
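The population seeding mentioned earlier (reusing previous “good” solutions in a fresh run) can be sketched as follows. The cap on the seeded fraction, the variable bounds, and all names are illustrative assumptions; the cap exists precisely because over-seeding risks the premature convergence noted above.

```python
import random

def seeded_population(good_solutions, pop_size=20, lo=-1.0, hi=1.0):
    """Seed a fresh EA run with previous 'good' solutions, topped up with
    random individuals so that population diversity is not lost."""
    seeds = list(good_solutions)[:pop_size // 4]   # cap the seeded fraction
    randoms = [random.uniform(lo, hi)
               for _ in range(pop_size - len(seeds))]
    return seeds + randoms

# Hypothetical 'good' control moves retained from the previous run:
population = seeded_population([0.48, 0.51], pop_size=20)
```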

SOFTWARE TOOLS

Here an overview is provided of EA implementation environments, based on the taxonomy of Filho,28 which distinguishes three major classes: application-oriented systems, algorithm-oriented systems, and tool kits (see Tables 2.10i and 2.10j).

TABLE 2.10i
Programming Environments and Their Categories

Application-Oriented: EVOLVER, OMEGA, PC/BEAGLE, XpertRule GenAsys
Algorithm-Oriented (Algorithm-Specific): ESCAPADE, GAGA, GAUCSD, GENESIS, GENITOR
Algorithm-Oriented (Algorithm Libraries): EM, OOGA
Tool Kits (Educational): GA Workbench
Tool Kits (General Purpose): EnGENEer, GAME, MicroGA, PeGAsuS, Splicer

Application-Oriented Systems

Most potential users of a novel computing technique, such as genetic algorithms, are not interested in the details of that technique, only in the contribution it can make to their applications. By using an application-oriented programming environment, it is possible to configure a particular application without having to know the encoding technique or the genetic operators involved. Application-oriented systems follow many innovative strategies. Systems such as PC/BEAGLE and XpertRule GenAsys are expert systems that use GAs to generate new rules to expand their knowledge base of the application domain. EVOLVER is a companion utility for spreadsheets, and systems such as OMEGA are targeted at financial applications.

EVOLVER is an add-on utility that works within the Excel, WingZ, and Resolve spreadsheets on Macintosh and PC computers. It is marketed by Axcélis, Inc., which describes it as “an optimization program that extends mechanisms of natural evolution to the world of business and science applications.” The user starts with a model of a system in the spreadsheet and calls the EVOLVER program from a menu. After filling in a dialogue box with the required information (e.g., the cell to minimize or maximize), the program starts working, evaluating thousands of scenarios automatically until it is sure it has found an optimal answer. The program runs in the background, freeing the user to work in the foreground. When the program finds the best result, it notifies the user and places the values into the spreadsheet for analysis. This is an excellent design strategy given the importance of interfacing with spreadsheets in business.

TABLE 2.10j
Programming Environment Lists

EM (Evolution Machine): http://www.amspr.gfai.de/em.htm
ESCAPADE: Frank Hoffmeister, [email protected]
EnGENEer: Logica Cambridge Ltd., Betjeman House, 104 Hills Road, Cambridge CB2 1LQ, UK
EVOLVER: http://www.palisade.com/html/evolver.html
GA Workbench: ftp://wuarchive.wustl.edu, [email protected]
GAGA: ftp://cs.ucl.ac.uk/darpa/gaga.shar, [email protected]
GAUCSD: http://www-cse.ucsd.edu/users/tkammeye/
GAME: ftp://bells.cs.ucl.ac.uk/papagena/, [email protected]
GENESIS: ftp://ftp.aic.nrl.navy.mil/pub/galist/src/genesis.tar.Z, [email protected]
GENITOR: ftp://ftp.cs.colostate.edu/pub/GENITOR.tar, [email protected]
MicroGA: Steve Wilson, Emergent Behavior, 953 Industrial Avenue, Suite 124, Palo Alto, CA 94301, USA, e-mail: [email protected]
OMEGA: http://www.kiq.com/kiq/index.html
OOGA: The Software Partnership, P.O. Box 991, Melrose, MA 02176, USA
PC/BEAGLE: Richard Forsyth, Pathway Research Ltd., 59 Cranbrook Road, Bristol BS6 7BS, UK
PeGAsuS: http://borneo.gmd.de/AS/pega/index.html
Splicer: http://www.openchannelfoundation.org/projects/SPLICER/, [email protected]
XpertRule GenAsys: http://www.attar.com


Control Theory

The OMEGA Predictive Modeling System, marketed by KiQ Limited, is a powerful approach to developing predictive models. It exploits advanced genetic algorithm techniques to create a tool that is “flexible, powerful, informative, and straightforward to use.” OMEGA is geared to the financial domain. The environment offers facilities for automatic handling of data; business, statistical, or custom measures of performance; simple and complex profit modeling; validation sample tests; advanced confidence tests; real-time graphics; and optional control over the internal genetic algorithm.

PC/BEAGLE, produced by Pathway Research Ltd., is a rule-finder program that applies machine-learning techniques to create a set of decision rules for classifying examples previously extracted from a database.

XpertRule GenAsys, marketed by Attar Software, is an expert system shell with embedded genetic algorithms, targeted at scheduling and design applications. The system combines the power of genetic algorithms in evolving solutions with the power of rule-based programming in analyzing the effectiveness of solutions. Examples of design and scheduling problems that can be solved by this system include optimization of design parameters in the electronics and avionics industries, route optimization in the distribution sector, and production scheduling in manufacturing.

Algorithm-Oriented Systems

Algorithm-oriented systems are programming systems that support specific genetic algorithms. They subdivide into:

• Algorithm-specific systems, which contain a single EA; the classic example is GENESIS.
• Algorithm libraries, in which a variety of EAs and operators are grouped in a library, as in Lawrence Davis’s OOGA.

Algorithm-oriented systems are often supplied in source code and can be easily incorporated into user applications.

Algorithm-Specific Systems

Algorithm-specific environments embody a single powerful genetic algorithm. These systems typically have two groups of users: system developers requiring a general-purpose GA for their applications and researchers interested in the development and testing of a specific algorithm and its genetic operators. The code has been developed in universities and research centers and is available free over worldwide computer research networks. The best-known programming system in this category is the pioneering GENESIS system, which has been used to implement and test a variety of new genetic operators. In Europe, probably the earliest algorithm-specific system was GAGA. GENITOR is another influential system, which has been successfully applied to scheduling problems. GAUCSD allows parallel execution by distributing several copies of a GENESIS-based GA onto UNIX machines in a network. Finally, ESCAPADE employs a somewhat different approach, as it is based on an evolution strategy.

Algorithm Libraries

These systems are modular, allowing the user to select from a variety of algorithms, operators, and parameters to solve a particular problem. Their parameterized libraries make it possible to apply different models (algorithms, operators, and parameter settings) to the same problem and compare the results. New algorithms coded in high-level languages, such as C or Lisp, can be easily incorporated into the libraries. The user interface is designed to facilitate the configuration and manipulation of the models and to present the results in different forms (tables, graphics, etc.). The two leading algorithm libraries are EM and OOGA. Both provide a comprehensive library of genetic algorithms; EM also supports evolution strategies. OOGA can be easily tailored to specific problems; it runs in Common Lisp and CLOS (Common Lisp Object System), an object-oriented extension of Common Lisp.

Tool Kits

Tool kits are programming systems that provide many programming utilities, algorithms, and genetic operators that can be used over a wide range of application domains. These programming systems subdivide into:

• Educational systems, which help the novice user obtain a hands-on introduction to GA concepts. Typically, these systems support a small set of options for configuring an algorithm.
• General-purpose systems, which provide a comprehensive set of tools for programming any GA and application. These systems may even allow the expert user to customize any part of the software, as in Splicer.

Educational Systems

Educational programming systems are designed for the novice user to obtain a hands-on introduction to genetic algorithm concepts. They typically provide a rudimentary graphic interface and a simple configuration menu, and they are usually implemented on PCs for reasons of portability and low cost. For ease of use, they have a friendly graphical interface and are fully menu-driven. GA Workbench is one of the best examples of this class of programming environments.

General-Purpose Programming Systems

General-purpose systems are the most flexible GA programming systems. Not only do they allow users to develop their own GA applications and algorithms, but they also give users the opportunity to customize the system to suit their own purposes. These programming systems provide a comprehensive tool kit, including:

• A sophisticated graphic interface
• A parameterized algorithm library
• A high-level language for programming GAs
• An open architecture

Access to the system components is via a menu-driven graphic interface and a graphic display/monitor. The algorithm library is normally “open,” allowing the user to modify or enhance any module. A high-level language, often object-oriented, may be provided to support the programming of GA applications, algorithms, and operators through specialized data structures and functions. Lastly, owing to the growing importance of parallel GAs, these systems provide translators to parallel machines and distributed systems, such as networks of workstations.

The number of general-purpose systems is increasing, stimulated by growing interest in the application of GAs in many domains. Examples of systems in this category include Splicer, which offers interchangeable libraries for developing applications; MicroGA, an easy-to-use object-oriented environment for PCs and Macintoshes; and parallel environments such as EnGENEer, GAME, and PeGAsuS.
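The modularity on which algorithm libraries and general-purpose tool kits trade, treating selection, crossover, and mutation as interchangeable parts of one GA loop, can be sketched as follows. This is a hypothetical miniature, not modeled on EM, OOGA, Splicer, or any other system listed here; the test function and all operator choices are assumptions.

```python
import random

random.seed(1)  # reproducible sketch

def run_ga(fitness, init, select, crossover, mutate,
           pop_size=30, generations=40):
    """Toolkit-style GA loop: every operator is a pluggable parameter, so
    different operator combinations can be compared on the same problem."""
    pop = [init() for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = [max(pop, key=fitness)]           # elitism
        while len(next_pop) < pop_size:
            a, b = select(pop, fitness), select(pop, fitness)
            next_pop.append(mutate(crossover(a, b)))
        pop = next_pop
    return max(pop, key=fitness)

def tournament(pop, fitness, k=3):
    # One interchangeable selection operator among many possible.
    return max(random.sample(pop, k), key=fitness)

best = run_ga(
    fitness=lambda x: -(x - 3.0) ** 2,               # maximum at x = 3
    init=lambda: random.uniform(-10.0, 10.0),
    select=tournament,
    crossover=lambda a, b: 0.5 * (a + b),            # arithmetic crossover
    mutate=lambda x: x + random.gauss(0.0, 0.2),
)
```

Swapping `tournament` for a different selection rule, or the arithmetic crossover for another recombination scheme, changes the model without touching the loop, which is exactly the comparison workflow that parameterized libraries support.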

References

1. Rechenberg, I., Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Stuttgart: Frommann-Holzboog, 1973.
2. Spears, W. M., De Jong, K. A., Back, T., Fogel, D. B., and Garis, H., “An Overview of Evolutionary Computation,” European Conference on Machine Learning, 1993.
3. Holland, J. H., Adaptation in Natural and Artificial Systems, Ann Arbor: University of Michigan Press, 1975.
4. Koza, J. R., Genetic Programming: On the Programming of Computers by Means of Natural Selection, Cambridge, MA: MIT Press, 1992.
5. Langdon, W. B., and Poli, R., “Fitness Causes Bloat,” Proceedings of the Second On-Line World Conference on Soft Computing in Engineering Design and Manufacturing, London: Springer-Verlag, 1997.
6. Mandischer, M., “A Comparison of Evolutionary Strategies and Backpropagation for Neural Network Training,” Neurocomputing, Vol. 42, 2002, pp. 87–117.
7. Fogel, L. J., Owens, A. J., and Walsh, M. J., Artificial Intelligence through Simulated Evolution, New York: Wiley, 1966.
8. Kristinsson, K., and Dumont, G. A., “System Identification and Control Using Genetic Algorithms,” IEEE Transactions on Systems, Man, and Cybernetics, Vol. 22, No. 5, 1992, pp. 1033–1046.
9. Choi, S. H., Lee, C. O., and Cho, H. S., “Friction Compensation Control of an Electropneumatic Servovalve by Using an Evolutionary Algorithm,” Proceedings of the Institution of Mechanical Engineers Part I, 2000, pp. 173–184.
10. Fonseca, C. M., Mendes, E. M., Fleming, P. J., and Billings, S. A., “Non-Linear Model Term Selection with Genetic Algorithms,” IEE/IEEE Workshop on Natural Algorithms in Signal Processing, Vol. 2, 1993, pp. 27/1–27/8.
11. Luh, G. C., and Wu, C. Y., “Non-Linear System Identification Using Genetic Algorithms,” Proceedings of the Institution of Mechanical Engineers Part I, Vol. 213, 1999, pp. 105–118.
12. Rodriguez-Vasquez, K., Fonseca, C. M., and Fleming, P. J., “Multiobjective Genetic Programming: A Nonlinear System Identification Application,” Late-Breaking Papers at the 1997 Genetic Programming Conference, Stanford Bookstore, USA, 1997, pp. 207–212.
13. Doyle, F. J., Ogunnaike, B. A., and Pearson, R. K., “Nonlinear Model-Based Control Using Second-Order Volterra Models,” Automatica, Vol. 31, 1995, pp. 697–714.
14. Rhodes, C., and Morari, M., “Determining the Model Order of Nonlinear Input/Output Systems,” AIChE Journal, Vol. 44, No. 1, 1998, pp. 151–163.
15. Gray, G. J., Murray-Smith, D. J., Li, Y., Sharman, K. C., and Weinbrunner, T., “Nonlinear Model Structure Identification Using Genetic Programming,” Control Engineering Practice, Vol. 6, No. 11, 1998, pp. 1341–1352.
16. Marenbach, P., Bettenhausen, K. D., Freyer, S., Nieken, U., and Rettenmaier, H., “Data-Driven Structured Modeling of a Biotechnological Fed-Batch Fermentation by Means of Genetic Programming,” Proceedings of the Institution of Mechanical Engineers Part I, Vol. 211, 1997, pp. 325–332.
17. Oliveira, P., Sequeira, J., and Sentieiro, J., “Selection of Controller Parameters Using Genetic Algorithms,” in Engineering Systems with Intelligence: Concepts, Tools, and Applications, Tzafestas, S. G. (Ed.), Dordrecht, The Netherlands: Kluwer Academic Publishers, 1991, pp. 431–438.
18. Porter, B., and Jones, A. H., “Genetic Tuning of Digital PID Controllers,” Electronics Letters, Vol. 28, No. 9, 1992, pp. 843–844.
19. Wang, P., and Kwok, D. P., “Autotuning of Classical PID Controllers Using an Advanced Genetic Algorithm,” International Conference on Industrial Electronics, Control, Instrumentation and Automation (IECON ’92), Vol. 3, 1992, pp. 1224–1229.
20. Vlachos, C., Williams, D., and Gomm, J. B., “Genetic Approach to Decentralised PI Controller Tuning for Multivariable Processes,” IEE Proceedings—Control Theory and Applications, Vol. 146, No. 1, 1999, pp. 58–64.
21. Onnen, C., Babuska, R., Kaymak, U., Sousa, J. M., Verbruggen, H. B., and Isermann, R., “Genetic Algorithms for Optimization in Predictive Control,” Control Engineering Practice, Vol. 5, No. 10, 1997, pp. 1363–1372.
22. Takagi, H., “Interactive Evolutionary Computation — Cooperation of Computational Intelligence and Human KANSEI,” 5th International Conference on Soft Computing (IIZUKA’98), Iizuka, Fukuoka, Japan, 1998, pp. 41–50.
23. Sistu, P. B., and Bequette, B. W., “Nonlinear Predictive Control of Uncertain Processes: Application to a CSTR,” AIChE Journal, Vol. 37, 1991, pp. 1711–1723.
24. Cho, W., Lee, J., and Edgar, T. F., “Control System Design Based on a Nonlinear First-Order Plus Time Delay Model,” Journal of Process Control, Vol. 7, 1997, pp. 65–73.
25. Koza, J. R., Keane, M. A., Yu, J., Bennett, F. H. III, and Mydlowec, W., “Automatic Creation of Human-Competitive Programs and Controllers by Means of Genetic Programming,” Genetic Programming and Evolvable Machines, Vol. 1, 2000, pp. 121–164.
26. Chipperfield, A., and Fleming, P., “Multiobjective Gas Turbine Engine Controller Design Using Genetic Algorithms,” IEEE Transactions on Industrial Electronics, Vol. 43, No. 5, 1996, pp. 1–5.
27. Linkens, D. A., and Nyongesa, H. O., “Genetic Algorithms for Fuzzy Control — Part 1: Offline System Development and Application” and “Part 2: Online System Development and Application,” IEE Proceedings—Control Theory and Applications, Vol. 142, No. 3, 1996, pp. 161–176 and 177–185.
28. Filho, J. R., Alippi, C., and Treleaven, P., “Genetic Algorithm Programming Environments,” IEEE Computer, Vol. 27, 1994, pp. 28–43.


Bibliography Banzhaf, W., Nordin, P., Keller, R. E., and Francone, F. D., Genetic Programming: An Introduction: On the Automatic Evolution of Computer Programs and Its Applications, St. Louis, MO: Morgan Kaufmann, 1997. Dasgupta, D., and Michalewicz, Z., Evolutionary Algorithms in Engineering Applications, Heidelberg: Springer-Verlag, 1997. Fleming, P. J., and Purshouse, R. C., “Evolutionary Algorithms in Control Systems Engineering: a Survey,” Control Engineering Practice, Vol. 10, 2002, pp. 1223–1241. Goldberg, D. E., The Design of Innovation, Dordrecht, The Netherlands: Kluwer Academic Publishers, 2002.


Goldberg, D. E., Genetic Algorithms in Search, Optimization, and Machine Learning, Reading, MA: Addison-Wesley, 1989. Mitchell, M., An Introduction to Genetic Algorithms (Complex Adaptive Systems), Cambridge, MA: MIT Press, 1996. Sette, S., and Boullart, L., “Genetic Programming: Principles and Applications,” Engineering Application of Artificial Intelligence, Vol. 14, 2001, pp. 727–736. Tang, K. S., Man, K. F., Kwong, S., and He, Q., “Genetic Algorithms and Their Applications,” IEEE Signal Processing Magazine, November 1996, pp. 22–37. Whitley, D., “An Overview of Evolutionary Algorithms: Practical Issues and Common Pitfalls,” Information and Software Technology, Vol. 43, 2001, pp. 817–831.