Process Control and Optimization, VOLUME II
2.19 Nonlinear and Adaptive Control

F. G. SHINSKEY (1970), P. D. SCHNELLE (1985), B. G. LIPTÁK (1995), R. D. ROJAS (2005)

INTRODUCTION

This section discusses the conceptual approaches and tools of adaptive control and their applications to nonlinear processes. Adaptive control can be thought of as a system that automatically designs a control system. This technique has considerable potential for applications where the processes being controlled are not well understood or have significant nonlinearities or time-varying parameters. Adaptive control utilizes the natural steps of control system design. These include process modeling and identification (Section 2.16), relative gain calculations (Section 2.25), controller design (for example, self-tuning controllers, Section 2.29), and optimization (Section 2.20). At present, these approaches and ideas are being extended to include such new tools as artificial intelligence and expert systems (Section 2.8), neural networks (Section 2.18), fuzzy logic (Section 2.31), and genetic algorithms (Section 2.10), among others. Therefore, it is suggested that all of these sections be studied if a thorough understanding of the subject matter is desired.

DEFINITIONS AND TERMINOLOGY

Most processes are nonlinear in some respects. The process gain can vary with load (for example, the gain of heat transfer processes will drop with rising load) or can change with time (for example, because of dirt or coating buildup). The dead time, and with it the period of oscillation, can also vary (for example, the transportation lag or dead time represented by a reactor jacket increases as the water flow rate through the jacket drops). Feedback control loops are designed so that the controlled variable (the process output) is maintained at its set point even if disturbances occur or the dynamics of the process change. When the process is operating close to the upper or lower boundaries of its performance range, the total loop gain will approach one of its limits. If it approaches its upper limit, it will cause undamped oscillation; if it drops near the lower limit, the loop becomes sluggish and cannot keep its controlled variable on set point. Similarly, because the process dead time constrains the oscillations of the feed-

© 2006 by Béla Lipták


back loops, when the dead time of the process changes, the control settings also need to be revised. Adaptive control involves the automatic detection of changes that occur in the process parameters or in the set point and the readjustment of the controller settings, thereby adapting the tuning of the loop to the changing conditions. Adaptation can be based on the inputs entering the process (known operating conditions or measurable disturbances) or on the closed-loop performance (behavior of the controlled variable, c, or tracking error, e). In the first case, the system is adjusted on the basis of a measurement of the disturbance variable, which is called feedforward adaptation (or gain scheduling). In the second case, the system uses a measurement of its own performance, which is called feedback adaptation (or self-adaptation).

Steady-State and Dynamic Adaptation

The adaptation criteria can be specified in the steady state: an example is the goal of setting the air-fuel ratio of a boiler so that it maintains the process at its maximum-efficiency (minimum total-loss) point on its performance curve. This is called steady-state adaptive (or optimizing) control. Another way to specify the adaptation criteria is on the basis of the process transient response. An example of this approach is the modification of tuning constants based on the damping of the controlled variable after an upset. In this case the adaptation method is called dynamic adaptation. This method can be used, for example, to stabilize a PID-based pH control system. Without adaptation, when the process gain (Kp) rises as the process approaches neutrality, the increased process gain results in the loop becoming unstable (cycling). In this application the adaptive controller maintains the damping ratio (usually set for 1/4).
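The mechanics of this compensation reduce to one division: hold the loop gain product at its target by cutting the controller gain as the process gain rises. The sketch below is illustrative only; the numeric gains and the helper name `adapted_controller_gain` are assumptions, not values from the text.

```python
# Hypothetical sketch: keep the total loop gain product Kp*Kc*Kv*Kt at a
# desired value (0.5 here) by lowering the controller gain Kc as the
# process gain Kp rises. All numbers are illustrative.

TARGET_LOOP_GAIN = 0.5

def adapted_controller_gain(kp, kv, kt, target=TARGET_LOOP_GAIN):
    """Return the controller gain Kc that keeps Kp*Kc*Kv*Kt at `target`."""
    return target / (kp * kv * kt)

# In a pH loop, Kp rises sharply near neutrality; the adaptive controller
# responds by cutting Kc in inverse proportion.
kv, kt = 1.0, 1.0
kc_far = adapted_controller_gain(kp=0.5, kv=kv, kt=kt)    # far from neutrality
kc_near = adapted_controller_gain(kp=5.0, kv=kv, kt=kt)   # near neutrality
```

With Kv and Kt constant, a tenfold rise in Kp calls for a tenfold cut in Kc.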
Therefore, when the controller realizes that Kp has risen, it starts to lower the controller gain (Kc) so that the total gain product of the loop (KpKcKvKt, where Kv and Kt are the valve and transmitter gains, respectively) will remain at the desired value (usually 0.5). An adaptive control system is one whose parameters are automatically adjusted to meet the corresponding variations in the behavior of the process being controlled in order to optimize the response of the loop. The significant difference is that


Control Theory

adaptive control changes the controller parameters (tuning settings), which in PID control are normally fixed, as opposed to changing the outputs of a system, which are expected to vary. The parameters set into controllers and control systems naturally reflect the characteristics of the processes they control. To maintain optimum performance, these parameters should change as the associated process characteristics change. If these changing characteristics can be directly related to the magnitude of the controller input or output, it is possible to compensate for their variation by the introduction of suitable nonlinear functions at the input or output of the controller. Examples of this type of compensation would be the introduction of a square-root extractor in a differential pressure type flow-measurement signal to linearize it, or the selection of a particular valve characteristic to offset the effect of line resistance on flow. This step is not considered to be part of adaptive control, in that the controller functions remain fixed. If the process nonlinearities are compensated by controller functions (i.e., by feedback linearization or by changing PID controller gain, integral, or derivative tuning settings) the controller is nonlinear. An example is a pH controller, where the controller gain is adjusted to compensate for the nonlinearities of the pH system titration curve. Changing controller gain as a function of the pH controller input (measured variable) is not strictly adaptive, but it is nonlinear.
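The square-root extractor mentioned above is easy to state precisely. A minimal sketch follows, assuming a 0-100% signal scaling; the function name is hypothetical.

```python
import math

# Sketch of static (non-adaptive) compensation: a differential-pressure flow
# transmitter reads a signal proportional to flow squared, so a square-root
# extractor recovers a signal linear in flow. The 0-100% scaling is assumed.

def sqrt_extractor(dp_percent):
    """Linearize a 0-100% d/p cell signal into a 0-100% flow signal."""
    return 100.0 * math.sqrt(dp_percent / 100.0)

# Example: a 25% differential-pressure signal corresponds to 50% flow.
```

Because the extractor is a fixed function, the controller settings themselves remain unchanged, which is exactly why this step is not considered adaptive control.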

APPROACHES TO ADAPTIVE CONTROL

Adaptive control has been an important field of theoretical research, but it is now also a practical tool for solving real-world problems. Classical adaptive control techniques have been shown to provide simple solutions to the control of nonlinear and time-variant systems; several texts and articles pursue the details of the approaches presented here. Good examples are the books of Astrom and Wittenmark,1 Sastry and Bodson,2 and Ioannou and Sun;3 electronic versions of the latter two books can be downloaded free from the authors' Web pages. They can be complemented with the book by Tao,4 which summarizes the recent fast growth of adaptive control applications and theory development. With the increases in the complexity of processes, including the effects of system component imperfections and nonsmooth nonlinearities, the adaptive control system is expected to provide greater robustness. These needs justify the emergence of such expert and artificial intelligence techniques as neural networks, fuzzy logic, and genetic algorithms.

[Figure omitted: step responses of temperature at 50% flow (T50, L50, G50) and 100% flow (T100, L100, G100), where T = time constant, L = dead time, and G = steady-state gain.]

FIG. 2.19a
Step response of steam temperature to firing rate in a once-through boiler.

Feedforward or Gain Scheduling

Where a measurable process variable produces a predictable effect on the gain of the control loop, or a known nonlinear behavior depends on the operating conditions, compensation for its effect can be programmed into the control system. This is the basis for gain scheduling.

The most notable example of gain scheduling is the variation of dynamic gain with flow in pipes and other longitudinal equipment where no back-mixing takes place. This variation of dynamic gain is commonly seen in heat exchangers, but it causes a particularly severe problem in once-through boilers. In a once-through boiler, feed water enters the economizer tubes, passes directly into the evaporative tubing and then into the superheater tubing, and leaves as superheated steam whose temperature must be accurately controlled. No mixing takes place as in drum-type boilers, so a sizable amount of dead time exists in the temperature control loop, particularly at low flow rates.

Figure 2.19a shows the response of steam temperature to changes in firing rate at two different feed water flow rates. At 50% flow, the steady-state gain is twice as high as at 100% flow, because only half as much water is available to absorb the same increase in heat input. The dead time and the dominant time constant are also twice as great at 50% flow. The effect of these variable properties on the dynamic gain of the process is evidenced in Figure 2.19b. The same size load upset produces a larger excursion in temperature at 50% flow, indicating a higher dynamic gain. The difference in damping between the two conditions also reveals the

[Figure omitted: temperature responses to a load step at 50% flow (large, lightly damped excursion) and at 100% flow (smaller, well-damped excursion).]

FIG. 2.19b
Response of the steam temperature loop to step changes in load without adaptation.

change in dynamic gain, and so does the response, which is twice as fast at 100% flow. If oscillations were evident in the 100% flow response curve, their period would be much shorter than for 50%, due to the difference in dead time. At 25% flow, oscillation periods would be still shorter and damping would disappear altogether.

If the only feedback mode used in this application were proportional, the period of oscillation would vary inversely with flow, as would the dynamic gain at that period. The only adjustment that could be made would be that of the controller gain (Kc), which should vary in direct proportion to flow. This would make the damping uniform, but with a plain proportional controller nothing can change the increased sensitivity of the process to upsets at low flow rates.

Since reset and derivative modes are normally also used to control temperature, some consideration should also be given to their adjustment as a function of changes in flow. The process dead time (and hence the period of oscillation under proportional control) varies inversely with flow. Therefore, in order to complete the adaptation, the reset and derivative time constants should also be varied inversely with flow. The equation for the flow-adapted three-mode controller is

m = Kc·w[e + (w/Ti)∫e dt + (Td/w)(de/dt)] + b        2.19(1)

where w is the fraction of full-scale flow and Kc, Ti, and Td are the proportional, integral, and derivative settings at full-scale flow. Equation 2.19(1) can be rewritten to reduce the adaptive terms to two:

m = Kc[w·e + (w²/Ti)∫e dt + Td(de/dt)] + b        2.19(2)
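Equation 2.19(2) translates directly into a discrete-time controller. The sketch below uses Euler integration and an assumed sample time dt; the class name and all tuning values are hypothetical.

```python
# Discrete-time sketch of the flow-adapted PID of Equation 2.19(2):
#   m = Kc[ w*e + (w^2/Ti)*integral(e) + Td*de/dt ] + b
# w is the fraction of full-scale flow; Kc, Ti, Td are the settings at full
# flow. Euler integration and the sample time are implementation assumptions.

class FlowAdaptedPID:
    def __init__(self, kc, ti, td, bias, dt):
        self.kc, self.ti, self.td, self.bias, self.dt = kc, ti, td, bias, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, w):
        """One control step; w = current flow as a fraction of full scale."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kc * (w * error
                          + (w * w / self.ti) * self.integral
                          + self.td * derivative) + self.bias
```

At half flow the proportional contribution is halved and the integral contribution is quartered, which is exactly the adaptation the equation calls for.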

Other parameters can be substituted for flow in instances where the adaptation is based on variables other than flow.

Dead-Band Control

Dead-band control is a frequently used form of nonlinear control. The dead-band action is typically a programmed nonlinearity or adjustment, depending on whether the dead band is set by a process variable or by a disturbance factor. Dead-band control is not a stand-alone control function; proportional, integral, and derivative modes can still be used. When this function is used, a dead band is placed around the controller error such that no control action occurs unless the error exceeds the dead-band range. If the error is within the dead band, the error is set to zero.

Dead-band control is also used to stabilize very fast or sensitive processes (guidance of missiles or other projectiles), where stability is gained by not making changes (in direction) unless the limits of the gap (control tunnel) are approached. Dead-band control has found application in pH control and


[Figure omitted: a conventional PI pH controller (pHIC) throttles the small reagent valve, while a three-mode nonlinear (gap-action) valve position controller (VPC), whose set point keeps the small valve near 50% open, drives the large valve; no control action is taken while the process variable remains within the gap.]

FIG. 2.19c
The valve position controller (VPC) measures the opening of the small valve and prevents it from fully opening or fully closing by becoming active only when such extreme positions are approached.

in systems requiring two control valves — one large, one small — to overcome the rangeability problem. For example, consider the control scheme shown in the top portion of Figure 2.19c. Here the dead-band controller is used to drive the large valve when the output of the conventional proportional and integral controller reaches a preset limit (top or bottom). The large valve makes a rough adjustment when the error is outside of its dead band or gap. Returning the pH back within the gap allows the small control valve to once again take control. Figure 2.19c shows the regions where the dead-band control action is in operation. Depending on the relative size of the small (trim) valve and the response desired, the size of the dead band can vary widely. Commercial controllers and DCS or computer control packages are available with this dead-band feature.
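The gap action described above can be sketched as a filter on the controller error ahead of a PI algorithm. The names, gap width, and tuning values below are illustrative assumptions.

```python
# Sketch of dead-band (gap) action ahead of a PI controller: the error is
# forced to zero inside the gap, so the proportional term vanishes and the
# integral term holds its last value. Gap width and tuning are illustrative.

def gap_error(error, half_band):
    """Return 0 inside the dead band, the raw error outside it."""
    return 0.0 if abs(error) <= half_band else error

class GapPI:
    def __init__(self, kc, ti, half_band, dt=1.0):
        self.kc, self.ti, self.half_band, self.dt = kc, ti, half_band, dt
        self.integral = 0.0

    def update(self, error):
        e = gap_error(error, self.half_band)
        self.integral += e * self.dt
        return self.kc * (e + self.integral / self.ti)
```

A small excursion inside the gap produces no new action; only when the error leaves the gap does the large valve move.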

Switching Controller Gains

Another form of programmed adaptation or nonlinear control is referred to as variable breakpoint control. The proportional gain for this type of controller is changed at certain predefined values of the process variable or of the controller error (process variable − set point). In such applications it is usually desired to set the controller gain to a low value at or around the set point, so that little controller action occurs near the set point.


If the process moves some preset distance from the set point, the controller gain is increased to drive the process back into the set point region at a faster rate. This type of control action may also be used to compensate for very nonlinear process gain characteristics. In this case the controller gain is set high when the process gain is low and set low when the process gain is high, in order to maintain a relatively constant (0.5 or so) closed-loop gain, i.e., consistent closed-loop performance throughout the control range.

The pH control problem is a good example of nonlinear control. Consider the process gain characteristic of the neutralization system illustrated in Figure 2.19d. This is a typical strong acid, strong base neutralization process. The control set point is at pH = 7. The process has extremely high gain at this point (i.e., a small amount of reagent change results in a dramatic change in pH). The required controller gain between a pH of 3 and a pH of 9 must be small for stability. If the pH measurement were to move above 9 or below 3, this low gain would result in a very sluggish response, because at those pH values the process gain is low. Therefore, above 9 and below 3 the controller gain is switched to a high value. The required controller gain is shown as a dashed line in Figure 2.19d.

Such nonlinear controllers are available both as electronic hardware controllers and as DCS algorithms. They can also be implemented with more than two line segments, and in more sophisticated systems the controller gains can correspond to the mirror image of the process gain curve. It is important to point out that the implementation of this algorithm does require more than just a simple switching of gain numbers at breakpoints. Care must be taken to ensure that the gain change does not result in a bump or discontinuity in control action.
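The gain-switching logic for the titration curve of Figure 2.19d can be sketched as a simple breakpoint table. The two gain values and the breakpoints below are illustrative assumptions, not values from the text.

```python
# Sketch of variable-breakpoint gain selection for the strong-acid/strong-base
# loop of Figure 2.19d: low controller gain in the steep region between pH 3
# and pH 9, high gain outside it. Gain values are illustrative.

LOW_GAIN, HIGH_GAIN = 0.2, 2.0

def breakpoint_gain(ph, low_bp=3.0, high_bp=9.0):
    """Return the controller gain for the current pH measurement."""
    if low_bp <= ph <= high_bp:
        return LOW_GAIN   # process gain is very high near neutrality
    return HIGH_GAIN      # process gain is low at the flat ends of the curve
```

As noted above, a production-quality implementation must also blend the output across the breakpoints so that the gain change does not bump the valve.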

[Figure omitted: pH titration curve for a strong acid, strong base system with the set point at pH 7, overlaid with the nonlinear controller action versus reagent flow; the slope of the titration curve is the process gain (Kp), the slope of the controller action curve is the controller gain (Kc), which is low between pH 3 and 9 and high outside that range.]

FIG. 2.19d
Variable breakpoint nonlinear control action is used for strong acid, strong base neutralization.


[Figure omitted: a surge tank level transmitter (LT) feeds a nonlinear level controller (LRC), which is cascaded to a flow controller (FRC, with FT) on the feed to a distillation tower; the FRC set point is held constant while the level remains between 20 and 80%.]

FIG. 2.19e
Nonlinear level control keeps the set point of its cascade slave (FRC) unchanged most of the time while allowing the level to float, but it still prevents flooding or draining of the surge tank.

An example of a process application where the discontinuity in controller gain can be tolerated, while the use of the nonlinear controller improves overall performance, is the case of surge tank level control. As the purpose of installing a surge tank is to absorb the differences in flows between plant sections, the intent is to let the level in these tanks float. When this level is cascaded to a flow controller (Figure 2.19e), it is even more desirable not to react to small level variations (which have no harmful consequence) by changing the flow controller set point, because doing so would upset the material balance of the downstream process. Therefore, the nonlinear controller is an ideal cascade master for such an application. As shown in Figure 2.19e, the nonlinear controller can be adjusted to make no change at all in the slave (FRC) set point as long as the level is between 20 and 80%. This control system still protects the tank from draining or flooding by becoming active when the level goes outside this gap.

Continuous Adjustment of Controller Gains

Just as in the programmed adaptation algorithm of Equation 2.19(1), in which the gain was multiplied by (and thereby made a function of) the load variable, one can also adjust the controller gain (Kc) as a function of other parameters. One obvious option is to multiply the controller gain (Kc) by some function of the error (e). For example, if Kc is replaced with (Kc)(e²), the controller gain becomes nonlinear, resulting in the following PID algorithm:

m = Kc·e²[e + (1/Ti)∫e dt − Td(dc/dt)] + b        2.19(3)

In Equation 2.19(3), the derivative mode acts on the measurement (c) only, instead of acting on the error (e) in order to make it insensitive to set-point changes. Note that this algorithm has a lower gain when the errors are small, and therefore this controller is less responsive to small errors.
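A discrete-time sketch of the error-squared algorithm of Equation 2.19(3) follows, with the derivative acting on the measurement. Euler integration, the sample time, and all tuning values are assumptions.

```python
# Discrete-time sketch of the error-squared PID of Equation 2.19(3). The
# effective gain Kc*e^2 shrinks as the error shrinks, so the controller
# barely reacts to measurement noise; the derivative acts on the
# measurement c, not on the error e.

class ErrorSquaredPID:
    def __init__(self, kc, ti, td, bias, dt):
        self.kc, self.ti, self.td, self.bias, self.dt = kc, ti, td, bias, dt
        self.integral = 0.0
        self.prev_meas = None

    def update(self, error, measurement):
        """One control step; returns the controller output m."""
        self.integral += error * self.dt
        dc = 0.0 if self.prev_meas is None else \
            (measurement - self.prev_meas) / self.dt
        self.prev_meas = measurement
        return self.kc * error**2 * (error
                                     + self.integral / self.ti
                                     - self.td * dc) + self.bias

# A 10x larger error produces a ~1000x larger response (gain rises with e^2).
small = ErrorSquaredPID(kc=1.0, ti=10.0, td=0.0, bias=0.0, dt=1.0).update(0.1, 0.0)
large = ErrorSquaredPID(kc=1.0, ti=10.0, td=0.0, bias=0.0, dt=1.0).update(1.0, 0.0)
```

The cubic dependence on error is what both suppresses noise at small errors and, as noted below, threatens stability at large ones.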

2.19 Nonlinear and Adaptive Control This characteristic makes the nonlinear controller well suited for use in applications involving noisy measurements, such as flow or level, because instead of amplifying the noise, its response is minimal when the error is small. Naturally, it should be remembered that with large errors the controller gain rapidly gets very high and can cause stability problems and cycling unless some reasonable limit is placed on it. When using “error squared” PID algorithms, a variety of configurations is available. In one configuration, the controller gain (Kc) is substituted with XKc, where the value of X is found from Equation 2.19(4) below: X = 1.0 + 90 E/Y + 3, 045 E/Y 2 + 145, 675 E/Y 3

2.19(4)

where E = the normalized absolute value of the error (E is between 0 and 1.0) and Y = a constant that sets the severity of the desired nonlinearity in the controller action. For example, if Y = 14%, it means that X will double after every 14% increase in the error. In other words, if X is 2 when the error is 14%, it will be 4 when the error is 28%, and it will be 8 when the error is 42%. The multiplier X can be applied to the controller gain (Kc) and/or to its integral time (Ti). When both are chosen the algorithm is particularly suited to level control. For more detail on the control technique see Reference 5. Gain scheduling obtained its name because it was originally used to accommodate changes in the process gain. Today it is also used, based on measurements of the operating conditions of the process, to compensate for the variations in process parameters or for known nonlinearities in the process. Feedback Adaptation or Self Adaptation Where the cause of changes in control-loop response is unknown or unmeasurable, feedforward adaptation or gain scheduling cannot be used. If an adaptive system is to be applied under these circumstances, it must be based on the response of the loop itself, i.e., it must be part of a feedback adaptive scheme. Feedback adaptation is a more difficult problem than feedforward adaptation because it requires an accurate evaluation of loop responses, ideally without the knowledge of the nature of the disturbance input. The block diagram of a feedback adaptive system is given in Figure 2.19f. This system has all the problems that are

[Figure omitted: block diagram of a feedback adaptive system; an adaptive system block observes the loop (set point r, error e, controller output m, load, and controlled variable c) and adjusts the control settings of the controller driving the process.]

FIG. 2.19f
A self-adaptive system is a control loop around a control loop.


[Figure omitted: block diagram of a self-tuning regulator; an identifier (parameter estimation) block observes the manipulated variable m and the controlled process variable c (subject to load disturbances q), a regulator parameter calculation block (design criterion) computes the regulator parameters ai and bi, and the regulator and integrator (1 − z⁻¹) produce the control action applied to the process.]

FIG. 2.19g
Self-tuning regulator.

associated with implementing feedforward adaptation, plus the problems of evaluating the response and making a decision on the correct adjustment. Several feedback adaptive techniques that are currently used in the process industry are briefly discussed below. These include the self-tuning regulator, the model reference controller, and the pattern recognition adaptive controller.

Self-Tuning Regulator

The self-tuning regulator (STR) is a name given to a large class of self-adaptive systems. The block diagram in Figure 2.19g shows the common structure of these STR systems. The figure indicates that all STRs have an identifier section, typically consisting of a process parameter estimation algorithm. Also common to all STRs is a regulator parameter calculation section, which calculates the new controller parameters as a function of the estimated process parameters. The methods used in these two blocks distinguish the type of STR being used. Popular varieties of STR include minimum-variance, generalized minimum-variance, detuned minimum-variance, dead-beat, and generalized pole-placement controllers. Reference 6 is a summary paper on basic STR technology. For a more detailed treatment of self-tuning controllers, refer to Section 2.29.

Model Reference Adaptive Controls

Model reference adaptive control (MRAC) offers another approach to the self-adaptive control problem. The controller is composed of a reference model, which specifies the desired performance; an adjustable controller, whose performance should be as close as possible to that of the reference model; and an adaptation mechanism. This adaptation mechanism processes the error between the reference model and the real process in order to modify the parameters of the adjustable controller accordingly. Figure 2.19h schematically shows how the parts of a model reference controller are organized.
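The adaptation mechanism of such a controller can be illustrated with the classic MIT rule, dθ/dt = −γ·e_m·(∂e_m/∂θ). The static plant, the gains, and the adaptation rate γ below are illustrative assumptions, not part of the text; a real MRAC design would handle plant dynamics and rely on stability theory.

```python
# Sketch of model-reference adaptation via the MIT rule for a feedforward
# gain theta that makes the plant y = k_p*theta*r track the model y_m = k_m*r.
# All numeric values are illustrative assumptions.

def mit_rule_sim(k_p=2.0, k_m=1.0, gamma=0.5, steps=400, dt=0.05):
    """Adapt theta so the plant output follows the reference model output."""
    theta = 0.0
    r = 1.0                        # constant reference input
    for _ in range(steps):
        y = k_p * theta * r        # plant output with adjustable gain
        y_m = k_m * r              # reference model output
        e_m = y - y_m              # model-following error
        # MIT rule: d(theta)/dt = -gamma * e_m * d(e_m)/d(theta), and the
        # sensitivity d(e_m)/d(theta) = k_p * r for this static plant
        theta += -gamma * e_m * k_p * r * dt
    return theta

theta_final = mit_rule_sim()       # converges toward k_m / k_p = 0.5
```

The adjustable gain settles at k_m/k_p, at which point the plant reproduces the reference model exactly.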
Model reference adaptive controllers were originally designed to solve the deterministic servo problem, that is, control of the process variable to follow set-


[Figure omitted: block diagram of a model reference adaptive controller; the reference signal r(t) drives both the adjustable controller (Ci), whose output m is applied to the process, and the reference model (CM); the difference between the process output and the model output feeds the adaptation mechanism (CA), which adjusts the controller parameters.]

FIG. 2.19h
Model reference adaptive controller.

point reference signals. The design of MRAC has mostly been based on stability theory. For more information, see Reference 7, and also refer to Sections 2.13 and 2.16.

Pattern Recognition Controllers

Other self-adaptive controllers exist that do not explicitly require the modeling or estimation of discrete-time models. These controllers adjust their tuning based on an evaluation of the closed-loop response characteristics of the system (i.e., rise time, overshoot, settling time, loop damping, etc.). They "cut and try" the tuning parameters and recognize the pattern of the response, hence the name "pattern recognition." Pattern recognition controllers are commercially available from several instrument vendors. They are microprocessor-based and usually heavily constrained with regard to the severity of allowable tuning parameter adjustments. They are gaining fair acceptance in operating plants.
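Returning to the self-tuning regulator of Figure 2.19g, its identifier block is typically a recursive least-squares (RLS) estimator. The sketch below identifies an assumed first-order discrete model c(k+1) = a·c(k) + b·m(k) from simulated data; the model form, forgetting factor, and data are illustrative, not part of the text.

```python
# Sketch of the identifier section of a self-tuning regulator: recursive
# least squares with a forgetting factor estimates the parameters (a, b) of
# an assumed first-order discrete model. A design block would then map the
# estimates to controller settings.

class RecursiveLeastSquares:
    def __init__(self, forgetting=0.99):
        self.theta = [0.0, 0.0]                   # [a, b] estimates
        self.p = [[1000.0, 0.0], [0.0, 1000.0]]  # covariance matrix
        self.lam = forgetting

    def update(self, phi, y):
        """One RLS step with regressor phi = [c(k), m(k)] and y = c(k+1)."""
        p_phi = [self.p[0][0]*phi[0] + self.p[0][1]*phi[1],
                 self.p[1][0]*phi[0] + self.p[1][1]*phi[1]]
        denom = self.lam + phi[0]*p_phi[0] + phi[1]*p_phi[1]
        k = [p_phi[0]/denom, p_phi[1]/denom]      # estimator gain
        err = y - (self.theta[0]*phi[0] + self.theta[1]*phi[1])
        self.theta = [self.theta[0] + k[0]*err, self.theta[1] + k[1]*err]
        # covariance update: P = (P - K * phi' * P) / lambda
        self.p = [[(self.p[i][j] - k[i]*p_phi[j])/self.lam for j in range(2)]
                  for i in range(2)]
        return self.theta

# Identify the known process c(k+1) = 0.8*c(k) + 0.4*m(k) from recorded data.
rls = RecursiveLeastSquares()
c, theta = 0.0, [0.0, 0.0]
for k in range(200):
    m = 1.0 if (k // 10) % 2 == 0 else -1.0      # exciting square-wave input
    c_next = 0.8 * c + 0.4 * m
    theta = rls.update([c, m], c_next)
    c = c_next
a_hat, b_hat = theta
```

With noiseless, persistently exciting data the estimates converge to the true parameters, after which a minimum-variance or pole-placement design block would compute the regulator settings.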


INTELLIGENT ADAPTIVE TECHNIQUES

Process changes, such as flow disturbances and sensor noise, are common sources of inaccurate process information; because they degrade control-loop performance, the use of adaptive control is justified as the complexity of the process (nonlinearities and time-varying parameters) increases. The capability of adaptive control to deal with unknown or time-variant dynamics can be extended by using intelligent control (intelligent identification and/or tuning). Furthermore, economically optimal strategies have emerged to improve profitability. Many of these utilize switching operating points that require multiple models or multiple controllers or both (multiple model adaptive control).

To maintain optimal performance, the control system has to adapt continuously to changes in the process and must perform well while it is adapting and while set points are also changing. In such cases, a supervised switching control strategy can be considered, in which a control algorithm is chosen from a variety of candidates by a supervisor or safety net driven by a switching-logic-based criterion, often realized using artificial intelligence or expert systems. Some of these will be discussed in the next paragraphs.

Intelligent Identification and/or Tuning

The overall goal of this approach is to use intelligent structures (fuzzy logic, neural networks, genetic algorithms) to substitute for the identification, control, or adaptation tasks in the control scheme (see Figure 2.19i). As is discussed

[Figure omitted: block diagram of intelligent identification and tuning in adaptive control; a parameter-adjustable controller drives the process with u(t), online parameter estimation and an adaptation law update the controller parameters, and intelligent structures (neural nets, fuzzy logic) can substitute for any of these blocks.]

FIG. 2.19i
Intelligent identification and tuning in adaptive control.

[Figure omitted: block diagram of a supervised multiple model adaptive control system; candidate controllers from a controller library drive the process with u(t), a model library generates predicted outputs Ym(t) that are compared against the process output Y(t) to form the model error em(t), and a transition supervisor, with a transition signal generator and filter, applies the switching logic.]

FIG. 2.19j
Adaptive control system for matching multiple process models to multiple process algorithms and selecting the best pairing in a supervised adaptive scheme.

in other sections of this chapter, control schemes combining neural nets with model predictive control (MPC) can tolerate inaccuracy and uncertainty in the model and can apply online training to continuously improve the model.8

Multiple Model Adaptive Control (MMAC)

Figure 2.19j illustrates the concept of multiple model adaptive control (MMAC).9–11 This technique is based on the use of a library of process models, a library of controller algorithms, and an intelligent switching logic. The MMAC system operates by identifying a process model in its library that closely resembles that of the actual process and, based on that knowledge, selecting a control algorithm (from the candidate controller library) that is best suited for controlling such a process. The process model is usually selected from the model library on the basis of controlled variable (process output) pattern recognition. Once the process model is identified, a switching law is applied (by the logic switching supervisor, or transition supervisor) to adaptively select the correct matching between the model and its corresponding controller. The switching laws can roughly be divided into two categories: those based on process estimation and those based on a direct performance evaluation of each candidate controller.

MMAC Limitations and Suppliers

MMAC is like a consultant in a box. Most consultants have hours, sometimes weeks, to think about the best way of controlling a difficult process, while in the MMAC system such decisions are automated, and the options are limited to those available in the libraries. Therefore, the selected process model will only be as good as the library and its selection algorithm, and the same holds true for the controller library.
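The model-selection step of MMAC can be sketched as picking the library model with the smallest prediction error on recent data and then using its paired controller. The libraries, first-order model forms, data, and controller pairings below are all illustrative assumptions.

```python
# Sketch of the model-selection step in MMAC: each candidate model predicts
# the process output, the model with the smallest squared prediction error
# on recent records is chosen, and its offline-paired controller settings
# are used. Everything numeric here is illustrative.

MODEL_LIBRARY = {                 # name -> (a, b) in c(k+1) = a*c(k) + b*m(k)
    "low_load":  (0.9, 0.2),
    "high_load": (0.7, 0.6),
}
CONTROLLER_LIBRARY = {            # name -> controller gain paired offline
    "low_load":  2.5,
    "high_load": 0.8,
}

def select_controller(history):
    """history: list of (c, m, c_next) records from the running process."""
    def sse(model):
        a, b = model
        return sum((c_next - (a*c + b*m))**2 for c, m, c_next in history)
    best = min(MODEL_LIBRARY, key=lambda name: sse(MODEL_LIBRARY[name]))
    return best, CONTROLLER_LIBRARY[best]

# Data generated by the "high_load" dynamics should select that pairing.
data, c = [], 0.0
for k in range(20):
    m = 1.0 if k % 4 < 2 else 0.0
    c_next = 0.7 * c + 0.6 * m
    data.append((c, m, c_next))
    c = c_next
name, gain = select_controller(data)
```

A supervised scheme would add the transition logic of Figure 2.19j on top of this selection, filtering the switching signal so that controller changes do not bump the process.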


The main limitation of MMAC is that it is discontinuous; every time a new control algorithm is selected from the library, it will upset the process. Another limitation of MMAC is its adaptation to start-up and shutdown conditions, where the process model is changing very fast. It should also be noted that at this time few adaptive controllers on the market can update their control strategies while the process is running. References 12 and 13 provide further information on the suppliers and their products, and the reader can also visit their Web sites:

• QuickStudy, www.quickstudy.com, from Adaptive Resources (Pittsburgh, PA)
• Exact and Connoisseur, www.foxboro.com, from the Foxboro Co. (Foxboro, MA)
• BrainWave, www.brainwave.com, from Universal Dynamics Technologies (Vancouver, BC, Canada)
• Intune, www.controlsoftinc.com, from ControlSoft (Highland Heights, OH)
• KnowledgeScape, www.kscape.com, from KnowledgeScape Systems (Salt Lake City, UT)
• CyboCon, www.cybocon.com, from CyboSoft (Rancho Cordova, CA)

Adaptive model-based controllers such as Exact, BrainWave, QuickStudy, and Connoisseur generate their models automatically while the controller is online, using previously recorded historical process data. This is convenient when compared to classical controllers because in theory, such adaptive controllers can predict process trends and future behavior changes over time. CyboCon skips the modeling step. Instead, CyboCon looks for patterns in the recent errors and the learning algorithm


produces a set of gains or weighting factors that are then used as parameters for the control law.


CONCLUSIONS

Classically, the performance difference between gain scheduling and self-adaptive systems is analogous to that between feedforward and feedback control. A self-adaptive controller (comparable to feedback) cannot make an adjustment to correct its settings until an unsatisfactory response is encountered. Two or more cycles must pass before an evaluation can be made upon which to base an adjustment. Therefore, the cycle time of the adaptive loop must be much longer than the natural period of the control loop itself. Consequently, the self-adaptive system cannot correct the present poor response but can only prepare for the response to the next upset, assuming that the control settings presently generated will still be valid then.

On the other hand, the gain scheduling system should always have the correct settings, because it responds automatically to changes in process variables in a manner analogous to feedforward control. It does not have to "learn" the new process dynamics via adaptive loops.

MMAC with supervised switching offers the possibility of good performance and robustness over a wider range of operating conditions than do the traditional approaches, but it will take some time for it to mature. Its potential is best for those processes that are poorly understood and nonlinear, with time-varying dynamics (dead times and time constants). There is a great deal of activity in the evolving field of AI-based model adaptive control.14,15 Some believe that such controllers will eventually eliminate the need for tuning and re-tuning of controllers, but we will have to wait for the fifth edition of this handbook to be sure.

References

1. Astrom, K., and Wittenmark, B., Adaptive Control, Reading, MA: Addison-Wesley, 1995.
2. Sastry, S., and Bodson, M., Adaptive Control: Stability, Convergence, and Robustness, Englewood Cliffs, NJ: Prentice Hall, 1989. www.ece.utah.edu/~bodson
3. Ioannou, P. A., and Sun, J., Robust Adaptive Control, Englewood Cliffs, NJ: Prentice Hall, 1996. http://www-ref.usc.edu/~ioannou/Robust_Adaptive_Control.htm
4. Tao, G., Adaptive Control Design and Analysis, New York: John Wiley & Sons, 2003.
5. Shunta, J. P., and Fehervari, W., "Nonlinear Control of Level," Instrumentation Technology, January 1976.
6. Soderstrom, T., Ljung, L., and Gustavsson, I., A Comparative Study of Recursive Identification Methods, Division of Automatic Control, Lund Institute of Technology, Report 7427, 1974.
7. Goodwin, G. C., Adaptive Filtering, Prediction, and Control, New York: Academic Press, 1981.
8. Nikolaou, M., "Identification and Adaptive Control," Comp. Chem. Eng., special issue on NSF/NIST Vision 2020 Workshop, Vol. 23, No. 2, pp. 215–225, 1998.

© 2006 by Béla Lipták

9. Fu, M., "Minimum Switching Adaptive Control for Model Following," 35th IEEE Conference on Decision and Control, Kobe, December 1996.
10. Anderson, B., Brinsmead, T., De Bruyne, F., Hespanha, J., Liberzon, D., and Morse, A., "Multiple Model Adaptive Control. Part 1: Finite Controller Coverings," International Journal of Robust and Nonlinear Control, 10, pp. 909–929, 2000.
11. Hespanha, J., Liberzon, D., Morse, A., Anderson, B., Brinsmead, T., and De Bruyne, F., "Multiple Model Adaptive Control. Part 2: Safe Switching," International Journal of Robust and Nonlinear Control, 11, pp. 479–496, 2001.
12. Vandoren, V., Techniques for Adaptive Control, Oxford: Butterworth-Heinemann, 2002.
13. Vandoren, V., "Adaptive Controllers Work Smarter, Not Harder," Control Engineering, October 1, 2002.
14. Conradie, A., Miikkulainen, R., and Aldrich, C., "Adaptive Control Utilising Neural Swarming," Proceedings of the Genetic and Evolutionary Computation Conference, New York, 2002.
15. Miller, D., "A New Approach to Adaptive Control: No Nonlinearities," Systems & Control Letters, 49, adaptive control special issue, pp. 67–79, 2003.

Bibliography

Abonyi, J., Andersen, H., Nagy, L., and Szeifert, F., "Inverse Fuzzy-Process-Model Based Direct Adaptive Control," Mathematics and Computers in Simulation, 51(1–2), pp. 119–132, 1999.
Anderson, B., Brinsmead, T., Liberzon, D., and Morse, A., "Multiple Model Adaptive Control with Safe Switching," International Journal of Adaptive Control and Signal Processing, 15, pp. 445–470, 2001.
Balchen, J. G., and Mummè, K. I., Process Control Structures and Applications, New York: Van Nostrand Reinhold, 1988.
Borisson, U., "Self-Tuning Regulators for a Class of Multivariable Systems," Automation, July 1977.
Box, G. E. P., and Jenkins, G. M., Time Series Analysis: Forecasting and Control, rev. ed., San Francisco: Holden-Day, 1976.
Bristol, E. H., "Adaptive Process Control by Pattern Recognition," Instruments and Control Systems, March 1970.
Bristol, E. H., "Pattern Recognition Adaptive Control," IEEE Transactions, January 1962.
Buckley, P. S., Techniques of Process Control, New York: John Wiley & Sons, 1964.
Cutler, C. R., and Ramaker, B. L., "Dynamic Matrix Control — A Computer Control Algorithm," ACC Conference Proceedings, San Francisco, 1980.
Fabri, S. G., and Kadirkamanathan, V., Functional Adaptive Control: An Intelligent Systems Approach, Heidelberg: Springer-Verlag, 2001.
Harriott, P., Process Control, New York: McGraw-Hill, 1964.
Hausman, J. F., "Applying Adaptive Gain Controllers," Instruments and Control Systems, April 1979.
Hespanha, J. P., Liberzon, D., and Morse, A., "Overcoming the Limitations of Adaptive Control by Means of Logic-Based Switching," Systems & Control Letters, 49, adaptive control special issue, pp. 49–65, 2003.
Kelly, K. P., "Should You Be Training Your Operators with Simulation Software?" Control, June 1992, pp. 40–43.
Keviczky, L., and Hetthessy, J., "Self-Tuning Minimum Variance Control of MIMO Discrete Time Systems," Automatic Control Theory and Applications, May 1977.
Koenig, D. M., Control and Analysis of Noisy Processes, Englewood Cliffs, NJ: Prentice Hall, 1991.
Luyben, W. L., Process Modeling, Simulation and Control for Chemical Engineers, 2nd ed., New York: McGraw-Hill, 1990.
Magdalena, L., and Monasterio, F., "Evolutionary-Based Learning Applied to Fuzzy Controllers," Proceedings of the Fourth IEEE International Conference on Fuzzy Systems and Second International Fuzzy Engineering Symposium (FUZZ-IEEE/IFES'95), Yokohama, March 1995, pp. 1111–1118.
McCauley, A. P., "From Linear to Gain-Adaptive Control," Instrumentation Technology, November 1977.
McMillan, G., Tuning and Control Loop Performance, 2nd ed., Research Triangle Park, NC: ISA, 1990, pp. 45–55.
Morley, R., "Faster Than Real Time," Manufacturing Systems, May 1992, p. 58.
Roffel, B., and Chin, P., Computer Control in the Process Industries, Chelsea, MI: Lewis Publishers, 1989.
Shinskey, F. G., Process-Control Systems, New York: McGraw-Hill, 1967.
Shinskey, F. G., "A Self-Adjusting System for Effluent pH Control," paper presented at the 1973 ISA Joint Spring Conference, St. Louis, April 1973.
Shunta, J. P., "Nonlinear Control of Liquid Level," Instrumentation Technology, January 1976.
Trevathan, V. L., "pH Control in Waste Streams," ISA Paper #72-725, 1979.
Vervalin, C. H., "Training by Simulation," Hydrocarbon Processing, December 1984, pp. 42–49.