2.20 Optimizing Control

F. G. SHINSKEY (1970)    C. G. LASPE (1985)    B. G. LIPTÁK (1995)    M. RUEL (2005)

INTRODUCTION

Optimization of an industrial process usually means an increase in profitability while maintaining safety and product quality. The criteria for optimization vary with the process. Optimization of materials transportation systems (pumps, compressors, fans) usually means the full opening of all throttling devices, so that all the energy introduced is used to transport materials and none of it is spent on overcoming artificially introduced friction elements, such as throttling valves and dampers.

The global trend is toward higher product quality at lower cost. The link between product quality and performance is not clearly defined in all cases. An in-depth analysis might be needed to determine the optimum performance for each piece of equipment, then for each process, and finally for each control loop. What we do know is that for higher product quality, users must tighten their specifications.

The reader will discover that optimizing a process is similar to a conductor directing an orchestra: coordinate, accelerate here, slow down there, synchronize other parts, and so on. The “conductor” must know about control but must also know about the process itself. We should move to multivariable systems when simple single-loop techniques fail even when properly tuned. On the other hand, we should not make the common mistake of applying advanced multivariable control to bad equipment or to control loops that are not tuned to deliver the expected performance. When optimizing, it is important to consider design, equipment performance, control strategies, operating procedures, performance monitoring, logic, and special control strategies for startup, shutdown, grade changes, and abnormal conditions.

Defining Optimum Performance

The optimization of a batch reactor might mean operating it at the minimum cycle period, which in turn implies a maximum but safe rate of heat-up, accurate determination of the endpoint, and a maximum safe rate of stripping. In continuous chemical reactors, optimization might mean a maximized rate of conversion (maximum exothermic heat) and therefore an operation where the chilled water valve opening is “pushed” up to, and maintained at, a safe maximum opening.

The optimization of distillation towers might include the goal of minimizing the heat of separation.


As the heat of separation is reduced when the liquids are boiled at lower temperatures, this aspect of optimization usually means that the columns are operated at a safe minimum pressure level.

The optimization of other processes is achieved by operating them at the minimum or maximum point of a cost or efficiency curve. For example, the point where one would like to operate all combustion processes is their minimum total loss point (Figure 2.20a). By plotting all the different forms of losses in a boiler or similar combustion process and adding up these losses to arrive at a total loss curve, it becomes possible to search for the minimum point on that curve. This minimum provides the optimum excess air set point for the process.

Similar total operating cost curves can be constructed for chillers and other types of cooling systems. Figure 2.20b shows the method of arriving at a total cost curve for a cooling tower (CT) system and using it to continuously determine the optimized set point for the approach controller. As the approach to the ambient wet-bulb temperature increases (the cooling tower water gets warmer), the pumping costs increase (because more water is needed to provide the same cooling). At the same time, the cost of operating the CT fans drops (because less air circulation is needed as the CT water gets warmer).

[Figure 2.20a plots the losses of a combustion process (incomplete combustion loss, flue gas loss, radiation and wall losses, and the resulting total loss, in BTU/hr) against excess air, from air deficiency through the chemically correct air–fuel ratio. The optimum excess air setting, which also depends on whether mixing is poor or good, lies at the minimum of the total loss curve.]

FIG. 2.20a The optimum operating point for a combustion system is the minimum point of the total loss curve.1


[Figure 2.20b plots the cooling tower fan cost (M1), the pumping cost (M2), and the total tower operating cost (M1 + M2) against the approach, over a range of about 4 to 18°F (2.2 to 10°C). The optimized set point for the approach controller is located at the minimum of the total cost curve.]

FIG. 2.20b The set point for the optimum approach controller of a cooling tower is the point where the total system operating cost is the minimum.

Therefore, the sum of these two curves has a minimum point, which corresponds to the minimum cost of operation for the system.

Finding Minimum and Maximum Points

Any process where the operating cost has two components with opposite slopes relative to the load will have a total operating cost curve with a minimum point. Chillers and heat pumps, for example, are in this category. The optimization of such devices always involves searching for that minimum point and then making that point the set point of the associated controller. The slope of the cost curve is zero at the minimum point on the curve. Because the optimization is often performed by differentiation, and because the slope of the tangent at the minimum point is zero, such optimization systems must be carefully protected from noise, and the low accuracy of differentiators must be taken into account. The methods of searching for minimum and maximum points include both continuous and sampling techniques. This section also discusses multidimensional optimization, for use when the optimum condition of a process cannot be described by a curve (two dimensions) but instead requires the climbing and evaluation of multiple peaks.
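To make the search concrete, the following is a minimal sketch of locating such a minimum numerically, in the spirit of the cooling tower example of Figure 2.20b. The fan_cost and pumping_cost models are hypothetical stand-ins, not plant data; in practice the curves would come from measured or modeled operating costs.

```python
# Minimal sketch of locating the minimum of a total operating cost curve,
# as in the cooling tower example of Figure 2.20b. The two cost models
# below are hypothetical placeholders, not plant data.
import numpy as np

def fan_cost(approach_degF):
    # Fan cost falls as the approach (and CT water temperature) rises,
    # because less air circulation is needed.
    return 120.0 / approach_degF

def pumping_cost(approach_degF):
    # Pumping cost rises with the approach, because more water must be
    # circulated to provide the same cooling.
    return 1.5 * approach_degF

approach = np.linspace(4.0, 18.0, 141)            # candidate set points, degF
total = fan_cost(approach) + pumping_cost(approach)

optimized_set_point = approach[np.argmin(total)]  # set point for the approach controller
print(f"Optimized approach set point: {optimized_set_point:.1f} degF")
```

A grid evaluation such as this avoids differentiating noisy data directly; the same caution applies to any minimum-seeking scheme built on measured costs.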

Optimizing control is essential in industrial processing today if an enterprise expects to retain viability and competitive vigor. The growing scarcity of raw materials, aggravated by dwindling energy supplies, has created a need to achieve the utmost in productivity and efficiency (Figure 2.20c). “Optimization” means different things to different people. To some, it means the application of advanced regulatory control. Others have obtained impressive results from applying multivariable noninteracting control. To be sure, these latter techniques have merit and may actually comprise a subset of optimization.

[Figure 2.20c plots the moisture content (%) of the product against time, before and after optimization, showing the controller output (CO), set point (SP), measurement (PV), and the maximum allowable moisture content. Reducing the variability allows the set point to be raised toward the limit, and the resulting economies are indicated.]

FIG. 2.20c Minimizing energy consumption by moving the set point closer to the specification in a drying process.


It is the purpose of this section to guide the practicing process control engineer in a suitable course of action when confronted with a problem involving optimization. Topics emphasized include:

1. Defining the problem scope
2. Choosing between feedback and feedforward optimization
3. Mathematical tools available
4. Multivariable noninteracting control
5. Identifying and handling constraints

An extensive reference list and bibliography will be useful in amplifying the concepts and supplying details for the topics covered.

OPTIMIZATION CONSIDERATIONS

Optimization provides a management tool for achieving the greatest possible efficiency or profitability in the operation of any given production process. Changes in the operational environment, consisting of the current constraints and the values of the disturbance variables, will inevitably alter the optimal position. Hence, optimizing control must be able to cope with change.

Perhaps the most difficult task in the design of an optimization control system is the definition of the problem scope and the subsequent choice of optimizing tactics. The need for an online optimizing system can only be ascertained following an in-depth feasibility study. It is the purpose of this section to provide some insights into the resolution of these problems. Processes that are the best candidates for optimizing control have these characteristics:

1. Multiple independent control parameters
2. Frequent and sizeable changes in the disturbance variables affecting plant profitability
3. A necessary excess in the number of degrees of freedom in the independent control parameters

Figure 2.20d illustrates, in a simplistic fashion, the interrelationships that exist among the process system variables.

[Figure 2.20d shows the independent control variables X(i) and the disturbance variables Y(i) entering the process, which produces the intermediate variables W(i) and the performance variables Z(i).]

FIG. 2.20d Process variable interrelationships.


The independent variables, X(i), represent those variables that can be controlled by conventional regulatory systems. The disturbance variables, Y(i), represent variables over which little or no control can be exercised, such as ambient conditions, feedstock quality, and possible changes in market demand. The intermediate variables, W(i), are those that describe certain complex and calculable process conditions, such as internal reflux rates, rates of conversion in reactors, and tube metal temperatures. The intermediate variables can be, and often are, constraining factors in the process operation. The performance variables, Z(i), represent the objective or target values for the process, such as yields, production rates, and quality of product.

Once the scope of the problem has been defined, a choice can be made between the strategies of local and global optimization. Local optimization problems, such as optimal boiler fuel/air ratio control, can be implemented using simpler control algorithms and hardware. On the other hand, for complex assemblages of equipment with a high degree of interaction in their operation, global optimization may be required.

Feedback or Feedforward Optimization

The feasibility study will normally indicate whether feedforward or feedback control will be more effective in optimizing the process operation. As a general rule, feedback optimization is used only in connection with local optimization problems. Feedforward, on the other hand, is eminently suited to larger, more global control problems.

Feedback optimization control strategies are always applied to transient and dynamic situations. Evolutionary optimization is a good example. Steady-state optimization, on the other hand, is widely used on complex processes that have long time constants and infrequently changing disturbance variables. Hybrid strategies are also employed in situations involving both long-term and short-term dynamics. Obviously, hybrid algorithms are more complex and require custom tailoring for a truly effective implementation.

Feedback control is inherently transitory, and its response time is largely determined by the dead time. Also, in feedback systems, when a disturbance occurs, the errors that accumulate while the process is being brought back to set point can become unacceptable. Feedforward control, on the other hand, requires modeling and advanced mathematical tools, but the resulting control is better and tighter.

Reducing Set Point Variability

Variability is a measure of the range of variation of the controlled variable. It is expressed as a percentage of the mean value of the controlled variable, which allows different processes to be compared. (For further information, refer to the section on Statistical Process Control in this chapter.)


The equations for determining the mean and standard deviation, and for calculating the variability based on them, are given below:

Mean:

$$\mu = \frac{\sum_{i=1}^{n} x_i}{n} \qquad 2.20(1)$$

Variance:

$$\sigma^2 = \frac{\sum_{i=1}^{n} (\mu - x_i)^2}{n-1} \qquad 2.20(2)$$

Standard deviation:

$$\sigma = \sqrt{\sigma^2} = \sqrt{\frac{\sum_{i=1}^{n} (\mu - x_i)^2}{n-1}} \qquad 2.20(3)$$

Variability:

$$\mathrm{var} = \frac{2\sigma \cdot 100}{\mu} \qquad 2.20(4)$$

Based on a normal distribution, the variability represents the band that contains 95% of all data points: the average ±2 standard deviations. In a process control system, the goal is not to eliminate the variability of the controlled variable but to move (transfer) it to where it can be tolerated.

For example, consider a continuous drying process in which the energy source is steam. The goal is to deliver the end product at a constant moisture content, using the minimum energy and operating within the specified limits. If the production rate varies, the total amount of drying energy needed also has to change. The same is true if the temperature of the process material varies, or if disturbances occur in ambient conditions, steam quality (energy content depends on steam pressure and temperature), and so on. The tuning of the moisture controller must consider all of these disturbances in order to ensure that they will not cause excessive variations in the moisture content of the product.

As shown in Figure 2.20c, if the variability of the product moisture content is reduced by optimization, the moisture controller’s set point can be increased and the product quality will still be acceptable. As the variability is reduced, not only does product uniformity improve, but the amount of energy needed to make the product also drops. As illustrated in Figure 2.20c, if the price per ton of product remains the same, the profitability of the operation is therefore improved by reducing variability.
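The calculation is simple enough to sketch directly. The example below evaluates Equations 2.20(1) through 2.20(4) for a series of samples; the moisture readings are hypothetical values in the spirit of Figure 2.20c, not data from any specific dryer.

```python
# Sketch of Equations 2.20(1) through 2.20(4): mean, standard deviation,
# and variability (the +/-2-sigma band expressed as a percentage of the mean)
# for a series of measurements of the controlled variable.
import math

def variability(samples):
    n = len(samples)
    mean = sum(samples) / n                                      # 2.20(1)
    variance = sum((mean - x) ** 2 for x in samples) / (n - 1)   # 2.20(2)
    sigma = math.sqrt(variance)                                  # 2.20(3)
    return 2.0 * sigma * 100.0 / mean                            # 2.20(4)

# Hypothetical moisture readings (%) from a drying process:
moisture = [9.2, 9.8, 10.4, 9.6, 10.1, 9.9, 10.3, 9.5]
print(f"Variability = {variability(moisture):.1f}% of the mean")
```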

Evolutionary Optimization (EVOP)

Feedback control in certain situations can achieve optimal plant performance. Evolutionary optimization, or EVOP, is one such technique, using feedback as the basis for its strategy. EVOP is an online experimenter. No extensive mathematical model is required, since small perturbations of the independent control variable are made directly upon the process itself. As in all optimizers, EVOP also requires an objective function.

EVOP also suffers from certain limitations. These include the need for the process to be able to tolerate some small changes in the major independent variable. Second, it is necessary to apply EVOP or feedback control to perturb a single independent variable at a time. If in a particular process two independent variables have major influences on the objective of optimization, it may be possible to configure the controller to examine each variable sequentially, in alternate sampled-data periods. This latter approach is feasible only if the process dynamics are rapid in comparison with the frequency of expected changes in the disturbance variables.

EVOP has been successfully used to maximize the thermal efficiency of industrial boilers in which the fuel/air ratio was controlled by an oxygen-trim control system. In that application, EVOP was used to perturb the oxygen set point in evaluating the actual operation. The objective function consisted of an online calculation of the thermal efficiency (Figure 2.20a).
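A minimal sketch of the EVOP idea follows: one set point is perturbed at a time, and moves that improve the measured objective are kept, while unsuccessful moves reverse the direction of perturbation. The read_efficiency function is a made-up stand-in for the online thermal-efficiency calculation, so the numbers are illustrative only.

```python
# Minimal EVOP-style sketch: perturb one set point at a time and keep moves
# that improve the measured objective. "read_efficiency" stands in for an
# online thermal-efficiency calculation; here it is a made-up curve so the
# example runs stand-alone.
def read_efficiency(o2_setpoint):
    return 0.86 - 0.004 * (o2_setpoint - 2.3) ** 2   # hypothetical plant response

def evop(setpoint, step=0.1, cycles=25):
    best = read_efficiency(setpoint)
    for _ in range(cycles):
        trial = setpoint + step
        value = read_efficiency(trial)
        if value > best:                 # keep the move if the objective improved
            setpoint, best = trial, value
        else:                            # otherwise reverse the direction of perturbation
            step = -step
    return setpoint, best

sp, eff = evop(setpoint=1.0)
print(f"Oxygen set point ~{sp:.2f}%, efficiency ~{eff:.3f}")
```

In a real application each trial would have to wait for the process to settle, which is why EVOP is limited to processes whose dynamics are fast compared with the disturbances.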

Feedforward Optimization

If numerous independent variables affect the performance of a multivariable process, it can best be optimized by the use of feedforward optimization. An absolute necessity for doing this is an adequate predictive mathematical model of the process. Such models are not used to perturb the process, but only to evaluate the consequences of changes in the process variables.

In order to serve process optimization, the mathematical model must be an accurate representation of the process. To ensure a close to one-to-one correspondence with the process, the model must be updated prior to each use. Model updating is a specialized form of feedback operation in which the predictions of the model are compared with the actual operation of the plant. Any variances noted are then used to adjust certain key coefficients in the model to make it more representative of the actual process.

Figure 2.20e is a signal-flow block diagram of a computer-based feedforward optimizing control system.

[Figure 2.20e shows the optimizer and the model update routine acting on the process model to produce computer set points; these, together with operator set points, drive the regulatory system acting on the process, while the process inputs and outputs are collected in the computer database.]

FIG. 2.20e Block diagram of a feedforward optimizer.


Process variables are measured, checked for reliability, filtered, averaged, and stored in the computer database. A regulatory system is provided as front-line control to keep the process variables at a prescribed and desired slate of values. The conditioned set of measured variables is compared in the regulatory system with the desired set points. Errors detected are then used to generate control actions that are transmitted to the final control elements in the process. The set points for the regulatory system are derived either from operator inputs or from the outputs of the optimization routine.

Note that the optimizer operates directly upon the model in arriving at its optimal set-point slate. Also note that the model is updated by means of a special routine just prior to use by the optimizer. The feedback update feature ensures an adequate mathematical description of the process in spite of minor sensor or instrumentation errors and, in addition, compensates for discrepancies arising from simplifying assumptions incorporated into the model. The mathematical techniques and operating characteristics of some feedback optimization systems will be discussed more fully in the paragraphs that follow.
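The model-update step can be sketched as a simple feedback correction applied to the model before the optimizer uses it. The one-parameter model, its bias coefficient, and the measurements below are hypothetical; an industrial model would have many coefficients and a more elaborate reconciliation step.

```python
# Sketch of the model-update step in a feedforward optimizer (Figure 2.20e):
# before each optimization, the model prediction is compared with the measured
# plant value and a bias coefficient is adjusted so the model tracks the plant.
# The one-parameter model and the "measured" values are illustrative only.
def model_prediction(feed_rate, bias):
    return 0.92 * feed_rate + bias          # hypothetical steady-state model

def update_model(feed_rate, measured_output, bias, gain=0.5):
    error = measured_output - model_prediction(feed_rate, bias)
    return bias + gain * error              # move the bias part way toward the plant

bias = 0.0
for measured in (10.4, 10.6, 10.5):         # successive plant measurements at feed_rate = 10
    bias = update_model(10.0, measured, bias)
print(f"Updated model bias: {bias:.2f}")    # the optimizer now uses the corrected model
```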

[Figure 2.20f plots boiler thermal efficiency η against boiler steam load S (M lb/hr); the curve peaks at (S0, η0) and is approximated by η = η0 − K(S − S0)².]

FIG. 2.20f Typical boiler efficiency curve.

OPTIMIZING TOOLS

Calculus of variations is a classical mathematical approach that can be used to optimize the operation of dynamically changing processes. Though this technique has not found widespread use in industry, it should be considered in any batch chemical reactor problem in which the time–temperature–concentration relationship must be optimized. Theoretical considerations and some application data may be found in References 2 through 5.

Economic dispatch, or optimal load allocation, is a technique directed to the most effective use of the capabilities of parallel multiple resources to satisfy a given production requirement. As an example, the problem of steam load allocation among several boilers in an industrial utilities plant is quite frequently encountered. A typical boiler efficiency curve is shown in Figure 2.20f. For all practical purposes, the curve may be represented by a quadratic equation:

$$\eta = \eta_0 - K(S - S_0)^2 \qquad 2.20(5)$$

where η = efficiency at a given steam load S; η0 = maximum efficiency at S0; K = constant; and S, S0 = steam loads (any consistent units, such as tons/hour). The cost of producing a given amount of steam is defined by

$$C = \frac{F \times S}{\eta} \qquad 2.20(6)$$

where F = cost per unit of steam.

If the steam can be generated by multiple sources, use of calculus indicates an optimum when the incremental costs are equal for all boilers involved. Mathematically,

$$\frac{dC_1}{dS_1} = \frac{dC_2}{dS_2} = \frac{dC_3}{dS_3} = \cdots \qquad 2.20(7)$$

Also, to satisfy the total demand in the plant for steam, the following constraint equation must be satisfied:

$$S_1 + S_2 + S_3 + \cdots = S \qquad 2.20(8)$$

Substituting Equation 2.20(5) into Equation 2.20(6) and then differentiating with respect to the steam load S gives, for a given boiler,

$$IC = \frac{FK\left(\dfrac{\eta_0}{K} - S_0^2 + S^2\right)}{\left[\eta_0 - K(S - S_0)^2\right]^2} \qquad 2.20(9)$$

where IC = incremental cost. In the above equation, the parameters η0, K, and S0 must be evaluated on the basis of experimental test data for each boiler in the system. Figure 2.20g is a plot describing a hypothetical three-boiler system and showing a graphical solution of the load allocation problem. The horizontal line intersecting the incremental cost curves satisfies the plant load demand expressed in Equation 2.20(8).
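The same allocation can be illustrated numerically. The sketch below minimizes the total cost of three hypothetical boilers by brute-force search subject to Equation 2.20(8) and then prints the incremental costs, which at an interior optimum should come out nearly equal, as Equation 2.20(7) requires. All boiler parameters are invented for illustration.

```python
import numpy as np

# Hypothetical boiler curves (Equation 2.20(5)) and fuel prices; illustrative only.
boilers = [dict(eta0=0.88, K=2.0e-4, S0=60.0, F=8.0),
           dict(eta0=0.86, K=1.5e-4, S0=45.0, F=7.5),
           dict(eta0=0.84, K=1.8e-4, S0=50.0, F=7.0)]
S_MIN, S_MAX, DEMAND = 30.0, 100.0, 180.0

def cost(b, S):
    eta = b["eta0"] - b["K"] * (S - b["S0"]) ** 2        # Equation 2.20(5)
    return b["F"] * S / eta                              # Equation 2.20(6)

def incremental_cost(b, S, dS=1e-3):                     # numerical dC/dS
    return (cost(b, S + dS) - cost(b, S - dS)) / (2 * dS)

# Brute-force search over (S1, S2); S3 is fixed by the demand constraint 2.20(8).
grid = np.arange(S_MIN, S_MAX + 0.25, 0.25)
best = None
for s1 in grid:
    for s2 in grid:
        s3 = DEMAND - s1 - s2
        if not (S_MIN <= s3 <= S_MAX):
            continue
        total = cost(boilers[0], s1) + cost(boilers[1], s2) + cost(boilers[2], s3)
        if best is None or total < best[0]:
            best = (total, (s1, s2, s3))

total, loads = best
print("Loads:", [round(s, 1) for s in loads], "total cost:", round(total, 1))
# At an interior optimum the incremental costs are (nearly) equal -- Equation 2.20(7):
print("Incremental costs:", [round(incremental_cost(b, s), 2)
                             for b, s in zip(boilers, loads)])
```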


[Figure 2.20g plots the incremental production cost IC(i) of boilers 1, 2, and 3 against unit load S(i). The horizontal line that intersects the three curves at loads S1, S2, and S3 such that S1 + S2 + S3 = SDEMAND (Equation 2.20(8)) defines the optimal allocation.]

FIG. 2.20g Graphical solution of an optimal load allocation problem.

Linear Programming

Linear programming is a mathematical technique that can be applied provided that all the equations describing the system are linear. In the field of industrial process control, the number of truly linear systems is limited, but one can occasionally be encountered. Simple physical ingredient blending problems, such as those in cement, glass, and certain alloy manufacturing plants, are examples of these. A linear program can be used as an optimizing method if the problem can be reduced to the following set of relationships:

$$C = c_1 X_1 + c_2 X_2 + \cdots + c_n X_n \qquad 2.20(10)$$

$$\begin{aligned}
a_{11} X_1 + a_{12} X_2 + \cdots + a_{1n} X_n &= b_1 \\
a_{21} X_1 + a_{22} X_2 + \cdots + a_{2n} X_n &= b_2 \\
&\;\;\vdots \\
a_{m1} X_1 + a_{m2} X_2 + \cdots + a_{mn} X_n &= b_m
\end{aligned} \qquad 2.20(11)$$

subject to

$$X_i \ge 0, \quad b_i \ge 0, \qquad i = 1, 2, 3, \ldots, n \qquad 2.20(12)$$

where X = the independent variables; c = the associated cost factors; a = the linear coefficients; and b = the specified variables. The b variables can represent specific desired concentrations, mass balance, or heat balance requirements. Note that the constants a or c can assume any value, including zero. The objective function will either be maximized or minimized, depending upon whether it represents profit or costs. Many excellent textbooks outline the procedures and algorithms used in the solution of a linear program.6–9
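As a concrete illustration, the sketch below casts a small, hypothetical blending problem in the form of Equations 2.20(10) through 2.20(12) and solves it with scipy.optimize.linprog; the ingredient compositions and costs are assumed values, not data from any plant cited here.

```python
# Sketch of Equations 2.20(10)-(12) as a small blending problem, solved with
# scipy.optimize.linprog. The ingredient compositions and costs are hypothetical.
from scipy.optimize import linprog

# Objective 2.20(10): minimize C = c1*X1 + c2*X2 + c3*X3 (cost per ton of each ingredient)
c = [42.0, 55.0, 38.0]

# Equality constraints 2.20(11): blend composition and total mass
A_eq = [[0.60, 0.80, 0.40],   # fraction of component A in each ingredient
        [1.00, 1.00, 1.00]]   # mass balance
b_eq = [0.55,                 # required fraction of component A in the blend
        1.00]                 # one ton of blend

# Constraints 2.20(12): X_i >= 0 (the b_i >= 0 requirement is met by the data above)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x, res.fun)         # optimal ingredient amounts and blend cost
```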

Nonlinear Programming

Most industrial processes, especially in the chemical and petroleum industries, cannot be described by linear equations over their complete operating range. These processes therefore require some type of nonlinear optimizer to achieve maximum profitability. Two types of nonlinear optimizers — the sectionalized linear program and the gradient search — have been successfully implemented in advanced computer control schemes.

The sectionalized linear program is especially useful for those processes that exhibit slight-to-moderate nonlinearities in their variable relationships. By restricting the permissible range of excursion of each independent variable, the small amount of nonlinearity can be assumed to be zero. In essence, the coefficients c and a in Equation 2.20(10) and Equation 2.20(11) are replaced by their partial derivatives. Thus, in general, the following approximation is made:

$$a(i, j) = \left[\frac{\partial b(i)}{\partial x(j)}\right]_{x(j)} \qquad 2.20(13)$$

Note that the partial derivatives are evaluated at the current value of the independent parameter x(j). The resultant matrix, or tableau, of partials is used in the linear program to arrive at an interim solution. If the current or interim optimum is greater than the last value obtained, the whole procedure is repeated by “relinearizing” the process at the newly defined operating point. Examples utilizing the above technique may be found in References 10, 11, and 12.

An alternative to the sectionalized linear program is the use of a gradient, or “hill-climbing,” approach. The technical literature is replete with descriptions of the mathematical bases for numerous gradient optimization methods.13–16 Likewise, many successful industrial applications have been reported and reviewed.17–19 In contrast to the linear program approach, which considers only one variable at a time in its serial and sequential search for the optimum, the gradient methods generally perturb all the independent variables simultaneously. The magnitude of the perturbation applied to each variable is directly proportional to the direction cosine of that variable. Mathematically, the step applied to variable x(i) is

$$U \cdot \frac{\partial c / \partial x(i)}{\left[\sum_{i=1}^{n} \left(\partial c / \partial x(i)\right)^2\right]^{1/2}} \qquad 2.20(14)$$

where U = the unit step magnitude; c = the value of the objective function, such as profit; and x(i) = the ith independent control variable. Most gradient methods, once a path of steepest ascent has been established, continue to move along that path until no further improvement in the objective function is obtained. At this juncture, another gradient is established, and the entire system proceeds in this new direction until another ridge is encountered.
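The following sketch shows one gradient policy consistent with the description above: the gradient is estimated by perturbing a (hypothetical) profit model, each variable is moved in proportion to its direction cosine per Equation 2.20(14), and motion continues along the established path until the objective stops improving, at which point a new gradient is taken.

```python
# Sketch of a gradient ("hill-climbing") step per Equation 2.20(14): every
# independent variable is moved simultaneously, in proportion to its direction
# cosine. The profit model below is a made-up stand-in for the process model.
import numpy as np

def profit(x):                       # hypothetical objective c(x)
    return 100.0 - (x[0] - 3.0) ** 2 - 2.0 * (x[1] - 1.5) ** 2

def gradient(f, x, h=1e-4):          # finite-difference partials dc/dx(i)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def climb(x, step=0.2, iters=40):
    best = profit(x)
    for _ in range(iters):
        g = gradient(profit, x)
        norm = np.linalg.norm(g)
        if norm < 1e-6:                       # gradient ~ zero: at (or near) the peak
            break
        d = g / norm                          # direction cosines of the gradient
        # Move along this path of steepest ascent until profit stops improving,
        # then re-establish a new gradient (as described in the text).
        while profit(x + step * d) > best:
            x = x + step * d
            best = profit(x)
    return x

print(climb(np.array([0.0, 0.0])))            # ends near the peak of the profit model
```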


CONSTRAINT HANDLING METHODS

Optimization of a constrained nonlinear process requires a mathematical algorithm for searching along a constraint boundary. The two successful approaches that have been implemented are “hemstitching” and the use of penalty functions.


[Figures 2.20h and 2.20i plot process variable X(2) against process variable X(1), with the feasible region bounded by the constraints W(1), W(2), and W(3). Figure 2.20h shows the profit contours, the start point, and the optimum at a constraint intersection; Figure 2.20i shows the true profit contour and true optimum together with the first-pass optimum of the created response surface.]

FIG. 2.20h Gradient search using hemstitching for constraint handling.

Figure 2.20h is a hypothetical representation of a two-dimensional process whose permissible operating range is constrained by three variables, w(1), w(2), and w(3). Parametric lines of the objective function (profit) are also plotted. Although the lines of constraint are portrayed here as linear, in most circumstances they will be markedly curved.

Gradient Search Method

Beginning at the start point, the gradient search method proceeds up the hill in the direction of steepest ascent. As soon as a constraint function w(1) is violated, the algorithm must return the computed operating position to some point within the feasible region. Generally, this is accomplished by moving perpendicularly, or normal, to the constraint function. The slope of the constraint is evaluated from the model by two perturbations of x(1) and x(2). Thus:

$$\text{slope} = \left[\frac{\partial x(2)}{\partial x(1)}\right]_{w(1)} \qquad 2.20(15)$$

where the partials are evaluated at the particular constraint boundary.

FIG. 2.20i Gradient search using created response surface (first pass).

The quickest way to return to the feasible region is in the direction of the normal:

$$\text{normal} = \frac{-1}{\text{slope}} \qquad 2.20(16)$$

Once the constraint has been recognized, the algorithm must attempt to move along this boundary.

The method of a created response surface, suggested by C. W. Carroll,20,21 avoids many of the constraint-searching problems. In this approach, the objective function is multiplied by a series of penalty terms, one for each active constraint. Thus, along each constraint boundary, the profit contour assumes a value of zero. A typical response surface is shown in Figure 2.20i. Note that a fictitious hill is created whose summit is within the feasible operating range. Standard gradient search methods are then used to find the peak. Obviously, the peak is not at the true optimum as defined by the intersection of the constraints.

If the true value of the objective function at this peak is greater than the value at the starting point, a recentering is employed. The profit contours passing through the peak are assigned a value of zero. This, along with the zero-valued constraints, permits the creation of a second hill (see Figure 2.20j). Note that the summit of the second hill is much closer to the true optimum. The foregoing recentering is repeated until the improvement in the true objective function is less than a desired and arbitrary value.
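A minimal sketch of the created-response-surface idea as described above follows; the profit function and the constraints w(1) through w(3) are hypothetical, and a coarse grid search stands in for the gradient search that would be used in practice.

```python
# Sketch of the created-response-surface idea: the profit is multiplied by a
# term for each constraint that goes to zero on its boundary, so the fictitious
# hill peaks inside the feasible region. Profit and constraints are made up.
import numpy as np

def profit(x):
    return 10.0 + 2.0 * x[0] + 3.0 * x[1]

# Constraints written so that w(x) >= 0 inside the feasible region:
constraints = [lambda x: 8.0 - x[0] - x[1],      # w(1)
               lambda x: 6.0 - x[0],             # w(2)
               lambda x: 5.0 - x[1]]             # w(3)

def created_surface(x):
    value = profit(x)
    for w in constraints:
        value *= max(w(x), 0.0)                  # profit contour forced to zero on each boundary
    return value

# Coarse search over a grid of the two process variables (the "first pass"):
xs = np.linspace(0.0, 6.0, 121)
ys = np.linspace(0.0, 5.0, 101)
best = max(((created_surface(np.array([x, y])), x, y) for x in xs for y in ys))
print(f"First-pass optimum near x(1)={best[1]:.2f}, x(2)={best[2]:.2f}")
```

Recentering would repeat the search after also zeroing the profit contour that passes through this first-pass peak, moving the summit toward the true constrained optimum.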

[Figure 2.20j plots process variable X(2) against X(1), showing the constraints W(1), W(2), and W(3), the true profit contours and true optimum, the original start point, and the first-pass and second-pass optima of the successively created response surfaces.]

FIG. 2.20j Gradient search using created response surface (second pass).


MULTIVARIABLE NONINTERACTING CONTROL

Many industrial processes exhibit considerable interaction among the control variables when attempting to regulate the values of the dependent variables. In general, a marked interaction problem will exist if the process can be described by the following set of mathematical relationships:

$$Z(i) = f_i(X_j, Y_j), \qquad W(i) = g_i(X_j, Y_j) \qquad 2.20(17)$$

for j = 1, 2, …, n and i = 1, 2, …, m.

In the general case, the functions f_i and g_i are nonlinear in form and may be implicit in the independent variables X_j and Y_j. This situation is very complex and requires an iterative computer solution in order to decouple the interaction effects. However, for many applications the process can be linearized around the current operating point, thus providing a much more tractable solution to the control problem. Linearization reduces the variable interrelationships to the following mathematical form:

$$\begin{bmatrix}
\dfrac{\partial W_1}{\partial X_1} & \dfrac{\partial W_1}{\partial X_2} & \cdots & \dfrac{\partial W_1}{\partial X_n} \\[4pt]
\dfrac{\partial W_2}{\partial X_1} & \dfrac{\partial W_2}{\partial X_2} & \cdots & \dfrac{\partial W_2}{\partial X_n} \\[4pt]
\dfrac{\partial W_3}{\partial X_1} & \dfrac{\partial W_3}{\partial X_2} & \cdots & \dfrac{\partial W_3}{\partial X_n}
\end{bmatrix}
\times
\begin{bmatrix} X_1 \\ X_2 \\ X_3 \\ \vdots \\ X_n \end{bmatrix}
=
\begin{bmatrix} W_1 \\ W_2 \\ W_3 \end{bmatrix} \qquad 2.20(18)$$

In the above equation set, the Xs are the independent control parameters and the Ws represent the dependent controlled variables. The matrix containing the partial derivatives is known as the sensitivity matrix. The noninteracting control problem arises when it is desirable to change the value of one dependent variable without affecting the current values of the remaining dependents. Or, alternatively, how should the independent X variables be set in order to achieve a desired slate of W values? Using matrix algebra, the sensitivity matrix can be inverted, giving rise to the so-called control matrix:

$$\begin{bmatrix}
\dfrac{\partial X_1}{\partial W_1} & \dfrac{\partial X_1}{\partial W_2} & \dfrac{\partial X_1}{\partial W_3} \\[4pt]
\dfrac{\partial X_2}{\partial W_1} & \dfrac{\partial X_2}{\partial W_2} & \dfrac{\partial X_2}{\partial W_3} \\[4pt]
\dfrac{\partial X_3}{\partial W_1} & \dfrac{\partial X_3}{\partial W_2} & \dfrac{\partial X_3}{\partial W_3} \\[4pt]
\vdots & \vdots & \vdots \\[4pt]
\dfrac{\partial X_n}{\partial W_1} & \dfrac{\partial X_n}{\partial W_2} & \dfrac{\partial X_n}{\partial W_3}
\end{bmatrix}
\times
\begin{bmatrix} \varepsilon_{W_1} \\ \varepsilon_{W_2} \\ \varepsilon_{W_3} \end{bmatrix}
=
\begin{bmatrix} \Delta X_1 \\ \Delta X_2 \\ \Delta X_3 \\ \vdots \\ \Delta X_n \end{bmatrix} \qquad 2.20(19)$$

The error terms of the variables W are indicated by ε_{Wi}, and the required changes in the control variables X are given by ΔX_j.
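For the square case of three controlled and three manipulated variables, the control-matrix calculation of Equations 2.20(18) and 2.20(19) can be sketched with a few lines of linear algebra; the sensitivity coefficients and error values below are hypothetical, and a non-square system would call for a pseudo-inverse instead.

```python
# Sketch of Equations 2.20(18) and 2.20(19): the sensitivity matrix of partial
# derivatives dW(i)/dX(j) is inverted to give the control matrix, which converts
# a slate of errors in the dependent variables W into the required changes in
# the independent variables X. The numbers are hypothetical.
import numpy as np

# Sensitivity matrix S[i, j] = dW(i)/dX(j), e.g., obtained by perturbing a process model:
S = np.array([[ 1.2, -0.4,  0.1],
              [ 0.3,  0.9, -0.2],
              [-0.1,  0.5,  1.1]])

control_matrix = np.linalg.inv(S)         # entries dX(i)/dW(j), Equation 2.20(19)

errors_W = np.array([0.5, -0.3, 0.2])     # epsilon_W: deviations from the desired W slate
delta_X = control_matrix @ errors_W       # required set-point moves, decoupled

print("Delta X:", np.round(delta_X, 3))
print("Check, S @ Delta X:", np.round(S @ delta_X, 3))   # reproduces the W errors
```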

Constraint Following

Unfortunately for plant managers, but fortunately for computers, many nonlinear constrained processes can be subject to two or more constraints at the same time. As a result of the latest optimization calculation, a slate of set points is provided to achieve the stated objective, such as maximizing profit. The optimizer will at the same time identify which of the control variables should be at their limiting values.

A practical example of the multi-constraint situation is depicted in Figure 2.20k, which shows the profit contours of an ethylene plant pyrolysis furnace bounded by six possible constraints, where TMT = tube metal temperature; FBT = fire box temperature; COV = coil outlet velocity; THR = total heat release; QBET = quench boiler exit temperature; and FMAX = maximum feed availability.

[Figure 2.20k plots the profit contours against hydrocarbon feed rate F and coil outlet temperature, at constant steam rate. The feasible region is bounded by the TMT, FBT, COV, THR, QBET, and FMAX constraints, with the current optimum at the intersection of the THR and COV constraints.]

FIG. 2.20k Typical profit surface for an ethylene plant furnace cracking naphtha.

As shown in Figure 2.20k, there are four active constraints that define the feasible operating region. In addition, several more constraints may become active if the process loading or some other disturbance variable undergoes a change. For the above example, visual inspection of Figure 2.20k indicates the optimum occurring at the intersection of the THR and COV constraints. To keep the process at optimum conditions in the interval between optimization calculations, a dual constraint-follower regulation scheme would be set in motion. The set points for these two controllers would be the maximum permitted values of THR and COV. Since interaction between these two constraints will occur, the multivariable noninteracting techniques discussed in the previous paragraphs would be employed.

References

1. Lipták, B. G., Optimization of Unit Operations, Radnor, PA: Chilton, 1987.
2. Akhiezer, N. I., The Calculus of Variations, New York: Blaisdell, 1962.
3. Elsgole, L. E., Calculus of Variation, Reading, MA: Addison-Wesley, 1962.
4. Savas, E. S., Computer Control of Industrial Processes, New York: McGraw-Hill, 1965.
5. Bolza, O., Lectures on the Calculus of Variations, New York: Dover, 1961.
6. Dorfman, R., et al., Linear Programming and Economic Analysis, New York: McGraw-Hill, 1958.

7. Garvin, W. W., Introduction to Linear Programming, New York: McGraw-Hill, 1960.
8. Gass, S. I., Linear Programming, 2nd ed., New York: McGraw-Hill, 1964.
9. Kuehn, D. R., and Porter, J., “The Application of Linear Programming Techniques in Process Control,” IEEE General Meeting Paper CP631153, Toronto, Ontario, 1963.
10. Corbin, R. L., and Smith, F. B., “Mini-Computer System for Advanced Control of a Fluid Catalytic Cracking Unit,” ISA Paper No. 73-516, presented in Houston, October 1973.
11. Laspe, C. G., Smith, F. B., and Krall, R. A., “An Examination of Optimal Operation of an Industrial Utilities Plant,” ISA Paper No. 76-524, presented at the ISA 76 International Conference and Exhibit, Houston, October 1976.
12. Horn, B. C., “On-Line Optimization of Plant Utilities,” Chemical Engineering Progress, June 1978, pp. 76–79.
13. Gabriel, G. A., and Ragsdell, K. M., “The Generalized Reduced Gradient Method: A Reliable Tool for Optimal Design,” Journal of Engineering for Industry, Trans. ASME, Series B, Vol. 99, No. 2, May 1977, pp. 394–400.
14. Kelley, H. J., “Method of Gradients,” in Leitman, G. (Ed.), Optimization Techniques with Applications to Aerospace Systems, New York: Academic Press, 1962, pp. 205–254.
15. Rosenbloom, P. C., “The Method of Steepest Descent,” in Numerical Analysis, Proceedings of the 6th Symposium on Applied Mathematics, New York: McGraw-Hill, 1956.
16. Schuldt, S. B., et al., “Application of a New Penalty Function Method to Design Optimization,” Journal of Engineering for Industry, Trans. ASME, Series B, Vol. 99, No. 1, February 1977, pp. 31–36.
17. Roberts, S. M., and Lyvers, H. I., “The Gradient Method in Process Control,” Industrial Engineering Chemistry, November 1961.
18. Schrage, R. W., “Optimizing a Catalytic Cracking Operation by the Method of Steepest Ascents and Linear Programming,” Operations Research, July–August 1958.
19. Laspe, C. G., “Recent Experiences in On-Line Optimizing Control of Industrial Processes,” paper presented at the 5th Annual Control Conference, Purdue Laboratory, Lafayette, IN, April 1979.
20. Carroll, C. W., “The Created Response Surface Technique for Optimizing Restrained Systems,” Operations Research, 1961.
21. Carroll, C. W., “An Approach to Optimizing Control of a Restrained System by a Dynamic Gradient Technique,” Preprint 138-LA-61, ISA Proceedings of the Instrumentation–Automation Conference, Fall 1961.

Bibliography

Badgwell, T. A., and Qin, S. J., “Review of Nonlinear Model Predictive Control Applications,” in Nonlinear Predictive Control: Theory and Practice, Piscataway, NJ: IEE Publishing, 2001. Balakrishnan, A. V., and Neustadt, L. W. (Eds.), Computing Methods in Optimization Problems, New York: Academic Press, 1964. Balchen, J. G., and Mummè, K. I., Process Control Structures and Applications, New York: Van Nostrand Reinhold Co., 1988. Bellman, R., Dynamic Programming, Princeton, NJ: Princeton University Press, 1957. Blevins, T. L., Brown, M., McMillan, G., and Wojsznis, W. K., Advanced Control Unleashed, Research Triangle Park, NC: ISA, 2003. Bryson, A. E., and Denham, W. F., “A Steepest-Ascent Method for Solving Optimum Programming Problems,” Trans. ASME, Journal of Applied Mechanics, February 1962. Chatterjee, H. K., “Multivariable Process Control,” Proceedings of the First IFAC Congress, Moscow, Vol. 1, Butterworth, 1963. Chestnut, H., Duersch, R. R., and Gaines, W. M., “Automatic Optimizing of a Poorly Defined Process,” Proceedings of the Joint Automation Control Conference, Paper 8-1, 1962.


Chien, G. K. L., “Computer Control in Process Industries,” in C. T. Leondes (Ed.), Computer Control Systems Technology, New York: McGrawHill, 1961. Cutler, C. R., and Ramaker, B. L., “Dynamic Matrix Control — A Computer Control Algorithm,” ACC Conference Proceedings, San Francisco, 1980. Douglas, J. M., and Denn, M. M., “Optimal Design and Control by Variational Methods,” Industrial Engineering Chemistry, November 1965. Forsythe, G. E., and Motzkin, T. S., “Acceleration of the Optimum Gradient Method,” Preliminary report (abstract), Bulletin of the American Mathematical Society, 1951. Gore, F., “Add Compensation for Feedforward Control,” Instruments and Control Systems, March 1979. Harriott, P., Process Control, New York: McGraw-Hill, 1964. Hestenes, M. R., Calculus of Variations and Optimal Control Theory, New York: John Wiley & Sons, 1966. Hestenes, M. R., and Stiefel, E., “Method of Conjugate Gradients for Solving Linear Systems,” Journal of Research of the National Bureau of Standards, 1959. Himmelblau, D. M., Applied Nonlinear Programming, New York: McGrawHill, 1972. Horowitz, I. M., “Synthesis of Multivariable Feedback Control Systems,” IRE Trans. Autom. Control, AC-5, 1960. Huang, H., and Shah, S. L., Performance Assessment of Control Loops, London: Springer-Verlag, 1999. Kane, E. D., et al., “Computer Control of an FCC Unit,” National Petroleum Refiners’ Association Computer Conference, Technical paper 62-38, 1962. Kelly, K. P., “Should You Be Training Your Operators with Simulation Software?” Control, June 1992. Koenig, D. M., Control and Analysis of Noisy Processes, Englewood Cliffs, NJ: Prentice Hall, 1991. Laspe, C. G., “Optimal Operation of Ethylene Plants,” Instrumentation Technology, May 1978. Lefkowitz, I., “Computer Control,” in E. M. Grabbe, S. Ramo, and D. E. Wooldridge, Eds., Handbook of Automation, Computation, and Control, Vol. 3, New York: John Wiley & Sons, 1961. Leitmann, G. (Ed.), Optimization Techniques, New York: Academic Press, 1962. Lipták, B. G., “Envelope Optimization for Coal Gasification,” InTech, December 1975. Lipták, B. G., “Optimizing Plant Chiller Systems,” InTech, September 1977. Lipták, B. G., “Save Energy by Optimizing Boilers, Chillers and Pumps,” InTech, March 1981. Lipták, B. G., “Envelope Optimization for Clean Rooms,” Instruments and Control Systems, September 1982. Liptak, B. G. (Ed.), Instrument Engineers’ Handbook: Process Software and Digital Networks, Boca Raton, FL: CRC Press, 2002. Lloyd, S. G., “Basic Concepts of Multivariable Control,” Instrumentation Technology, December 1973. Luntz, R., Munro, N., and McLeod, R. S., “Computer-Aided Design of Multivariable Control Systems,” IEE 4th UKAC Convention, Manchester, 1971. Luyben, W. L., Process Modeling, Simulation and Control for Chemical Engineers, 2nd ed., New York: McGraw-Hill, 1990. Luyben, W. L., Luyben, M. L., and Bjorn, T. D., Plantwide Process Control, New York: McGraw-Hill, 1998. Mayne, D. Q., “Design of Linear Multivariable Systems,” Automatica, Vol. 9, 1973. McMillan, G., Tuning and Control Loop Performance, 2nd ed., 1990, ISA, pp. 45–55. Merriam, C. W. III, Optimization Theory and the Design of Feedback Control Systems, New York: McGraw-Hill, 1964. Morely, R., “Faster Than Real Time,” Manufacturing Systems, May 1992. Munro, N., and Ibrahim, A., “Computer-Aided Design of Multivariable Sampled-Data Systems,” IEE Conference on Computer Aided Control System Design, Cambridge, 1973.


Roberts, S. M., and Mahoney, J. D., “Dynamic Programming Control of Batch Reaction,” Chemical Engineering Progress Symposium Series, Vol. 58, No. 37, 1962. Roffel, B., and Chin, P., Computer Control in the Process Industries, Chelsea, MI: Lewis Publishers, 1989. Ruel, M., “PID Tuning and Process Optimization Increased Performance and Efficiency of a Paper Machine,” 87th Annual Meeting, PAPTAC, Book C (February 2001), pp. C63-C66. Sandgren, E., and Ragsdell, K. M., “The Utility of Nonlinear Programming Algorithms: A Comparative Study—Part 1 and 2,” ASME Journal of Mechanical Design, July 1980. Shinskey, F. G., “Feedforward Control Applied,” ISA Journal, November 1963. Shinskey, F. G., “Adaptive Nonlinear Control System,” U.S. Patent 3,794,817, February 26, 1971. Shinskey, F. G., “Process Control Systems with Variable Structure,” Control Engineering, August 1971.



Shinskey, F. G., “Adaptive pH Controller Monitors Nonlinear Process,” Control Engineering, February 1974. Van Doren, V. (Ed.), Techniques for Adaptive Control, Boston: Butterworth Heinemann, 2003. Vervalin, C. H., “Training by Simulation,” Hydrocarbon Processing, December 1984. Wells, C. H., “Industrial Process Applications of Modern Control Theory,” Instrumentation Technology, April 1971. Wellstead, P. E., and Zarrop, M. B., Self-Tuning Systems, New York: Wiley, 1991. Wilde, D. J., Optimum Seeking Methods, Englewood Cliffs, NJ: Prentice Hall, 1964. Zahradnik, R. L., Archer, D. H., and Rothfus, R. R., “Dynamic Optimization of a Distillation Column,” Proceedings of the Joint Automation Control Conference, Paper 13-3, 1962. Zellnik, H. E., Sondak, N. E., and Davis, R. S., “Gradient Search Optimization,” Chemical Engineering Progress, August 1962.